Tag Archives: data replication
When the average person hears of cloning, my bet is that they think of the controversy and ethical issues surrounding cloning, such as the cloning of Dolly the sheep, or the possible cloning of humans by a mad geneticist in a rogue nation state. I would also put money down that when an Informatica blog reader thinks of cloning they think of “The Matrix” or “Star Wars” (that dreadful episode II Attack of the Clones). I did. Unfortunately.
But my pragmatic expectation is that when Informatica customers think of cloning, they also think of Data Cloning software. Data Cloning software clones terabytes of database data into a host of other databases, data warehouses, analytical appliances, and Big Data stores such as Hadoop. And just for hoots and hollers, you should know that almost half of all Data Integration efforts involve replication, be it snapshot or real-time, according to TDWI survey data. Survey also says… replication is the second most used data integration tool, behind ETL.
Do your company’s cloning tools work with non-standard types? Know that Informatica cloning tools can replicate Oracle data into just about anything on 2 tuples (or more). We do non-discriminatory duplication, so it’s no wonder we especially fancy cloning the Oracle! (a thousand apologies for the bad “Matrix” pun)
Just remember that data clones are an important and natural component of business continuity, and the use cases span both operational and analytic applications. So if you’re not cloning your Oracle data safely and securely with the quality results that you need and deserve, it’s high time that you get some better tools.
Send in the Clones
With that in mind, if you haven’t tried cloning before, Informatica is making its Fast Clone database cloning software available as a free trial download for a limited time. Click here to get it now.
Informatica was listed as a leader in the industry’s first Gartner Magic Quadrant for Data Masking Technology. Finally, the data masking market gets a main stage role in one of the fastest growing enterprise software markets – data security. With the incredible explosion of data and the resulting number of places our personal information exists in the cybersphere, this confirmation is desperately needed as we enter into 2013. (more…)
Right before Christmas, I was delighted to read about the proposed merger between the New York Stock Exchange and Intercontinental Exchange. ICE and NYSE have been customers that we on the Informatica Ultra Messaging team have been working with for several years. NYSE Technologies leveraged our high performance messaging as part of their direct feeds market data solution that lowered latencies across dozens of Wall Street firms around the globe. (more…)
OppenheimerFunds Dreamforce Story: Lay a Foundation of Trusted and Complete Customer Information for Salesforce
Imagine you are rolling Salesforce out to more than 500 users today. What will be their first impression? Will they be annoyed when they encounter duplicate customer records during their first experience? Will they complain when they need to access other systems to get all the relevant customer information they need to do their job? How will that impact your goals for Salesforce adoption? (more…)
Thousands of Oracle OpenWorld 2012 attendees visited the Informatica booth to learn how to leverage their combined investments in Oracle and Informatica technology. Informatica delivered over 40 presentations on topics that ranged from cloud to data security to smart partitioning. Key Informatica executives and experts from product engineering and product management spoke with hundreds of users and answered questions on how Informatica can help them improve Oracle application performance, lower risk and costs, and reduce project timelines. (more…)
There’s no denying that business continues to accelerate its pace, and that the luxury of using historical data for business intelligence (BI) and planning is quickly becoming a thing of the past. Today, businesses need immediate insight into rapidly changing data in order to survive and thrive. Data that’s even a few hours old—let alone a few days old—is largely useless. But most information architectures today still only provide data that is a day, a week, or sometimes as much as a month old.
This leaves most BI, reporting, and analytics systems to operate without up-to-date data from operational systems, data that’s fundamental to making informed decisions about the business. To operate at the speed of business, it’s imperative that executives and decision makers have ready access to fresh information at all times, delivered continuously and automatically without impact on operational systems.
More and more organizations have found the answer in data replication. Data replication lets you work with, and make the best business decisions from, the freshest data drawn from all your operational systems. It delivers this up-to-date data in a seamless and non-intrusive manner, empowering you to operate at the speed of your business without constraints. It also automatically delivers this data wherever it’s needed – for operational intelligence as well as operational use – without direct impact on source systems.
This unique “data-on-demand” approach removes the constraints of stale, old information and enables powerful outcomes for business initiatives. Fresh, current data drives new thinking across the enterprise, and can help organizations to:
- Increase revenue, delight customers, and outshine the competition
- Improve the quality and efficiency of business decisions
- Standardize on a single reliable and scalable solution that lowers costs and removes complexity
At Informatica, we’ve seen numerous customers implement Informatica Data Replication to deliver this fresh, up-to-date data for operational intelligence, reporting, and analytics, and report tremendous positive changes to their business. Some examples of customers using Informatica Data Replication with great success are:
- Westlake Financial Systems saved hundreds of thousands of dollars and improved its profitability and customer satisfaction through a more effective payment collection system
- Optus Australia increased both revenues and customer satisfaction by providing calling plan access, alerting, and self-service upgrades directly to its customer base
- A major national pharmacy chain accelerated and improved health care decision making across the business and increased agility and responsiveness to its customers, resulting in higher customer satisfaction while driving down the cost of technology
Is your business ready to make the leap to true operational intelligence using the freshest data to make your business decisions? Do you want to understand more about the impact that this kind of insight can make to your business?
If yes, please join us for a discussion with two business executives who have seen the impact in their own and their customers’ businesses using data replication on August 28 at 10 am Pacific. You can register using the link below.
The freshest data really does drive the best business decisions. We look forward to your joining and participating in the discussion.
This week the EMC World 2012 conference is taking place in Las Vegas. Informatica is participating as a partner, continuing its commitment to the EMC Select Partnership for the Informatica ILM and MDM solutions. Informatica has continued to expand the partnership to include support for the Greenplum Hadoop distribution – mostly to support organizations’ needs for big data integration while making big data manageable and secure. (more…)
Treating Big Data Performance Woes with the Data Replication Cure Blog Series – Part 3
OK, so in our last two discussions, we looked at the memory bottleneck and how even in high performance environments, there are still going to be situations in which streaming the data to where it needs to be will introduce latencies that throttle processing speed. And I also noted that we needed some additional strategies to address that potential bottleneck, so here goes: (more…)
There were two recent events that inspired me to write this blog entry. The first was an Informatica Users’ Group meeting where I was invited to speak about Informatica’s new offerings in the data replication space. As with many of my presentations, I like to begin by asking the audience to share their exposure to replication technologies: how they are using it, how it is working for them, etc. After quizzing this particular audience, I was astounded by the number of customers writing their own extraction routines to pull data from various data sources: over 80% of the audience. I pondered this as I delivered my presentation and tried to point out specific areas where building might not be as effective as buying.
The second event was the Saturday after the Users’ Group meeting. I had taken the family to Disneyland and my daughter wanted to visit Build-a-Bear. Now I ask you, how can any doting father refuse a 10-year-old’s plea for “The most special bear in the whole world, I’ll name him Daddy Bear”? Yeah, I know, I should have “Sucker” plastered on my forehead. However, as I was going through the process of building this special bear with my daughter, I started to consider how this might pertain to building a replication technology versus buying one. The amount of staff is what first caught my attention: someone to help pick through the inventory of choices, someone to help pick a heart and a customized sound, someone at the stuffing station, and someone to assist in picking out accessories and clothes – after all, you can’t have a naked bear. And of course no bear is complete without being a member of the “hug club.” After all of this specialization was complete, I ended up with a great memory and a bill in excess of $90.00.
I started to consider the issues the customer group faced when attempting to build their own extraction routines. Most had a variety of database sources – Oracle, DB2, MS SQL, etc. – and each required a different person with different skill sets to write the extraction routines. Depending on available resources, this could vary from database to database and even department to department. I’m thinking the hidden cost of staffing this exercise is probably overlooked by upper management.
There also didn’t seem to be a common approach to how the extraction process is maintained. After further analysis, it turned out most had elected to extract the data through triggers or SQL SELECT routines. OUCH, that is a pretty intrusive approach to pulling data out of any source environment. I’m thinking a membership in the “hug club” might be in order once the overhead requirements are analyzed. But there is a distinct reason for this choice: it is straightforward and easier to troubleshoot.
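To see why trigger-based extraction is both straightforward and intrusive, here is a minimal sketch using SQLite (the table, trigger, and column names are hypothetical illustrations, not anyone’s production schema). Triggers fire inside every write transaction on the source table, which is exactly the overhead the source system pays, but the captured changes are trivially easy to inspect and troubleshoot:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source table the business application writes to.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Change-log table the hand-written extraction routine reads from.
cur.execute("""CREATE TABLE orders_changes (
    change_id INTEGER PRIMARY KEY AUTOINCREMENT,
    op TEXT, order_id INTEGER, amount REAL)""")

# Triggers run inside every transaction on the source table --
# this per-write cost is what makes the approach intrusive.
cur.execute("""CREATE TRIGGER orders_ins AFTER INSERT ON orders
    BEGIN
        INSERT INTO orders_changes (op, order_id, amount)
        VALUES ('I', NEW.id, NEW.amount);
    END""")
cur.execute("""CREATE TRIGGER orders_upd AFTER UPDATE ON orders
    BEGIN
        INSERT INTO orders_changes (op, order_id, amount)
        VALUES ('U', NEW.id, NEW.amount);
    END""")

# Normal application activity...
cur.execute("INSERT INTO orders (id, amount) VALUES (1, 99.50)")
cur.execute("UPDATE orders SET amount = 120.00 WHERE id = 1")
conn.commit()

# ...and the extraction routine replays the captured changes downstream.
changes = cur.execute(
    "SELECT op, order_id, amount FROM orders_changes ORDER BY change_id"
).fetchall()
print(changes)  # [('I', 1, 99.5), ('U', 1, 120.0)]
```

Log-based replication products capture the same changes from the database’s transaction log instead, avoiding the per-write trigger cost on the source system.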
Why? Because Build-a-Bear can quickly train new staff members to work each station, but specialized IT personnel are harder to come by, and they come and go from organizations all the time. Writing a customized routine for extracting data might provide job security, but it might also paralyze an organization if errors are encountered after an upgrade or a change to the environment. Delays can be exacerbated if the author of the code has moved on and no longer works for the organization.
This topic has intrigued me, and before I closed the presentation I asked whether anyone would be willing to participate in an ROI study to validate whether building versus buying would make sense in their organization. I had several willing candidates. Over the next several months, I invite you to follow along with my blog series on this subject. I intend to document my findings and share them with the wider audience that might be considering an investment in a replication technology versus building their own.
For those of you who will be attending Informatica World, I’d like to invite you to join me at the Demo Booths and Hands on Labs. I’ll be there all week and would love the opportunity to meet with you in person.
Treating Big Data Performance Woes with the Data Replication Cure Blog Series – Part 1
“Big Data” is all the rage – it is virtually impossible to check out any information management media channel, online resource, or community of interest without having your eyeballs bathed in articles touting the benefits and inevitability of what has come to be known as big data. I have watched this transformation over the past few years as data warehousing and business analytics appliances have entered the mainstream. Pure and simple: what was the bleeding edge of technology twenty years ago in performance computing is now commonplace, with Hadoop being the primary platform (or more accurately, programming environment) for developing big data analytics applications. (more…)