When I was seven years old, Danny Weiss had a birthday party where we played the telephone game. The idea is this: eight people sit around a table, and the first person tells the next person a little story. That person tells the next person the story, and so on, all the way around the room. At the end of the game, you compare the story the first person told to the story the eighth person tells. Of course, the stories are very different and everyone giggles hysterically… we were seven years old, after all.
The reason I was thinking about this story is that data integration development is about as inefficient as a seven-year-old's birthday party. The typical process goes like this: a business analyst, using the knowledge in their head about the business applications they are responsible for, creates a spreadsheet in Microsoft Excel listing database tables and columns along with a set of business rules for how the data is to be transformed as it is moved to a target system (a data warehouse or another application). The spreadsheet, which is never checked against real data, is then passed to a developer, who creates code in a separate system to move the data. That work is then checked by a QA person and finally checked again by the business analyst at the end of the process. This is the first time the business analyst verifies their specification against real data.
99 times out of 100, the data in the target system doesn't match what the business analyst was expecting. Why? Either the original specification was wrong because the business analyst made a typo, or the data is inaccurate. Or the data in the original system wasn't organized the way the analyst thought it was. Or the developer misinterpreted the spreadsheet. Or the business analyst simply doesn't need this data anymore – he needs some other data. The result is lots of errors, just like the telephone game. And the only way to fix it is with rework, and then more rework.
But there is a better way. What if the data analyst could validate their specification against real data and self-correct on the fly before passing the specification to the developer? What if the specification were not just a specification, but a prototype that could be passed directly to the developer, who wouldn't recode it but would simply modify it to add scalability and reliability? The result is much less rework and much faster time to development. In fact, up to 5 times faster.
That is what Agile Data Integration is all about: rapid prototyping and self-validation against real data up front by the business analyst, and sharing of results in a common toolset back and forth with the developer to improve the accuracy of communication.
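To make the idea concrete, here is a minimal sketch in plain Python (not PowerCenter or any Informatica tool) of what "validating the specification against real data" looks like in spirit. The table, column names, sample rows, and transformation rules are all hypothetical.

```python
# A minimal sketch of validating a mapping specification against real sample
# data before any development work begins. All names and rules are invented.

sample_source_rows = [
    {"CUST_ID": "1001", "FIRST_NM": "ada", "LAST_NM": "lovelace", "BALANCE": "250.00"},
    {"CUST_ID": "1002", "FIRST_NM": "",    "LAST_NM": "turing",   "BALANCE": "n/a"},
]

# The "spreadsheet": source column -> (target column, transformation rule)
mapping_spec = {
    "CUST_ID":  ("customer_id", lambda v: int(v)),
    "FIRST_NM": ("first_name",  lambda v: v.strip().title()),
    "LAST_NM":  ("last_name",   lambda v: v.strip().title()),
    "BALANCE":  ("balance",     lambda v: float(v)),
}

def validate(rows, spec):
    """Apply every rule to every sample row and report failures immediately,
    instead of discovering them after the code is written and deployed."""
    errors = []
    for i, row in enumerate(rows):
        for src_col, (tgt_col, rule) in spec.items():
            try:
                rule(row[src_col])
            except (KeyError, ValueError) as exc:
                errors.append(f"row {i}, {src_col} -> {tgt_col}: {exc!r}")
    return errors

for problem in validate(sample_source_rows, mapping_spec):
    print(problem)
# The analyst learns right away that BALANCE sometimes contains "n/a",
# long before a developer, QA, and a rework cycle get involved.
```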
Because we believe the agile process is so important to your success, Informatica is giving all of our PowerCenter Standard Edition (and higher editions) customers agile data integration for FREE! That's right, if you are a current customer of Informatica PowerCenter, we are giving you the tools you need to go from the old-fashioned, error-prone, waterfall, telephone-game style of development to a modern 21st-century Agile process.
• FREE rapid prototyping and data profiling for the data analyst.
• Go from prototype to production with no recoding.
• Better communication and better collaboration between analyst and developer.
PowerCenter 9.6. Agile Data Integration built in. No more telephone game. It doesn’t get any better than that.
In a previous blog post, I wrote about how, when business "history" is reported via Business Intelligence (BI) systems, it's usually too late to make a real difference. In this post, I'm going to talk about how business history becomes much more useful when it is combined operationally and in real time.
The historian E. P. Thompson pointed out that all history is the history of unintended consequences. His theory was that history is not always recorded in documents; instead, it is ultimately derived from examining cultural meanings and the structures of society through hermeneutics (the interpretation of texts), semiotics, and the many forms and signs of the times. He concluded that history is created by people's subjectivity and therefore ultimately represents how they REALLY live.
The same can be extrapolated to businesses. However, the BI systems of today capture only a minuscule piece of the much larger pie of knowledge that may be gained from things like meetings, videos, sales calls, anecdotal win/loss reports, shadow IT projects, 10-Ks and 10-Qs, even company blog posts. The point is: how can you better capture the essence, and perhaps the importance, of the everyday non-database events taking place in your company and its activities – in other words, how it REALLY operates?
One of the keys to figuring out how businesses really operate is identifying and using the undocumented RULES that underlie nearly every business. Select company employees, often veterans, know these rules intuitively. Every company has these people: watch them and you'll see they just have a knack for getting projects pushed through the system, or making customers happy, or diagnosing a problem quickly and with little fanfare. They just know how things work and what needs to be done.
These rules have been, and still are, difficult to quantify and apply – to "data-ify," if you will. Certain companies (and hopefully Informatica) will end up being major players in the race to data-ify these non-traditional rules and events, in addition to helping companies make sense of big data in a whole new way. Daydreaming about it, it's not hard to imagine business systems that will eventually understand the optimization rules of a business, account for possible unintended scenarios or consequences, and then apply them at the moment they are most needed. That, in any case, is the goal of a new generation of Operational Intelligence systems.
In my final post on the subject, I'll explain how it works and, in a nutshell, the business problems it solves. And if I've managed to pique your curiosity and you want to hear about Operational Intelligence sooner, tune in to a webinar we're having TODAY at 10 AM PST. Here's the link.
Unlike some of my friends, I truly enjoyed history as a subject in high school and college. I particularly appreciated biographies of favorite historical figures because they put a human face on the past and gave it meaning and color. I also vowed at that time to navigate my life and future by the principle attributed to Harvard professor Jorge Agustín Nicolás Ruiz de Santayana y Borrás (George Santayana): "Those who cannot remember the past are condemned to repeat it."
So that's a little ditty about my history with history.
Fast-forwarding to the present, in which I have carved out my career in technology, and in particular enterprise software, I'm afforded a great platform from which I talk to lots of IT and business leaders. When I do, I usually ask them, "How are you implementing advanced projects that help the business become more agile, effective, or opportunistically proactive?" They usually answer something along the lines of "this is the age and renaissance of data science and analytics" and then end up talking exclusively about their meat-and-potatoes business intelligence projects and how 300 reports now run their business.
Then, when I probe and hear their answer in more depth, I am once again reminded of THE history quote and think to myself that there's an amusing irony at play here. The Business Intelligence systems of today are mostly designed to "remember" and report on the historical past through large data warehouses holding a gazillion transactions, along with basic but numerous shipping and billing histories and maybe assorted support records.
But when it comes right down to it, business intelligence "history" is still just that. Nothing is really learned and applied right when and where it counts – when it would have made all the difference had the company been able to react in time.
So, in essence, by using standalone BI systems as they are designed today, companies are indeed condemned to repeat their mistakes, because the lessons arrive too late – the same mistakes will be made again and again.
This means the challenge for BI is to reduce latency, measure the pertinent data, sensors, and events, and become scalable – extremely scalable – and flexible enough to handle the volume and variety of the forthcoming data onslaught.
There's a part 2 to this story, so keep an eye out for my next blog post, "History Repeats Itself (Part 2)."
Everyone knows that Informatica is the Data Integration company that helps organizations connect their disparate software into a cohesive, synchronized enterprise information system. The value to the business is enormous and well documented in the form of use cases, ROI studies, and industry-leading loyalty and renewal rates.
Event Processing, on the other hand, is a technology that has been around for only a few years and has yet to reach Main Street in Systems City, IT. But if you look at how event processing is being used, it's amazing that more people haven't heard about it. The idea at its core (pun intended) is very simple: monitor your data and events – the things that happen on a daily, hourly, minute-by-minute basis – look for important patterns that are positive or negative indicators, and then set up your systems to automatically take action when those patterns appear – for example, notify a sales rep when a pattern indicates a customer is ready to buy, or stop a transaction when your company is about to be defrauded.
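For illustration only, here is a minimal, hypothetical Python sketch of that core loop – watch a stream of events, look for a meaningful pattern, act immediately. The event fields, thresholds, and fraud rule are invented; this is not Informatica's event-processing API.

```python
# A toy event-processing loop: detect a suspicious pattern (several declined
# payments in a short window) and act on it the moment it appears.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 3  # e.g. three declines in ten minutes looks like fraud (made-up rule)

recent_declines = defaultdict(deque)  # customer_id -> timestamps of recent declines

def on_event(event):
    """Called for every incoming event, as it happens."""
    if event["type"] != "payment_declined":
        return
    history = recent_declines[event["customer_id"]]
    history.append(event["ts"])
    # Drop anything that has fallen outside the sliding window.
    while history and event["ts"] - history[0] > WINDOW:
        history.popleft()
    if len(history) >= THRESHOLD:
        take_action(event["customer_id"])

def take_action(customer_id):
    # In a real system this might block the transaction or alert a sales rep.
    print(f"Possible fraud pattern for customer {customer_id} - holding transactions")

# Example stream: three declines two minutes apart trigger the action.
now = datetime.now()
for minutes in (0, 2, 4):
    on_event({"type": "payment_declined", "customer_id": "C-42",
              "ts": now + timedelta(minutes=minutes)})
```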
Since this is an Informatica blog, you probably have a decent set of "muscles" in place already, so why, you ask, would you need 6-pack abs? Because 6-pack abs are a good indication of a strong core and are the basis of a stable and highly athletic body. The same parallel holds for companies: in today's competitive business environment, you need strength, stability, and agility to compete. And since IT systems increasingly ARE the business, if your company isn't performing as strong, lean, and mean as possible, you can be sure your competitors will be looking to implement every advantage they can.
You may also be wondering why you would need something like Event Processing when you already have good Business Intelligence systems in place. The reality is that it's not easy to monitor and measure useful but sometimes hidden data, event, sensor, and social media sources, or to discern which patterns have meaning and which turn out to be false positives. But the real difference is that BI usually reports to you after the fact, when the value of acting on the situation has diminished significantly.
So while muscles are important to be able to stand up and run, and good, strong muscles are necessary to do heavy lifting, it's those 6-pack abs on top of it all that make you the lean, mean fighting machine able to identify significant threats and opportunities in your data – and, in essence, to better compete and win.
Customers don't always like change, and a new product launch introduces a variety of changes, so it's important to showcase the value of the change for customers while launching a product. One key ingredient that can fuel a successful product launch is leveraging the rich, varied, multi-sourced information that is now readily available. Tons of information – a veritable gold mine – is available to us more easily than ever before from many different sources. Industry experts call it Big Data. Today, Big Data can pull the gold out of this information mine and positively impact a product launch. What follows are three secrets of how product marketers can tap the power of Big Data for a successful product launch.
Secret #1: Use Big Data to optimize content strategy and targeted messaging
The main challenge is not just to create a great product but also to communicate the product's clear, compelling value to your customers. You need to speak a language that resonates with the needs and preferences of your customers. Through social media platforms and weblogs, lots of information is available highlighting the views and preferences of buyers. Big Data brings all these data points together from various sources and unlocks them to provide customer intelligence. Product marketers can leverage this intelligence to create customer segmentation and targeted messaging.
Secret #2: Use Big Data to identify influential customers and incent them to influence others
A Forrester Research study indicates that today your most valuable customer may be one who buys little but influences 100 others to buy via blogs, tweets, Facebook, and online product reviews. Using MDM with Big Data, businesses can create a 360-degree customer profile by integrating transaction, social interaction, and weblog data, which helps identify influential customers. Companies can engage these influential customers early by initiating a soft launch or beta test of their product.
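As a rough illustration of what a "360-degree profile" can mean in practice, here is a hypothetical Python sketch that joins transaction, social, and weblog data on a shared customer key and computes a simple influence score. The field names, weights, and data are invented; a real MDM hub would handle matching and survivorship far more rigorously.

```python
# Toy data from three sources, already keyed by a common customer ID.
transactions = {"C-42": {"lifetime_spend": 120.0}}
social       = {"C-42": {"followers": 5400, "product_mentions": 12}}
weblogs      = {"C-42": {"review_count": 8, "avg_review_score": 4.5}}

def build_profile(customer_id):
    """Merge the three sources into one 360-degree view of the customer."""
    profile = {"customer_id": customer_id}
    for source in (transactions, social, weblogs):
        profile.update(source.get(customer_id, {}))
    return profile

def influence_score(profile):
    """Made-up weighting: a modest spender can still be a big influencer."""
    return (profile.get("followers", 0) * 0.01
            + profile.get("product_mentions", 0) * 5
            + profile.get("review_count", 0) * 3)

p = build_profile("C-42")
print(p, "influence:", influence_score(p))
# Spend is only $120, but the score flags this customer as worth engaging
# early in a soft launch or beta program.
```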
Secret #3: Use Big Data to provide direction for ongoing product improvement
Big Data is also a useful tool for monitoring ongoing product performance and keeping customers engaged post-launch. Insights into how customers are using the product and what they enjoy most can open the door to improvements in future releases, resulting in happier, more loyal customers.
Zynga, creator of the popular Facebook game FarmVille, collects terabytes of data a day and analyzes it to improve game features and customer service. As a WSJ article noted, after the Version 1 launch of the game the company analyzed customer behavior and found that customers were interacting with the animals much more than the designers expected. So in the second release, the game designers expanded the offerings with more focus on animals, keeping customers more engaged.
Big data is proving to be a game changer for product managers and marketers who want to deeply engage with their customers and launch products with a memorable and valued customer experience.
The phrase "Data Tsunami" has been used by numerous authors in the last few months, and it's difficult to find another suitable analogy, because what's approaching is of such an increased order of magnitude that the IT industry's continued expectations for data growth will be swamped in the next few years.
However impressive a spectacle a tsunami is, it still wreaks havoc on those who are unprepared or who believe they can tread water and simply float to the surface once the trouble has passed.
I have been developing two ideas for customer data management: entities vs. roles, and differentiation. In the last post I suggested that customer is not a data type, but rather a role that can be played by some core entity in some context, with some set of characteristics assigned to that role within that context.
In a number of recent tutorials and training sessions, I have incorporated a little joke into some of the material to help motivate understanding. It isn’t really *that much* of a joke, but here it is:
Q: What is the most dangerous question to ask data professionals?