Tag Archives: data

Go On, Flip Your Division of Labor: More Time Analyzing and Less Time Prepping Data

Do you work in Sales Operations or Marketing Operations, or as a Sales Representative, Sales Manager, or Marketing Professional? It’s no secret that if you do, you benefit greatly from the power of performing your own analysis, at your own rapid pace. When you have a hunch, you can easily test it out by visually analyzing data in Tableau without involving IT. When you are faced with tight timeframes in which to gain business insight from data, being able to do it yourself in the time you have available, and without technical roadblocks, makes all the difference.

Self-service Business Intelligence is powerful! However, we all know it can be even more powerful. When you put together an analysis, you spend about 80% of your time assembling data and just 20% of your time analyzing it to test your hunch or gain your business insight. You don’t need to accept this anymore. There is a better way!

We want you to Flip Your Division of Labor: spend more than 80% of your time analyzing data to test your hunch or gain your business insight, and less than 20% of your time putting together data for your Tableau analysis! That’s right. You like it. No, you love it. No, you are ready to run laps around your chair in sheer joy!! And you should feel this way. You can now spend more time on the higher-value activity of gaining business insight from the data, and even find copious time to spend with your family. How’s that?

Project Springbok is a visionary new product designed by Informatica with the goal of making data access and data quality obstacles a thing of the past. Springbok is meant for the Tableau user: a data person who would rather spend their time visually exploring information and finding insight than struggling with complex calculations or waiting for IT. Project Springbok allows you to put together your data, rapidly, for subsequent analysis in Tableau. It can even tell you things about your data that you may not have known, through Intelligent Suggestions that it presents to the user.

Let’s take a quick tour:

  • Project Springbok tells you that you have a date column and that you likely want to derive the Year and Quarter for your analysis (Fig 1). If you so wish, with a single click, voila, you have your corresponding years and even quarters, all in mere seconds. That is a far cry from the 45 minutes it would take a fluent Excel user with VLOOKUPs. (A quick sketch of the equivalent derivation appears below.)

Fig. 1

VALUE TO A MARKETING CAMPAIGN PROFESSIONAL: Rapidly validate and accurately complete your segmentation list, before you analyze your segments in Tableau. Base your segments on trusted data that did not take you days to validate and enrich.
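For readers who like to see what the year/quarter suggestion automates, here is a minimal sketch, in Python with pandas, of the equivalent derivation. The file name `leads.csv` and column name `order_date` are hypothetical placeholders; Springbok itself does this with a click and no code.

```python
import pandas as pd

# Hypothetical input file and date column, for illustration only.
df = pd.read_csv("leads.csv", parse_dates=["order_date"])

# Derive the year and quarter that Springbok suggests automatically.
df["order_year"] = df["order_date"].dt.year
df["order_quarter"] = df["order_date"].dt.quarter  # 1 through 4

print(df[["order_date", "order_year", "order_quarter"]].head())
```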

  • Then Project Springbok will tell you that you have two datasets that could be joined on a common key present in each, email for example, and ask whether you would like to join them (Fig 2). If you agree with Project Springbok’s suggestion, voila, the datasets are joined in a few seconds. Again, a far cry from the 45 minutes it would take a fluent Excel user with VLOOKUPs. (A sketch of the equivalent join appears after Fig. 3 below.)

Fig. 2

VALUE TO A SALES REPRESENTATIVE OR SALES MANAGER: You can now access your Salesforce.com data (Fig 3) and effortlessly combine it with ERP data to understand your true quota attainment. Never miss quota again due to a revenue split, be it territory or otherwise. Best of all, keep your attainment dataset refreshed and even know exactly which data point changed when your true attainment changes.

Fig. 3
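To make the join suggestion concrete, here is a minimal pandas sketch of combining two datasets on a shared email key. The file and column names are hypothetical; Springbok detects the common key and performs the equivalent step without any code.

```python
import pandas as pd

# Hypothetical datasets that share an "email" column.
crm = pd.read_csv("salesforce_contacts.csv")   # e.g. email, account, stage
erp = pd.read_csv("erp_invoices.csv")          # e.g. email, invoice_amount

# Join on the common key that Springbok suggests automatically.
combined = crm.merge(erp, on="email", how="inner")

print(f"{len(combined)} rows matched on email")
```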

  • Then, if you want, Project Springbok will tell you that you have email addresses in the dataset, which you may or may not have known, and more importantly it will ask whether you wish to determine which emails can actually be mailed to. If you proceed, Springbok will not only check each email for correct structure (Fig 4), but will soon also determine whether the email is active and one you can expect a response from. How long would that have taken you to do? (A rough sketch of the structure check appears after Fig. 4 below.)

VALUE TO A TELESALES REPRESENTATIVE OR MARKETING EMAIL CAMPAIGN SPECIALIST: Ever thought you had a great email list and then found out most emails bounced? Now, confidently determine which emails you will truly be able to deliver to, before you send the message. Email prospects who you know are actually at the company and be confident you have their correct email addresses. You can then easily push the dataset into Tableau to analyze trends in email list health.

Fig. 4
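For a rough idea of the “correct structure” part of that check, here is a simplified sketch in Python. A single regular expression is only a crude approximation of real email validation, and it says nothing about whether an address is active and deliverable, which is the harder part Springbok adds on top.

```python
import re

# Simplified pattern for illustration; production-grade validation and
# deliverability checking are considerably more involved.
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def looks_well_formed(address: str) -> bool:
    """Return True if the address has a plausible email structure."""
    return bool(EMAIL_PATTERN.match(address.strip()))

for address in ["jane.doe@example.com", "not-an-email", "sales@informatica"]:
    print(address, "->", looks_well_formed(address))
```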

And, in case you were wondering, there is no training or installation required for Project Springbok. The 80% of your time you used to spend on data preparation shrinks considerably, and that is after using only a few of Springbok’s capabilities. One more thing: you can even export directly from Project Springbok into Tableau via the “Export to Tableau TDE” menu item (Fig 5). Project Springbok creates a Tableau TDE file; just double-click it to open Tableau and test your hunch or gain your business insight.

Fig. 5

Here are some other things you should know, to convince you that you, too, can spend no more than 20% of your time putting together data for your subsequent Tableau analysis:

  • Springbok Sign-Up is Free
  • Springbok automatically finds problems with your data, and lets you fix them with a single click
  • Springbok suggests useful ways for you to combine different datasets, and lets you combine them effortlessly
  • Springbok suggests useful summarizations of your data, and lets you follow through on the summarizations with a single click
  • Springbok allows you to access data from your cloud or on-premises systems with a few clicks, and then automatically keeps it refreshed. It will even tell you what data changed since the last time you saw it
  • Springbok allows you to collaborate by sharing your prepared data with others
  • Springbok easily exports your prepared data directly into Tableau for immediate analysis. You do not have to tell Tableau how to interpret the prepared data
  • Springbok requires no training or installation

Go on. Shift your division of labor in the right direction, fast. Sign-Up for Springbok and stop wasting precious time on data preparation. http://bit.ly/TabBlogs

———-

Are you going to be at Dreamforce this week in San Francisco? Interested in seeing Project Springbok working with Tableau in a live demonstration? Visit the Informatica or Tableau booths and see the power of these two solutions working hand-in-hand. Informatica is at Booth #N1216 and Booth #9 in the Analytics Zone. Tableau is at Booth N2112.


The Pros and Cons: Data Integration from the Bottom-Up and the Top-Down


What are the first steps of a data integration project?  Most are at a loss.  There are several ways to approach data integration, and your approach depends largely upon the size and complexity of your problem domain.

With that said, the basic approaches to consider are from the top-down, or the bottom-up.  You can be successful with either approach.  However, there are certain efficiencies you’ll gain with a specific choice, and it could significantly reduce the risk and cost.  Let’s explore the pros and cons of each approach.

Top-Down

Approaching data integration from the top-down means moving from the high-level integration flows down to the data semantics. Thus, you define an approach, and perhaps even a tool-set, based on requirements, and then define the flows that are decomposed down to the raw data.

The advantages of this approach include:

The ability to spend time defining the higher levels of abstraction without being limited by the underlying integration details.  This typically means that those charged with designing the integration flows don’t have to deal with the underlying sources and targets until later, as they break down the flows.

The disadvantages of this approach include:

The data integration architect does not consider the specific needs of the source or target systems, in many instances, and thus some rework around the higher level flows may have to occur later.  That causes inefficiencies, and could add risk and cost to the final design and implementation.

Bottom-Up

For the most part, this is the approach that most choose for data integration.  Indeed, I use this approach about 75 percent of the time.  The process is to start from the native data in the sources and targets, and work your way up to the integration flows.  This typically means that those charged with designing the integration flows are more concerned with the underlying data semantic mediation than the flows.

The advantages of this approach include:

It’s typically a more natural and traditional way of approaching data integration.  Called “data-driven” integration design in many circles, this initially deals with the details, so by the time you get up to the integration flows there are few surprises, and there’s not much rework to be done.  It’s a bit less risky and less expensive, in most cases.

The disadvantages of this approach include:

Starting with the details means that you could get so involved in the details that you miss the larger picture, and the end state of your architecture appears to be poorly planned, when all is said and done.  Of course, that depends on the types of data integration problems you’re looking to solve.

No matter which approach you leverage, with some planning and some strategic thinking, you’ll be fine.  However, there are different paths to the same destination, and some paths are longer and less efficient than others.  As you pick an approach, learn as you go, and adjust as needed.


Embracing the Hybrid IT World through Cloud Integration


Being here at Oracle Open World, it’s hard not to think about Oracle’s broad scope in enterprise software and the huge influence it wields over our daily work. But even as all-encompassing as Oracle has become, the emergence of the cloud is making us equally reliant on a whole new class of complementary applications and services. During the early era of on-premises apps, different lines of business (LOBs) selected the leading application for CRM, ERP, HCM, and so on. In the cloud, it feels like we have come full circle, to the point where best-of-breed cloud applications have been deployed across the enterprise, except that the data models, services and operations are not under our direct control. As a result, Hybrid IT, and the ability to integrate major on-premises applications such as Oracle E-Business, PeopleSoft, and Siebel, to name a few, with cloud applications such as Oracle Cloud Applications, Salesforce, Workday, Marketo, SAP Cloud Applications, and Microsoft Cloud Apps, has become one of businesses’ greatest imperatives and challenges.

With Informatica Cloud, we’ve long tracked the growth of the various cloud apps and their adoption in the enterprise. Common business patterns – such as opportunity-to-order, employee onboarding, data migration and business intelligence – that once took place solely on-premises are now being conducted both in the cloud and on-premises.

The fact is that we are well on our way to a world where our business needs are best met by a mix of on-premises and cloud applications. Regardless of what we do or make, we can no longer get away with just on-premises applications – or at least not for long. As we become more reliant on cloud services, such as those offered by Oracle, Salesforce, SAP, NetSuite, and Workday, we are embracing the reality of a new hybrid world, and the imperative for simpler integration it demands.

So, as the ground shifts beneath us, moving us toward the hybrid world, we, as business and IT users, are left standing with a choice: Continue to seek solutions in our existing on-premises integration stacks, or go beyond, to find them with the newer and simpler cloud solution. Let us briefly look at five business patterns we’ve been tracking.

One of the first things we’ve noticed in the hybrid environment is the incredible frequency with which data is moved back and forth between the on-premises and cloud environments. We call this the data integration pattern, and it is best represented by getting data, such as a price list or inventory, from Oracle E-Business into a cloud app so that the user of the cloud app can view the most up-to-date information. Here the data (usually master data) is copied to serve a specific purpose. Data integration also typically requires that data be transformed before it can be inserted or updated. An understanding of the metadata and data models of the applications involved is key to doing this effectively and repeatably.
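As a concrete illustration of this pattern, here is a minimal extract-transform-load sketch in Python. The endpoints, field names, and (omitted) authentication are hypothetical placeholders, not real Oracle E-Business or cloud-app APIs; the point is simply that data is pulled, reshaped to the target’s data model, and then pushed.

```python
import requests

# Hypothetical endpoints and field names, for illustration only.
ERP_PRICE_LIST_URL = "https://erp.example.com/api/price_list"
CLOUD_APP_PRODUCTS_URL = "https://cloudapp.example.com/api/products"

# Extract: pull the current price list from the on-premises ERP.
rows = requests.get(ERP_PRICE_LIST_URL, timeout=30).json()

# Transform: reshape ERP fields into the cloud app's data model.
payload = [
    {"sku": row["ITEM_CODE"], "list_price": float(row["UNIT_PRICE"])}
    for row in rows
]

# Load: push the transformed records into the cloud application.
requests.post(CLOUD_APP_PRODUCTS_URL, json=payload, timeout=30).raise_for_status()
```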

The second is the application integration pattern, or the real time transaction flow between your on-premises and cloud environment, where you have business processes and services that need to communicate with one another. Here, the data needs to be referenced in real time for a knowledge worker to take action.

The third, data warehousing in the cloud, is an emerging pattern that is gaining importance for both mid- and large-size companies. In this pattern, businesses are moving massive amounts of data in bulk from both on-premises and cloud sources into a cloud data warehouse, such as Amazon Redshift, for BI analysis.
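A common way to picture this pattern is a bulk COPY into Amazon Redshift from files staged in S3. The sketch below, in Python with psycopg2, uses placeholder connection details, bucket path, and IAM role; it assumes the extracted data has already been landed in S3.

```python
import psycopg2

# Placeholder connection details; a real cluster endpoint and credentials
# would come from your environment or a secrets store.
conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="loader", password="***",
)

copy_sql = """
    COPY sales_staging
    FROM 's3://my-bucket/exports/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS CSV IGNOREHEADER 1;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # Redshift loads the staged files in parallel
```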

The fourth, the Internet of Things (IoT) pattern, is also emerging and is becoming more important, especially as new technologies and products, such as Nest, enable us to push streaming data (sensor data, web logs, etc.) and combine it with other cloud and on-premises data sources into a cloud data store. Often the data is unstructured, and hence it is critical for an integration platform to deal effectively with unstructured data.

The fifth and final pattern, API integration, is gaining prominence in the cloud. Here, an on-premise or cloud application exposes the data or service as an external API that can be consumed directly by applications or by a higher-level composite app in an orchestration.
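For the “expose” half of this pattern, here is a toy sketch of an application publishing a small piece of data as a REST endpoint, using Python and Flask. The route and the in-memory data are hypothetical; a real deployment would sit in front of the actual system of record and add authentication.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical data standing in for an on-premises system of record.
ORDER_STATUS = {"SO-1001": "shipped", "SO-1002": "backordered"}

@app.route("/api/orders/<order_id>/status")
def order_status(order_id: str):
    """Expose order status so composite apps can consume it directly."""
    status = ORDER_STATUS.get(order_id)
    if status is None:
        return jsonify({"error": "unknown order"}), 404
    return jsonify({"order_id": order_id, "status": status})

if __name__ == "__main__":
    app.run(port=8080)
```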

While there are certainly different approaches to the challenges brought by Hybrid IT, cloud integration is often best-suited to solving them.

Here’s why.

First, while the integration problems are more or less similar to those of the on-premises world, the patterns now overlap between cloud and on-premises. Second, integration responsibility is now picked up at the edge, closer to the users, whom we call “citizen integrators”. Third, time to market and agility demand that any integration platform you work with live up to your expectations of speed. There are no longer multiyear integration initiatives in the era of the cloud. Finally, the same values that made cloud application adoption attractive (such as time-to-value, manageability, and low operational overhead) also apply to cloud integration.

One of the most important forces driving cloud adoption is the need for companies to put more power into the hands of the business user. These users often need to access data in other systems, and they are quite comfortable going through the motions of doing so without actually being aware that they are performing integration. We call this class of users ‘Citizen Integrators’. For example, if a user uploads an Excel file to Salesforce, it’s not something they would call “integration”. It is an out-of-the-box action that is integrated with their user experience, is simple to use from a tooling point of view, and is oftentimes native to the application they are working with.

Cloud Integration Convergence is driving many integration use cases. The most common integrations – such as employee onboarding – can span multiple integration patterns: they involve data integration, application integration and often data warehousing for business intelligence. If we agree that doing this in the cloud makes sense, the question is whether you need three different integration stacks in the cloud, one for each integration pattern. And even if you have three different stacks, what if an integration flow involves the commingling of multiple patterns? What we are noticing is a single cloud integration platform addressing more and more of these use cases, while also providing the tooling for both a Citizen Integrator and an experienced Integration Developer.

The bottom line is that in the new hybrid world we are seeing a convergence, where the industry is moving towards streamlined and lighter weight solutions that can handle multiple patterns with one platform.

The concept of Cloud Integration Convergence is an important one, and we have built its imperatives into our products. With our cloud integration platform, we combine the ability to handle any integration pattern with an easy-to-use interface that empowers citizen integrators and frees integration developers for more rigorous projects. And because we’re Informatica, we’ve designed it to work in tandem with PowerCenter, which means anything you’ve developed for PowerCenter can be leveraged for Informatica Cloud and vice versa, thereby fulfilling Informatica’s promise of Map Once, Deploy Anywhere.

In closing, I invite you to visit the Informatica booth, #3512 in Moscone West, at Oracle Open World. I’ll be there with some of my colleagues, and we would be happy to meet and talk with you about your experiences and challenges with the new Hybrid IT world.


Scary Times For Data Security


These are scary times we live in when it comes to data security. And the times are even scarier for today’s retailers, government agencies, financial institutions, and healthcare organizations. The internet has become a battlefield. Criminals are looking to steal trade secrets and personal data for financial gain. Terrorists seek to steal data for political gain. Both are after your Personally Identifiable Information: your name, account numbers, social security number, date of birth, IDs and passwords.

How are they accomplishing this? A new generation of hackers has learned to reverse engineer popular software programs (e.g. Windows, Outlook, Java) in order to find so-called “holes”. Once those holes are exploited, the hackers develop “bugs” that infiltrate computer systems, search for sensitive data and return it to the bad guys. These bugs are then sold on the black market to the highest bidder. When successful, these hackers can wreak havoc across the globe.

I recently read a Time Magazine article titled “World War Zero: How Hackers Fight to Steal Your Secrets.” The article discussed a new generation of software companies made up of former hackers. These firms help other software companies by identifying potential security holes, before they can be used in malicious exploits.

This constant battle between good (data and software security firms) and bad (smart, young programmers looking to make a quick, big buck) is happening every day. Unfortunately, the average consumer (you and I) is the innocent victim of this crazy and costly war. As a consumer in today’s digital and data-centric age, I worry when I see headlines of ongoing data breaches, from the Targets of the world to my local bank down the street. I wonder not “if” but “when” I will become the next victim. According to the Ponemon Institute, the average cost of a breach to a company was $3.5 million in US dollars, 15 percent more than the previous year.

As a 20-year software industry veteran, I’ve worked with many firms across the global financial services industry. As a result, my concerns about data security exceed those of the average consumer. Here are the reasons:

  1. Everything is Digital: I remember the days when ATM machines were introduced, eliminating the need to wait in long teller lines. Nowadays, most of what we do with our financial institutions is digital and online, whether on our mobile devices or desktop browsers. As such, every interaction and transaction creates sensitive data that gets dispersed across tens, hundreds, sometimes thousands of databases and systems in these firms.
  2. The Big Data Phenomenon: I’m not talking about sexy next generation analytic applications that promise to provide the best answer to run your business. What I am talking about is the volume of data that is being generated and collected from the countless number of computer systems (on-premise and in the cloud) that run today’s global financial services industry.
  3. Increased Use of Off-Shore and On-Shore Development: Firms increasingly outsource technology projects and leverage off-shore development partners to offset their operational and technology costs, particularly for new technology initiatives.

Now here is the hard part. Given these trends and heightened threats, do the companies I do business with know where the data they need to protect resides? How do they actually protect sensitive data when using it to support new IT projects, whether in-house or with off-shore development partners? You’d be amazed what the truth is.

According to the recent Ponemon Institute study “State of Data Centric Security” that surveyed 1,587 Global IT and IT security practitioners in 16 countries:

  • Only 16 percent of the respondents believe they know where all sensitive structured data is located and a very small percentage (7 percent) know where unstructured data resides.
  • Fifty-seven percent of respondents say not knowing where the organization’s sensitive or confidential data is located keeps them up at night.
  • Only 19 percent say their organizations use centralized access control management and entitlements and 14 percent use file system and access audits.

Even worse, there is a gap between how many of those surveyed see not knowing where sensitive and confidential information resides as a serious threat and how many believe addressing it is a high priority in their organizations. Seventy-nine percent of respondents agree it is a significant security risk facing their organizations, but a much smaller percentage (51 percent) believes that securing and/or protecting that data is a high priority.

I don’t know about you, but this is alarming and worrisome to me. I think I am ready to reach out to my banker and my local retailer, let them know about my concerns, and make sure they communicate those concerns to the top of their organizations. In today’s globally and socially connected world, news travels fast, and given how hard it is to build trusted customer relationships, one would think every business from the local mall to Wall Street should be asking whether it is doing what it needs to do to identify and protect its number one digital asset: its data.


Marketing in a Data-Driven World… From Mad Men to Mad Scientist

Are Marketers More Mad Men or Mad Scientists?

I have been in marketing for over two decades. As I meet people in social situations, on airplanes, and on the sidelines at children’s soccer games, and they ask what it is I do, I get responses that constantly amuse me and lead me to the conclusion that the general public has absolutely no idea what a marketer does. I am often asked things like “have you created any commercials that I might have seen?” and peppered with questions that evoke visions of Mad Men-esque, 1960s-style agency work and late-night, martini-filled creative pitch sessions.

I admit I do love to catch the occasional Mad Men episode, and a few weeks ago, I stumbled upon one that had me chuckling. You may remember the one in which Don Draper is pitching a lipstick advertisement and, after persuading the executive to see things his way, says something along the lines of, “We’ll never know, will we? It’s not a science.”

How the times have changed. I would argue that in today’s data-driven world, marketing is no longer an art and is now squarely a science.

Sure, great marketers still understand their buyers at a gut level, but their hunches are no longer the impetus of a marketing campaign. Their hunches are now the impetus for a data-driven, fact-finding mission, and only after the analysis has been completed and confirms or contradicts this hunch, is the campaign designed and launched.

This is only possible because today, marketers have access to enormous amounts of data – not just the basic demographics of years past. Most marketers realize that there is great promise in all of that data, but it’s just too complicated, time-consuming, and costly to truly harness it. How can you really ever make sense of the hundreds of data sources and tens of thousands of variables within these sources? Social media, web analytics, geo-targeting, internal customer and financial systems, in house marketing automation systems, third party data augmentation in the cloud… the list goes on and on!

How can marketers harness the right data, in the right way, right away? The answer starts with making the commitment that your marketing team – and hopefully your organization as a whole – will think “data first”. In the coming weeks, I will focus on what exactly thinking data first means, and how it will pay dividends to marketers.

In the meantime, I will make the personal commitment to be more patient about answering the silly questions and comments about marketers.

Now, it’s your turn to comment… 

What are some of the most amusing misconceptions about marketers that you’ve encountered?

- and -

Do you agree? Is marketing an art? A science? Or somewhere in between?



A Data Integration Love-Fest in Vegas

Question: What do American Airlines, Liberty Mutual, Discount Tire and MD Anderson all have in common?

Is it?


a) They are all top in their field.

b) They all view data as critical to their business success.

c) They are all using Agile Data Integration to drive business agility.

d) They have spoken about their Data Integration strategy at Informatica World in Vegas.

Did you reply all of the above? If so, give yourself a Ding Ding Ding. Or shall we say Ka-Ching, in honor of our host city?

Indeed, data experts from these companies and many more flocked to Las Vegas for Informatica World. They shared their enthusiasm for the important role of data in their business. These industry leaders discussed best practices that facilitate an Agile Data Integration process.

American Airlines recently completed a merger with US Airways, making them the largest airline in the world. In order to service critical reporting requirements for the merged airlines, the enterprise data team undertook a huge Data Integration task.  This effort involved large-scale data migration and included many legacy data sources.  The project required transferring over 4TB of current history data for Day 1 reporting. There is still a major task of integrating multiple combined subject areas in order to give a full picture of combined reporting.

American Airlines architects recommend the use of Data Integration design patterns in order to improve agility. The architects shared success factors for merger Data Integration. They discussed the importance of ownership by leadership from IT and business. They emphasized the benefit of open and honest communication between teams. The architects also highlighted the need to identify integration teams and priorities. Finally, they discussed the significance of understanding cultural differences and celebrating success. The team summarized with merger Data Integration lessons learned: metadata is key, IT and business collaboration is critical, and profiling and access to the data is helpful.

Liberty Mutual, the third largest property and casualty insurer in the US, has grown through acquisitions. The Data Integration team needs to support this business process. They have been busy integrating five claim systems into one and are faced with a large-scale Data Integration challenge. To add to the complexity, their business requires that each phase be completed in one weekend, that no data be lost in the process, and that all finances balance out at the end of each merge. Integrating all claims in a single location was critical for smooth processing of insurance claims. A single system also leads to reduced costs and complexity for support and maintenance.

Liberty Mutual experts recommend a methodology of work preparation, profiling, delivery and validation.  Rinse and repeat. Additionally, the company chose to utilize a visual Data Integration tool. This tool was quick and easy for the team to learn and greatly enhanced development agility.

Discount Tire, the largest independent tire dealer in the USA, shared tips and tricks from migrating legacy data into a new SAP system.  This complex project included data conversion from 50 legacy systems.  The company needs to combine and aggregate data from many systems, including customer, sales, financial and supply chain.  This integrated system helps Discount Tire make key business decisions and remain competitive in a highly competitive space.

Discount Tire has automated their data validation process in development and in production. This reduces testing time, minimizes data defects and increases the agility of development and operations. They have also implemented proactive monitoring in order to accomplish early detection and correction of data problems in production.

MD Anderson Cancer Center is the No. 1 hospital for cancer care in the US according to U.S. News and World Report.  They are pursuing the lofty goal of erasing cancer from existence. Data Integration is playing an important role in this fight against cancer. In order to accomplish their goal, MD Anderson researchers rely on integration of vast amounts of genomic, clinical and pharmaceutical data to facilitate leading-edge cancer research.

MD Anderson experts pursue Agile Data Integration through close collaboration between IT and business stakeholders.  This enables them to meet the data requirements of the business faster and better. They shared that data insights, through metadata management, offer a significant value to the organization. Finally the experts at MD Anderson believe in ‘Map Once, Deploy Anywhere’ in order to accomplish Agile Data Integration.

So let’s recap, Data Integration is helping:

- An airline continue to serve its customers and run its business smoothly post-merger.

- A tire retail company to procure and provide tires to its customers and maintain leadership

- An insurance company to process claims accurately and in a timely manner, while minimizing costs, and

- A cancer research center to cure cancer.

Not too shabby, right? Data Integration is clearly essential to business success!

So OK, I know, I know… what happens in Vegas stays in Vegas. Still, this was one love-fest I was compelled to share! Wish you had been there. Hopefully you will be next year!

To learn more about Agile Data Integration, check out this webinar: Great Data by Design II: How to Get Started with Next-Gen Data Integration

 


Oh the Data I’ve Seen…

Eighteen months ago, I was sitting in a conference room, nothing remarkable except for the great view down 6th Avenue toward the Empire State Building.  The pre-sales consultant sitting across from me had just given a visually appealing demonstration to the CIO of a multinational insurance corporation.  There were fancy graphics and colorful charts sharply displayed on an iPad and refreshing every few seconds.  The CIO asked how long it had taken to put the presentation together. The consultant excitedly shared that it only took him four to five hours, to which the CIO responded, “Well, if that took you less than five hours, we should be able to get a production version in about two to three weeks, right?”

The facts of the matter were completely different, however. The demo, while running with the firm’s own data, had been running from a spreadsheet housed on the laptop of the consultant and procured after several weeks of scrubbing, formatting, and aggregating data from the CIO’s team; this does not even mention the preceding data procurement process.  And so, as the expert in the room, the voice of reason, the CIO turned to me wanting to know how long it would take to implement the solution.  At least six months, was my assessment.  I had seen their data, and it was a mess. I had seen the flows, not a model architecture, and the sheer volume of data was daunting. If it was not architected correctly, the pretty colors and graphs would take much longer to refresh; this was not the answer he wanted to hear.

The advancement of social media, new web experiences and cutting-edge mobile technology has driven users to expect more of their applications.  As enterprises push to drive value and unlock more potential in their data, insurers of all sizes have attempted to implement analytical and business intelligence systems.  But here’s the truth: by and large, most insurance enterprises are not in a place with their data to make effective use of the new technologies in BI, mobile or social.  The reality is that data cleansing, fitting for purpose, movement and aggregation are being done in the BI layer when they should be done lower down in the stack, so that all applications can take advantage of them.

Let’s face it – quality data is important. Movement and shaping of data in the enterprise is important.  Identification of master data and metadata in the enterprise is important and data governance is important.  It brings to mind episode 165, “The Apology”, of the mega-hit show Seinfeld.  Therein George Costanza accuses erstwhile friend Jason Hanky of being a “step skipper”.  What I have seen in enterprise data is “step skipping” as users clamor for new and better experiences, but the underlying infrastructure and data is less than ready for consumption.  So the enterprise bootstraps, duct tapes and otherwise creates customizations where it doesn’t architecturally belong.

Clearly this calls for a better solution: a more robust and architecturally sustainable data ecosystem, one that shepherds the data from acquisition through to consumption and all points in between. It must also be attainable by even modestly sized insurance firms.

First, you need to bring the data under your control.  That may mean external data integration, or just moving it from transactional, web, or client-server systems into warehouses, marts or other large data storage schemes and back again.  But remember, the data is in various stages of readiness.  This means that, through out-of-the-box or custom cleansing steps, the data needs to be processed, enhanced and stored in a way that is in line with corporate goals for governing its quality.  And this says nothing of the need to normalize data differently between source and target.  When implemented as a “factory” approach, bringing new data streams online, integrating them quickly and maintaining high standards become small incremental changes rather than a ground-up monumental task.  Move your data shaping, cleansing, standardization and aggregation further down in the stack, and many applications will benefit from the architecture.
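One way to picture the “factory” approach is as a pipeline of small, reusable cleansing steps, so onboarding a new feed means composing existing steps rather than building from scratch. The sketch below is illustrative only; the column names and steps are hypothetical stand-ins for an insurer’s own standardization rules.

```python
import pandas as pd

# Each step is a small, reusable unit of the "factory"; the column names
# are hypothetical placeholders for a claims feed.
def standardize_names(df: pd.DataFrame) -> pd.DataFrame:
    df["insured_name"] = df["insured_name"].str.strip().str.title()
    return df

def normalize_dates(df: pd.DataFrame) -> pd.DataFrame:
    df["loss_date"] = pd.to_datetime(df["loss_date"], errors="coerce")
    return df

def drop_invalid_policies(df: pd.DataFrame) -> pd.DataFrame:
    return df[df["policy_id"].notna()]

PIPELINE = [standardize_names, normalize_dates, drop_invalid_policies]

def run_factory(df: pd.DataFrame) -> pd.DataFrame:
    """Apply each cleansing step in order; new feeds reuse the same steps."""
    for step in PIPELINE:
        df = step(df)
    return df

clean = run_factory(pd.read_csv("new_claims_feed.csv"))
```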

Critical to this process is that insurance enterprises need to ensure the data remains secure, private and is managed in accordance with rules and regulations. They must also govern the archival, retention and other portions of the data lifecycle.

At any point in the life of your information, you are likely sending or receiving data from an agent, broker, MGA or service provider, which needs to be processed using the robust ecosystem described above. Once an effective data exchange infrastructure is implemented, the steps to process the data can nicely complement your setup as information flows to and from your trading partners.

Finally, as your enterprise determines “how” to implement these solutions, you may look to a cloud based system for speed to market and cost effectiveness compared to on-premises solutions.

And don’t forget to register for Informatica World 2014 in Las Vegas, where you can take part in sessions and networking tailored specifically for insurers.


Data is the Key to Value-based Healthcare

The transition to value-based care is well underway. From healthcare delivery organizations to clinicians, payers, and patients, everyone feels the impact.  Each has a role to play. Moving to a value-driven model demands agility from people, processes, and technology. Organizations that succeed in this transformation will be those in which:

  • Collaboration is commonplace
  • Clinicians and business leaders wear new hats
  • Data is recognized as an enterprise asset

The ability to leverage data will differentiate the leaders from the followers. Successful healthcare organizations will:

1) Establish analytics as a core competency
2) Rely on data to deliver best practice care
3) Engage patients and collaborate across the ecosystem to foster strong, actionable relationships

Trustworthy data is required to power the analytics that reveal the right answers, to define best practice guidelines and to identify and understand relationships across the ecosystem. In order to advance, data integration must also be agile. The right answers do not live in a single application. Instead, the right answers are revealed by integrating data from across the entire ecosystem. For example, in order to deliver personalized medicine, you must analyze an integrated view of data from numerous sources. These sources could include multiple EMRs, genomic data, data marts, reference data and billing data.

A recent PWC survey showed that 62% of executives believe data integration will become a competitive advantage.  However, a July 2013 Information Week survey reported that 40% of healthcare executives gave their organization only a grade D or F on preparedness to manage the data deluge.


What grade would you give your organization?

You can improve your organization’s grade, but it will require collaboration between business and IT.  If you are in IT, you’ll need to collaborate with business users who understand the data. You must empower them with self-service tools for improving data quality and connecting data.  If you are a business leader, you need to understand and take an active role with the data.

To take the next step, download our new eBook, “Potential Unlocked: Transforming healthcare by putting information to work.”  In it, you’ll learn:

  1. How to put your information to work
  2. New ways to govern your data
  3. What other healthcare organizations are doing
  4. How to overcome common barriers

So go ahead, download it now and let me know what you think. I look forward to hearing your questions and comments….oh, and your grade!


Nine Forms of Analytics Data That Matter the Most

Big Data takes a lot of forms and shapes, and flows in from all over the place – from the Internet, from devices, from machines, and even from cars. In all the data being generated are valuable nuggets of information.

The challenge is being able to find the right data needed, and being able to employ that data to solve a business challenge. What types of data are worthwhile for organizations to capture?

In his new book, Taming the Big Data Tidal Wave: Finding Opportunities in Huge Data Streams With Advanced Analytics, Bill Franks provides a wide array of examples of the types of data that can best meet the needs of business today. Franks, chief analytics officer with Teradata, points out that his list is not exhaustive, as there is an almost unlimited number of sources that will only keep growing as users discover new ways to apply the data.
