Tag Archives: Data Integration

Empowering Your Organization with 3 Views of Customer Data

According to Accenture's 2013 Global Consumer Pulse Survey, "85 percent of customers are frustrated by dealing with a company that does not make it easy to do business with them, 84 percent by companies promising one thing, but delivering another; and 58 percent are frustrated with inconsistent experiences from channel to channel."

Consumers expect more from the companies they do business with. In response, many companies are shifting from managing their business based on an application-, account- or product-centric approach to a customer-centric approach. And this is one of the main drivers for master data management (MDM) adoption. According to a VP of Data Strategy & Services at one of the largest insurance companies in the world, “Customer data is the lifeblood of a company that is serious about customer-centricity.” So, better managing customer data, which is what MDM enables you to do, is a key to the success of any customer-centricity initiative. MDM provides a significant competitive differentiation opportunity for any organization that’s serious about improving customer experience. It enables customer-facing teams to assess the value of any customer, at the individual, household or organization level.

A customer-centricity initiative has many business drivers; key benefits include an enhanced customer experience that leads to higher customer loyalty and a greater share of wallet, more effective cross-sell and upsell targeting to increase revenue, and improved regulatory compliance.

To truly achieve all the benefits expected from a customer-first, customer-centric strategy, we need to look beyond the traditional approaches of data quality and MDM implementations, which often consider only one foundational (yet important) aspect of the technology solution. The primary focus has always been to consolidate and reconcile internal sources of customer data, with the hope that this information, brought under the single umbrella of a database and a service layer, will provide the desired single view of the customer. But in reality, this data integration mindset misses the goal of creating quality customer data that is free from duplication and enriched to deliver significant value to the business.

Today's MDM implementations need to take their focus beyond mere data integration to be successful. In the following sections, I will explain three levels of customer views that can be built incrementally so you can get the most out of your MDM solution. When implemented fully, these customer views act as key ingredients for improving the execution of your customer-centric business functions.

Trusted Customer View

The first phase of the solution should cover the creation of a trusted customer view. This view empowers your organization with the ability to see complete, accurate and consistent customer information.

In this stage, you take the best information from all the applications and compile it into a single golden profile. You not only use data integration technology for this, but also employ data quality tools to ensure the correctness and completeness of the customer data. Advanced matching, merging and trust frameworks are used to derive the most up-to-date information about your customer. You also guarantee that the golden record you create is accessible to the business applications and systems of choice, so everyone who has the authority can leverage the single version of the truth.

At the end of this stage, you will be able to say clearly that John D., who lives at 123 Main St, and Johnny Doe at 123 Main Street, who are both doing business with you, are not really two different individuals.
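To illustrate the kind of logic involved, here is a minimal sketch of fuzzy matching and survivorship in Python. The records, normalization rules, and threshold are illustrative assumptions, not how any particular MDM engine works.

```python
# A minimal sketch of the match-and-merge step an MDM hub automates.
# Records, field names, and the scoring threshold are illustrative assumptions.
from difflib import SequenceMatcher

def normalize(value: str) -> str:
    """Lowercase and expand a few common address abbreviations."""
    value = value.lower().strip()
    for abbrev, full in {"st": "street", "rd": "road", "ave": "avenue"}.items():
        value = " ".join(full if token == abbrev else token for token in value.split())
    return value

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def is_same_customer(rec_a: dict, rec_b: dict, threshold: float = 0.8) -> bool:
    """Score name and address similarity and flag likely duplicates."""
    name_score = similarity(rec_a["name"], rec_b["name"])
    address_score = similarity(rec_a["address"], rec_b["address"])
    return (name_score + address_score) / 2 >= threshold

crm_record = {"name": "John D.", "address": "123 Main St"}
billing_record = {"name": "Johnny Doe", "address": "123 Main Street"}

if is_same_customer(crm_record, billing_record):
    # Survivorship: keep the most complete value from each source to form the golden record.
    golden = {
        "name": max(crm_record["name"], billing_record["name"], key=len),
        "address": max(crm_record["address"], billing_record["address"], key=len),
    }
    print("Golden record:", golden)
```

In a real implementation, the matching rules, trust scores, and survivorship logic would be far richer, but the principle is the same: normalize, score, and merge the best values into one golden record.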

Customer Relationships View

The next level of visibility is about providing a view into the customer’s relationships. It takes advantage of the single customer view and layers in all valuable family and business relationships as well as account and product information. Revealing these relationships is where the real value of multidomain MDM technology comes into action.

At the end of this phase, you not only see John Doe's golden profile, but also the products he holds. He might have a personal checking account with the Retail Bank, a mortgage from the Mortgage line of business, and brokerage and trust accounts with the Wealth Management division. You can see that John has his own consulting firm, and that he holds a corporate credit card and a checking account with the Commercial division under the name John Doe Consulting Company.

At the end of this phase, you will have a consolidated view of all important relationship information that will help you evaluate the true value of each customer to your organization.
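As an illustration of what this relationship layer looks like in data terms, here is a minimal sketch of a simple relationship graph built on top of the golden profile. The entities and relationship types are illustrative assumptions, not a specific vendor's data model.

```python
# A minimal sketch of layering accounts and business ties onto the golden profile.
from collections import defaultdict

relationships = defaultdict(list)

def relate(source: str, rel_type: str, target: str) -> None:
    relationships[source].append((rel_type, target))

# Relationships anchored on the golden profile from the Trusted Customer View stage
relate("John Doe", "holds", "Personal Checking (Retail Bank)")
relate("John Doe", "holds", "Mortgage (Mortgage LOB)")
relate("John Doe", "holds", "Brokerage Account (Wealth Management)")
relate("John Doe", "holds", "Trust Account (Wealth Management)")
relate("John Doe", "owns", "John Doe Consulting Company")
relate("John Doe Consulting Company", "holds", "Corporate Credit Card (Commercial)")
relate("John Doe Consulting Company", "holds", "Business Checking (Commercial)")

def customer_footprint(customer: str) -> list:
    """Walk direct and indirect relationships to show the customer's full footprint."""
    footprint = []
    for rel_type, target in relationships[customer]:
        footprint.append((customer, rel_type, target))
        footprint.extend(customer_footprint(target))  # follow organization-level relationships
    return footprint

for edge in customer_footprint("John Doe"):
    print(edge)
```

Walking the graph from the individual through the household or organization is what lets you value John Doe as a relationship rather than as a set of disconnected accounts.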

Customer Interactions and Transactions View

The third level of visibility is in the form of your customer’s interactions and transactions with your organization.

During this phase, you tie in transactional information, historical data and the social interactions your customer has with your organization to further enhance the system. Building this view opens up a whole new world of opportunities because you can see everything related to your customer in one central place. With this comprehensive view, when John Doe calls your call center, you know how valuable he is to your business, which product he just bought from you (transactional data) and what problem he is facing (social interactions).
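A minimal sketch of that call-center lookup might look like the following; the customer IDs, channels, and fields are illustrative assumptions.

```python
# A minimal sketch of assembling a 360-degree summary at call time by joining
# transactions and interactions on the master customer ID.
from datetime import date

golden_profiles = {"CUST-001": {"name": "John Doe", "segment": "High Value"}}

transactions = [
    {"customer_id": "CUST-001", "date": date(2014, 10, 28), "product": "Premium Credit Card"},
]

interactions = [
    {"customer_id": "CUST-001", "date": date(2014, 11, 2), "channel": "Twitter",
     "note": "Complained about card activation delay"},
]

def customer_360(customer_id: str) -> dict:
    """Return the golden profile plus the latest transaction and latest interaction."""
    profile = dict(golden_profiles.get(customer_id, {}))
    cust_txns = [t for t in transactions if t["customer_id"] == customer_id]
    cust_ints = [i for i in interactions if i["customer_id"] == customer_id]
    profile["latest_purchase"] = max(cust_txns, key=lambda t: t["date"], default=None)
    profile["latest_interaction"] = max(cust_ints, key=lambda i: i["date"], default=None)
    return profile

# When John Doe calls, the agent sees value, latest purchase, and open issue at once.
print(customer_360("CUST-001"))
```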

A widely accepted rule of thumb holds that 80 percent of your company's future revenue will come from 20 percent of your existing customers. Many organizations are trying to ensure they are doing everything they can to retain existing customers and grow wallet share. Starting with the Trusted Customer View is the first step toward making your existing customers stay. Once you have established all three views discussed here, you can arm your customer-facing teams with a comprehensive view of customers so they can:

  • Deliver the best customer experiences possible at every touch point,
  • Improve customer segmentation for tailored offers, boost marketing and sales productivity,
  • Increase cross-sell and up-sell success, and
  • Streamline regulatory reporting.

Achieving the 3 views discussed here requires a solid data management platform. You not only need industry-leading multidomain MDM technology, but also tools that will help you integrate data, control quality and connect all the dots. These technologies should work together seamlessly to make your implementation easier and help you gain rapid benefits. Therefore, choose your data management platform carefully. To learn more about MDM vendors, read the recently released Gartner Magic Quadrant for MDM of Customer Data Solutions.

-Prash (@MDMGeek)

www.mdmgeek.com

Posted in Data Governance, Data Integration, Data Quality, Master Data Management

Considering Data Integration? Also Consider Data Security Best Practices

It seems you can't go a week without hearing about a major data breach, many of which make front-page news.  The most recent report came from the State of California, which documented a large number of data breaches in that state alone.  "The number of personal records compromised by data breaches in California surged to 18.5 million in 2013, up more than six times from the year before, according to a report published [late October 2014] by the state's Attorney General."

California reported a total of 167 data breaches in 2013, up 28 percent from 2012.  Two major data breaches caused most of this uptick: the Target attack that was reported in December 2013, and the LivingSocial attack that occurred in April 2013.  This year, you can add the Home Depot data breach to that list, as well as the recent breach at the US Postal Service.

So, what the heck is going on?  And how does this news impact data integration?  Should we be concerned, as we place more and more data on public clouds, or within big data systems?

Almost all of these breaches were made possible by traditional systems whose security technology and security operations fell far enough behind that outside attackers found a way in.  You can count on many more of these attacks, as enterprises and governments don't look at security for what it is: an ongoing activity that may require massive and systemic changes to make sure the data is properly protected.

As enterprises and government agencies stand up cloud-based systems, and new big data systems, either inside (private) or outside (public) of the enterprise, there are some emerging best practices around security that those who deploy data integration should understand.  Here are a few that should be on the top of your list:

First, start with Identity and Access Management (IAM) and work your way backward.  These days, most cloud and non-cloud systems are complex distributed systems.  That means IAM is clearly the best security model and best practice to follow with the emerging use of cloud computing.

The concept is simple: provide a security approach and technology that enables the right individuals to access the right resources, at the right times, for the right reasons.  The concept follows the principle that everything and everyone gets an identity: humans, servers, APIs, applications, data, and so on.  Once those identities are established, it's just a matter of defining which identities can access other identities, and creating policies that define the limits of those relationships.
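As a concrete illustration of that identity-plus-policy idea, here is a minimal sketch in Python; the identity names, resources, and deny-by-default rule are illustrative assumptions, not the API of any particular IAM product.

```python
# A minimal sketch of "everything gets an identity, policies define who can reach what."
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    name: str
    kind: str  # "human", "service", "api", "dataset", ...

# Policies: which identity may perform which action on which other identity/resource.
policies = {
    (Identity("etl-service", "service"), "read", Identity("customer-db", "dataset")),
    (Identity("jane.analyst", "human"), "read", Identity("sales-mart", "dataset")),
}

def is_allowed(subject: Identity, action: str, resource: Identity) -> bool:
    """Deny by default; allow only when an explicit policy exists."""
    return (subject, action, resource) in policies

print(is_allowed(Identity("etl-service", "service"), "read", Identity("customer-db", "dataset")))   # True
print(is_allowed(Identity("etl-service", "service"), "write", Identity("customer-db", "dataset")))  # False
```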

Second, work with your data integration provider to identify solutions that work best with their technology.  Most data integration solutions address security in one way, shape, or form.  Understanding those solutions is important to secure data at rest and in flight.

Finally, splurge on monitoring and governance.  Many of the issues behind this growing number of breaches stem from system managers' inability to spot and stop attacks.  Creative approaches to monitoring system and network utilization, as well as data access, will allow those in IT to spot most attacks and correct the issues before they 'go nuclear.'  Typically, an increasing number of breach attempts lead up to the complete breach.
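To make that concrete, here is a minimal sketch of the kind of check a monitoring pipeline might run against access logs; the log format and threshold are illustrative assumptions.

```python
# A minimal sketch of flagging the run of failed access attempts that often
# precedes a full breach. Threshold and log layout are illustrative assumptions.
from collections import Counter

access_log = [
    {"user": "svc-batch", "status": "ok"},
    {"user": "unknown-ext", "status": "failed"},
    {"user": "unknown-ext", "status": "failed"},
    {"user": "unknown-ext", "status": "failed"},
    {"user": "unknown-ext", "status": "failed"},
]

FAILED_ATTEMPT_THRESHOLD = 3

def suspicious_sources(log: list) -> list:
    """Return users whose failed attempts exceed the alert threshold."""
    failures = Counter(entry["user"] for entry in log if entry["status"] == "failed")
    return [user for user, count in failures.items() if count >= FAILED_ATTEMPT_THRESHOLD]

print(suspicious_sources(access_log))  # ['unknown-ext'] -> raise an alert and investigate before escalation
```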

The issue and burden of security won't go away.  Systems will continue to move to public and private clouds, and data will continue to migrate to distributed big data types of environments.  And that means the need for data integration and data security will continue to explode.

Posted in Data Integration, Data Privacy, Data Security

Big Data Driving Data Integration at the NIH

The National Institutes of Health announced new grants to develop big data technologies and strategies.

“The NIH multi-institute awards constitute an initial investment of nearly $32 million in fiscal year 2014 by NIH’s Big Data to Knowledge (BD2K) initiative and will support development of new software, tools and training to improve access to these data and the ability to make new discoveries using them, NIH said in its announcement of the funding.”

The grants will address issues around Big Data adoption, including:

  • Locating data and the appropriate software tools to access and analyze the information.
  • Lack of data standards, or low adoption of standards across the research community.
  • Insufficient policies to facilitate data sharing while protecting privacy.
  • Unwillingness to collaborate that limits the data’s usefulness in the research community.

Among the tasks funded is the creation of a “Perturbation Data Coordination and Integration Center.”  The center will provide support for data science research that focuses on interpreting and integrating data from different data types and databases.  In other words, it will make sure the data moves to where it should move, in order to provide access to information that’s needed by the research scientist.  Fundamentally, it’s data integration practices and technologies.

This is very interesting because the move into big data systems often drives a reevaluation of, or even new interest in, data integration.  As the data becomes strategically important, the need to provide core integration services becomes even more important.

The project at the NIH will be interesting to watch, as it progresses.  These are the guys who come up with the new paths to us being healthier and living longer.  The use of Big Data provides the researchers with the advantage of having a better understanding of patterns of data, including:

  • Patterns of symptoms that lead to the diagnosis of specific diseases and ailments.  Doctors may see these data points one at a time.  When the structured and unstructured data is brought together, researchers can find correlations, and thus provide better guidelines to the physicians who see the patients.
  • Patterns of cures that are emerging around specific treatments.  The ability to determine what treatments are most effective, by looking at the data holistically.
  • Patterns of failure.  When the outcomes are less than desirable, what seems to be a common issue that can be identified and resolved?

Of course, the uses of big data technology are limitless, when considering the value of knowledge that can be derived from petabytes of data.  However, it’s one thing to have the data, and another to have access to it.

Data integration should always be systemic to all big data strategies, and the NIH clearly understands this to be the case.  Thus, they have funded data integration along with the expansion of their big data usage.

Most enterprises will follow much the same path in the next 2 to 5 years.  Information provides a strategic advantage to businesses.  In the case of the NIH, it’s information that can save lives.  Can’t get much more important than that.

Posted in Big Data, Cloud, Cloud Data Integration, Data Integration

Getting Value Out of Data Integration

This post is by Philip Howard, Research Director, Bloor Research.

Live Bloor Webinar, Nov 5

One of the standard metrics used to support buying decisions for enterprise software is total cost of ownership. Typically, the other major metric is functionality. However, functionality is ephemeral. Not only does it evolve with every new release, but while particular features may be relevant to today's project, there is no guarantee that those same features will be applicable to tomorrow's needs. A broader metric than functionality is capability: how suitable is this product for a range of different project scenarios, and will it support both simple and complex environments?

Earlier this year Bloor Research published some research into the data integration market, which investigated exactly these issues: how often were tools reused, how many targets and sources were involved, and for what sorts of projects were products deemed suitable? We then compared these results with the total cost of ownership figures we also captured in our survey. I will be discussing the results of our research live with Kristin Kokie, who is the interim CIO of Informatica, on Guy Fawkes' Day (November 5th). I don't promise anything explosive, but it should be interesting and I hope you can join us. The discussion will be vendor neutral (mostly: I expect that Kristin has a degree of bias).

To register for the webinar, click here.

Posted in Data Integration, Data Integration Platform, Data Migration

Not Just For Play, Western Union Puts Hadoop to Work

Everyone’s talking about Hadoop for empowering analysts to quickly experiment, discover, and predict new insights.  But Hadoop isn’t just for play.  Leading enterprises like Western Union are putting Hadoop to work on their most mission-critical data pipelines.  Last week at Strata + Hadoop World, we had a chance to hear how Western Union uses Cloudera’s Hadoop-based enterprise data hub and Informatica to deliver faster, simpler, and cleaner data pipelines.

At Western Union, a multi-billion dollar global financial services and communications company, data is recognized as a core asset.  Like many other financial services firms, Western Union thrives on data, both for harvesting new business opportunities and for managing its internal operations.  And like many other enterprises, Western Union isn't just ingesting data from relational data sources.  They are mining a number of new information-rich sources like clickstream data and log data.  With Western Union's scale and speed demands, the data pipeline just has to work so they can optimize customer experience across multiple channels (retail, online, mobile, and so on) to grow the business.

Let's level set on how important scale and speed are to Western Union.  Western Union processes more than 29 financial transactions every second.  Analytical performance simply can't be the bottleneck for extracting insights from this blazing velocity of data.  So to maximize the performance of their data warehouse appliance, Western Union offloaded data quality and data integration workloads onto a Cloudera Hadoop cluster.  Using the Informatica Big Data Edition, Western Union capitalized on the performance and scalability of Hadoop while unleashing the productivity of their Informatica developers.

Informatica Big Data Edition enables data driven organizations to profile, parse, transform, and cleanse data on Hadoop with a simple visual development environment, prebuilt transformations, and reusable business rules.  So instead of hand coding one-off scripts, developers can easily create mappings without worrying about the underlying execution platform.  Raw data can be easily loaded into Hadoop using Informatica Data Replication and Informatica’s suite of PowerExchange connectors.  After the data is prepared, it can be loaded into a data warehouse appliance for supporting high performance analysis.  It’s a win-win solution for both data managers and data consumers.  Using Hadoop and Informatica, the right workloads are processed by the right platforms so that the right people get the right data at the right time.
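The specifics of those mappings are proprietary, but the shape of the workload is easy to picture. Below is a minimal PySpark sketch of the kind of cleansing and standardization job that can be pushed down to a Hadoop cluster before a warehouse load. The paths, column names, and rules are illustrative assumptions, and this is generic Spark code rather than Informatica Big Data Edition functionality.

```python
# A minimal sketch of a cleansing workload offloaded to a Hadoop cluster.
# Paths, columns, and rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-cleanse").getOrCreate()

raw = spark.read.json("hdfs:///raw/clickstream/2014-10-28/")  # hypothetical landing path

cleansed = (
    raw
    .dropDuplicates(["session_id", "event_time"])              # remove replayed events
    .filter(F.col("customer_id").isNotNull())                  # drop records we cannot key
    .withColumn("channel", F.lower(F.trim(F.col("channel"))))  # standardize channel values
)

# Write the prepared data back to HDFS; a bulk loader can then move it into the warehouse.
cleansed.write.mode("overwrite").parquet("hdfs:///prepared/clickstream/2014-10-28/")
```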

Using Informatica’s Big Data solutions, Western Union is transforming the economics of data delivery, enabling data consumers to create safer and more personalized experiences for Western Union’s customers.  Learn how the Informatica Big Data Edition can help put Hadoop to work for you.  And download a free trial to get started today!

Posted in Data Migration, Hadoop

More Evidence That Data Integration Is Clearly Strategic

A recent study from Epicor Software Corporation surveyed more than 300 IT and business decision-makers.  The study results highlighted the biggest challenges and opportunities facing Australian businesses. The independent research report “From Business Processes to Product Distribution” was based upon a survey of Australian organizations with more than 20 employees.

Key findings from the report include:

  • 65% of organizations cite data processing and integration as hampering distribution capability, with nearly half claiming their existing software and ERP is not suitable for distribution.
  • Nearly two-thirds of enterprises have some form of distribution process, involving products or services.
  • More than 80% of organizations have at least some problem with product or service distribution.
  • More than 50% of CIOs in organizations with distribution processes believe better distribution would increase revenue and optimize business processes, with a further 38% citing reduced operating costs.

The core finding: "With better data integration comes better automation and decision making."

This report is one of many I've seen over the years that come to the same conclusion: most of those involved with the operations of the business don't have access to the key data points they need, so they can't automate tactical decisions and they can't "mine" the data to understand the true state of the business.

The more businesses deal with building and moving products, the more data integration becomes imperative.  As stated in this survey, as well as others, the large majority cite "data processing and integration as hampering distribution capabilities."

Of course, these issues go well beyond Australia.  Most enterprises I've dealt with have some gap between the need to share key business data to support business processes and decision support, and what currently exists in terms of data integration capabilities.

The focus here is on the multiple values that data integration can bring.  This includes:

  • The ability to track everything as it moves from manufacturing, to inventory, to distribution, and beyond.  You can bind this tracking to core business processes, such as automatically reordering parts to make more products and fill inventory (see the sketch after this list).
  • The ability to see into the past and into the future.  Emerging approaches to predictive analytics allow businesses to finally see into the future, and to see what went truly right and truly wrong in the past.
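As a small illustration of the automatic-reordering idea mentioned in the first point, here is a minimal sketch; the part numbers and reorder points are illustrative assumptions.

```python
# A minimal sketch of an automated reorder decision driven by integrated inventory data.
inventory = {"PART-1138": 42, "PART-2187": 7}       # hypothetical on-hand quantities
reorder_points = {"PART-1138": 25, "PART-2187": 20}  # hypothetical reorder thresholds

def parts_to_reorder(stock: dict, thresholds: dict) -> list:
    """Return parts whose on-hand quantity has fallen below the reorder point."""
    return [part for part, qty in stock.items() if qty < thresholds.get(part, 0)]

print(parts_to_reorder(inventory, reorder_points))  # ['PART-2187'] -> trigger a purchase order
```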

While data integration technology has been around for decades, most businesses that both manufacture and distribute products have not taken full advantage of this technology.  The reasons range from perceptions around affordability, to the skills required to maintain the data integration flow.  However, the truth is that you really can’t afford to ignore data integration technology any longer.  It’s time to create and deploy a data integration strategy, using the right technology.

This survey is just an instance of a pattern.  Data integration was considered optional in the past.  With today’s emerging notions around the strategic use of data, clearly, it’s no longer an option.

Posted in Data First, Data Integration, Data Integration Platform, Data Quality

Informatica’s Hadoop Connectivity Reaches for the Clouds

The Informatica Cloud team has been busy updating connectivity to Hadoop using the Cloud Connector SDK.  Updated connectors are available now for Cloudera and Hortonworks, and new connectivity has been added for MapR, Pivotal HD and Amazon EMR (Elastic MapReduce).

Informatica Cloud's Hadoop connectivity brings a new level of ease of use to Hadoop data loading and integration.  Informatica Cloud provides a quick way to load data from popular on-premises data sources and apps such as SAP and Oracle E-Business, as well as SaaS apps such as Salesforce.com, NetSuite, and Workday, into Hadoop clusters for pilots and POCs.  Less technical users are empowered to contribute to enterprise data lakes through the easy-to-use Informatica Cloud web user interface.

Informatica Cloud’s rich connectivity to a multitude of SaaS apps can now be leveraged with Hadoop.  Data from SaaS apps for CRM, ERP and other lines of business are becoming increasingly important to enterprises. Bringing this data into Hadoop for analytics is now easier than ever.

Users of Amazon Web Services (AWS) can leverage Informatica Cloud to load data from SaaS apps and on-premises sources directly into EMR.  Combined with connectivity to Amazon Redshift, Informatica Cloud can be used to move data into EMR for processing and then on to Redshift for analytics.

Self service data loading and basic integration can be done by less technical users through Informatica Cloud’s drag and drop web-based user interface.  This enables more of the team to contribute to and collaborate on data lakes without having to learn Hadoop.

Bringing the cloud and Big Data together to put the potential of data to work – that’s the power of Informatica in action.

Free trials of the Informatica Cloud Connector for Hadoop are available here: http://www.informaticacloud.com/connectivity/hadoop-connector.html

Posted in Big Data, Data Services, Hadoop, SaaS

BCBS 239 – What Are Banks Talking About?

Earlier this month, I participated in EDM Council panels on BCBS 239 in London and New York. The panels consisted of Chief Risk Officers, Chief Data Officers, and information management experts from the financial industry. BCBS 239 sets out 14 key principles requiring banks to aggregate their risk data, so that banks and their regulators can help avoid another 2008-style crisis, with a deadline of January 1, 2016.  Earlier this year, the Basel Committee on Banking Supervision released the findings of a self-assessment in which the Global Systemically Important Banks (G-SIBs) rated their readiness against 11 of the 14 BCBS 239 principles.

Given all of the investments the banking industry has made in data management and governance practices to improve ongoing risk measurement and management, I was expecting to hear signs of significant progress. Unfortunately, as the discussions made clear, there is still much work to be done to satisfy BCBS 239. Here is what we discussed in London and New York.

  • It was clear that the "Data Agenda" has shifted considerably from IT to the business, as evidenced by the number of risk, compliance, and data governance executives in the room.  Though it's a good sign that the business is taking more ownership of data requirements, there was limited discussion of the importance of capable data management technology, infrastructure, and architecture to support a successful data governance practice: specifically, data integration, data quality and validation, master and reference data management, metadata to support data lineage and transparency, and business glossary and data ontology solutions to govern the terms and definitions of required data across the enterprise.
  • Accessing, aggregating, and streamlining the delivery of risk data from disparate systems across the enterprise remains difficult because of the complexity that exists today: point-to-point integrations access the same data from the same systems over and over again, creating points of failure and increasing the maintenance costs of supporting the current state.  The idea of replacing those point-to-point integrations with a centralized, scalable, and flexible data hub was clearly recognized as a need; however, it is difficult to envision given the enormous work required to modernize the current state.
  • Data accuracy and integrity continue to be a concern when it comes to generating accurate and reliable risk data that meets normal and stress/crisis reporting requirements. Many in the room acknowledged heavy reliance on manual methods implemented over the years. Automating data integration and the onboarding of risk data from disparate systems across the enterprise is important as part of Principle 3; however, much of what's in place today was built as one-off projects against the same systems, accessing the same data and delivering it to hundreds if not thousands of downstream applications in an inconsistent and costly way.
  • Data transparency and auditability were popular conversation points. The need to provide comprehensive data lineage reports that explain how data is captured, from where, how it's transformed, and how it's used remains a concern, despite advancements in technical metadata solutions, because those solutions are often not integrated with the existing risk management data infrastructure.
  • Lastly, there were big concerns about the ability to capture and aggregate all material risk data across the banking group and deliver it by business line, legal entity, asset type, industry, region and other groupings, to support identifying and reporting risk exposures, concentrations and emerging risks.  This master and reference data challenge unfortunately cannot be solved by external data utility providers alone, because banks have legal entity, client, counterparty, and securities instrument data residing in existing systems, and they need to cross-reference any external identifier against those systems for consistent reporting and risk measurement (see the sketch after this list).
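To make that cross-referencing point concrete, here is a minimal sketch of resolving external and internal identifiers to a master entity before aggregating exposures; the identifiers and figures are illustrative assumptions.

```python
# A minimal sketch: external identifiers (e.g. an LEI) and internal IDs must resolve
# to one master entity ID before exposures can be aggregated consistently.
external_xref = {
    ("LEI", "5493001KJTIIGC8Y1R12"): "ENTITY-0042",   # hypothetical LEI -> master entity ID
    ("INTERNAL_CRM", "CRM-99871"): "ENTITY-0042",      # hypothetical internal ID -> same entity
}

exposures = [
    {"id_scheme": "LEI", "id_value": "5493001KJTIIGC8Y1R12", "exposure_musd": 120.0},
    {"id_scheme": "INTERNAL_CRM", "id_value": "CRM-99871", "exposure_musd": 35.5},
]

def aggregate_by_master_entity(rows: list) -> dict:
    """Resolve each record to its master entity and sum exposures across sources."""
    totals: dict = {}
    for row in rows:
        master_id = external_xref.get((row["id_scheme"], row["id_value"]))
        if master_id is None:
            raise ValueError(f"Unmapped identifier: {row['id_scheme']}={row['id_value']}")
        totals[master_id] = totals.get(master_id, 0.0) + row["exposure_musd"]
    return totals

print(aggregate_by_master_entity(exposures))  # {'ENTITY-0042': 155.5}
```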

To sum it up, most banks admit they have a lot of work to do. Specifically, they must address gaps across their data governance and technology infrastructure. BCBS 239 is the latest and biggest data challenge facing the banking industry, and not just for the G-SIBs: mid-size firms will also be required to provide similar transparency to regional regulators who are adopting BCBS 239 as a framework for their local markets.  BCBS 239 is not just a deadline; the principles set forth are a key requirement for banks to ensure they have the right data to manage risk and to provide the transparency regulators need to monitor systemic risk across the global markets. How ready are you?

Posted in Banking & Capital Markets, Data Aggregation, Data Governance, Data Services