Tag Archives: Data Governance

Garbage In, Garbage Out? Don’t Take Data for Granted in Analytics Initiatives!

The verdict is in. Data is now broadly perceived as a source of competitive advantage, and we all feel the heat to deliver good data. It is no wonder organizations view Analytics initiatives as highly strategic. But the big question is: can you really trust your data? Or are you just creating pretty visualizations on top of bad data?

We also know there is a shift towards self-service Analytics. But did you know that according to Gartner, “through 2016, less than 10% of self-service BI initiatives will be governed sufficiently to prevent inconsistencies that adversely affect the business”?1 This means that you may actually show up at your next big meeting with data that contradicts your colleague’s. Perhaps you are not working off the same version of the truth. Maybe your data is siloed across different systems that are not working in concert. Or is your definition of ‘revenue’ or ‘leads’ different from your colleague’s?

So are we taking our data for granted? Are we just assuming that it’s all available, clean, complete, integrated and consistent?  As we work with organizations to support their Analytics journey, we often find that the harsh realities of data are quite different from perceptions. Let’s further investigate this perception gap.

For one, people may assume they can easily access all data. In reality, if data connectivity is not managed effectively, we often need to beg, borrow, and steal to get the right data from the right person. If we are lucky. In less fortunate scenarios, we may need to settle for partial data or a cheap substitute for the data we really wanted. And you know what they say: the only thing worse than no data is bad data. Right?

Another common misperception is: “Our data is clean. We have no data quality issues.” Wrong again. When we work with organizations to profile their data, they are often quite surprised to learn that their data is full of errors and gaps. One company recently discovered, within one minute of starting its data profiling exercise, that millions of its customer records contained the company’s own address instead of the customers’ addresses… Oops.
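
To make the profiling point concrete, here is a minimal sketch of the kind of check that surfaces this sort of problem: a simple value-frequency profile of an address column. The file name, column name, and threshold are hypothetical placeholders, not a description of any particular profiling tool.

```python
import pandas as pd

# Hypothetical customer extract; the file and column names are placeholders.
customers = pd.read_csv("customer_extract.csv", dtype=str)

# Frequency profile of the mailing address column: a healthy customer file
# should not have one address value dominating millions of records.
address_counts = (
    customers["mailing_address"].str.strip().str.upper().value_counts()
)
print(address_counts.head(10))

# Flag any single address value that accounts for more than 1% of all records.
suspicious = address_counts[address_counts > 0.01 * len(customers)]
print(f"{len(suspicious)} address value(s) look suspiciously over-represented")
```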

Another myth is that all data is integrated. In reality, your data may reside in multiple locations: in the cloud, on premises, in Hadoop, on mainframes, and anywhere in between. Integrating data from all these disparate and heterogeneous data sources is not a trivial task, unless you have the right tools.

And here is one more consideration to mull over. Do you find yourself manually hunting down and combining data to reproduce the same ad hoc report over and over again? Perhaps you often find yourself doing this in the wee hours of the night? Why reinvent the wheel? It would be more productive to automate the process of data ingestion and integration for reusable and shareable reports and Analytics.

Simply put, you need great data for great Analytics. We are excited to host Philip Russom of TDWI in a webinar to discuss how data management best practices can enable successful Analytics initiatives. 

And how about you?  Can you trust your data?  Please join us for this webinar to learn more about building a trust-relationship with your data!

  1. Gartner Report, ‘Predicts 2015: Power Shift in Business Intelligence and Analytics Will Fuel Disruption’; authors: Josh Parenteau, Neil Chandler, Rita L. Sallam, Douglas Laney, Alan D. Duncan; November 21, 2014

Is Your Data Ready to Maximize Value from Your CRM Investments?


A friend of mine recently reached out to me for advice on CRM solutions in the market. Though I have never worked for a CRM vendor, my experience ranges from working directly for companies that implemented such solutions to my current role, in which I talk with large and small organizations across industries about the data requirements behind their ongoing application investments. As we spoke, memories started to surface from when he and I had worked on implementing Salesforce.com (SFDC) many years ago. Memories we both wanted to forget, but ones worth calling out given his new situation.

We worked together at a large mortgage lending software vendor selling loan origination solutions to brokers and small lenders, mainly through email and snail mail marketing. He was responsible for Marketing Operations, and I ran Product Marketing. The company looked to Salesforce.com to help streamline our sales operations and improve how we marketed to and serviced our customers. The existing CRM system dated from the early 90s, and though it did what the company needed it to do, it was heavily customized, costly to operate, and had served out its life. It was time to upgrade to help grow the business, improve business productivity, and enhance customer relationships.

Ninety days after rolling out SFDC, we ran into some old familiar problems across the business. Sales reps continued to struggle to know who was a current customer using our software, marketing managers could not create quality mailing lists for prospecting, and call center reps could not tell whether the person on the other end was a customer or a prospect. Everyone wondered why this was happening given that we had adopted the best CRM solution in the market. You can imagine the heartburn and ulcers we all had after making such a huge investment in our new CRM solution. C-level executives were questioning our decisions and blaming the applications. The truth was, the issues were not caused by SFDC but by the data we had migrated into the system, the lack of proper governance, and the absence of a capable information architecture to support the required data management and integration between systems. That is what caused these significant headaches.

During the implementation phase, IT imported our entire customer database of 200K+ unique customer entities from the old system into SFDC. Unfortunately, the mortgage industry was very transient: on average there were only roughly 55K licensed mortgage brokers and lenders in the market, and because no one ever validated who was really a customer versus someone who had ever bought our product, we had serious data quality issues, including the following (a rough sketch of checks that would have caught these issues appears after the list):

  • Trial users whose evaluation copies of our products had expired were tagged as current customers
  • Duplicate records caused by manual data entry errors: companies with similar names entered slightly differently, but with the same business address, were tagged as unique customers
  • Subsidiaries of parent companies in different parts of the country were each tagged as unique customers
  • Lastly, the marketing contact database of prospects we imported was incorrectly counted as customers in the new system
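
For illustration only, here is a rough sketch of the kind of pre-migration checks that would have caught several of these issues, assuming a flat extract with hypothetical column names (account_name, address, license_type, license_expiry, status); it is not the process we actually ran.

```python
import pandas as pd

accounts = pd.read_csv("crm_extract.csv", dtype=str)
accounts["license_expiry"] = pd.to_datetime(accounts["license_expiry"], errors="coerce")

# 1. Expired evaluation licenses should not be tagged as current customers.
expired_trials = accounts[
    (accounts["license_type"].str.lower() == "evaluation")
    & (accounts["license_expiry"] < pd.Timestamp.today())
    & (accounts["status"] == "Customer")
]

# 2. Likely duplicates: the same company name and business address entered
#    slightly differently (case, punctuation, extra whitespace).
def normalize(col):
    return col.str.upper().str.replace(r"[^A-Z0-9 ]", "", regex=True).str.strip()

accounts["name_key"] = normalize(accounts["account_name"])
accounts["addr_key"] = normalize(accounts["address"])
dupes = accounts[accounts.duplicated(subset=["name_key", "addr_key"], keep=False)]

print(f"{len(expired_trials)} expired evaluations still flagged as customers")
print(f"{len(dupes)} records share a normalized name and address with another record")
```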

We also failed to integrate real-time purchasing data and information from our procurement systems so that sales and support could handle customer requests. Instead of integrating that data in real time with proper technology, IT manually loaded these records at the end of each week via FTP, resulting in incorrect billing information, delayed statement processing, and a ton of complaints from customers through our call center. The price we paid for not paying attention to our data quality and integration requirements before we rolled out Salesforce.com was significant for a company of our size. For example:

  • Marketing got hit pretty hard. Each quarter we mailed evaluation copies of new products to our customer database of 200K, at a cost of $12 each to produce and mail. Total cost = $2.4M annually. Because we had such bad data, 60% of our mailings were returned due to invalid addresses or wrong contact information. The cost of bad data to marketing = $1.44M annually.
  • Next, Sales struggled miserably when trying to upgrade customers by running cold call campaigns using the names in the database. As a result, sales productivity dropped by 40%, and we experienced over 35% sales turnover that year. Within a year of using SFDC, our head of sales was let go. Not good!
  • Customer support used SFDC to service customers, and our average call time was 40 minutes per service ticket. We believed that was “business as usual” until we surveyed how reps were spending their time each day; over 50% said it went to dealing with billing issues caused by bad contact information in the CRM system.

At the end of our conversation, this was my advice to my friend:

  • Conduct a data quality audit of the systems that will interact with the CRM system. Audit how complete your critical master and reference data is, including names, addresses, customer IDs, etc. (a simple completeness check is sketched after this list).
  • Do this before you invest in a new CRM system. You may find that many of the challenges with your existing applications are caused by data gaps rather than by the legacy application itself.
  • If the company has a data governance program, involve that team in the CRM initiative to ensure they understand your requirements and can see how they can help.
  • However, if you do decide to modernize, collaborate with and involve your IT teams, especially your Application Development teams and Enterprise Architects, to ensure all of the best options are considered for handling your data sharing and migration needs.
  • Lastly, consult with your technology partners, including your new CRM vendor; they may be working with solution providers to help address these data issues, as you are probably not the only one in this situation.
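
To illustrate the first point, a completeness audit can be as simple as measuring how well-populated each critical master data field is in every system that will feed the CRM. The field list, file names, and system names below are hypothetical.

```python
import pandas as pd

CRITICAL_FIELDS = ["customer_id", "company_name", "contact_name", "address", "email", "phone"]

def completeness_report(df, system_name):
    """Percent of non-blank values per critical field for one source system."""
    present = df[CRITICAL_FIELDS].replace("", pd.NA).notna().mean() * 100
    return present.round(1).rename(system_name)

billing = pd.read_csv("billing_extract.csv", dtype=str)
legacy_crm = pd.read_csv("legacy_crm_extract.csv", dtype=str)

report = pd.concat(
    [completeness_report(billing, "billing"), completeness_report(legacy_crm, "legacy_crm")],
    axis=1,
)
print(report)  # one row per critical field, one column per source system
```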

Looking Ahead!

CRM systems have come a long way in today’s Big Data and Cloud era. Many firms are adopting more flexible solutions offered through the cloud, like Salesforce.com, Microsoft Dynamics, and others. Regardless of how old or new, on premise or in the cloud, companies invest in CRM not just to serve their sales teams or increase marketing conversion rates, but to improve their business relationships with their customers. Period! It’s about ensuring the data in these systems is trustworthy, complete, up to date, and actionable, so you can improve customer service and drive sales of new products and services to increase wallet share. So how do you maximize your business potential from these critical business applications?

Whether you are adopting your first CRM solution or upgrading an existing one, keep in mind that Customer Relationship Management is a business strategy, not just a software purchase. It’s also about having a sound and capable data management and governance strategy supported by people, processes, and technology to ensure you can:

  • Access and migrate data from the old system to the new one, avoiding development cost overruns and project delays.
  • Identify, detect, and distribute transactional and reference data from existing systems into your front-line business applications in real time!
  • Manage data quality errors, including duplicate records and invalid names and contact information, through proper data governance and proactive data quality monitoring and measurement during and after deployment
  • Govern and share authoritative master records of customer, contact, product, and other master data between systems in a trusted manner.

Will your data be ready for your new CRM investments?  To learn more:

Follow me on Twitter @DataisGR8


There are Three Kinds of Lies: Lies, Damned lies, and Data


The phrase Benjamin Disraeli used in the 19th century was: There are three kinds of lies: lies, damned lies, and statistics.

Not so long ago, Google created a website to figure out just how many people had influenza. It did this by tracking “flu-related search queries” and the “location of the query,” and applying an estimation algorithm to the results. According to the website, at the flu season’s peak in January, nearly 11 percent of the United States population may have influenza. This means that nearly 44 million of us will have had the flu or flu-like symptoms. In its weekly report, the Centers for Disease Control and Prevention put the figure at 5.6%, which means that fewer than 23 million of us actually went to the doctor’s office to be tested for flu or to get a flu shot.

Now, imagine if I were a drug manufacturer relying on that estimate. There is a theory about what went wrong: the problem may be due to widespread media coverage of this year’s flu season, amplified by social media, which helped news of the flu spread quicker than the virus itself. In other words, the algorithm looked only at the numbers, not at the context of the search results.

In today’s digitally connected world, data is everywhere: in our phones, search queries, friendships, dating profiles, cars, food, and reading habits. Almost everything we touch is part of a larger data set. The people and companies that interpret the data may fail to apply background and outside conditions to the numbers they capture.

So, as we build our big data repositories, we have to spend some time explaining how we collected the data and in what context.

Twitter @bigdatabeat


Good Corporate Governance Is Built Upon Good Information and Data Governance


As you may know, COSO provides the overarching enterprise framework for corporate governance. This includes operations, reporting, and compliance. A key objective of COSO is holding individuals accountable for their internal control responsibilities. The COSO process typically starts by assessing risks and developing sets of control activities to mitigate the risks discovered.

Organizations then need, on an ongoing basis, to generate relevant, quality information to evaluate the functioning of established internal controls. And finally, they need to select, develop, and perform ongoing evaluations to ascertain whether those internal controls are present and functioning appropriately. Having said all of this, the COSO framework will not be effective without first establishing effective Information and Data Governance.

So, as a corporate officer, you might be asking yourself why you should care about this topic anyway. Isn’t this the job of the CIO or that new person, the CDO? The answer is no. Today’s enterprises are built upon data and analytics. The conundrum here is that “you can’t be analytical without data and you can’t be really good at analytics without really good data” (Analytics at Work, Thomas Davenport, Harvard Business Review Press, page 23). What enterprises tell us they need is great data—data which is clean, safe, and increasingly connected. And yes, the CIO is going to make this happen for you, but they are not going to do it properly without the help of data stewards that you select from your business units. These stewards need to help the CIO or CDO determine what data matters to the enterprise, what data should be secured, and, finally, what data, information, and knowledge will drive the business’s right to win on an ongoing basis.

So now that you know why your involvement matters, I need to share that this control activity is managed by a supporting standard to COSO, COBIT 5. To learn specifically about what COBIT 5 recommends for Information and Data Governance, please click and read an article from the latest COBIT Focus entitled “Using COBIT 5 to Deliver Information and Data Governance”.

Twitter: @MylesSuer


Get Ready for the Age of Engagement


It’s no secret that Informatica is and always has been singularly focused on helping organizations, large and small, understand and adopt a “Data First” perspective. In 2015, we’ll continue to deliver more solutions, to empower our customers to derive more value from the massive explosion of data they are coping with.

Data is the lifeblood of organizations and Informatica is the vendor to deliver clean, safe, and connected data to build stronger businesses, create more value and deliver meaningful engagement with partners and customers.

Informatica is delivering great data – that is ready for everything – to our customers. Let me explain how.

Today we are entering the Age of Engagement.

What do I mean by this? Enterprises today are confronted with an unparalleled opportunity to put massive amounts of data, structured and unstructured, to work on optimizing their businesses. Organizations can streamline processes through consolidated applications, build efficiencies through real-time collaboration and decision-making, and improve customer relationships through data enabled interactions and 24/7 support.

Today’s business imperatives of mobility, cloud computing and big data analytics are driven by data from multiple sources – both inside and outside the four walls of the enterprise.

Data is the fuel powering the new Age of Engagement. It runs processes, powers machines, enables sensors and customizes user experiences.

New ages bring new challenges, and new challenges create new opportunities. We, at Informatica, are perfectly positioned to enable our customers to take advantage of the opportunities presented by the Age of Engagement. We understand the unique data requirements of today’s enterprise – and, as a result, provide expertise and offerings to enable them to get ready in the five business-critical areas of:

  • Next-generation analytics
  • Total customer relationship
  • Application consolidation and optimization
  • Cloud acceleration
  • Holistic data governance

These are the critical touch points, enabled by data, behind every single business and IT initiative. We facilitate the evolution from legacy transactional IT infrastructures and processes. Our core areas of expertise enable today’s enterprises to become Data Ready Enterprises. We provide the refined fuel to power engagement.

I’d like to take a moment to highlight the very significant business benefits of these five touch points.

For example, Data Ready Enterprises today will be Decision-Ready because they are able to take advantage of data analytics. They will be Customer-Ready and prepared to comprehensively manage their total customer relationships. They will be Application-Ready through application consolidation and optimization – making sure the right applications access the right data at the right time. They will be Cloud-Ready and able to accelerate the transition to real-time, data-driven collaboration. And, not least, they will be Regulation-Ready, lifting the burden of compliance with industry-specific and other regulations.

Every organization has its own unique data signature with the potential to build smarter systems, more intuitive services and better products. We empower organizations by delivering them great data that is ready for everything — enabling them to be ready in the ways that matter most.

Or to summarize our product and market leadership position more simply, we make data ready to use. Informatica enables great data. Data that’s clean, safe and connected. These are the business-critical Data Ready Enterprise benefits delivered by our Intelligent Data Platform.

So, our message to our customers in 2015 is three-fold…

  1. Get ready to be a Data Ready Enterprise in the Age of Engagement.
  2. Get ready to put your data to use.
  3. And get ready to put your potential to work.

8 Information Quality Predictions for 2015 And Beyond


Andy Hayler of Information Difference wrote in October last year that it’s been 10 years since the master data management (MDM) industry emerged. Andy sees MDM technology maturing and project success rates rising. He concluded that MDM has moved past its infancy and has a promising future as it is approaching its teenage years.

The last few months have allowed me to see MDM, data quality and data governance from a completely different perspective. I sat with other leaders here at Informatica and with analysts who focus on information quality, and I spent time talking to our partners who work closely with customers on data management initiatives. As we collectively attempted to peer into the crystal ball and forecast what will be hot – and what will not – this year and beyond for MDM and data quality, here are a few top predictions that stood out.

1. MDM will become a single platform for all master entities
“The classical notion of boundaries that existed where we would say, this is MDM versus this is not MDM is going to get blurred,” says Dennis Moore – SVP, Information Quality Solutions (IQS), Informatica. “Today, we master a fairly small number of attributes in MDM. Rather than only mastering core attributes, we need to master business level entities, like customer, product, location, assets, things, etc., and combine all relevant attributes into a single platform which can be used to develop new “data fueled” applications. This platform will allow mastering of data, aggregate data from other sources, and also syndicate that data out into other systems.”

Traditionally MDM was an invisible hub that was connected to all the spokes. Instead, Dennis says – “MDM will become more visible and will act as an application development platform.”

2. PIM is becoming a more integrated environment that covers all product information and related data in a single place
More and more customers want a single interface that allows them to manage all product information. Along with managing a product’s length, width, height, color, cost, etc., they probably want to see data about its history, credit rating, previous quality rating, sustainability scorecard, returns, credits and so on. Dennis says – “All the product information in one place helps make better decisions with embedded analytics, giving answers to questions such as:

  • What were my sales last week?
  • Which promotions are performing well and poorly?
  • Which suppliers are not delivering on their SLAs?
  • Which stores aren’t selling according to plan?
  • How are the products performing in specific markets?”

Essentially, PIM will become a sovereign supplier of the product data that goes into your catalog and ecommerce systems and that will be used by merchandisers, buyers, and product and category managers. It will become the buyer’s guide and a desktop for the person whose job is to figure out how to effectively promote products to meet sales targets.

3. MDM will become an integral part of big data analytics projects
“Big data analytics suffers from the same challenges as traditional data warehouses – bad data quality produces sub-optimal intelligence. MDM has traditionally enabled better analysis and reporting with high quality master data. Big data analytics will also immensely benefit from MDM’s trustworthy information,” said Ravi Shankar, VP of Product Marketing, MDM, Informatica.

Naveen Sharma, who heads the Enterprise Data Management practice at Cognizant, reemphasized what I heard from Dennis. He says – “With big data and information quality coming together, some of the boundaries between a pure MDM system and a pure analytical system will start to soften.” Naveen explains – “MDM is now seen as an integral part of big data analytics projects and that’s a huge change from a couple of years ago. Two of the large retailers we work with are going down the path of trying to bring not only the customer dimension but the associated transactional data into an extended MDM platform to derive meaning. I see this trend continuing in 2015 and beyond with other verticals as well.”

4. Business requirements are leading to the creation of solutions
There are several business problems being solved by MDM, such as improving supplier spend management and collaboration with better supplier data. Supply chain, sourcing and procurement teams gain significant cost savings and a boost in productivity by mastering supplier, raw materials and product information and fueling their business and analytical applications with that clean, consistent and connected information. Jakki Geiger, Senior Director of IQS Solutions Marketing at Informatica says, “Business users want more than just the underlying infrastructure to manage business-critical data about suppliers, raw materials, and products. They want to access this information directly through a business-friendly user interface. They want a business process-driven workflow to manage the full supplier lifecycle, including: supplier registration, qualification, verification, onboarding and off-boarding. Instead of IT building these business-user focused solutions on top of an MDM foundation, vendors are starting to build ready-to-use MDM solutions like the Total Supplier Relationship solution.” Read more about Valspar’s raw materials spend management use case.

5. Increased adoption of matching and linking capabilities on Hadoop 
“Many of our customers have significantly increased the amount of data they want to master,” says Dennis Moore. The days when tens of millions of master records were considered a lot are long gone; hundreds of millions of master records and billions of source records are becoming almost common. An increasing number of master data sources – internal and external to the organization – are contributing significantly to the rise in data volumes. To accommodate these increasing volumes, Dennis predicts that large enterprises will look at running complex matching and linking capabilities on Hadoop – a cost-effective and flexible way to analyze large amounts of data.
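
To ground this, here is a rough sketch of how matching and linking at that scale is commonly structured on a cluster: derive a cheap blocking key so that only plausible pairs are compared, then score candidate pairs within each block. This is plain PySpark with hypothetical paths and column names, not any vendor’s matching engine.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("match-and-link-sketch").getOrCreate()

# Hypothetical consolidated source records with record_id, name, postal_code columns.
records = spark.read.parquet("hdfs:///data/source_records/")

# Blocking key: first four characters of the normalized name plus postal code,
# so only records that could plausibly refer to the same entity are compared.
blocked = records.withColumn(
    "block_key",
    F.concat(
        F.substring(F.upper(F.regexp_replace("name", "[^A-Za-z0-9]", "")), 1, 4),
        F.col("postal_code"),
    ),
)

a, b = blocked.alias("a"), blocked.alias("b")

# Candidate pairs within each block, scored with a simple edit-distance measure.
candidates = (
    a.join(b, on="block_key")
     .where(F.col("a.record_id") < F.col("b.record_id"))
     .withColumn("name_distance", F.levenshtein(F.col("a.name"), F.col("b.name")))
     .where(F.col("name_distance") <= 2)
)

candidates.select("a.record_id", "b.record_id", "name_distance").show(20)
```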

6. Master insight management is going to be the next big step
“MDM will evolve into master insight management as organizations try to relate the trusted data they created in MDM with transactional and social interaction data,” said Rob Karel – VP of Product Strategy and Product Marketing, IQS, Informatica. “The innovations in machine and deep learning techniques will help organizations: healthcare providers will prescribe the next best treatment based on patient history, retailers will suggest the best offers based on customer interest and behavior, public sector agencies will see big steps forward in social services, etc.”

Rob sees MDM at the heart of this innovation bringing together relevant information about multiple master entities and acting as a core system for insight management innovations.

7. MDM and Data Governance
Aaron Zornes, chief research officer at the MDM Institute, predicts that in 2014–15, vendor MDM solutions will move from “passive-aggressive” mode to “proactive” data governance mode. According to Aaron, data governance for MDM will move beyond simple stewardship to a convergence of task management, workflow, policy management and enforcement.

8. The market will solidify for cloud based MDM adoption
Aaron says – “Cloud-innate services for DQ and DG will be more prevalent; however, enterprise MDM will remain on premise with increasing integration to cloud applications in 2015.”

Naveen sees a lot of synergy around cloud-based MDM offerings and says – “The market is solidifying for MDM on cloud but the flood gates are yet to open.” Naveen does not see any reason why the MDM market will not move to the cloud and gives the example of CRM, which was at a similar juncture before Salesforce came into play. Naveen sees a similar shift for MDM and says – “The fears companies have about their data security on cloud are eventually going to fade. If you look closely at any of the recent breaches, these all involved hacks into company networks and not into cloud provider networks. The fact that cloud service providers spend more dollars on data security than any one company can spend on their on-premise security layer will be a major factor affecting the transition.” Naveen expects that the big players in MDM will include cloud offerings as part of their toolkit in the coming years.

Ravi also predicts an increase in cloud adoption for MDM as concerns about placing master data in the cloud diminish, given the security now provided by cloud vendors.

So, what do you predict? I would love to hear your opinions and comments.

~Prash
@MDMGeek
www.mdmgeek.com


The 3 Little Architects and the Big Bad Mr. Wolf – A Data Parody for today’s Financial Industry


Once upon a time, there were three Information Architects working in the financial services industry, each at a different firm and with a different background, but all responsible for recommending the right technology solutions to help their firms comply with industry regulations, including ongoing bank stress testing across the globe. Since 2008, bank regulators have been focused on measuring systemic risk and requiring banks to provide transparency into how risk is measured and reported to support their capital adequacy needs.

The first architect grew up through the ranks starting as a Database Administrator, a black belt in SQL and COBOL programming. Hand coding was in their DNA for many years and was thought of as the best approach, given how customized their business and systems were compared with other organizations. As such, Architect #1 and their team went down the path of building their data management capabilities through custom hand-coded scripts and manual data extractions and transformations, dealing with data quality issues through the business organizations after the data was delivered. Though their approach and decisions delivered on short-term needs, the firm soon realized the overhead required to make changes and respond to new requests driven by new industry regulations and changing market conditions.

The second architect was a “gadget guy” at heart who grew up using off-the-shelf tools rather than hand coding to manage data. He and his team decided not to hand code their data management processes and instead built their solution leveraging best-of-breed tools, some of which were open source, others from existing solutions the company had from previous data integration, data quality, and metadata management projects. Though the tools helped automate much of the “heavy lifting,” he and his IT team were still responsible for integrating these point solutions to work together, which required ongoing support and change management.

The last architect was as technically competent as his peers but understood the value of building something once and using it across the business. His approach was a little different from the first two. Understanding the risks and costs of hand coding or using one-off tools to do the work, he decided to adopt an integrated platform designed to handle the complexities, sources, and volumes of data required by the business. The platform also incorporated shared metadata, reusable data transformation rules and mappings, a single source of required master and reference data, and agile development capabilities to reduce the cost of implementation and ongoing change management. Though this approach was more expensive to implement, the long-term cost and performance benefits made the decision a “no brainer.”

Lurking in the woods is Mr. Wolf. Mr. Wolf is not your typical antagonist; he is a regulatory auditor whose responsibility is to ensure these banks can explain how the risk they report to the regulatory authorities is calculated. His job isn’t to shut these banks down, but to make sure the financial industry is able to measure risk across the enterprise, explain how risk is measured, and ensure firms are adequately capitalized as mandated by new and existing industry regulations.

Mr. Wolf visits the first bank for an annual stress test audit. Looking at the results of their stress test, he asks the compliance teams to explain how their data was produced, transformed, and calculated to support the risk measurements they reported as part of the audit. Unfortunately, because the first architect had recommended hand coding their data management processes, IT failed to provide explanations and documentation of what they had done, and the developers who created those systems were no longer with the firm. As a result, the bank failed miserably, incurring stiff penalties and higher audit costs.

Architect #2’s bank was next. Having heard in the news what happened to their peer, the architect and IT teams were confident that they were in good shape to pass their stress test audit. After digging into the risk reports, Mr. Wolf questioned the validity of the data used to calculate Value at Risk (VaR). Unfortunately, the tools that had been adopted were never designed nor guaranteed by their vendors to work with each other, resulting in invalid data mapping and data quality rules and gaps within their technical metadata documentation. As a result, bank #2 also failed its audit and found itself with a ton of one-off tools that helped automate data management processes but lacked the integration and sharing of rules and metadata needed to satisfy the regulator’s demand for risk transparency.

Finally, Mr. Wolf investigated Architect #3’s firm. Having seen the results of the first two banks, Mr. Wolf was leery of their ability to pass the stress test audit. He presented similar demands, but this time Bank #3 provided detailed and comprehensive metadata documentation of their risk data measurements, descriptions of the data used in each report, a comprehensive report of each data quality rule used to cleanse their data, and detailed information on each counterparty and legal entity used to calculate VaR. Unable to find gaps in the audit, Mr. Wolf, expecting to “blow” the house down, instead delivered a passing grade for Bank #3 and its management team, thanks to the right investments they had made to support their enterprise risk data management needs.

The moral of this story, like that of the familiar one involving the three little pigs, is the importance of having a solid foundation to weather market and regulatory storms or the violent bellow of a big bad wolf. A foundation that covers the required data integration, data quality, master data management, and metadata management capabilities, but also supports collaboration and visibility into how data is produced, used, and performing across the business. Ensuring current and future compliance in today’s financial services industry requires firms to have a solid data management platform, one that is intelligent, comprehensive, and allows Information Architects to help mitigate the risks and costs of hand coding or using point tools to get by only in the short term.

Are you prepared to meet Mr. Wolf?


Achieving Great Data in the Oil and Gas Industry

Have you noticed something different this winter season that most people are cheery about? I’ll give you a hint. It’s not the great sales going on at your local shopping mall, but something that makes getting to the mall a lot more affordable than last year: the extremely low gas prices across the globe, fueled by an over-supply of oil relative to demand, driven by geopolitics and a boom in shale oil production in North America and abroad. As with any other commodity, it’s impossible to predict where oil prices are headed. However, one thing is sure: Oil and Gas companies will need timely, quality data as they invest in new technologies to become more agile, innovative, efficient, and competitive, as reported in a recent IDC Energy Insights Predictions report for 2015.

The report predicts:

  1. 80% of the top O&G companies will reengineer processes and systems to optimize logistics, hedge risk and efficiently and safely deliver crude, LNG, and refined products by the end of 2017.
  2. Over the next 3 years, 40% of O&G majors and all software divisions of oilfield services (OFS) will co-innovate on domain specific technical projects with IT professional service firms.
  3. The CEO will expect immediate and accurate information about top Shale Plays to be available by the end of 2015 to improve asset value by 30%.
  4. By 2016, 70% of O&G companies will have invested in programs to evolve the IT environment to a third platform driven architecture to support agility and readily adapt to change.
  5. With continued labor shortages and over 1/3 of the O&G workforce under 45 in three years, O&G companies will turn to IT to meet productivity goals.
  6. By the end of 2017, 100% of the top 25 O&G companies will apply modeling and simulation tools and services to optimize oil field development programs and 25% will require these tools.
  7. Spending on connectivity related technologies will increase by 30% between 2014 and 2016, as O&G companies demand vendors provide the right balance of connectivity for a more complex set of data sources.
  8. In 2015, mergers, acquisitions and divestitures, plus new integrated capabilities, will drive 40% of O&G companies to re-evaluate their current deployments of ERP and hydrocarbon accounting.
  9. With a business case built on predictive analytics and optimization in drilling, production and asset integrity, 50% of O&G companies will have advanced analytics capabilities in place by 2016.
  10. With pressures on capital efficiency, by 2015, 25% of the Top 25 O&G companies will apply integrated planning and information to large capital projects, speeding up delivery and reducing over-budget risks by 30%.

Realizing value from these investments will also require Oil and Gas firms to modernize and improve their data management infrastructure and technologies to deliver great data, whether to fuel actionable insights from Big Data technology or to facilitate post-merger application consolidation and integration activities. Great data is only achievable through great design, supported by capable solutions designed to help access and deliver timely, trusted, and secure data to those who need it most.

Lack of proper data management investments and competencies has long plagued the oil and gas sector with “less-than-acceptable” data and higher operating costs. According to the “Upstream Data and Information Management Survey” conducted by Wipro Technologies, 56% of those surveyed felt that business users spent a quarter or more of their time on low-value activities caused by existing data issues (e.g., accessing, cleansing, and preparing data) rather than on high-value activities (e.g., analysis, planning, and decision making). The same survey showed the biggest data management issues were timely access to required data and data quality issues in source systems.

So what can Oil and Gas CIO’s and Enterprise Architects do to prepare for the future?  Here are some tips for consideration:

  • Look to migrate and automate legacy hand-coded data transformation processes by adopting tools that streamline the development, testing, deployment, and maintenance of these complex tasks, letting developers build, maintain, and monitor data transformation rules once and deploy them across the enterprise.
  • Simplify how data is distributed across systems with more modern architectures and solutions, and avoid the cost and complexity of point-to-point integrations.
  • Deal with and manage data quality upstream at the source and throughout the data life cycle, rather than having end users fix unforeseen data quality errors manually (a simple validation-at-the-source sketch follows this list).
  • Create a centralized source of shared business reference and master data that can manage a consistent record across heterogeneous systems such as well asset/material information (wellhead, field, pump, valve, etc.), employee data (drill/reservoir engineer, technician), location data (often geo-spatial), and accounting data (for financial roll-ups of cost, production data).
  • Establish standards and repeatable best practices by adopting an Integration Competency Center frame work to support the integration and sharing of data between operational and analytical systems.
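
As a small illustration of the data-quality-at-the-source tip above, validation rules can be applied when records arrive rather than leaving end users to fix errors downstream. The record layout, field names, and rules here are hypothetical examples for well production data.

```python
from datetime import date

# Hypothetical reference data: valid well identifiers from the asset master.
VALID_WELL_IDS = {"W-1001", "W-1002", "W-1003"}

def validate_production_record(rec):
    """Return a list of rule violations for one incoming production record."""
    errors = []
    if rec.get("well_id") not in VALID_WELL_IDS:
        errors.append(f"unknown well_id {rec.get('well_id')!r}")
    if rec.get("oil_bbl") is None or rec["oil_bbl"] < 0:
        errors.append("oil_bbl missing or negative")
    if rec.get("report_date") is None or rec["report_date"] > date.today():
        errors.append("report_date missing or in the future")
    return errors

# Quarantine bad records before they reach downstream analytical systems.
incoming = [
    {"well_id": "W-1001", "oil_bbl": 512.3, "report_date": date(2015, 1, 5)},
    {"well_id": "W-9999", "oil_bbl": -4.0, "report_date": date(2015, 1, 5)},
]
for rec in incoming:
    issues = validate_production_record(rec)
    print(rec["well_id"], "OK" if not issues else issues)
```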

In summary, low oil prices have a direct and positive impact on consumers, especially during the winter season and holidays, and I personally hope they continue for the foreseeable future given that prices were double just a year ago. Unfortunately, no one can predict future energy prices. One thing is for sure, though: the demand for great data at Oil and Gas companies will continue to grow. As such, CIOs and Enterprise Architects will need to recognize the importance of improving their data management capabilities and technologies to ensure success in 2015. How ready are you?

Click to learn more about Informatica in today’s Energy Sector:


2015 – The Year of Data Integration?


I love the data integration coverage by Loraine Lawson in IT Business Edge, especially in this December 12th posting that focuses on the trends that will emerge in 2015. The best quote from the post is: “Oddly, organizations still tended to focus on point-solutions in the cloud. As more infrastructure and data moves to the cloud, they’re experiencing similar pain points and relearning old lessons.”

The article cites some research from Ovum, which predicts many enterprises will begin moving toward data integration, driven largely by the rise of cloud computing and big data. However, enterprises need to invest both in modernizing the existing data management infrastructure and in data integration technology. “All of these new investments will push the middleware software market up 9 percent to a $16.3 billion industry, Information Management reports.” This projection is for 2015.

I suspect that’s a bit conservative. In my travels, I see much more interest in data integration strategies, approaches, and technology as cloud computing continues to grow and enterprises better understand the strategic use of data. So, I would put the growth at 15 percent for 2015.

There are many factors driving this growth, beyond mere interest in cloud computing and big data.

The first consideration is that data is more strategic than initially understood.  While businesses have always considered data a huge asset, it has not been until the last few years that businesses have seen the true value of understanding what’s going on inside, and outside of their business.

Manufacturing companies want to see the current state of production, as well as production history.  Management can now use that data to predict trends to address, such as future issues around employee productivity, or even a piece of equipment that is likely to fail and the impact of that failure on revenue.  Healthcare companies are learning how to better monitor patient health, such as spotting likely health problems before they are diagnosed, or leveraging large data to understand when patterns emerge around health issues, such as areas of the country that are more prone to asthma, based upon air quality.

Second, there is the need to deal with compliance issues. The new health care regulations, or even the new regulations around managing a publicly traded company, require a great deal of data management work, including data integration.

As these laws emerge, and are altered over time, the reporting requirements are always more complex and far reaching than they were before.  Those who want to avoid fines, or even avoid stock drops around mistakes, are paying close attention to this area.

Finally, there is an expectation from customers and employees that you will have a good handle on your data. Ten years ago you could tell a customer on the phone that you needed to check different systems to answer their question. Those days are over. Today’s customers and employees want immediate access to the data they need, and there is no good excuse for not being able to produce that data. If you can’t, your competition will.

The interest in data integration will experience solid growth in 2015, around cloud and big data, for sure.  However, other factors will drive this growth, and enterprises will finally understand that data integration is core to an IT strategy, and should never be an afterthought.


Business Asking IT to Install MDM: Will IDMP make it happen?


MDM has for years been a technology struggling for acceptance. Not for any technical reason, or in fact any sound business reason. Quite simply, in many cases business people cannot attribute value delivery directly to MDM, so MDM projects get rated as ‘low priority’. Although the tide is changing, many business people still need help drawing a direct correlation between Master Data Management, as a concept and a tool, and measurable business value. In my experience, business people actively asking for an MDM project is a rare occurrence. This should change as the value of MDM becomes clearer; it is certainly gaining acceptance in principle that MDM will deliver value. Perhaps this change is not too far off – the introduction of the Identification of Medicinal Products (IDMP) regulation in Europe may be a turning point.

At the DIA conference in Berlin this month, Frits Stulp of Mesa Arch Consulting suggested that IDMP could get the business asking for MDM. After looking at the requirements for IDMP compliance for approximately a year, his conclusion from a business point of view is that MDM has a key role to play in IDMP compliance. A recent press release by Andrew Marr, an IDMP and XEVMPD expert and specialist consultant, also shows support for MDM being ‘an advantageous thing to do’ for IDMP compliance. A previous blog outlined my thoughts on why MDM can turn regulatory compliance into an opportunity instead of a cost. It seems that others are now seeing this opportunity too.

So why will IDMP lead the business (primarily regulatory affairs) to conclude that they need MDM? At its heart, IDMP is a pharmacovigilance initiative whose goal is to uniquely identify all medicines globally and to provide rapid access to the details of each medicine’s attributes. If implemented in its ideal state, IDMP will deliver a single, accurate and trusted version of a medicinal product which can be used for multiple analytical and procedural purposes. This is exactly what MDM is designed to do.

Here is a summary of the key reasons why an MDM-based approach to IDMP is such a good fit.

1.  IDMP is a data consolidation effort; MDM enables data discovery & consolidation

  • IDMP will probably need to populate between 150 to 300 attributes per medicine
  • These attributes will be held in 10 to 13 systems, per product.
  • MDM (especially with close coupling to Data Integration) can easily discover and collect this data.

2.  IDMP requires cross-referencing; MDM has cross-referencing and cleansing as key process steps.          

  • Consolidating data from multiple systems normally means dealing with multiple identifiers per product.
  • Different entities must be linked to each other to build relationships within the IDMP model.
  • MDM allows for complex models catering for multiple identifiers and relationships between entities.

3.  IDMP submissions must ensure the correct value of an attribute is submitted; MDM has strong capabilities to resolve differing attribute values (a minimal survivorship sketch follows the bullets below).

  • Many attributes will exist in more than one of the 10 to 13 source systems
  • Without strong data governance, these values can (and probably will be) different.
  • MDM can set rules for determining the ‘golden source’ for each attribute, and then track the history of these values used for submission.
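
As a minimal illustration of this point, a golden-source rule can be expressed as an ordered source-system precedence per attribute, with the chosen value and its origin kept for the audit trail. The system names, attributes, and precedence order below are hypothetical.

```python
# Ordered source precedence per attribute: the first system in the list with a
# non-empty value wins (a deliberately simple survivorship rule).
GOLDEN_SOURCE_RULES = {
    "active_substance": ["regulatory_db", "erp", "lims"],
    "strength":         ["erp", "regulatory_db", "lims"],
    "marketing_status": ["regulatory_db", "crm"],
}

def resolve_golden_record(values_by_system):
    """values_by_system: {attribute: {system: value}} -> {attribute: (value, source)}."""
    golden = {}
    for attribute, precedence in GOLDEN_SOURCE_RULES.items():
        for system in precedence:
            value = values_by_system.get(attribute, {}).get(system)
            if value not in (None, ""):
                golden[attribute] = (value, system)  # keep lineage for submission history
                break
    return golden

example = {
    "active_substance": {"erp": "Paracetamol", "regulatory_db": "Paracetamol"},
    "strength": {"erp": "", "regulatory_db": "500 mg"},
    "marketing_status": {"crm": "Marketed"},
}
print(resolve_golden_record(example))
# {'active_substance': ('Paracetamol', 'regulatory_db'),
#  'strength': ('500 mg', 'regulatory_db'),
#  'marketing_status': ('Marketed', 'crm')}
```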

4.  IDMP is a translation effort; MDM is designed to translate

  • Submission will need to be within a defined vocabulary or set of reference data
  • Different regulators may opt for different vocabularies, in addition to the internal set of reference data.
  • MDM can hold multiple values/vocabularies for entities, depending on context.

5.  IDMP is a large co-ordination effort; MDM enables governance and is generally associated with higher data consistency and quality throughout an organisation.

  • The IDMP scope is broad, so attributes required by IDMP may also be required for compliance to other regulations.
  • Accurate compliance needs tracking and distribution of attribute values.  Attribute values submitted for IDMP, other regulations, and supporting internal business should be the same.
  • Not only is MDM designed to collect and cleanse data, it is equally comfortable for data dispersion and co-ordination of values across systems.

Once business users assess the data management requirements and consider the breadth of the IDMP scope, it is no surprise that some of them could be asking for an MDM solution. Even if they do not use the acronym ‘MDM’, they could actually be asking for MDM by capability rather than by name.

Given the good technical fit of an MDM approach to IDMP compliance, I would like to put forward three arguments as to why the approach makes sense. There may be others, but these are the ones I feel are most compelling:

1.  Better chance to meet tight submission time

There are slightly over 18 months left before the EMA requires IDMP compliance. Waiting for final guidance will not leave enough time for compliance. Using MDM, you have a tool to begin with the most time-consuming tasks: data discovery, collection and consolidation. The required XEVMPD data and the draft guidance can serve as a guide for where to focus your efforts.

2.  Reduce Risk of non-compliance

With European fines of up to 5% of revenue at stake, risking non-compliance could be expensive. Not only will MDM increase your chance of compliance on July 1, 2016, it will also give you a tool to manage your data and ensure ongoing compliance in terms of meeting deadlines for delivering new data and data changes.

3.  Your company will have a ready source of clean, multi-purpose product data

Unlike some Regulatory Information Management tools, MDM is not a single-purpose tool. It is specifically designed to provide consolidated, high-quality master data to multiple systems and business processes. This data source could be used to deliver high-quality data to multiple other initiatives, in particular compliance with other regulations and projects addressing topics such as Traceability, Health Economics & Outcomes, Continuous Process Verification, and Inventory Reduction.

So back to the original question – will the introduction of the IDMP regulation in Europe result in the business asking IT to implement MDM? Perhaps they will, but not by name. It is still possible that they won’t. However, if you have been struggling to get buy-in for MDM within your organisation and you need to comply with IDMP, you may be able to find some more allies (potentially with an approved budget) to support you in your MDM efforts.
