Category Archives: Master Data Management

Well Answered.


Well Data for Upstream Oil & Gas Companies

Seven steps to clean, trusted and aligned well data.

Is your data ready to help you make the big decisions? “Data and analytics inputs” (37%) just nudged out “my own experience and intuition” (33%) when energy executives in PwC's 2014 Big Decisions survey were asked what they relied on most for the last big decision they made.

But, what about the little decisions? The ones that add up and either create value or contribute to waste, primarily in the area of operational efficiency? It’s important to think about this aspect of decision making too.

In a recent Forbes article, Tom Morgan, Analyst and Corporate Counsel at Drillinginfo, credits the present stamina of the unconventional plays to efficiency gains. These gains have been as high as 25% and are altering the profitability thresholds for productive wells, changing many of the assumptions on which today's headlines rest.

A well’s profitability is driven by multiple variables, including well location, lease terms, number of wells drilled, uptime, partners, resources, and so on. These attributes are monitored and combined to ensure a well reaches its full productive potential. But how do you know if the information you’re depending on is ready to provide the answers you need when you need them?

Given that an unconventional well’s productivity declines after the first 2 years, every decision made in this short timeframe puts the well’s profitability at risk. Approaching bad or incomplete data as a ‘cost of doing business’ can mean millions of dollars in unnecessary waste – per well.

The best strategic and operational decisions stem from clean, trusted, and aligned data. Kudos to the upstream, unconventional teams that are leading the way in how data is exploited in the oil & gas industry. The most effective way to ensure your data is decision ready is to adopt these seven steps:

  1. Integrate well information. Bring together fragmented data you expect to be consistent across systems but isn’t.
  2. Evaluate the quality of data. Make sure the information is accurate and complete. If it isn’t, you can see what needs to be fixed.
  3. De-duplicate it. Automatically identify duplicate records and reconcile them into a single well profile (a minimal matching sketch follows this list).
  4. Enrich it. Enhance well profiles with 3rd party data such as state/local and IHS data.
  5. Validate it. Ensure you can identify the correct well when you need to. (It’s harder than it sounds.)
  6. Relate it. See relationships between wells, assets, associates, suppliers and partners, and the projects they’re associated with.
  7. Deliver it. Fuel business and analytical applications with clean, consistent and connected well information.
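To make step 3 concrete, here is a minimal, hypothetical sketch of how duplicate well records from two source systems might be paired into a single profile. It assumes each record carries an API number (the standard U.S. well identifier) and a free-text well name; the field names, sample data, and matching logic are illustrative only, not a description of any particular MDM product.

```python
from difflib import SequenceMatcher

# Hypothetical well records pulled from two source systems.
records = [
    {"source": "drilling_db", "api_number": "42-501-20997", "well_name": "SMITH 1H"},
    {"source": "land_db",     "api_number": "4250120997",   "well_name": "Smith #1-H"},
    {"source": "drilling_db", "api_number": "42-501-31044", "well_name": "JONES 2H"},
]

def normalize_api(api: str) -> str:
    """Strip punctuation so '42-501-20997' and '4250120997' compare equal."""
    return "".join(ch for ch in api if ch.isdigit())

def name_similarity(a: str, b: str) -> float:
    """Fuzzy similarity (0..1) between two well names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Group records that share a normalized API number into one candidate profile.
profiles = {}
for rec in records:
    profiles.setdefault(normalize_api(rec["api_number"]), []).append(rec)

for api, recs in profiles.items():
    if len(recs) > 1:
        score = name_similarity(recs[0]["well_name"], recs[1]["well_name"])
        print(f"API {api}: {len(recs)} candidate duplicates, name similarity {score:.2f}")
    else:
        print(f"API {api}: single record, no merge needed")
```

In practice an MDM hub layers survivorship rules on top of this pairing step to decide which attribute values win in the merged well profile, but identifying the candidate duplicates is the heart of de-duplication.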

As O&G companies look to save on IT costs, they need to focus on areas that add value or decrease other costs over the long term. IDC analyst Chris Niven sums it up best: “Oil and gas companies need to spend their investments wisely to help them become agile and operationally efficient.”

On Tuesday, May 12, during InformaticaWorld 2015, Devon Energy, a Fortune 500 company, will join other industry innovators during MDM Day. Devon will share how they’re using data virtualization to streamline their data processes and gain efficiencies and insights in the process. Be sure to register here.


Does a Better Order to Cash Process Start with Better Customer Data?

Improving Order to Cash matters regardless of market cycle

Order to Cash (OTC) matters to today's CFOs and their finance teams, even as CFOs move from an expense and cost reduction footing to a manage-growth footing. As the business process concerned with receiving and processing customer sales, a well-functioning OTC process is not only about preserving cash; it is also about improving the working capital delivered to the bottom line, something successful growth strategies demand. Specifically, OTC helps provide the cash flow needed to quicken collections and improve working capital turnover.

This drives concrete improvements to finance metrics including Days Sales Outstanding (a measure of the average number of days that a company takes to collect revenue after a sale has been made; a worked calculation follows the list below) and the overall cost of collections. It should be clear that a poorly running order to cash process can create tangible business issues. These include but are not limited to the following:

  • Accuracy of purchase orders
  • Accuracy of invoices
  • Volume of customer disputes
  • Incorrect application of payment terms
  • Approval of inappropriate orders
  • Errors in order fulfillment
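Returning to Days Sales Outstanding: here is a small, hypothetical calculation using the standard formula DSO = (accounts receivable ÷ total credit sales) × days in the period. The figures are invented for illustration.

```python
# Hypothetical quarterly figures (illustrative only).
accounts_receivable = 1_200_000   # outstanding customer balances at period end
credit_sales        = 4_500_000   # total credit sales during the period
days_in_period      = 90

dso = (accounts_receivable / credit_sales) * days_in_period
print(f"Days Sales Outstanding: {dso:.1f} days")   # -> 24.0 days
```

Trimming even a few days off DSO by eliminating invoice errors and disputes translates directly into working capital.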

CFOs tell us it is critical to make sure that “good cash management is occurring in compliance with regulation”. It is important as well to recognize that OTC cuts across many of the primary activities of the business value chain—especially those related to sales, marketing, and services.

How do you improve your order to cash process?

So how do financial leaders go about improving OTC? They can start by looking at the entire OTC process: quote order, process order, fulfill order, invoice customer, receive and apply cash, and manage credit and collections. The diagram below shows the specific touch points where the process can be improved—each of these should be examined for process or technology improvement.

(Diagram: the Order to Cash process and its touch points)

However, the starting point, customer data, is where the most concrete action can be taken. Fixing customer data fixes the data used by each succeeding process improvement area. This is where a single, connected view of the customer can be established. It improves the OTC process by doing the following:

  1. Fixes your customer data
  2. Establishes the right relationships between customers
  3. Establishes tax and statutory registrations and credit limits
  4. Prevents penalties for delivering tax documents to the wrong place

Customer Data Mastering (CDM) does this by providing a single view of customers, a 360-degree view of relationships, and a complete view of customer interactions and transactions.
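As an illustration of what “fixing customer data” can mean in practice, here is a hedged sketch of a simple survivorship rule: when the same customer appears in several systems, the consolidated record keeps the most recently updated, non-empty value for each field. The systems, fields, and rule below are hypothetical, not a description of a specific CDM product.

```python
from datetime import date

# The same customer as captured in three hypothetical systems.
sources = [
    {"system": "billing", "updated": date(2015, 1, 10), "name": "ACME Corp",
     "tax_id": "", "credit_limit": 50_000},
    {"system": "crm", "updated": date(2015, 3, 2), "name": "Acme Corporation",
     "tax_id": "94-1234567", "credit_limit": None},
    {"system": "erp", "updated": date(2014, 11, 5), "name": "Acme Corp.",
     "tax_id": "94-1234567", "credit_limit": 40_000},
]

def golden_record(records, fields):
    """Most-recent, non-empty-value survivorship across source records."""
    ordered = sorted(records, key=lambda r: r["updated"], reverse=True)
    return {
        field: next((r[field] for r in ordered if r[field] not in (None, "")), None)
        for field in fields
    }

print(golden_record(sources, ["name", "tax_id", "credit_limit"]))
# {'name': 'Acme Corporation', 'tax_id': '94-1234567', 'credit_limit': 50000}
```

The same consolidated record then feeds order entry, invoicing, and collections, so every downstream OTC step works from one version of the customer.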

CDM matters to the CFO and the Business as a whole

It turns out that CDM does not just matter to OTC and the finance organization. It also matters to the corporate value chain, impacting primary activities such as outbound logistics, marketing and sales, and service. Specifically, CDM accomplishes the following:

  • It reduces costs by reducing invoicing and billing inaccuracies, customer disputes, mailing unconsolidated statements, sending duplicate mail, and dealing with returned mail
  • Increases revenue by boosting marketing effectiveness at segmenting customers for more personalized offers
  • Increases revenue by boosting sales effectiveness by making more relevant cross-sell and up-sell offers
  • Reduces costs by boosting call center effectiveness by resolving customer issues more quickly
  • Improves customer satisfaction, customer loyalty and retention because employees are empowered with complete customer information to deliver great customer experiences across channels and touch points

Parting remarks

So as we have discussed, today's CFOs are finding that they need to become more and more growth oriented. This transition is made more effective with a well-functioning OTC process. While OTC impacts other elements of the business too, the starting point for fixing OTC is customer data mastering, because it fixes the data on which the rest of the process depends.

Solution Brief:

Master Data Management

Related Blogs

CFOs Move to Chief Profitability Officer

CFOs Discuss Their Technology Priorities

The CFO Viewpoint upon Data

How CFOs can change the conversation with their CIO?

New type of CFO represents a potent CIO ally

Competing on Analytics

The Business Case for Better Data Connectivity

Twitter: @MylesSuer


5 Best Practices for Effective Supplier Information Management

This article was originally posted in Supply and Demand Chain Executive.


Supplier data management is costing organizations millions of dollars each year. According to AMR Research/Gartner, supplier management organizations have increased their employee headcount and system resources by 35%, and are spending up to $1,000 per supplier annually to manage their supplier information across the enterprise.

Why is clean, consistent and connected supplier information important for effective supplier management?

In larger organizations, supplier information is typically fragmented across departmental, line of business and/or regional applications. In most cases, this means it's inaccurate, inconsistent, incomplete and disconnected across the different siloed systems. Adding, changing or correcting information in one system doesn't automatically update it in other systems. As a result, supply chain, sourcing and buying teams struggle to get access to a single view of the supplier so they can understand the total supplier relationship across the business. That makes it difficult to achieve their goals, such as:

  • Quickly and accurately assessing supplier risk and compliance
  • Accelerating supplier onboarding to get products to market quickly
  • Improving supplier collaboration or supplier relationship management
  • Quickly and accurately evaluating supplier spend
  • Monitoring supplier performance

To effectively manage global supply chain, sourcing and procurement activities, organizations must be able to quickly and easily answer questions about their suppliers’ or vendors’ companies, contacts, raw materials or products and performance.

Quite often, organizations have separate procurement teams in different regions and they have their own regional applications. As a result, the same supplier may exist several times in the company’s supplier management system, but captured with different company names. The buying teams don’t have a single trusted view of their global supplier information. So, the different teams would purchase one product from the same supplier – each of them with different pricing and payment terms – without even knowing about it. They don’t have a clear understanding of the total relationship with that vendor and cannot benefit from negotiated corporate discounts.
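A hedged illustration of why this matters: once supplier names are normalized to a single identity, spend can be rolled up across regional systems to reveal the total relationship. The supplier names, amounts, and normalization rule below are purely hypothetical.

```python
from collections import defaultdict

# Hypothetical purchase lines from regional procurement systems.
purchases = [
    {"region": "EMEA", "supplier": "Globex GmbH",        "amount": 120_000},
    {"region": "NA",   "supplier": "GLOBEX CORPORATION", "amount": 310_000},
    {"region": "APAC", "supplier": "Globex Corp.",       "amount": 85_000},
    {"region": "NA",   "supplier": "Initech LLC",        "amount": 45_000},
]

# Toy normalization: a real MDM hub would use fuzzy matching and reference
# data, not simple string cleanup.
def normalize(name: str) -> str:
    stop = {"gmbh", "corporation", "corp", "corp.", "llc"}
    return " ".join(w for w in name.lower().split() if w not in stop)

spend = defaultdict(float)
for line in purchases:
    spend[normalize(line["supplier"])] += line["amount"]

for supplier, total in spend.items():
    print(f"{supplier}: ${total:,.0f} total spend across regions")
# globex alone totals $515,000, a figure no single regional team would see on its own
```

Only with that consolidated view can the buying teams negotiate corporate discounts across the whole relationship.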

If the data that’s fueling operational and analytical systems isn’t clean, consistent and connected, they cannot quickly and easily access the information they need to answer these questions and manage their supply chain, sourcing or procurement operations efficiently or effectively. And this results in ineffectiveness, missed procurement opportunities and high administration costs.

For example, a fashion retailer needs to ensure that all compliance documents of its global vendors are current and complete. If the documents are incomplete or expired, the retailer could face serious problems and might risk penalties, safety issues and costs related to supplier noncompliance.

A Business Value Assessment recently conducted by Informatica among companies across all major industries, business models and sizes, revealed that by leveraging trusted data quality, annual supplier spend could be reduced by an average of $6M. Breaking this down by industry, the possible annual reduction in supplier spending, thanks to improved supplier information and data quality, could be $9.02M for Consumer Packaged Goods (CPG), $4.2M for retail and $2.76M for industrial manufacturing companies.

To reduce costs related to poor supplier information quality, here are the five best practices that will help your company achieve more effective supplier relationship management:

  1. Make it strategic
    Get senior management buy-in and stakeholder support. Make the business case and get the time, money and resources you’ll need.
  2. Leave your data where it is
    Effective supplier relationship management lets you leave your data where it naturally lives: in the apps and data stores your business users depend on. You just need to identify these places, so you can access the data and share clean, consistent and connected supplier data back.
  3. Apply Data Quality at the application level
    It’s far better to clean your data at the source, before trying to combine it with other data.
    Apply standards and practices at the application level, to ensure you're working with the most accurate and complete data, and your mastering job will be much, much easier and deliver far better results (a minimal validation sketch follows this list).
  4. Use specialized Master Data Management and Data Integration platforms
    It takes specialized technology that’s optimized for collecting, reconciling, managing, and linking diverse data sets (as well as resolving duplicates and managing hierarchies) to achieve total supplier information management. Your program is going to have to relate to the supplier domain, as well as to other, equally important data types. Attempting this with homemade integration tools or point-to-point integrations is a major mistake. This wheel has already been invented.
  5. Share clean data with key supplier apps on an ongoing basis
    It’s no good having clean, consistent and connected supplier data if you can’t deploy it to the point of use: to the applications and analytics teams and tools that can turn it into insight (and money)—including your enterprise data warehouse, where spend analytics happen.

These best practices may seem basic or obvious, but failing to apply them is the main reason global supplier data programs stumble. They will help fuel operational and analytical applications with clean, consistent and connected supplier information for a more accurate view of suppliers’ performance, compliance and risk. As a result, supply chain, sourcing and buying teams will be empowered to cut costs by negotiating more favorable pricing and payment terms and streamlining their processes.

Related blog:

At Valspar Data Management is Key to Controlling Purchasing Costs


What are Incorrect Addresses Costing your Company?

I live in a small town in Maine. Between my town and the surrounding three towns, there are seven Main Streets and three Annis Roads or Lanes (and don't get me started on the number of Moose Trails). If your insurance company wants to market to or communicate with someone in my town or one of the surrounding towns, how can you ensure that the address you are sending material to is correct? What is the cost if material is sent to an incorrect or outdated address? What is the cost to your insurance company if a provider sends the bill out to the wrong address?

How much is poor address quality costing your business? It doesn't just impact marketing, where inaccurate address data translates into missed opportunity; it also means significant waste in materials, labor, time and postage. Bills may be delivered late or returned as undeliverable, meaning additional handling time, possible repackaging, additional postage costs (address correction penalties) and the risk of customer service issues. When mail or packages don't arrive, pressure on your customer support team can increase and your company's reputation can be negatively impacted. Bills and payments may arrive late or not at all, directly impacting your cash flow. Bad address data causes inefficiencies and raises costs across your entire organization.

The best method for handling address correction is through a validation and correction process:

(Diagram: the address validation and correction steps)

When trying to standardize member or provider information, one of the first places to look is address data. If you can determine that the John Q Smith who lives at 134 Main St in Northport, Maine 04843 is the same John Q Smith who lives at 134 Maine Street in Lincolnville, Maine 04849, you have provided a link between two members that are probably considered distinct in your systems. Once you validate that there is no 134 Main St in Northport according to the postal service, and that 04849 is a valid zip code for Lincolnville, you can standardize your address format to something along the lines of: 134 MAIN ST, LINCOLNVILLE ME 04849. Now you have a consistent layout for all of your addresses that follows postal service standards. Each member now has a consistent address, which is going to make the next step of creating a golden record for each member that much simpler.
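As a rough, hypothetical sketch of that standardization step (real validation checks candidates against postal reference data, for example via a service such as AddressDoctor, rather than hand-written rules), the snippet below normalizes the two John Q Smith records far enough to see that they collide:

```python
# Toy normalization only; real validation uses postal reference data.
SUFFIXES = {"STREET": "ST", "ROAD": "RD", "LANE": "LN", "AVENUE": "AVE"}
STATE_ABBR = {"MAINE": "ME"}

def standardize(street, city, state, zip_code):
    words = street.upper().replace(".", "").split()
    words = [SUFFIXES.get(w, w) for w in words]
    # In this example, 'MAINE STREET' is treated as a misspelling of 'MAIN ST'.
    words = ["MAIN" if w == "MAINE" else w for w in words]
    state = STATE_ABBR.get(state.upper(), state.upper())
    return f"{' '.join(words)}, {city.upper()} {state} {zip_code}"

a = standardize("134 Main St",      "Northport",    "Maine", "04843")
b = standardize("134 Maine Street", "Lincolnville", "Maine", "04849")
print(a)  # 134 MAIN ST, NORTHPORT ME 04843
print(b)  # 134 MAIN ST, LINCOLNVILLE ME 04849
# The street lines now match; postal reference data would then confirm that
# 134 MAIN ST does not exist in Northport and resolve both records to
# Lincolnville, ME 04849.
```

With both records standardized to the same address, the downstream matching step can confidently link them to one member.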

Think about your current method of managing addresses. Likely, there are several different systems that capture addresses, each with different standards for what data is allowed into each field, and quite possibly these independent applications are not checking or validating against country postal standards. By improving the quality of address data, you are one step closer to creating high-quality data that can provide the up-to-the-minute, accurate reporting your organization needs to succeed.


Competing on Customer Experience


I recently got to meet with a very enlightened insurance company which was actively turning its SWOTT analysis (with the second T being trends) into concrete action. They shared with me that they view their go-forward “right to win” as being determined by the quality of customer experience they deliver through their traditional channels and, increasingly, through digital channels. One marketing leader joked early on that “it's no longer about the money; it is about the experience”. The marketing and business leaders I met with made it extremely clear that they have a sense of urgency to respond to what they saw as significant market changes on the horizon. What this company wanted to achieve was a single view of the customer across each of its distribution channels as well as its agent population. Typical of many businesses today, they had determined that they needed an automated, holistic view into things like customer history. Smartly, this business wanted to bring together its existing customer data with its customer leads.

Using Data to Accelerate the Percentage of Customers that are Cross-Sold


Taking this step was seen as allowing them to understand when an existing customer is also a lead for another product. With this knowledge, they wanted to provide special offers to accelerate the conversion from lead to customer with more than one product. What they wanted to do here reminded me of the capabilities of 58.com, eBay, and other Internet pure plays. The reason for doing this well was described recently by Gartner, which suggests that business success is increasingly determined by what it calls “business moments”. Without a first-rate experience that builds upon what this insurance company already knows about its customers, it worries it could be increasingly at risk from Internet pure plays. As important, the degree of cross-sell is for many businesses a major determinant of whether a customer is profitable or not.

Getting Customer Data Right is Key to Developing a Winning Digital Experience


To drive a first-rate digital experience, this insurance company wanted to apply advanced analytics to a single view of customer and prospect data. This would allow them to do things like conduct nearest-neighbor predictive analysis and modeling. In this form of analysis, “the goal is to predict whether a new customer will respond to an offer based on how other similar customers have responded” (Data Science for Business, Foster Provost, O'Reilly, 2013, page 147).
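A hedged sketch of that nearest-neighbor idea: given a handful of existing customers with known responses, predict whether a new customer will respond based on the most similar ones. The features and data below are invented; a real model would be trained on the insurer's single view of customer data.

```python
from sklearn.neighbors import KNeighborsClassifier

# Invented features: [age, number of policies held, years as a customer]
X = [
    [25, 1, 1], [34, 2, 5], [47, 3, 12],
    [29, 1, 2], [52, 4, 20], [41, 2, 8],
]
y = [0, 1, 1, 0, 1, 1]   # 1 = responded to a past cross-sell offer

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)

new_customer = [[38, 2, 6]]
print(model.predict(new_customer))        # predicted response (0 or 1)
print(model.predict_proba(new_customer))  # share of the 3 nearest neighbors per class
```

The quality of the prediction depends entirely on the quality of the single customer view feeding it, which is why the data work comes first.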

What has limited this business, like so many others, is that its customer data is scattered across many enterprise systems. For just one division, they have more than one Salesforce instance. Yet this company's marketing team knew that to keep its customers, it needed to be able to serve them across channels and establish a single, unified customer experience. To make this happen, they needed for the first time to share holistic customer information across their ecosystem. At the same time, they knew that they would need to protect their customers' privacy: only certain people would be able to see certain information. They wanted the ability to selectively mask data by role, so that only users with, in defense parlance, a need to know could see a subset of the holistic set of information collected. When asked about the need for a single view of the customer, the digital marketing folks openly shared that they perceived the potential for external digital market entrants—à la Porter's five forces of competition. This firm saw those entrants either taking market share from them or, over time, effectively disintermediating them from their customers as more and more customers move their insurance purchasing to the web. Given the risk, their competitive advantage needed to move to knowing their customers better and responding to them better on the web. This clearly included the new customers they are trying to win, in the language of Theodore Levitt.

Competing on Customer Experience

In sum, this insurance company smartly felt that it needed to “compete on customer experience”, to borrow a phrase that was new to me, and this requires superior knowledge of existing and new customers. They need as complete and correct a view of customers as possible, including addresses, connection preferences, and increasingly social media responses. It means competitively responding to those that have honed their skills in web design, social presence, and advanced analytics. To do this, they will create predictive capabilities that make use of their superior customer data. Without this prescience of thinking, this moment could come to resemble the strategic collision of Starbucks and the fast food vendors, where the desire to grow forced competition between the existing player and new entrants wanting to claim a portion of the existing player's business.

Related Blogs

Solution Page:

Marketing

Solution Page:

Total Customer Relationship

Blogs and Articles

Major Financial Services Institution Uses Technology to Improve Your Teller Experience

Twitter: @MylesSuer


Retail Business Technology Expo Trend Blog – The Fantastic Five

Retail friends, sorry to say it was no surprise that reinventing the store and making it more digital was the dominant theme at the Retail Business Technology Expo (RBTE) in London this week. I saw a similar trend at the National Retail Federation Big Show back in January, which I discussed in this blog post.

With that being said, I was not shy about finding the fantastic five that thrilled me in the central hall at Olympia Hammersmith.

Here they are:

Engaging Spaces: Their booth made the most noise, with interactive, touchable wooden walls that emphasize interaction with sound and lights. No booth inspired more people to take pictures than this one. I took the liberty of recording this short clip.

Engaging Spaces was surrounded by lots of fancy digital signage vendors displaying products in-store. Some demos did not work, others lacked comprehensive product details, and the experiences are still not personalized.

Panel: Optimizing the supply chain and the omnichannel experience are twins. The panel, moderated by Spencer Izard and joined by Craig Sears-Black from Manhattan Associates and Tom Enright from Gartner, showed that the lines between retailers and CPG companies are blurring. Retailers become eTailers and brands act like retailers.

We learned that consumers don't care where they buy from, but they always expect trust! The experts see co-existence, overlap and changing partnerships between vendors and retailers. The analysts said that retail organizations are still siloed in their internal structure, which prevents omnichannel execution. We expect that a new balance of power will emerge between brands and retailers.


Orderella: Let the phone do the queuing. This app is perfect for people like me who hate waiting in line for lunch. The app connects with PayPal, so I was able to order my snack and drink from my phone and have them brought to my table. It was delivered in 1 minute, and I was able to monitor the process within the app. In addition, they also delivered to each booth by locating your phone, and offered a 6-buck voucher for each new deal. A great combination of location, real-time, product and customer data.


Red Ant: The seamless in-store experience. The app sits on top of ecommerce tools like Demandware, hybris, Intershop, Magento, Oxid, Oracle ATG or IBM WebSphere Commerce, which are used by many of our customers, to support barcode scanning and flexibility in the checkout process. It also supports the in-store assistant in completing the transaction. Red Ant is very easy to use for our eCommerce clients, who already fuel their commerce with perfect product information.


Iconeme: Again for the digital in-store experience. The app uses iBeacon to help users see where a product is in the store, share it, view looks (product bundles), use a virtual dressing room, and of course check out and pay. Definitely something to take a look at.



Analytics Stories: A case study from UPMC


As I have shared within the posts of this series, businesses are using analytics to improve their internal and external facing business processes and to strengthen their “right to win” within the markets they operate in. Like healthcare institutions across the country, UPMC is striving to improve its quality of care and business profitability. One educational healthcare CEO put it to me this way: “if we can improve our quality of service, we can reduce costs while we increase our pricing power”. In UPMC's case, they believe that the vast majority of their costs are concentrated in a fraction of their patients, but they want to prove this with real data and then use this information to drive their go-forward business strategies.

Getting more predictive to improve outcomes and reduce cost

Armed with this knowledge, UPMC's leadership wanted to use advanced analytics and predictive modeling to improve clinical and financial decision making. Taking this action was seen as producing better patient outcomes and reducing costs. A focus area for analysis involved creating “longitudinal records” for the complete cost of providing particular types of care. For those that aren't versed in time series analysis, longitudinal analysis uses a series of observations obtained from many respondents over time to derive a relevant business insight. When I worked in healthcare, I used this type of analysis to relate employee and patient engagement results to healthcare outcomes. In UPMC's case, they wanted to use this type of analysis to understand, for example, the total end-to-end cost of a spinal surgery. UPMC wanted to look beyond the cost of surgery and account for the pre-surgery care and recovery-related costs. To do this for the entire hospital, however, meant bringing together data from hundreds of sources across UPMC and outside entities, including labs and pharmacies. With this information, UPMC's leadership saw the potential to create an accurate and comprehensive view which could be used to benchmark future procedures. Additionally, UPMC saw the potential to automate the creation of patient problem lists or examine clinical practice variations. But like the other case studies that we have reviewed, these steps required trustworthy and authoritative data to be accessed with agility and ease.
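As a hypothetical sketch of the longitudinal-record idea, the pandas snippet below rolls up charges for one episode of care across pre-surgery, surgery, and recovery phases drawn from different source systems. The data and column names are invented for illustration, not UPMC's actual model.

```python
import pandas as pd

# Invented charge records for one spinal-surgery episode, from several sources.
charges = pd.DataFrame([
    {"patient_id": "P001", "phase": "pre-surgery", "source": "labs",     "cost": 1_800},
    {"patient_id": "P001", "phase": "pre-surgery", "source": "imaging",  "cost": 2_400},
    {"patient_id": "P001", "phase": "surgery",     "source": "hospital", "cost": 38_500},
    {"patient_id": "P001", "phase": "recovery",    "source": "pharmacy", "cost": 950},
    {"patient_id": "P001", "phase": "recovery",    "source": "rehab",    "cost": 4_200},
])

# Cost by phase, then the total end-to-end cost of the episode.
print(charges.groupby(["patient_id", "phase"])["cost"].sum())
print(charges.groupby("patient_id")["cost"].sum())   # P001: 47850
```

The hard part is not the arithmetic; it is getting hundreds of source systems to agree on which patient, which procedure, and which episode each charge belongs to, which is exactly the integration and mastering work described below.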

UPMC’s starts with a large, multiyear investment

In October 2012, UPMC made a $100 million investment to establish an enterprise analytics initiative to bring together, for the first time, clinical, financial, administrative, genomic and other information in one place. Tom Davenport, the author of Competing on Analytics, suggests in his writing that establishing an enterprise analytics capability represents a major step forward because it allows enterprises to answer the big questions, to better tie strategy and analytics, and to finally rationalize application integration and business intelligence spending. As UPMC put its plan together, it realized that it needed to impact more than 1,200 applications. It also realized that it needed one system to manage data integration, master data management, and eventually complex event processing capabilities. At the same time, it addressed the people side of things by creating a governance team to manage data integrity improvements, ensuring that trusted data populates enterprise analytics and provides transparency into data integrity challenges. One of UPMC's goals was to provide self-service capabilities. According to Terri Mikol, a project leader, “We can't have people coming to IT for every information request. We're never going to cure cancer that way.” Here is an example of the promise that emerged within the first eight months of the project. Researchers were able to integrate, for the first time ever, clinical and genomic information on 140 patients previously treated for breast cancer. Traditionally, these data have resided in separate information systems, making it difficult—if not impossible—to integrate and analyze dozens of variables. The researchers found intriguing molecular differences in the makeup of pre-menopausal vs. post-menopausal breast cancer, findings which will be further explored. For UPMC, this initial cancer insight is just the starting point of its efforts to mine massive amounts of data in the pursuit of smarter medicines.

Building the UPMC Enterprise Analytics Capability

To create their enterprise analytics platform, UPMC determined it was critical to establish “a single, unified platform for data integration, data governance, and master data management,” according to Terri Mikol. The solution required a number of key building blocks. The first was data integration, to collect and cleanse data from hundreds of sources and organize it into repositories that enable fast, easy analysis and reporting by and for end users.

Specifically, the UPMC enterprise analytics capability pulls clinical and operational data from a broad range of sources, including systems for managing hospital admissions, emergency room operations, patient claims, health plans, electronic health records, as well as external databases that hold registries of genomic and epidemiological data needed for crafting personalized and translational medicine therapies. UPMC has integrated quality checked source data in accordance with industry-standard healthcare information models. This effort included putting together capabilities around data integration, data quality and master data management to manage transformations and enforce consistent definitions of patients, providers, facilities and medical terminology.

As noted, the cleansed and harmonized data is organized into specialized genomics databases, multidimensional warehouses, and data marts. The approach makes use of traditional data warehousing approaches as well as big data capabilities to handle unstructured data and natural language processing. UPMC has also deployed analytical tools that allow end users to exploit the data enabled by the Enterprise Analytics platform. The tools drive everything from predictive analytics and cohort tracking to business and compliance reporting. And UPMC did not stop here. If its data had value, then it needed to be secured. UPMC created data audits and data governance practices, and implemented a dynamic data masking solution to ensure data security and privacy.
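A simplified, hypothetical illustration of the dynamic masking idea: the same patient record is returned differently depending on the requesting role, so analysts see de-identified data while clinicians see the full record. The roles, fields, and policy below are invented for the example; a production masking solution enforces this in the data layer rather than in application code.

```python
# Fields each role may see in the clear (an illustrative policy).
VISIBLE = {
    "clinician": {"name", "dob", "diagnosis", "genomic_marker"},
    "analyst":   {"diagnosis", "genomic_marker"},
}

def mask_record(record, role):
    """Return the record with non-visible fields masked for the given role."""
    allowed = VISIBLE.get(role, set())
    return {k: (v if k in allowed else "***MASKED***") for k, v in record.items()}

patient = {"name": "Jane Doe", "dob": "1961-07-04",
           "diagnosis": "breast cancer", "genomic_marker": "BRCA1+"}

print(mask_record(patient, "clinician"))  # full record
print(mask_record(patient, "analyst"))    # name and dob masked
```

Because the masking is applied dynamically at query time, researchers can run cohort analyses without ever holding identifiable patient data.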

Parting Remarks

As I have discussed, many firms are pushing point silo solutions into their environments, but as UPMC shows, this limits their ability to ask the bigger business questions or, in UPMC's case, to discover things that can change people's lives. Analytics are more and more a business enabler when they are organized as an enterprise analytics capability. I have also come to believe that analytics have become a foundational capability for every firm's right to win: they inform a coherent set of capabilities and establish a firm's go-forward strategy. For this, UPMC is a shining example of getting things right.

Related links

Detailed UPMC Case Study

Related Blogs

Analytics Stories: A Banking Case Study

Analytics Stories: A Financial Services Case Study

Analytics Stories: A Healthcare Case Study

Who Owns Enterprise Analytics and Data? Competing on Analytics: A Follow Up to Thomas H. Davenport's Post in HBR

Thomas Davenport Book “Competing On Analytics”

Author Twitter: @MylesSuer


Building an Impactful Data Governance – One Step at a Time

Let’s face it, building a Data Governance program is no overnight task.  As one CDO puts it:  ”data governance is a marathon, not a sprint”.  Why? Because data governance is a complex business function that encompasses technology, people and process, all of which have to work together effectively to ensure the success of the initiative.  Because of the scope of the program, Data Governance often calls for participants from different business units within an organization, and it can be disruptive at first.

Why bother then, given that data governance is complex, disruptive, and could potentially introduce additional cost to a company? Well, the drivers for data governance vary across organizations. Let's take a close look at some of the motivations behind a data governance program.

For companies in heavily regulated industries, establishing a formal data governance program is a mandate. When a company is not compliant, consequences can be severe. Penalties could include hefty fines, brand damage, loss of revenue, and even potential jail time for the person held accountable for the noncompliance. To meet ongoing regulatory requirements and adhere to data security policies and standards, companies need to rely on clean, connected and trusted data to enable transparency and auditability in their reporting and to answer critical questions from auditors. Without a dedicated data governance program in place, the compliance initiative can become an ongoing nightmare for companies in regulated industries.

A data governance program can also be established to support a customer centricity initiative. To make effective cross-sells and up-sells to your customers and grow your business, you need clear visibility into customer purchasing behaviors across multiple shopping channels and touch points. Customers' shopping behaviors and attributes are captured in the data; therefore, to gain a thorough understanding of your customers and boost your sales, a holistic data governance program is essential.

Other reasons for companies to start a data governance program include improving efficiency and reducing operational cost, supporting better analytics and driving more innovation. As long as it's a business-critical area, data is at the core of the process, and the business case is loud and clear, there is a compelling reason to launch a data governance program.

Now that we have identified the drivers for data governance, how do we start? This rather loaded question really gets into the details of the implementation. A few critical elements come into consideration, including: identifying and establishing task forces such as a steering committee, a data governance team and business sponsors; identifying roles and responsibilities for the stakeholders involved in the program; and defining metrics for tracking the results. And soon you will find that, on top of everything, communication, communication and more communication is probably the most important tactic of all for driving the initial success of the program.

A rule of thumb? Start small, take one step at a time and focus on producing something tangible.

Sounds easy, right? Well, let's hear what real-world practitioners have to say. Join us at this Informatica webinar to hear Michael Wodzinski, Director of Information Architecture, Lisa Bemis, Director of Master Data, and Fabian Torres, Director of Project Management from Houghton Mifflin Harcourt, a global leader in publishing, as well as David Lyle, VP of Product Strategy from Informatica, discuss how to implement a successful data governance practice that brings business impact to an enterprise organization.

If you are currently kicking the tires on setting up a data governance practice in your organization, I'd like to invite you to visit a member-only website dedicated to data governance: http://governyourdata.com/. The site currently has over 1,000 members and is designed to foster open communication on everything data governance. There you will find conversations on best practices, methodologies, frameworks, tools and metrics. I would also encourage you to take a data governance maturity assessment to see where you currently stand on the data governance maturity curve, and compare the result against the industry benchmark. More than 200 members have taken the assessment to gain a better understanding of their current data governance program, so why not give it a shot?


Data governance is a journey, likely a never-ending one. We wish you the best of luck on this effort and a joyful ride! We'd love to hear your stories.


Guiding Your Way to Master Data Management Nirvana

Achieving and maintaining a single, semantically consistent version of master data is crucial for every organization. As many companies move from an account- or product-centric approach to a customer-centric model, master data management is becoming an important part of their enterprise data management strategy. MDM provides the clean, consistent and connected information your organization needs to:

  1. Empower customer facing teams to capitalize on cross-sell and up-sell opportunities
  2. Create trusted information to improve employee productivity
  3. Be agile with data management so you can make confident decisions in a fast changing business landscape
  4. Improve information governance and be compliant with regulations

But there are challenges ahead for organizations. As Andrew White of Gartner very aptly wrote in a blog post, we are only half pregnant with Master Data Management. In that post, Andrew talked about the increasing number of inquiries he gets from organizations that are making some pretty simple mistakes in their approach to MDM without realizing the long-run impact of those decisions.

Over the last 10 years, I have seen many organizations struggle to implement MDM the right way. A few MDM implementations have failed, and many have taken more time and incurred more cost than planned before showing value.

So, what is the secret sauce?

A key factor for a successful MDM implementation lies in mapping your business objectives to the features and functionality offered by the product you are selecting. It is a phase where you ask the right questions and get them answered. There are a few great ways for organizations to get this done, and talking to analysts is one of them. Another option is to attend MDM-focused events that allow you to talk to experts, learn from other customers' experiences and hear about best practices.

We at Informatica have been working hard to deliver a flexible MDM platform that provides complete capabilities out of the box. But the MDM journey is about more than technology and product features, as we have learned over the years. To ensure our customers' success, we are sharing the knowledge and best practices we have gained from hundreds of successful MDM and PIM implementations. Informatica MDM Day is a great opportunity for organizations, where we will:

  • Share best practices and demonstrate our latest features and functionality
  • Show our product capabilities which will address your current and future master data challenges
  • Provide you the opportunity to learn from other customers' MDM and PIM journeys
  • Share knowledge about MDM powered applications that can help you realize early benefits
  • Share our product roadmap and our vision
  • Provide you an opportunity to network with other like-minded MDM, PIM experts and practitioners

So, join us by registering today for our MDM Day event in New York on 24th February. We are excited to see you all there and walk with you towards MDM Nirvana.

~Prash
@MDMGeek
www.mdmgeek.com
