
Sensational Find – $200 Million Hidden in a Teenager’s Bedroom!

That tagline got your attention, did it not? Last week I talked about how companies are trying to squeeze more value out of their asset data (e.g. equipment of any kind) and the systems that house it. I also highlighted the fact that IT departments in many companies with physical asset-heavy business models have tried (and often failed) to create a consistent view of asset data in a new ERP or data warehouse application. These environments are neither equipped to deal with all life cycle aspects of asset information, nor do they fix the root of the data problem in the sources, i.e. where the stuff is and what it looks like. It is like a teenager whose parents have spent thousands of dollars buying him the latest garments, but he always wears the same three outfits because he cannot find the other ones in the pile he hoards under his bed. And now they have bought him a smartphone to fix it. So before you buy him the next black designer shirt, maybe it would be good to find out how many of the same designer shirts he already has, what state they are in and where they are.

Finding the asset in your teenager’s mess

Recently, I had the chance to work on a similar problem with a large overseas oil & gas company and a North American utility. Both are by definition asset heavy, very conservative in their business practices, highly regulated, very much dependent on outside market forces such as the oil price, and geographically very dispersed; and thus, by default, a classic system integration spaghetti dish.

My challenge was to find out where the biggest opportunities were in terms of harnessing data for financial benefit.

The initial sense in oil & gas was that most of the financial opportunity hidden in asset data was in G&G (geophysical & geological) and the least on the retail side (lubricants and gas for sale at operated gas stations). On the utility side, the go-to area for opportunity appeared to be maintenance operations. Let’s say that I was about right with these assertions, but there were a lot more skeletons in the closet with diamond rings on their fingers than I anticipated.

After talking extensively with a number of department heads in the oil company, starting with the IT folks running half of the 400 G&G applications, the ERP instances (turns out there were 5, not 1) and the data warehouses (3), I queried the people in charge of lubricant and crude plant operations, hydrocarbon trading, finance (tax, insurance, treasury) as well as supply chain, production management, land management and HSE (health, safety, environmental).

The net-net was that the production management people said there was no issue, as they had already cleaned up the ERP instance around customer and asset (well) information. The supply chain folks also indicated that they had used another vendor’s MDM application to clean up their vendor data, which, funnily enough, was never put back into the procurement system responsible for ordering parts. The data warehouse/BI team was comfortable that they had cleaned up any information for supply chain, production and finance reports before dimension and fact tables were populated for any data marts.

All of this was pretty much a series of denial sessions on the 12-step road to recovery, as the IT folks had very little interaction with the business to get any sense of how relevant, correct, timely and useful these actions were for the end consumers of the information. They also had to rerun and adjust fixes every month or quarter as source systems changed, new legislation dictated adjustments and new executive guidelines were announced.

While every department tried to run semi-automated, monthly clean-up jobs with scripts and some off-the-shelf software to fix their particular situation, the corporate (holding) company and any downstream consumers had no consistent basis for making sensible decisions on where and how to invest without throwing another legion of bodies (by now over 100 FTEs in total) at the same problem.

So at every stage of the data flow from sources to the ERP to the operational BI and lastly the finance BI environment, people repeated the same tasks: profile, understand, move, aggregate, enrich, format and load.
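
To make the repetition concrete, below is a minimal sketch of the kind of profiling step each of these teams was re-implementing on its own; the column names, sample values and checks are purely illustrative and not taken from either client.

```python
import pandas as pd

# Hypothetical well-master extract; columns and values are illustrative only.
wells = pd.DataFrame({
    "well_name": ["Eagle-1", "EAGLE 1", None, "Falcon-7"],
    "latitude":  [29.76, 29.76, 30.01, None],
    "status":    ["producing", "Producing", "shut-in", "producing"],
})

# A minimal profile: completeness, distinct values and likely duplicates --
# the same checks every department was scripting independently each month.
profile = {
    "row_count": len(wells),
    "null_rate_per_column": wells.isna().mean().round(2).to_dict(),
    "distinct_statuses": wells["status"].str.lower().nunique(),
    "possible_duplicate_names": int(
        wells["well_name"].dropna()
             .str.replace(r"[\s-]", "", regex=True).str.lower()
             .duplicated().sum()
    ),
}
print(profile)
```

Multiply checks like these by seven task types and a handful of hand-off points, and the 100-plus FTE figure stops being surprising.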

Despite the departmental clean-up efforts, areas like production operations did not know with certainty (even after their clean-up) how many wellheads and bores they had, where they were downhole, who had last changed a characteristic as mundane as the well name, and why (governance, location match).

Marketing (Trading) was surprisingly open about their issues. They could not process incoming, anchored crude shipments into inventory, or determine who owned the counterparty they sold to and which payment terms were appropriate given the associated credit or concentration risk (reference data, hierarchy mgmt.). As a consequence, operating cash accuracy was low despite ongoing process improvements, and opportunity costs were incurred.
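
As a hedged sketch of the hierarchy management that was missing, the snippet below rolls individual exposures up to an ultimate parent so that concentration risk becomes visible; the entity names and amounts are hypothetical.

```python
# Roll credit exposure up a counterparty ownership hierarchy (hypothetical data).
parent_of = {                      # legal entity -> immediate parent (None = ultimate parent)
    "Acme Trading LLC": "Acme Holdings",
    "Acme Shipping Ltd": "Acme Holdings",
    "Acme Holdings": None,
}
exposure = {                       # open positions per legal entity, in $M
    "Acme Trading LLC": 40,
    "Acme Shipping Ltd": 25,
}

def ultimate_parent(entity):
    """Walk up the hierarchy until no parent is left."""
    while parent_of.get(entity):
        entity = parent_of[entity]
    return entity

rolled_up = {}
for entity, amount in exposure.items():
    group = ultimate_parent(entity)
    rolled_up[group] = rolled_up.get(group, 0) + amount

print(rolled_up)  # {'Acme Holdings': 65} -- the concentration the desk could not see
```

Without clean reference data on who owns whom, that roll-up simply cannot be computed, and payment terms get set against an incomplete view of exposure.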

Operational assets like rig equipment carried excess insurance coverage (location, operational data linkage), and fines paid to local governments for incorrectly filing or not renewing work visas were not returned for up to two years, incurring opportunity cost (employee reference data).

A big chunk of savings was locked up in unplanned NPT (non-productive time), because inconsistent, incorrect well data triggered incorrect maintenance intervals. Similarly, OEM-specific DCS (drill control system) component software lacked a central reference data store, so no alerts were triggered before components failed. If you add on top of that the lack of linkage between the data served by thousands of sensors via well logs and PI historians, and their ever-changing roll-ups for operations and finance, the resulting chaos is complete.
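
For the reference data gap specifically, here is a minimal sketch of how a central store of OEM-recommended service intervals could raise an alert before a component fails; the OEM names, component types and intervals are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical central reference data: OEM-recommended service intervals in run hours.
SERVICE_INTERVAL_HOURS = {
    ("OEM-A", "top-drive"): 2_000,
    ("OEM-B", "mud-pump"): 1_500,
}

@dataclass
class Component:
    oem: str
    component_type: str
    run_hours: float

def needs_service(c, warn_at=0.9):
    """Flag a component once it passes 90% of its reference interval."""
    interval = SERVICE_INTERVAL_HOURS.get((c.oem, c.component_type))
    return interval is not None and c.run_hours >= warn_at * interval

fleet = [Component("OEM-A", "top-drive", 1_850), Component("OEM-B", "mud-pump", 600)]
for c in fleet:
    if needs_service(c):
        print(f"ALERT: {c.oem} {c.component_type} at {c.run_hours}h - schedule maintenance")
```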

One approach we employed around NPT improvements was to take the revenue-from-production figure from their 10-K and combine it with the industry benchmark for the number of NPT days per 100 days of production (typically about 30% across average-depth onshore and offshore well types). Then you overlay a benchmark (if they do not know it themselves) for how many of those NPT days were due to bad data rather than equipment failure or the like, assume you fix only a portion of that, and you quickly get to big numbers.
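
A back-of-the-envelope version of that calculation is below; every input is an illustrative assumption, not a figure from either engagement.

```python
# Rough NPT savings estimate; all inputs are illustrative assumptions.
production_revenue = 5_000_000_000   # annual revenue from production, e.g. from the 10-K ($)
npt_rate = 0.30                      # benchmark: ~30 NPT days per 100 production days
bad_data_share = 0.10                # assumed share of NPT caused by bad data, not equipment
fixable_portion = 0.25               # assume only a quarter of that is actually fixed
years = 5

revenue_at_risk = production_revenue * npt_rate              # revenue exposed to NPT each year
annual_savings = revenue_at_risk * bad_data_share * fixable_portion
print(f"~${annual_savings / 1e6:,.0f}M per year, ~${annual_savings * years / 1e6:,.0f}M over {years} years")
```

Even with deliberately conservative percentages, the number lands in the tens of millions per year, which is why data quality kept surfacing in the business case.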

When I sat back and looked at all the potential, it came to more than $200 million in savings over five years, and this before any sensor data from rig equipment (like the myriad of siloed applications running within a drill control system) is integrated and leveraged via a Hadoop cluster to influence operational decisions like drill string configuration or azimuth.

Next time I’ll share some insight into the results of my most recent utility engagement, but I would love to hear from you what your experience is in these two or other similar industries.

Disclaimer:
Recommendations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer depends upon a variety of factors, many of which are not under Informatica’s control. Nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.

Top 10 Best Practices for Successful MDM Implementations

Recently, Aaron Zornes and I hosted a webinar on the “Top 10 Best Practices for Successful MDM Implementations.” Almost 700 people registered for it – one of the highest numbers of registrations I’ve seen for a webinar, ever! This number tells me that many practitioners are eager to learn the best ways to ensure that their Master Data Management (MDM) implementations are successful.



Leading Research Firm’s Take on Multidomain MDM in Its 2010 Predictions

Just this week, leading research firm Gartner, Inc. published its 2010 predictions for MDM. There is one prediction related to multidomain MDM that I found particularly interesting. It mentions that the number of companies shopping for multidomain MDM solutions has increased. Now, why is that?

To get some insight into MDM purchasing and implementation trends, we simply need to look at companies that began their MDM journey in the past five years, especially those companies that started off with a single domain, such as customer data.  Many of these MDM pioneers have since expanded their implementation to other domains such as finished products, materials, price, employees, and so on. But how did they do that? By using the same multidomain MDM platform? Or by separately implementing distinct single-domain MDM applications, such as one for customer data and another for product data?

Gartner contends that no vendor has a comprehensive multidomain MDM technology that handles all the different industry use cases across different data domains. That is a true statement if you are purchasing an “MDM application.” Similar to packaged applications like ERP or CRM, which manage back-office or front-office operations, purpose-built “MDM applications” that focus on a single data domain for a certain industry can, in fact, only handle use cases specific to that data domain. So, Gartner is right in saying that a customer that uses “MDM applications” will have to work with different MDM vendors and technologies.

However, this should not be the case if you use an “MDM platform.” We can think about the situation as similar to database or web server technology; these technologies are pretty horizontal and flexible enough to address just about any use case in any industry. In the same way, a multidomain “MDM platform” is flexible enough to accommodate any data domain, and has the ability to cleanse, enrich, match, merge, and display data relationships across multiple domains.
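
As a toy illustration of that platform idea (not Siperian’s actual data model or API), the sketch below uses one generic match engine that is simply configured per domain, instead of one hard-coded application per domain.

```python
from difflib import SequenceMatcher

def similar(a, b):
    """Crude string similarity; a real MDM platform would use configurable match rules."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

DOMAIN_CONFIG = {
    # domain -> (fields used for matching, match threshold)
    "customer": (["name", "city"], 0.80),
    "product":  (["description"], 0.75),
}

def is_match(domain, rec_a, rec_b):
    fields, threshold = DOMAIN_CONFIG[domain]
    score = sum(similar(str(rec_a[f]), str(rec_b[f])) for f in fields) / len(fields)
    return score >= threshold

# The same engine serves two different domains with nothing but configuration changes;
# both calls below return True, so the records would be merged.
print(is_match("customer", {"name": "ACME Corp", "city": "Houston"},
                           {"name": "Acme Corporation", "city": "Houston"}))
print(is_match("product",  {"description": "10W-40 Lubricant 1L"},
                           {"description": "10w-40 lubricant 1 liter"}))
```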

What I don’t agree with in the Gartner predictions is the statement that only large vendors will provide “stronger” multidomain MDM. In my experience, these vendors are largely packaged application vendors, and coming from that heritage, they currently sell different single-domain “MDM applications.” While they talk up “multidomain MDM,” their customer base tells a different story – they have to use multiple distinct MDM applications because no single MDM application can accommodate diverse use cases involving different data domains. In contrast, Siperian has customers in different verticals using our multidomain “MDM platform” to manage multiple data domains on the same platform. Our customers don’t need different “MDM applications” because they’re fully capable of implementing multidomain MDM on their single Siperian platform.

Stay tuned for my forthcoming blog discussing the differences between the “MDM application” and “MDM platform” approaches for cross-industry, multidomain MDM use cases.

In the meantime, what do you think?
