Category Archives: Data Governance
As I continue to counsel insurers about master data, they all agree immediately that it is something they need to get their hands around fast. If you ask participants in a workshop at any carrier – no matter if life, P&C, health or excess – they all raise their hands when I ask, “Do you have a broadband bundle at home for internet, voice and TV as well as wireless voice and data?”, followed by “Would you want your company to be the insurance version of this?”
Now let me be clear: while communications service providers offer very sophisticated bundles, they are also still grappling with a comprehensive view of a client across all services (data, voice, text, residential, business, international, TV, mobile, etc.) and each of their touch points (website, call center, local store). They are also miles away from including any sort of meaningful network data (jitter, dropped calls, failed call setups, etc.).
Similarly, my insurance investigations typically touch most of the frontline consumer (business and personal) contact points, including agencies, marketing (incl. CEM & VOC) and the service center. Across all of these we typically see a significant lack of productivity, given that policy, billing, payments and claims systems are service-line specific, while supporting functions from lead development and underwriting to claims adjudication often handle more than one type of claim.
This performance gap is compounded by sub-optimal campaign response and conversion rates. As touchpoint-enabling CRM applications also suffer from a lack of complete or consistent contact preference information, interactions may violate local privacy regulations. In addition, service centers may capture leads only to log them into a black-box AS/400 policy system, where they disappear.
Here again we often hear that the fix could simply be to scrub data before it goes into the data warehouse. However, that data typically does not sync back to the source systems, so any interaction with a client via chat, phone or face-to-face will not have the real-time, accurate information needed to execute a flawless transaction.
On the insurance IT side we also see enormous overhead, from scrubbing every database from source via staging to the analytical reporting environment every month or quarter, to one-off clean-up projects for the next acquired book of business. For a mid-sized, regional carrier (ca. $6B net premiums written) we find an average of $13.1 million in annual benefits from a central customer hub. This figure translates into an ROI of 600-900%, depending on requirement complexity, distribution model, IT infrastructure and service lines. It includes some baseline revenue improvements, productivity gains, and cost avoidance as well as cost reduction.
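For illustration, here is a minimal sketch of that ROI arithmetic in Python. Only the annual benefit figure comes from the estimate above; the hub cost and evaluation horizon are purely hypothetical assumptions.

```python
# Hypothetical ROI sketch -- only the annual benefit figure comes from the post;
# the hub cost and time horizon are assumed for illustration.
annual_benefits = 13_100_000   # average annual benefit for a mid-sized carrier
hub_cost = 5_500_000           # assumed total cost of the central customer hub
years = 3                      # assumed evaluation horizon

roi = (annual_benefits * years - hub_cost) / hub_cost * 100
print(f"ROI over {years} years: {roi:.0f}%")  # ~615% under these assumptions
```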
On the health insurance side, my clients have complained about regional data sources contributing incomplete (often driven by local process and law) and incorrect data (name, address, etc.) to untrusted reports from membership, claims and sales data warehouses. This makes budgeting items like nurse-staffed medical advice lines, sales compensation planning, and even the identification of high-risk members (now driven by the Affordable Care Act) a true mission impossible, and the life of pricing teams challenging.
Over in the life insurance category, whole and universal life plans now face a situation where high-value clients first saw lower-than-expected yields due to the low interest rate environment, on top of front-loaded fees and the front-loaded cost of the term component. Now, as bonds are forecast to decrease in value in the near future, publicly traded carriers will likely be forced to sell bonds before maturity to make good on term life commitments and whole life minimum yield commitments in order to keep policies in force.
This means that insurers need a full profile of clients as they experience life changes like a move, the loss of a job, a promotion or a birth. Each such change calls for a proper mitigation strategy to protect a baseline of coverage while maintaining or improving the premium. This can range from splitting term from whole life to using managed investment portfolio yields to temporarily pad premium shortfalls.
Overall, without a true, timely and complete picture of a client and his or her personal and professional relationships over time – and of which strategies were presented, considered appealing and ultimately put in force – how will margins improve? Surely social media data can help here, but it should be a second step after mastering what is already available in-house. What are some of your experiences with how carriers have tried to collect and use core customer data?
Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control. Nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.
The challenge for supermarkets today is balancing the needs of the customer against their ability to serve those needs. How are supermarkets and food manufacturers preparing their businesses for e-readiness? And what about greater customer centricity?
Currently, brands are not particularly good at serving consistent product information across in-store and online environments, leading to lower conversions and poor customer satisfaction. This shortfall is also preventing these brands from moving forward and innovating with new technologies. As a result, Product Information Management (PIM) is becoming a significant focus in effective omnichannel initiatives.
Consider the large range of products on display at the average grocery store. The sheer number of categories is staggering, before you even consider the quantity of items in each category. It is little wonder that local brands are struggling to replicate this level of product data anywhere else but on their store shelves.
Furthermore, consider the various kinds of information supermarkets are expected to include. Then add the kinds of information supermarkets could include to gain a competitive advantage over the rest. Information types currently possible include: ingredients, additives, images and videos, marketing copy, genetic modification information, references, product seals, allergens, nutritional facts, translations, product categories, expiration/use-by dates, variants, region-specific information, GDSN information and more.
Ultimately, supermarkets are already on the path of improving consumers’ shopping experience and a few of the emerging technologies indicate the way this industry will continue to evolve.
6 Examples of food retail and supermarket trends
The six examples below each demonstrate an emerging trend in grocery shopping, while also highlighting the need for accurate product information creation, curation and distribution.
- Ready-to-cook product bundles – A nice, very customer-facing concept comes from German food retailer www.kochhaus.de (meaning ‘house of cooking’). They only offer product bundles of all the ingredients required to cook a certain meal for the desired number of guests. It can be seen as the grocery equivalent of the look books that are well established as a sales strategy at fashion brands and retailers.
- Self-checkout Systems – More supermarkets are beginning to include self-checkouts. American and UK companies lead, while Germany and Australia lag behind. But there is the same risk of cart abandonment here as there is online, so providing a comprehensive and rich suite of product information at these POS systems is crucial.
- In-store Information Kiosks – Some supermarkets are beginning to include interactive displays in-store, with some even providing tablets mounted onto shopping trolleys. These displays serve in place of an in-store sales assistant, providing consumers with directions, promotions and complete access to product information (such as stock levels) on any item in the store.
- Supermarket Pop-ups – Food retailers are increasingly experimenting and improving the traditional shopping experience. One example that has turned the bricks-and-mortar concept on its head is electronic shopping ‘walls’, where products are prominently displayed in a high-traffic area. Consumers are able to access product details and make purchases by scanning a code presented alongside the image of a given product.
- Store-to-door Delivery Services – It is starting to become commonplace: not only are supermarkets offering same-day delivery services, but the major brands are also experimenting with click-and-collect services. These supermarkets are moving toward websites that are just as busy as, and provide as much if not more relevant content than, their bricks-and-mortar outlets.
- App Commerce – Companies like German food retailer Edeka offer an app for push marketing, or help match a customer’s dietary and allergy profile with QR-code-scanned products on the shopping list within the supermarket app.
What is next?
The supermarket of the future:
Reviving customer loyalty by leveraging information potential
Due to the increased transparency brought on by the ‘Google Era’, retailers have experienced a marked decline in customer loyalty. This concept of omnichannel shopping behaviour has led previously loyal customers to shop elsewhere.
Putting customers in the centre of all retail activities may not be a new trend, but in order to achieve it, retailers must foster more intelligent touch points. The supermarkets of the future will combine both product and customer data in such a way that every touch point presents a uniquely personalised experience for the customer, and a single, 360-degree view of the customer to the retailer.
The major supermarket brands already have comprehensive customer loyalty programs and they’re building on these with added products, such as consumer insurance packages. However, these initiatives haven’t necessarily led to an increase in loyalty.
Instead, the imperative to create a personal, intimate connection with consumers will eventually lead to a return in loyalty. The supermarket of the future will be able to send recipe and shopping list recommendations directly to the shopper’s preferred device, taking into account any allergies or delivery preferences.
Gamification as a tool for loyalty?
Moreover, this evolution will slowly lead into another phase of loyalty marketing: gamification. Comprehensive and detailed product data will form the basis of a loyalty program that includes targets, goals and rewards for loyal customers. The more comprehensive and engaging these shopping ‘games’ become, the more successful they will be from a marketing and loyalty perspective. However, the demands for detailed, accurate product information will also increase accordingly.
Private side note: My wife likes the simple Edeka app game in which users cut slices of sausage. The challenge: you need to hit exactly the weight the customer requires, just like the in-store associate.
Those supermarkets that can deploy these initiatives first – and continue to innovate beyond this point – will have a bright future. Those that lag behind when it comes to leveraging their information and real-time processes might quickly begin to fade away.
What can I cook from what remains in my fridge?
I have been working all week on next year’s planning, so my fridge was not well stocked this week. With it almost empty, the questions are:
- What products are left?
- When do they expire?
- What can I cook from my fridge leftovers? (recipes)
- Where do I get the missing items for dinner with my wife? And at what price?
- Do they all match my dietary profile and, in this case, my allergy to nuts?
- Can I order online?
- When will they get delivered?
- What things can make our evening a success? The right wine recommendation? Two candles?
Well, it is up to your imagination which additional products could be sold to make the customer happy and create a nice candlelight dinner… but it is at least a good reason to expand the assortment.
That tag line got your attention – did it not? Last week I talked about how companies are trying to squeeze more value out of their asset data (e.g. equipment of any kind) and the systems that house it. I also highlighted the fact that IT departments in many companies with physical-asset-heavy business models have tried (and often failed) to create a consistent view of asset data in a new ERP or data warehouse application. These environments are neither equipped to deal with all life cycle aspects of asset information, nor do they fix the root of the data problem in the sources, i.e. where the stuff is and what it looks like. It is like a teenager whose parents have spent thousands of dollars buying him the latest garments, but he always wears the same three outfits because he cannot find the others in the pile he hoards under his bed. And now they have bought him a smartphone to fix it. So before you buy him the next black designer shirt, maybe it would be good to find out how many of the same designer shirts he already has, what state they are in and where they are.
Recently, I had the chance to work on a similar problem with a large overseas oil & gas company and a North American utility. Both are by definition asset heavy, very conservative in their business practices, highly regulated, very much dependent on outside market forces such as the oil price, and geographically very dispersed – and thus, by default, a classic system integration spaghetti dish.
My challenge was to find out where the biggest opportunities were in terms of harnessing data for financial benefit.
The initial sense in oil & gas was that most of the financial opportunity hidden in asset data was in G&G (geophysical & geological) and the least on the retail side (lubricants and gas for sale at operated gas stations). On the utility side, the go-to area for opportunity appeared to be maintenance operations. Let’s say that I was about right with these assertions, but that there were a lot more skeletons in the closet with diamond rings on their fingers than I anticipated.
After talking extensively with a number of department heads in the oil company – starting with the IT folks running half of the 400 G&G applications, the ERP instances (turns out there were five, not one) and the data warehouses (three) – I queried the people in charge of lubricant and crude plant operations, hydrocarbon trading, finance (tax, insurance, treasury) as well as supply chain, production management, land management and HSE (health, safety, environmental).
The net-net was that the production management people said there was no issue, as they had already cleaned up the ERP instance around customer and asset (well) information. The supply chain folks indicated that they had used another vendor’s MDM application to clean up their vendor data – which, funnily enough, was never put back into the procurement system responsible for ordering parts. The data warehouse/BI team was comfortable that they cleaned up any information for supply chain, production and finance reports before dimension and fact tables were populated for any data marts.
All of this was pretty much a series of denial sessions on the 12-step road to recovery, as the IT folks had very little interaction with the business and thus little sense of how relevant, correct, timely and useful these actions were for the end consumer of the information. They also had to run and adjust fixes every month or quarter as source systems changed, new legislation dictated adjustments and new executive guidelines were announced.
While every department tried to run semi-automated, monthly clean-up jobs with scripts and some off-the-shelf software to fix its particular situation, the corporate (holding) company and any downstream consumers had no consistent basis for sensible decisions on where and how to invest – short of throwing another legion of bodies (by now over 100 FTEs in total) at the same problem.
So at every stage of the data flow from sources to the ERP to the operational BI and lastly the finance BI environment, people repeated the same tasks: profile, understand, move, aggregate, enrich, format and load.
Despite the departmental clean-up efforts, areas like production operations did not know with certainty (even after their clean-up) how many well heads and bores they had, where they were downhole, and who last changed a characteristic as mundane as the well name, and why (governance, location match).
Marketing (Trading) was surprisingly open about their issues. They could not process incoming, anchored crude shipments into inventory, or assess who owned the counterparty they sold to and what payment terms were appropriate given the associated credit or concentration risk (reference data, hierarchy management). As a consequence, operating cash accuracy was low despite ongoing improvements in the process, incurring opportunity cost.
Operational assets like rig equipment carried excess insurance coverage (location, operational data linkage), and fines paid to local governments for incorrectly filing or not renewing work visas were not returned for up to two years, incurring opportunity cost (employee reference data).
A big chunk of savings was locked up in unplanned NPT (non-production time), because inconsistent, incorrect well data triggered incorrect maintenance intervals. Similarly, OEM-specific DCS (drill control system) component software lacked a central reference data store and therefore did not trigger alerts before components failed. Add on top of that the missing linkage between data served by thousands of sensors via well logs and PI historians, and their ever-changing roll-ups for operations and finance, and the chaos is complete.
One approach we employed around NPT improvements was to take the revenue-from-production figure from their 10-K and combine it with the industry benchmark for the number of NPT days per 100 days of production (typically about 30% across average depths, onshore and offshore). Then you overlay a benchmark (if they do not know the actual figure) for how many of those NPT days were due to bad data rather than equipment failure or the like. Fix even a portion of that, and you get to big numbers.
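To make the arithmetic concrete, here is a minimal sketch in Python. The ~30% NPT benchmark is the one cited above; the revenue figure, the bad-data share and the fix rate are hypothetical placeholders, not client data.

```python
# NPT savings sketch -- only the ~30% NPT benchmark comes from the post;
# all other inputs are assumed for illustration.
production_revenue = 8_000_000_000   # assumed annual revenue from production (10-K)
npt_ratio = 0.30                     # benchmark: ~30 NPT days per 100 production days
bad_data_share = 0.20                # assumed fraction of NPT caused by bad data
fix_rate = 0.25                      # assumed portion of that realistically fixable

revenue_per_day = production_revenue / 365
recoverable_days = 365 * npt_ratio * bad_data_share * fix_rate
print(f"Estimated annual recovery: ${recoverable_days * revenue_per_day:,.0f}")
# ~ $120M per year with these placeholder inputs -- "big numbers" indeed
```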
When I sat back and looked at all the potential, it came to more than $200 million in savings over five years – and this before any sensor data from rig equipment, like the myriad of siloed applications running within a drill control system, is integrated and leveraged via a Hadoop cluster to influence operational decisions like drill string configuration or azimuth.
Next time I’ll share some insight into the results of my most recent utility engagement but I would love to hear from you what your experience is in these two or other similar industries.
Recommendations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control. Nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.
I’m glad to hear you feel comfortable explaining data to your friends, and I completely understand why you’ll avoid discussing metadata with them. You’re in great company – most business leaders also avoid discussing metadata at all costs! You mentioned during our last call that you keep reading articles in the New York Times about this thing called “Big Data” so as promised I’ll try to explain it as best I can. (more…)
Most in the software business will agree that it is tough enough to calculate, and hence financially justify, the purchase or build of an application – especially middleware – to a business leader or even a CIO. Most business-centric IT initiatives involve improving processes (order, billing, service) and visualization (scorecarding, trending) so that end users can engage accounts more efficiently. Some of these have actually migrated to targeting improvements toward customers rather than their logical placeholders, like accounts. Similar strides have been made in the realm of other party types (vendor, employee) as well as product data. They also tackle analyzing larger or smaller data sets and providing a visual set of clues for interpreting historical or predictive trends in orders, bills, usage, clicks, conversions, etc.
If you think this is a tough enough proposition in itself, imagine the challenge of quantifying the financial benefit derived from understanding where your “hardware” is physically located, how it is configured, and who maintained it, when and how. Depending on the business model you may even have to figure out who built it or owns it. All of this has bottom-line effects on how, by whom and when expenses are paid and revenues get realized and recognized. And then there is the added complication that these dimensions of hardware are often fairly dynamic, as assets can change ownership and/or physical location and hence tax treatment, insurance risk, etc.
Such hardware could be a pump, a valve, a compressor, a substation, a cell tower, a truck, or components within these assets. Over time, with new technologies and acquisitions coming about, the systems that plan for, install and maintain these assets become very departmentalized in scope and specialized in function. The application that designs an asset for department A or region B is not the same as the one accounting for its value, which is not the same as the one reading its operational status, which is not the one scheduling maintenance, which is not the same as the one billing for any repairs or replacement. The same folks who said the data warehouse was the “golden copy” now say the “new ERP system” is the central source for everything. Practitioners know that this is either naiveté or maliciousness. And then there are manual adjustments…
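As a hedged sketch of the asset record implied by the last two paragraphs – all field and class names here are hypothetical, not any product’s data model – the essential point is that ownership and location are time-varying histories, not single attributes, which is exactly what siloed applications each capture only a slice of:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class OwnershipRecord:
    owner: str
    effective: date  # a change here can shift tax treatment and insurance risk

@dataclass
class Asset:
    asset_id: str
    asset_type: str                        # pump, valve, compressor, substation, ...
    configuration: dict                    # model, firmware, settings
    locations: list = field(default_factory=list)        # (site, date) history
    owners: list = field(default_factory=list)           # OwnershipRecord history
    maintenance_log: list = field(default_factory=list)  # who, when, what

pump = Asset("P-1042", "pump", {"model": "X200"})
pump.owners.append(OwnershipRecord("Region B OpCo", date(2012, 5, 1)))
print(pump.owners[0].owner)  # the full history, not just the latest value, matters
```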
Moreover, to truly squeeze value out of these assets as they are installed and upgraded, the massive amounts of data they generate in a myriad of formats and intervals need to be understood, moved, formatted, fixed, interpreted at the right time, and stored for future use in a cost-sensitive, easy-to-access and contextually meaningful way.
I wish I could tell you one application does it all, but the unsurprising reality is that it takes a concoction of multiple applications. Few if any asset-life-cycle-supporting legacy applications will be retired, as they often house data in formats commensurate with the age of the assets they were built for. It makes little financial sense to shut down these systems in a big-bang approach; rather, migrate region after region and process after process to the new system. After all, some of the assets have been in service for 50 or more years, and the institutional knowledge tied to them is becoming nearly as old. Also, it is probably easier to make the often required manual data fixes (hopefully only outliers) bit by bit, especially to accommodate imminent audits.
So what do you do in the meantime, until all the relevant data is in a single system, to get an enterprise-level way to fix your asset tower of Babel and leverage the data volume rather than treat it like an unwanted stepchild? Most companies that operate asset-heavy, fixed-cost-heavy business models do not want to create a disruption but rather a steady tuning effect (squeezing the data orange) – something rather unsexy in this internet day and age. This is especially true in “older” industries where data is still considered a necessary evil, not an opportunity ready to be exploited. The fact is, though, that in order to improve the bottom line, we had better get going, even if it is with baby steps.
If you know of business models that struggle to leverage their data, write to me. If you know of an annoying, peculiar or esoteric data “domain” that does not lend itself to being easily leveraged, share your thoughts. Next time, I will share some examples of how certain industries try to work in this environment, what they envision and how they go about getting there.
Last week I described how Informatica Identity Resolution (IIR) can be used to match data from different lists or databases even when the data includes typos, translation mistakes, transcription errors, invalid abbreviations, and other errors. IIR has a wide range of use cases. Here are a few. (more…)
We’ve posted three compelling new articles to the Potential at Work for Information Leaders site, including:
- “Will the real Chief Data Officer please stand up?” Some question the need for a new C-level position, arguing that a company’s chief information officer should be the one to oversee an organization’s data. Others argue the CIO is stretched too thin already and a new type of leader must emerge. Where do you stand?
- “Introducing a ‘define once, govern everywhere’ data management style” The sanity afforded by defining data standards only once and applying them anywhere will create time to investigate innovative uses for that data. Information leaders will be much more successful if they spend less time managing projects to recode the same rules across every new application, and instead work with business partners to identify new information opportunities.
- “Rise of the machines: the Internet of Things” Are devices that track our every move poised to unlock new potential in humankind or are they just downright invasive? While privacy remains a critical consideration, this article illustrates the global potential if we can effectively leverage big data to harness the emerging Internet of Things.
For these articles and many more, check out the Potential at Work for Information Leaders community today. Available in nine languages, this site will continue to feature fresh, new ideas to promote the value of information management from a variety of top technology leaders.
When I talk to customers about dealing with poor data quality, I consistently hear something like, “We know we have data quality problems, but we can’t get the business to help take ownership and do something about it.” I think that this is taking the easy way out. Throwing your hands up in the air doesn’t make change happen – it only prolongs the pain. If you want to effect a positive change in data quality and are looking for ways to engage the business, then you should join Barbara Latulippe, Director of Enterprise Information Management for EMC, and Kristen Kokie, VP of IT Enterprise Strategic Services for Informatica, for our webinar on Thursday, October 24th, to hear how they have dealt with data quality in their combined 40+ years in IT.
Now, understandably, tackling data quality problems is no small undertaking, and it isn’t easy. In many instances, the reason organizations choose to do nothing about data quality is that bad data has been present for so long that manual workarounds have become ingrained in the business processes that consume data. In these cases, changing the way people do things becomes the largest obstacle to dealing with the root cause of the issues. But that is also where you will be able to find the costs associated with bad data: lost productivity, ineffective decision making, missed opportunities, etc.
As discussed in this previous webinar (link to the replay at the bottom of the page), successfully dealing with poor data quality takes initiative, and it takes communication. IT departments are the engineers of the business: they are the ones who understand processes and workflows; they are the ones who build the integration paths between applications and systems. Even if they don’t own the data, they do end up owning the data-driven business processes that consume it. As such, IT is uniquely positioned to provide customized suggestions based on the insight from multiple previous interactions with the data.
Bring facts to the table when talking to the business. As the people who interact with data daily, IT is in a position to measure and monitor data quality and to identify key data quality metrics; data quality scorecards and dashboards can shine a light on bad data and relate it directly to the business via downstream workflows and business processes. Armed with hard facts about the impact on specific business processes, a business user has an easier time putting a dollar value on the impact of that bad data. Here are some helpful resources where you can start to build your case for improved data quality. With these tools and insight, IT can start to effect change.
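As a small, hedged illustration of what such a scorecard metric might look like – generic Python with pandas, not any particular product’s tooling; the column names and the validity rule are hypothetical:

```python
import pandas as pd

# Sketch of two basic scorecard metrics: completeness and rule-based validity.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "email": ["a@x.com", None, "not-an-email", "d@y.com"],
    "postal_code": ["10001", "1000", None, "94105"],
})

completeness = customers.notna().mean()            # share of non-null values per column
valid_email_rate = customers["email"].str.match(   # crude validity rule, illustration only
    r"[^@\s]+@[^@\s]+\.[^@\s]+", na=False).mean()

print("Completeness per column:")
print(completeness)
print(f"Valid email rate: {valid_email_rate:.0%}")  # e.g. feeds a billing-process scorecard
```

Numbers like these, trended over time and tied to the downstream process they break, are the “hard facts” that make the dollar-value conversation with the business possible.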
Data is becoming the lifeblood of organizations and IT organizations have a huge opportunity to get closer to the business by really knowing the data of the business. While data quality invariably involves technological intervention, it is more so a process and change management issue that ends up being critical to success. The easier it is to tie bad data to specific business processes, the more constructive the conversation can be with the Business.
A data integration hub is a proven vehicle for providing a self-service model in which data is published and subscribed to by a variety of users. Those who deploy these environments for regulated and sensitive data need to think about data privacy and data governance during the design phase of the project.
In the data integration hub architecture, consider that sensitive data will be coming from different locations, from a variety of technology platforms, and certainly from systems managed by teams with a wide range of data security skills. How can you ensure data will be protected across such a heterogeneous environment – not to mention when data traverses national boundaries?
Then think about testing connectivity. If data needs to be validated in a data quality rules engine, truly testing this connectivity requires the capability to test with valid data. However, testers should not have access to or visibility into the actual data itself if it is classified as sensitive or confidential.
With a hub-and-spoke model, the rules are difficult to enforce if data is requested from one country and received in another. The opportunity for human error and potential data leakage increases exponentially. Rather than reading about a breach in the headlines, it may make sense to build preventative measures, spending the time and money to do the right thing from the onset of the project.
There are easy-to-implement technologies on the market designed to prevent this very type of exposure. This technology is called data masking, which includes data obfuscation, encryption and tokenization. Informatica’s Data Privacy solution, based on persistent and dynamic data masking options, can be deployed easily and quickly without the need to develop code or modify the source or target application.
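To illustrate the tokenization idea in the most generic terms – this is a concept sketch only, not Informatica’s implementation; the secret key and record layout are hypothetical:

```python
import hashlib
import hmac

# Persistent (deterministic) masking via keyed tokenization: the same input
# always yields the same token, so joins and referential integrity survive
# masking while the real value never reaches the tester.
SECRET_KEY = b"store-me-in-a-vault-and-rotate-me"  # hypothetical key

def tokenize(value: str) -> str:
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return "TOK-" + digest[:12]

record = {"name": "Jane Doe", "ssn": "123-45-6789", "policy": "P-889"}
masked = {**record, "name": tokenize(record["name"]), "ssn": tokenize(record["ssn"])}
print(masked)  # testers see stable tokens instead of sensitive values
```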
When developing your reference architecture for a data integration hub, incorporate sound data governance policies and build data privacy into the application upfront. Don’t wait for the headlines to include your company and someone’s personal data.
Whether you are establishing a new outsourced delivery model for your integration services or getting ready for the next round of contract negotiations with your existing supplier, you need a way to hold the supplier accountable – especially when it is an exclusive arrangement. Here are four key metrics that should be included in the multi-year agreement. (more…)