Stephan Zoder
Stephan Zoder has been the creative engine behind new go-to-market applications of old and new technologies for over a decade. He has held a variety of regional and global leadership positions in technical sales, professional services, business development and product strategy at a number of industry-leading software vendors in the MDM, CRM, SCM, BI and MRO space. He has worked with Fortune 500 and mid-sized companies alike to help IT and business executives deliver measurable value on a technology solution’s promise. Stephan also brings a wealth of industry knowledge to his role, including energy, telecommunications, healthcare, industrial manufacturing, retail, national and state government, aerospace and automotive. Prior to Informatica, Stephan was responsible for managing IBM’s MDM and industry data warehouse model portfolio strategy for a variety of sectors. In this capacity, his expertise contributed to Sunil Soares’ book “Selling Information Governance to the Business”, IBM’s masteringdatamanagement.com blog and an AMCIS 2011 paper on “NextGen Analytical MDM”. He now leads Informatica’s effort to assess, quantify, develop and deliver data management-based solutions to clients, acting as a trusted counsel and advocate to executives. Stephan holds a master’s degree in economic policy from George Washington University. Aside from his wife and four children, he is an avid kendoka and skier.

Becoming a Revenue Driven Business Model through Data is Painful for Government Agencies

Recently, I presented a Business Value Assessment to a client, a revenue-generating state government agency. Everyone at the presentation was stunned to learn how much money was being left on the table because the agency’s activities were not based on transactions that could be cleanly tied to the participating citizenry and a variety of channel partners. More than $38 million in annual benefits was at stake, spanning partially recovered lost revenue, cost avoidance and cost reduction. Giving data a bigger role in this revenue-driven business model could have prevented this.

Should government leaders go to the “School of Data” to understand where more revenue can be created without necessary tax hikes? (Source: creativecommons.org)

Given the agency’s total revenue volume, this may seem small. However, after factoring in how little technology effort is required to “collect and connect” data from existing transactions, the return is actually extremely high.

The real challenge for this organization will be the policy transformation required to turn it from “data-starved” into “data-intensive”. Today, strategic decisions about new products, locations and customers rely on surveys, which suffer from sampling errors, biases and the like. Surveys are also often delayed, making them practically ineffective in the real-time world we live in today.

Despite no applicable legal restrictions, the leadership’s main concern was that gathering more data would erode the public’s trust and positive image of the organization.

To be clear: by “more” data being collected by this type of government agency, I mean literally 10% of what any commercial retail entity has gathered on all of us for decades. This is not the next NSA revelation, as a conspiracy theorist might fear.

While I respect their culturally driven self-censorship in the absence of legal barriers, it fuels their stakeholders’ (the state’s citizenry’s) concerns about performance. To be clear: without more citizen data, there would be no additional revenue for the state’s programs. You may believe the state already knows everything about you, including your income, property value and tax information. However, inter-departmental sharing of information that is not relevant to criminal matters is legally constrained.

Another interesting finding from this evaluation was that they had no sense of the conversion rate of their email and social media campaigns. Impressions, click-throughs and hard/soft bounces were treated as more important than tracking who actually generated revenue.

This is a very market-driven organization compared to other agencies. It actually does try to measure itself like a commercial enterprise and attempts to change in order to generate additional revenue for state programs benefiting the citizenry. I can only imagine what non-revenue-generating agencies (local, state or federal) do in this respect.  Is revenue-oriented thinking something the DoD, DoJ or Social Security should subscribe to?

Think tanks and political pundits are now weighing the trade-off between bringing democracy to every backyard on the globe and its long-term budget ramifications. Given the U.S. federal debt level, the DoD is looking to reduce its active component to its lowest level in decades.

Putting the data bits and pieces together for revenue

A recent article in HBR explains that cost cutting has never sustained an organization’s growth over a longer period of time, but new revenue sources have. Is your company or government agency only looking at cost and personnel productivity?

Disclaimer:

Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations and benchmarks.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warranty or representation of success, either express or implied, is made.

 


Data: The Unsung Hero (or Villain) of every Communications Service Provider

The faceless hero of CSPs: Data

Analyzing current business trends illustrates how difficult and complex the Communications Service Provider (CSP) business environment has become. Clients expect high-quality, affordable content that can move between devices with minimal advertising or privacy concerns. A few recent examples illustrate this phenomenon:

  • Apple is working with Comcast/NBC Universal on a new converged offering
  • Vodafone purchased the Spanish cable operator, Ono, having to quickly separate the wireless customers from the cable ones and cross-sell existing products
  • Net neutrality has been scuttled in the US and upheld in the EU so now a US CSP can give preferential bandwidth to content providers, generating higher margins
  • Microsoft’s Xbox community collects terabytes of data every day making effective use, storage and disposal based on local data retention regulation a challenge
  • Expensive 4G LTE infrastructure investment by operators such as Reliance is bringing streaming content to tens of millions of new consumers

To quickly capitalize on “new” (often old, but unknown) data sources, there has to be a common understanding of the following; a minimal profiling sketch follows the list:

  • Where the data is
  • What state it is in
  • What it means
  • What volume and attributes are required to accommodate a one-off project vs. a recurring one
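
The profiling sketch mentioned above: it assumes a pandas DataFrame pulled from one hypothetical source system, and the column names are illustrative, not from any real CSP schema.

```python
# Minimal data-profiling sketch: where is the data and what state is it in?
# Assumes a pandas DataFrame loaded from one hypothetical source system;
# column names are illustrative only.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Per-column completeness, distinctness and an example value."""
    rows = []
    for col in df.columns:
        series = df[col]
        rows.append({
            "column": col,
            "completeness_pct": round(100 * series.notna().mean(), 1),
            "distinct_values": series.nunique(dropna=True),
            "example": series.dropna().iloc[0] if series.notna().any() else None,
        })
    return pd.DataFrame(rows)

subscribers = pd.DataFrame({
    "msisdn": ["4915112345678", None, "4915187654321"],
    "email": ["a@example.com", "b@example.com", None],
    "tariff": ["LTE-M", "LTE-M", "DSL-50"],
})
print(profile(subscribers))
```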

When a multitude of departments request data for analytical projects with their one-off, IT-unsanctioned on-premise or cloud applications, how will you go about it? The average European operator has between 400 and 1,500 (known) applications. Imagine what the unknown count is.

A European operator with 20-30 million subscribers incurs an average of $3 million per month in unpaid invoices, often the result of incorrect or incomplete contact information. Imagine how much you would have to add for lost productivity, including gathering, re-formatting, enriching, checking and re-sending invoices. And this does not even account for late invoice payments or incorrectly extended credit terms.

Think about all the wrong long-term conclusions being drawn from this flawed data. This single data problem creates indirect costs in excess of three times the initial, direct impact of the unpaid invoices.
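
A back-of-the-envelope version of that claim, for those who like to see the arithmetic: the $3 million monthly loss and the 3x indirect multiplier come from the figures above, while the annualization is simple multiplication.

```python
# Back-of-the-envelope model of the unpaid-invoice example above.
# The $3M/month direct loss and the 3x indirect multiplier come from the post;
# the rest is plain arithmetic.
direct_loss_per_month = 3_000_000   # unpaid invoices tied to bad contact data
indirect_multiplier = 3             # rework, late payments, wrong credit terms
months = 12

direct_annual = direct_loss_per_month * months
indirect_annual = direct_annual * indirect_multiplier

print(f"Direct annual loss:    ${direct_annual:,}")
print(f"Indirect annual cost:  ${indirect_annual:,}")
print(f"Total annual exposure: ${direct_annual + indirect_annual:,}")
```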

Want to fix your data and overcome the accelerating cost of change? Involve your marketing, CEM, strategy, finance and sales leaders to help them understand data’s impact on the bottom line.

Disclaimer: Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations and benchmarks. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warranty or representation of success, either express or implied, is made.


Is your social media investment hampered by your “data poverty”?

Why is nobody measuring the “bag of money” portion? Source: emediate.com

Recently, I talked with a company that had allocated millions of dollars for paid social media promotion. Their hope was that a massive investment in Twitter and Facebook campaigns would lead to “more eyeballs” for their online gambling sites. Although they had internal social media expertise, they lacked a comprehensive partnership with IT. In addition, they lacked a properly funded policy vision. As a result, when asked how much of their socially-driven traffic resulted in actual sales, their answer was a resounding “No Idea.” I attribute this to “data poverty.”

There is a key reason that they were unable to quantify the ROI of their promotion: their business model is, by design, “data poor.”  Although a great deal of customer data was available to them, they chose not to use it. They could have used available data to identify “known players” as well as individuals with “playing potential.” No law prohibited them from acquiring this data. However, they were uncomfortable obtaining a higher degree of attribution beyond name, address, e-mail and age.  They feared that customers would view them as a commercial counterpart to the NSA. As a result, key data elements like net worth, lifetime value, credit risk, location, marital status, employment status, number of friends/followers and property value were not considered when targeting potential users on social media. So, though the Social Media team considered this granular targeting a “dream-come-true,” others within the organization considered it too “1984.”

In addition to a hesitation to leverage available data, they were also limited by their dependence on a 3rd party IT provider. This lack of self-sufficiency created data quality issues, which limited their productivity. Ultimately, this dependency prevented them from capitalizing on new market opportunities in a timely way.

It should have been possible for them to craft a multi-channel approach. They ought to have been able to serve up promoted Tweets, banner ads and mobile application ads. They should have been able to track the click-through, IP and timestamp information from each one. They should have been able to make a BCR for redeeming a promotional offer at a retail location.

Strategic channel allocation would certainly have triggered additional sales. In fact, when we applied click-through, CAC and conversion benchmarks to their available transactional information, we modeled over $8 million in additional sales and $3 million in customer acquisition cost savings. In addition to the financial benefits, strategic channel allocation would have generated more data (and resulting insights) about their prospects and customers than they had when they began.
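
To make the mechanics concrete, here is a simplified version of the kind of channel funnel model that produces such estimates. Every rate and volume below is a placeholder assumption, not one of the benchmarks actually used in the assessment.

```python
# Simplified multi-channel funnel model of the type described above.
# All volumes and rates are placeholder assumptions, not the actual benchmarks.
channels = {
    # name: (impressions, click_rate, conversion_rate, avg_order_value, cac)
    "promoted_tweets": (5_000_000, 0.015, 0.020, 120.0, 35.0),
    "banner_ads":      (8_000_000, 0.004, 0.010, 120.0, 55.0),
    "mobile_app_ads":  (3_000_000, 0.020, 0.030, 120.0, 25.0),
}

for name, (imps, ctr, cvr, aov, cac) in channels.items():
    customers = imps * ctr * cvr
    revenue = customers * aov
    acquisition_spend = customers * cac
    print(f"{name:15s}  customers={customers:8,.0f}  "
          f"revenue=${revenue:11,.0f}  acquisition=${acquisition_spend:10,.0f}")
```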

But, because they were hesitant to use all the data available to them, they failed to capitalize on their opportunities. Don’t let this happen to you. Make a strategic policy change to become a data-driven company.

Beyond the revenue gains of targeted social marketing, there are other reasons to become a data-driven company. Clean data can help you correctly identify ideal channel partners. This company failed to use sufficient data to properly select and retain their partners.  Hundreds of channel partners were removed without proper, data-driven confirmation; reasons for removal included things like “death of owner,” “fire,” and “unknown”.  To ensure more thorough vetting, the company could have used data points like the owner’s age, past business endeavors and legal proceedings. They could also have studied location-fenced attributes like footfall, click-throughs and sales cannibalization risk. In fact, when we modeled the potential overall annual savings, across all business scenarios, of becoming a data-driven company, the amount approached $40 million.

Would $40 million in savings inspire you to invest in your data? Would that amount be enough to motivate you to acquire, standardize, deduplicate, link, hierarchically structure and enrich YOUR data? It’s a no-brainer. But it requires a policy shift to make your data work for you. Without this, it’s all just “potential”.
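
For readers who want a feel for what “standardize and deduplicate” means at the smallest possible scale, here is a toy sketch using only Python’s standard library. Real master data matching uses far richer rules (phonetics, address parsing, survivorship); this only shows the shape of the problem.

```python
# Toy standardize-and-deduplicate step, standard library only.
# Real MDM matching is far richer; this only illustrates the idea.
from difflib import SequenceMatcher

def standardize(name: str) -> str:
    """Lowercase, strip punctuation and collapse whitespace."""
    return " ".join(name.strip().lower().replace(".", "").split())

def is_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    return SequenceMatcher(None, standardize(a), standardize(b)).ratio() >= threshold

customers = ["Jon A. Smith", "jon a smith ", "Jonathan Smith", "J. Smyth"]
for i, a in enumerate(customers):
    for b in customers[i + 1:]:
        if is_duplicate(a, b):
            print(f"possible duplicate: {a!r} ~ {b!r}")
```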

Do you have stories about companies that recently switched from traditional operations to smarter, data-driven operations? If so, I’d love to hear from you.

Disclaimer: Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer  and on our observations and benchmarks.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warranty or representation of success, either express or implied, is made.


MDM for Utilities: Data Revives a 100 Year-Old Business Model

In both Europe and North America, there is something profoundly different about today’s utility bill. It goes well beyond the amount of money you owe for service that month: today’s bill is full of DATA. Perhaps it contains baseline analytics on usage and temperature over a twelve-month span. Perhaps it contains additional warranties for the power lines that run from the street to your property (typically not covered for repairs by the utility).  Recently, third-party companies have even been sending me mail offering fixed-rate payment plans to smooth out my cash flow between the customary projected (from last year) and actual (read) usage readings. Since smart meters are cropping up everywhere, I have been wondering why this “actual vs. plan” approach is still practiced, even in the most urban of environments.

Keep flipping-the-switch profitable. Source: thetyee.ca

In fact, a modern utility company has far more intricate data than a consumer sees on a bill. Behind the scenes, utilities leverage a plethora of data pools. Utility companies now have robust asset management, job order and scheduling systems. In addition, they use advanced analytics that monitor sensor data to predict maintenance needs. Most importantly, utilities run monthly analytics to prepare rate case requests with local regulators. These are then used to lock in new cost-plus structures for local and business billing in the years ahead.

Unfortunately, most of these applications sit in geographical or departmental silos, and only connect with each other in batch mode, if at all.  These silos make utilities susceptible to frequent, costly data clean-up projects, which invariably surface the utility’s shortcomings in data standardization, duplication, linkage and hierarchical structuring. However, until recently, few utilities were willing to invest to ensure that the newly cleaned data pools remained clean.

Enter MDM for Utilities

Master Data Management is the linchpin for resolving utility data issues. Without a clean, enriched, truthful picture of substation, breaker, valve, pump and line information, how can an operator adequately document the need for a rate hike? MDM can help answer questions like the following; a sketch of two such automated checks follows the list:

  • Was that breaker really installed back in 1900?
  • Or is the year 1900 simply the default date for this data field?
  • Does the substation design mirror what was actually installed?
  • Is the breaker physically located where it is supposed to be?
  • Am I paying maintenance for a breaker that is actually owned by another operator?
  • Why am I sending a crew to inspect equipment that was deemed in-working-order one month earlier?
  • Is the housing development meter really located where the installing contractor claims it was installed?
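
The sketch below covers two of these questions: the suspicious default install date and a planned-versus-recorded location mismatch. The record layout and field names are illustrative assumptions, not taken from any real utility asset system.

```python
# Sketch of two automated checks from the question list above: a default
# install date and a location mismatch. Record layout and field names are
# illustrative assumptions.
from datetime import date

SUSPECT_DEFAULT_DATES = {date(1900, 1, 1)}   # common "unknown" placeholder

def check_asset(asset: dict) -> list:
    issues = []
    if asset.get("install_date") in SUSPECT_DEFAULT_DATES:
        issues.append("install date looks like a system default, not a real value")
    planned, recorded = asset.get("planned_location"), asset.get("recorded_location")
    if planned and recorded and planned != recorded:
        issues.append(f"location mismatch: planned {planned!r}, recorded {recorded!r}")
    return issues

breaker = {
    "asset_id": "BRK-0042",
    "install_date": date(1900, 1, 1),
    "planned_location": "Substation 7 / Bay 3",
    "recorded_location": "Substation 7 / Bay 5",
}
for issue in check_asset(breaker):
    print(f"{breaker['asset_id']}: {issue}")
```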

Without MDM, utilities face all sorts of potential problems:

  1. Maintenance budgets can be either underfunded or overfunded
  2. Job vs bill requests can fail to align with local county delineations
  3. New housing construction can be significantly underbid

When a utility operator uses the wealth of data they possess to optimize their operations, they inevitably reap financial benefits. The utility company of the future invests in the maintained integrity of its data pool, rather than continually wasting cycles and bodies on quarterly data cleansing. To learn how your company can do the same, please register for our Utility Industry MDM webinar on April 1 at 10 AM PST. In the webinar, Informatica and Noah Consulting will address the use cases and financial value MDM can bring to the utility industry.

Disclaimer: Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations and benchmarks.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warranty or representation of success, either express or implied, is made.


Would YOU Buy a Ford Pinto Just To Get the Fuzzy Dice?

Today, I am going to take a stab at rationalizing why one could even consider solving a problem with a solution that is well known to be sub-par. Consider the Ford Pinto: would you choose this car for your personal, land-based transportation simply because of the new plush dice in the window? For my European readers, replace the Pinto with the infamous Trabant and you get my meaning.  The fact is, both of these vehicles made the list of the “worst cars ever built” due to their mediocre design, environmental hazards or plain poor safety records.

What is a Pinto-like buying decision in information technology procurement? (source: msn autos)

Rational people would never choose a vehicle this way. So I always ask myself, “How can IT organizations rationalize buying product X just because product Y is thrown in for free?” Consider the case in which an organization chooses its CRM or BPM system simply because the vendor throws in an MDM or data quality solution for free: can this be done with a straight face?  You often hear vendors claim that “everything in our house is pre-integrated”, “plug & play” or “we have accelerators for this.” I would hope that IT procurement officers have come to understand that these phrases don’t close a deal in a cloud-based environment, and even less so in an on-premise construct, which can never achieve this Nirvana unless it is customized to client requirements.

Anyone can see the logic in getting “2 for the price of 1.” However, as IT procurement organizations seek to shave a percentage off every deal, they can’t lose sight of this key fact:

Standing up software (configuring, customizing, maintaining) and operating it over several years requires CLOSE inspection and scrutiny.

Like a Ford Pinto, software cannot just be driven off the lot without a care, leaving you to worry only about changing the oil and filters at recommended intervals. Customization, operational risk and maintenance are significant costs, as all my seasoned padawans will know. If Pinto buyers had understood the Total Cost of Ownership before they made their purchase, they would have opted for Toyotas instead. Here is the bottom line:

If less than 10% of the overall requirements are solved by the free component
AND (and this is a big AND)
If less than 12% of the overall financial value is provided by the free component
Then it makes ZERO sense to select a solution based on freebie add-ons (the sketch below expresses this rule in code).
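
The 10% and 12% thresholds come straight from the rule above; the function name and the example inputs are illustrative assumptions.

```python
# The 10% / 12% rule-of-thumb above, expressed as a function.
# Thresholds come from the post; names and example inputs are illustrative.
def ignore_the_freebie(requirements_covered_pct: float,
                       financial_value_pct: float) -> bool:
    """True when the free add-on should NOT drive the buying decision."""
    return requirements_covered_pct < 10.0 and financial_value_pct < 12.0

# Example: the bundled add-on covers 6% of requirements and 5% of the value.
print(ignore_the_freebie(6.0, 5.0))   # True -> evaluate solutions on their own merit
```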

When an add-on component is of significantly lower-quality than industry leading solutions, it becomes even more illogical to rely on it simply because it’s “free.” If analysts have affirmed that the leading solutions have stronger capabilities, flexibility and scalability, what does an IT department truly “save” by choosing an inferior “free” add-on?

So just why DO procurement officers gravitate toward “free” add-ons, rather than high quality solutions? As a former procurement manager, I remember the motivations perfectly. Procurement teams are often measured by, and rewarded for, the savings they achieve. Because their motivation is near-term savings, long term quality issues are not the primary decision driver. And, if IT fails to successfully communicate the risks, cost drivers and potential failure rates to Procurement, the motivation to save up-front money will win every time.

Both sellers and buyers need to avoid these dances of self-deception, the “Pre-Integration Tango” and the “Freebie Cha-Cha”.  No matter how much you loved driving that Pinto or Trabant off the dealer lot, your opinion changed after you drove it for 50,000 miles.

I’ve been in procurement. I’ve built, sold and implemented “accelerators” and “blueprints.” In my opinion, 2-for-1 is usually a bad idea in software procurement. The best software is designed to make 1+1=3. I would love to hear from you if you agree with my above “10% requirements/12% value” rule-of-thumb.  If not, let me know what your decision logic would be.


Death of the Data Scientist: Silver Screen Fiction?

Maybe the word “death” is a bit strong, so let’s say “demise” instead.  Recently I read an article in the Harvard Business Review around how Big Data and Data Scientists will rule the world of the 21st century corporation and how they have to operate for maximum value.  The thing I found rather disturbing was that it takes a PhD – probably a few of them – in a variety of math areas to give executives the necessary insight to make better decisions ranging from what product to develop next to who to sell it to and where.

Who will walk the next long walk… (source: Wikipedia)

Don’t get me wrong – this is mixed news for any enterprise software firm helping businesses locate, acquire, contextually link, understand and distribute high-quality data.  The existence of such a high-value role validates product development but it also limits adoption.  It is also great news that data has finally gathered the attention it deserves.  But I am starting to ask myself why it always takes individuals with a “one-in-a-million” skill set to add value.  What happened to the democratization of software?  Why is the design starting point for enterprise software not always similar to B2C applications, like an iPhone app, i.e. simpler is better?  Why is it always such a gradual “Cold War” evolution instead of a near-instant French Revolution?

Why do development environments for Big Data not accommodate limited or existing skills instead of always catering to the most complex scenarios?  Well, the answer could be that the first customers are very large, very complex organizations with super-complex problems they have been unable to solve so far.  If analytical apps have become a self-service proposition for business users, data integration should be as well.  So why does access to a lot of fast-moving and diverse data require scarce Pig or Cassandra developers to get the data into an analyzable shape, and a PhD to query and interpret patterns?

I realize new technologies start with a foundation, and as they spread, supply will attempt to catch up to create an equilibrium.  However, this is about a problem which has existed for decades in many industries, such as the oil & gas, telecommunications, public and retail sectors. Whenever I talk to architects and business leaders in these industries, they chuckle at “Big Data” and tell me, “Yes, we got that – and by the way, we have been dealing with this reality for a long time.”  By now I would have expected that the skill (cost) side of turning data into meaningful insight would have been driven down more significantly.

Informatica has made a tremendous push in this regard with its “Map Once, Deploy Anywhere” paradigm.  I cannot wait to see what’s next – and I just saw something recently that got me very excited. Why, you ask? Because at some point I would like to see at least a business super-user pummel terabytes of transaction and interaction data into an environment (Hadoop cluster, in-memory DB…) and massage it so that his or her self-created dashboard gets them where they need to go.  This should include questions like “Where is the data I need for this insight?”, “What is missing and how do I get that piece in the best way?” and “How do I want it to look in order to share it?”  All that should be required is semi-experienced knowledge of Excel and PowerPoint to get your hands on advanced Big Data analytics.  Don’t you think?  Or do you believe that this role will disappear as quickly as it has surfaced?


Murphy’s First Law of Bad Data – If You Make A Small Change Without Involving Your Client – You Will Waste Heaps Of Money

I have not used my personal encounter with bad data management for over a year, but a couple of weeks ago I was compelled to revive it.  Why, you ask? Well, a complete stranger started to receive a friend of mine’s text messages – including mine – and it took days for him to detect it; a week later, nobody at this North American wireless operator had been able to fix it.  This coincided with a meeting I had with a European telco’s enterprise architecture team.  There was no better way to illustrate to them how a customer reacts, and the risk to their operations, when communication breaks down because just one tiny thing changes – say, his address (or, in the SMS case, some random SIM mapping – another type of address).

Imagine the cost of other bad data (thecodeproject.com)

In my case, I moved about 250 miles within the United States a couple of years ago, and this seemingly common experience triggered a plethora of communication screw-ups with every merchant a residential household engages with frequently: your bank, your insurer, your wireless carrier, your average retail clothing store, etc.

For more than two full years after my move to a new state, the following things continued to pop up on a monthly basis due to my incorrect customer data:

  • My old satellite TV provider got to me (the correct person), but with a misspelled last name, at my correct new address.
  • My bank put me in a bit of a pickle: they sent “important tax documentation” that I did not want to open, as my new tenants’ names (in the house I had just vacated) were on the letter, but with my new home’s address.
  • My mortgage lender sends me a refinancing offer to my new address (right person & right address) but with my wife’s as well as my name completely butchered.
  • My wife’s airline, where she enjoys the highest level of frequent flyer status, continually mails her offers duplicating her last name as her first name.
  • A high-end furniture retailer sends two 100-page glossy catalogs probably costing $80 each to our address – one for me, one for her.
  • A national health insurer sends “sensitive health information” (disclosed on envelope) to my new residence’s address but for the prior owner.
  • My legacy operator turns on the wrong premium channels on half my set-top boxes.
  • The same operator sends me an SMS the next day thanking me for switching to electronic billing as part of my move, which I did not sign up for, followed by payment notices (as I did not get my invoice in the mail).  When I called this error out for the next three months by phoning their contact center and indicating how much revenue I generate for them across all services, they countered with “sorry, we don’t have access to the wireless account data”, “you will see it change on the next bill cycle” and “you show as paper billing in our system today”.

Ignoring the potential for data privacy lawsuits, you start wondering how long you have to be a customer, and how much money you need to spend with a merchant (and how much they need to waste), before they take changes to your data more seriously.  And these are not even merchants to whom I am brand new – these guys have known me and taken my money for years!

One thing I nearly forgot: these mailings all happened at least once a month on average, sometimes twice, over two years.  If I do some pigeon math here, the postage and production cost alone would run into the hundreds of dollars.

However, the most egregious trespass belonged to my homeowner’s insurance carrier (HOI), which was also my mortgage broker.  They had a double whammy in store for me.  First, I received a cancellation notice from the HOI for my old residence, indicating they had cancelled my policy because the last payment was not received and that any claims would be denied as a consequence.  Then, my new residence’s HOI advised that they had added my old home’s HOI to my account.

After wondering what I could have possibly done to trigger this, I called all four parties (not three as the mortgage firm did not share data with the insurance broker side – surprise, surprise) to find out what had happened.

It turns out that I had to explain and prove to all of them how one party’s data change during my move erroneously exposed me to liability.  It felt like the old days, when seedy telco salespeople needed only your name and phone number, associated with some promotion you never took part in (the back of a raffle card to win a new car), to switch your long-distance carrier and present you with a $400 bill the following month.  Yes, that also happened to me…many years ago.  Here again, the consumer had to do all the legwork when someone (not an automated process!) switched some entry without any oversight or review, triggering hours of wasted effort on their side and mine.

We can argue all day long about whether these screw-ups are due to bad processes or bad data, but in reality even processes are triggered by some underlying event – something as mundane as a database field’s flag being updated when your last purchase puts you in a new marketing segment.

Now imagine you get married and your wife changes her name. With all the company-internal (CRM, billing, ERP), free public (property tax), commercial (credit bureaus, mailing lists) and social media data sources out there, you would think such everyday changes could be picked up more quickly and automatically.  If not automatically, then should there not be some sort of trigger to kick off a “governance” process – something along the lines of “email/call the customer if attribute X has changed” or “please log into your account and update your information – we heard you moved”?  If American Express was able to detect ten years ago that someone purchased $500 worth of product with your credit card at a gas station or some lingerie website known for fraudulent activity, why not your bank or insurer, who know even more about you? And yes, that happened to me as well.
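
A minimal sketch of such a trigger, assuming customer records as plain dictionaries; the watched attributes and the notification action are assumptions for illustration, not a description of any particular product.

```python
# Minimal sketch of "kick off a governance step when attribute X changes".
# Watched attributes and the notification action are illustrative assumptions.
WATCHED_ATTRIBUTES = {"last_name", "address", "marital_status"}

def detect_changes(old: dict, new: dict) -> dict:
    """Return the watched attributes whose values differ between two records."""
    return {k: (old.get(k), new.get(k))
            for k in WATCHED_ATTRIBUTES if old.get(k) != new.get(k)}

def on_customer_update(old: dict, new: dict) -> None:
    changes = detect_changes(old, new)
    if changes:
        # Placeholder action: a real implementation might open a data
        # stewardship task or email the customer to confirm the change.
        print(f"Governance review for {new.get('customer_id')}: {changes}")

on_customer_update(
    {"customer_id": "C-123", "last_name": "Zoder", "address": "1 Old Town Rd"},
    {"customer_id": "C-123", "last_name": "Zoder", "address": "99 New City Ave"},
)
```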

Tell me about one of your “data-driven” horror scenarios.


Understand Customer Intentions To Manage The Experience

I recently had a lengthy conversation with a business executive of a European telco.  His biggest concern was to not only understand the motivations and related characteristics of consumers but to accomplish this insight much faster than before.  Given available resources and current priorities this is something unattainable for many operators.

Unlike a few years ago – remember the time before the iPad – his organization today is awash with data points from millions of devices, hundreds of device types and many applications.

What will he do next?

One way for him to understand consumer motivation, and therefore intentions, is to get a better view of a user’s network and all related interactions and transactions.  This includes the family household, friends and business network (also a type of household).  The purpose of householding is to capture social and commercial relationships in a grouping of individuals (or businesses, or both mixed together) in order to identify patterns (context) that can be exploited to better serve a customer: an individual product or bundle upsell, or pushing relevant apps, audio and video content.
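
One way to picture householding is as grouping subscribers connected by declared or observed relationships. The sketch below does this with a small union-find over relationship pairs; the pairs themselves are invented for illustration.

```python
# Householding sketch: group subscribers connected by relationships into one
# household (connected components via union-find). Pairs are invented.
def build_households(relationships):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in relationships:
        union(a, b)

    households = {}
    for member in parent:
        households.setdefault(find(member), set()).add(member)
    return households

links = [("anna", "ben"), ("ben", "carl"), ("dora", "emil")]
print(build_households(links))   # two households: {anna, ben, carl}, {dora, emil}
```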

Let’s add another layer of complexity: understanding not only who a subscriber is, whom he knows, how often he interacts with these contacts and which services he can access via one or more devices, but also where he physically is at the moment he interacts.  You may also combine this with customer service and (summarized) network performance data to understand who is high-value, high-overhead and/or high in customer experience.  Most importantly, you will also be able to assess who will do what next, and why.

Some of you may be thinking, “Oh gosh, the next NSA program in the making.”   Well, it may sound like it, but the reality is that this data is out there today, available and interpretable, if it is cleaned up, structured, linked and served in real time.  Data quality, ETL, analytical and master data systems provide the data backbone for this reality, while process-based systems dealing with the systematic, real-time engagement of consumers are the tools that make it actionable.  If you add some privacy rules using database- or application-level masking technologies, most of us would feel more comfortable about this proposition.

This may feel like a massive project but, as with many things in IT life, it depends on how you scope it.  I am a big fan of incrementally mastering more and more attributes for certain customer segments, business units or geographies, where lessons learnt can be replicated over and over to scale.  Moreover, I am a big fan of figuring out what you are trying to achieve before even attempting to tackle it.

The beauty of a “small” data backbone – more about “small data” in a future post – is that if a certain concept does not pan out in terms of effort or result, you have wasted only a small pile of cash instead of $2 million on a complete throw-away.  For example: if you initially decided that the central linchpin in your household hub-and-spoke is the person who owns the most contracts with you, rather than the person who pays the bills every month or the one with the largest average monthly bill, moving to an alternative perspective does not impact all services, all departments and all clients.  Nevertheless, the role of each user in the network must be defined over time to achieve context, i.e. who is a contract signee, who is a payer, who is a user, who is an influencer, who is an employer, etc.
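
One way to keep that decision cheap to change is to make the “who is the hub” rule pluggable, as in the sketch below; the member attributes and rule names are invented for illustration.

```python
# Sketch of a pluggable "household hub" rule, so switching from "owns the most
# contracts" to "pays the bills" is a configuration change, not a rework.
# Member attributes and rule names are invented for illustration.
HUB_RULES = {
    "most_contracts": lambda members: max(members, key=lambda m: m["contracts"]),
    "bill_payer":     lambda members: next(m for m in members if m["is_payer"]),
    "largest_bill":   lambda members: max(members, key=lambda m: m["avg_bill"]),
}

def household_hub(members, rule="most_contracts"):
    return HUB_RULES[rule](members)

household = [
    {"name": "Maria", "contracts": 4, "is_payer": False, "avg_bill": 85.0},
    {"name": "Tom",   "contracts": 1, "is_payer": True,  "avg_bill": 190.0},
]
print(household_hub(household)["name"])                 # Maria
print(household_hub(household, "bill_payer")["name"])   # Tom
```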

Why is this important to a business? Because without knowing who consumes, who pays for and who influences the purchase or change of a service or product, how can one create the right offers and target them at the right individual?

However, in order to make this initial call about household definition and scope, or to look at the options that are available and sensible, you have to consider social and cultural conventions, what you are trying to accomplish commercially, and your current data set’s ability to achieve anything without a massive enrichment program.  A couple of years ago, at a Middle Eastern operator, it was very clear that the local patriarchal society dictated that the center of this hub-and-spoke model was the oldest, non-retired male in the household, as all contracts, down to those of children of cousins, would typically run under his name.  The goal was to capture extended family relationships more accurately and completely in order to create and sell new family-type bundles for greater market penetration and to maximize usage given new bandwidth capacity.

As a parallel track, aside from further rollout to other departments, customer segments and geographies, you may also want to start thinking like another European operator I engaged with a couple of years ago.  They were outsourcing some data validation and enrichment to their subscribers, which allowed for more accurate and timely capture of changes, often lifestyle changes (moves, marriages, new jobs).  The operator could then offer new bundles and roaming upsells. As a side effect, it also created a sense of empowerment and engagement in the client base.

I see bits and pieces of this being used when I switch on my home communication systems, running a broadband signal through my Xbox or set-top box into my TV for Netflix, Hulu and gaming.  Moreover, a US cable operator actively promotes a “moving” package to help make sure you do not miss a single minute of entertainment when relocating.

Every time I switch on my TV now, I get content suggested to me.  If telecommunication services were a bit more competitive in the US (an odd thing to say in every respect) and prices came down to European levels, I would actually take advantage of the offer.  And then there is the log-on pop-up asking me to subscribe to (or troubleshoot) a channel I have already subscribed to.  I wonder who, or what automated process, switched that flag.

Ultimately, there cannot be a good customer experience without understanding customer intentions.  I would love to hear stories from other practitioners about what they have seen in this respect.


Where Is My Broadband Insurance Bundle?

As I continue to counsel insurers about master data, they all agree immediately that it is something they need to get their hands around fast.  In a workshop at any carrier – no matter if life, P&C, health or excess – every participant raises a hand when I ask, “Do you have a broadband bundle at home for internet, voice and TV, as well as wireless voice and data?”, followed by “Would you want your company to be the insurance version of this?”

Buying insurance like broadband

Now let me be clear: while communication service providers offer very sophisticated bundles, they are also still grappling with a comprehensive view of a client across all services (data, voice, text, residential, business, international, TV, mobile, etc.) and each of their touch points (website, call center, local store).  They are also miles away from including any sort of meaningful network data (jitter, dropped calls, failed call setups, etc.).

Similarly, my insurance investigations typically touch most of the frontline consumer (business and personal) contact points, including agencies, marketing (incl. CEM & VOC) and the service center.  Across all of these we typically see a significant lack of productivity, given that policy, billing, payments and claims systems are service-line specific, while supporting functions from lead development and underwriting to claims adjudication often handle more than one type of claim.

This lack of performance is worsened by sub-optimal campaign response and conversion rates.  As touchpoint-enabling CRM applications also suffer from a lack of complete or consistent contact preference information, interactions may violate local privacy regulations. In addition, service centers may capture leads only to log them into a black-box AS/400 policy system, where they disappear.

Here again we often hear that the fix could just happen by scrubbing data before it goes into the data warehouse.  However, the data typically does not sync back to the source systems, so any interaction with a client via chat, phone or face-to-face will not have real-time, accurate information to execute a flawless transaction.

On the insurance IT side we also see enormous overhead: from scrubbing every database from source via staging to the analytical reporting environment every month or quarter, to one-off clean-up projects for the next acquired book of business.  For a mid-sized, regional carrier (ca. $6B net premiums written) we find an average of $13.1 million in annual benefits from a central customer hub.  This translates into an ROI of between 600% and 900%, depending on requirement complexity, distribution model, IT infrastructure and service lines.  The number includes some baseline revenue improvements, productivity gains, and cost avoidance as well as cost reduction.
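
For those who want to see the arithmetic, the sketch below backs an implied project cost out of the benefit and ROI figures above. The annual benefit and ROI range come from the text; the three-year horizon and the net-benefit-over-cost definition of ROI are assumptions for illustration.

```python
# ROI arithmetic behind the figures above. Annual benefit and ROI range come
# from the post; the three-year horizon and ROI definition (net benefit over
# cost) are illustrative assumptions.
annual_benefit = 13_100_000
years = 3

for roi_pct in (600, 900):
    # ROI = (total_benefit - cost) / cost  =>  cost = total_benefit / (1 + ROI)
    implied_cost = (annual_benefit * years) / (1 + roi_pct / 100)
    print(f"{roi_pct}% ROI over {years} years implies a project cost of "
          f"about ${implied_cost:,.0f}")
```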

On the health insurance side, my clients have complained about regional data sources contributing incomplete (often driven by local process and law) and incorrect data (name, address, etc.) to untrusted reports from membership, claims and sales data warehouses.  This makes budgeting for items like nurse-staffed medical advice lines, sales compensation planning and even identifying high-risk members (now driven by the Affordable Care Act) a true mission impossible, and it makes the life of the pricing teams challenging.

Over in the life insurance category, whole and universal life plans now face a situation where high-value clients first saw lower-than-expected yields, due to the low-interest-rate environment, on top of front-loaded fees and the front-loading of the cost of the term component.  Now, as bonds are forecast to decrease in value in the near future, publicly traded carriers will likely be forced to sell bonds before maturity to make good on term life commitments and whole life minimum-yield commitments to keep policies in force.

This means that insurers need a full profile of clients as they experience life changes such as a move, the loss of a job, a promotion or a birth.   Such changes require the proper mitigation strategy, which can be employed to protect a baseline of coverage in order to maintain or improve the premium.  This can range from splitting term from whole life to using managed investment portfolio yields to temporarily pad premium shortfalls.

Overall, without a true, timely and complete picture of a client and his or her personal and professional relationships over time – and of which strategies were presented, considered appealing and ultimately put in force – how will margins improve?  Surely, social media data can help here, but it should be a second step after mastering what is already available in-house.  What are some of your experiences with how carriers have tried to collect and use core customer data?

Disclaimer:
Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control, and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.