Tag Archives: customer

Where Is My Broadband Insurance Bundle?

As I continue to counsel insurers about master data, they all agree immediately that it is something they need to get their arms around fast. If you ask participants in a workshop at any carrier, no matter whether life, P&C, health or excess lines, they all raise their hands when I ask, “Do you have a broadband bundle at home for internet, voice and TV as well as wireless voice and data?”, followed by “Would you want your company to be the insurance version of this?”

Buying insurance like broadband

Now let me be clear: while communication service providers offer very sophisticated bundles, they are still grappling with a comprehensive view of a client across all services (data, voice, text, residential, business, international, TV, mobile, etc.) and each of their touch points (website, call center, local store). They are also miles away from including any sort of meaningful network data (jitter, dropped calls, failed call setups, etc.).

Similarly, my insurance investigations typically touch most of the frontline consumer (business and personal) contact points, including agencies, marketing (incl. CEM & VOC) and the service center. Across all of these we typically see a significant lack of productivity, given that policy, billing, payments and claims systems are service-line specific, while supporting functions from lead development and underwriting to claims adjudication often handle more than one type of claim.

This lack of performance is worsened by sub-optimal campaign response and conversion rates. Because touchpoint-enabling CRM applications also lack complete or consistent contact preference information, interactions may violate local privacy regulations. In addition, service centers may capture leads only to log them into a black-box AS/400 policy system, where they disappear.

Here again we often hear that the fix could simply be to scrub data before it goes into the data warehouse. However, that data typically does not sync back to the source systems, so an interaction with a client via chat, phone or face-to-face will not have real-time, accurate information to execute a flawless transaction.

On the insurance IT side we also see enormous overhead, from scrubbing every database from source via staging to the analytical reporting environment every month or quarter, to one-off clean-up projects for the next acquired book of business. For a mid-sized, regional carrier (ca. $6B net premiums written) we find an average of $13.1 million in annual benefits from a central customer hub. This figure translates into an ROI of 600-900%, depending on requirement complexity, distribution model, IT infrastructure and service lines, and includes baseline revenue improvements, productivity gains, and cost avoidance as well as cost reduction.
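To make the relationship between these figures concrete, here is a rough back-of-the-envelope sketch; the implementation cost below is an assumed placeholder, and only the $13.1 million annual benefit comes from the analysis above.

    # Rough ROI illustration; the hub cost is an assumed placeholder,
    # only the annual benefit figure comes from the analysis above.
    annual_benefit = 13_100_000        # estimated annual benefit for a ~$6B NPW carrier
    assumed_hub_cost = 1_600_000       # hypothetical cost of the customer hub program

    roi = (annual_benefit - assumed_hub_cost) / assumed_hub_cost
    print(f"ROI: {roi:.0%}")           # ~719% with these assumptions, inside the 600-900% range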

On the health insurance side, my clients have complained about regional data sources contributing incomplete (often driven by local process and law) and incorrect data (name, address, etc.) to untrusted reports from membership, claims and sales data warehouses. This makes budgeting for items like nurse-staffed medical advice lines, sales compensation planning, and even identifying high-risk members (now driven by the Affordable Care Act) a true mission impossible, and makes life challenging for pricing teams.

Over in the life insurance category, whole and universal life plans now face a situation where high-value clients first saw lower-than-expected yields due to the low-interest-rate environment, on top of front-loaded fees and the front-loaded cost of the term component. Now, as bonds are forecast to decrease in value in the near future, publicly traded carriers will likely be forced to sell bonds before maturity to make good on term life commitments and whole life minimum-yield commitments in order to keep policies in force.

This means that insurers need a full profile of clients as they experience life changes such as a move, a job loss, a promotion or a birth. Such changes call for the proper mitigation strategy to protect a baseline of coverage while maintaining or improving the premium, which can range from splitting term from whole life to using managed investment portfolio yields to temporarily pad premium shortfalls.

Overall, without a true, timely and complete picture of a client, his or her personal and professional relationships over time, and which strategies were presented, found appealing and ultimately put in force, how will margins improve? Social media data can certainly help here, but it should be a second step after mastering what is already available in-house. What are some of your experiences with how carriers have tried to collect and use core customer data?

Disclaimer:
Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer depends upon a variety of factors, many of which are not under Informatica’s control; nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.

Time-to-Shop is a New Indicator for (R)etailers

Time-to-shop is a new indicator for (r)etailers. It describes how long it takes to make products available for ecommerce sales, or at any touchpoint where customers make purchasing decisions or start their product search. Time-to-shop can be seen as a measurable key performance indicator (KPI) and a subset of the general time-to-market discussion.

Time-to-market has always been a business-critical factor, but today manufacturers and retailers look at it more specifically: what are the factors impacting time-to-market? Time-to-shop defines how long it takes to create a product description with all the product information required to make the product ready for presentation in a web shop such as Demandware, Intershop, IBM WebSphere Commerce, Oracle ATG Web Commerce or Oxid eSales.

The business processes and parties that contribute or approve product content are complex and, in internationally operating enterprises, often distributed. This product information supply chain touches several internal and external people and departments, such as suppliers, translators, marketing, product management, agencies, purchasing and more.

More and more companies have defined product data quality as a strategic asset to differentiate themselves in the market. Paul Barron, Business Consultant at Halfords, said: “Product information makes all the difference when it comes to making a purchase decision, whether buying online or in-store.” The Halfords Group is the UK’s leading retailer of automotive, leisure and cycling products and, through Halfords Autocentres, also one of the UK’s leading independent car servicing and repair operators.

Product content is seen as a differentiator by Tom Davis, Global Head of Ecommerce at Puma, whom I met at a conference recently. Tom told me that data quality is important homework, but three things are fundamental:

  1. Speed to market
  2. Getting the product information out to the market
  3. Being quick before margins drop

All three points have one thing in common: speed. Alexander Pischetsrieder from sports apparel retailer SportScheck (by the way, Ventana Award winner for Information Management in 2013) fully agreed with these factors when I interviewed him for the case study.

Therefore our customers build their data governance model to enforce their corporate product data quality standards through business processes. A rich product information standard for ecommerce may consist of:

  1. Product name
  2. SEO description
  3. Color snippets
  4. Images: minimum of 4
  5. A 360 view file
  6. Product video, minimum 1
  7. USP value proposition text element
  8. Attributes like weight and length
  9. Cross-sell / minimum of 1 item
  10. Up-sell / minimum of 1 item
  11. Rating/ review

Of course price as well, and much more – these elements are examples, based on some customers I talked to recently. If you are selling products, you know your market and will be able to define what is relevant and important to your customers.
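Purely as an illustration, here is a minimal sketch of how such a standard could be checked programmatically before a product is released to the shop; the field names and minimum counts are assumptions for the example, not a prescribed schema.

    # Minimal completeness check for an ecommerce product record.
    # Field names and minimum counts are illustrative assumptions.
    REQUIRED_RULES = {
        "name":        lambda p: bool(p.get("name")),
        "seo_text":    lambda p: bool(p.get("seo_text")),
        "images":      lambda p: len(p.get("images", [])) >= 4,
        "videos":      lambda p: len(p.get("videos", [])) >= 1,
        "usp_text":    lambda p: bool(p.get("usp_text")),
        "cross_sells": lambda p: len(p.get("cross_sells", [])) >= 1,
        "up_sells":    lambda p: len(p.get("up_sells", [])) >= 1,
    }

    def missing_content(product):
        """Return the rules a product record still fails."""
        return [rule for rule, check in REQUIRED_RULES.items() if not check(product)]

    product = {"name": "Trail Runner 2", "images": ["front.jpg"], "cross_sells": ["socks-01"]}
    print(missing_content(product))    # ['seo_text', 'images', 'videos', 'usp_text', 'up_sells']

A product only becomes shoppable once this list is empty, which is exactly what a time-to-shop measurement would track.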

Time-to-shop is an emerging indicator for businesses. But which company can measure this key performance indicator today? I am happy to discuss it with you.

The enormous growth rate of e-commerce also means increasing demands on data quality in the web shop. In this context, the length of time from discovering a fault to its remedy is an important key performance indicator for the web shop, as is the time to create new products or integrate new assortments from suppliers.
The survey at www.pim-roi.com states that the use of PIM shortens this period significantly: the time to change products at an e-commerce site drops from an average of four hours to one hour, a 75% reduction.
Implementing product information management is not only an IT architecture project. It is an enabler for effective and rapid process chains, from supplier adoption through multichannel commerce, that achieve return on investment (ROI).


Matching for Management: 20 Common Data Errors and Variation

A good friend of mine’s husband is a sergeant on the Chicago police force. Recently a crime was committed, and a witness insisted that the perpetrator was a woman with blond hair, about five foot nine, weighing 160 pounds. She was wearing a gray pinstriped business suit with an Armani scarf and carrying a Gucci handbag. (more…)
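The theme of the post is matching records despite exactly this kind of error and variation; here is a minimal sketch of fuzzy matching using only the Python standard library (the names and the 0.85 threshold are illustrative assumptions, not part of any product).

    # Minimal fuzzy-matching sketch using only the standard library.
    # Names and the 0.85 threshold are illustrative assumptions.
    from difflib import SequenceMatcher

    def similarity(a, b):
        return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

    candidates = ["Jon Smith", "John Smyth", "Joan Smithe", "Jane Doe"]
    query = "John Smith"

    for name in candidates:
        score = similarity(query, name)
        print(f"{name:12} {score:.2f}", "MATCH" if score >= 0.85 else "")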


Just Ask the Customer

I grabbed my wife’s Harvard Business Review (HBR Jan-Feb 2012) edition before a recent plane ride to a customer meeting. After diving through a bunch of case-study-type narratives, I ended up in a section titled “Stop Collecting Customer Data” (page 57), which was part of HBR’s “Audacious Ideas” series. This series was aimed at showcasing some proclaimed thought leaders’ very forward-thinking and, in my opinion, also some rather ill-guided ideas full of naïveté. (more…)


Master Data Model Alternatives – Part 2

Last time I introduced two different approaches for master data models and thought it would be worth examining the differences in greater detail.

The first approach is to use pre-packaged core models provided by a vendor as part of an overall MDM suite of tools. Often these types of products evolved out of industry applications in which a common information model was used to support specific types of enterprise applications. For example, a vendor might have analyzed the property and casualty insurance industry and developed core data models for customer, policy, claim, service, financial products, etc. A set of application layers may have been developed on top of these models to implement common workflows (customer risk rating for establishing premium rates, or initiating a claim). However, there is a perception that aspects of those industry-oriented models can be segregated into a more universal format, which can become the starting point for a prepackaged master domain. (more…)
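For illustration only, the sketch below shows roughly what such a segregated, more universal core might reduce to; the entity and attribute names are assumptions for the example, not any vendor’s actual model.

    # Illustrative core master-data entities for a P&C-style domain.
    # Entity and attribute names are assumptions, not a vendor model.
    from dataclasses import dataclass, field

    @dataclass
    class Party:                    # the universal "customer" core
        party_id: str
        name: str
        addresses: list = field(default_factory=list)

    @dataclass
    class Agreement:                # generalizes "policy" across lines of business
        agreement_id: str
        holder: Party
        product_code: str

    @dataclass
    class Claim:
        claim_id: str
        agreement: Agreement
        status: str = "OPEN"

Industry-specific layers (risk rating, claim initiation workflows and so on) would then sit on top of these shared structures.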


Data Governance and Technical Issues

In contrast to the management and process issues, we might say that the technical issues are actually quite straightforward to address. In my original enumeration from a few posts back, I ordered the data issue categories in reverse order of the complexity of their solution. Model and information architecture problems are the most challenging because of the depth to which business applications depend on their underlying models. Even simple changes require significant review to make sure that no expected capability is inadvertently broken. (more…)
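A trivial, hypothetical example of why even a simple model change needs that review: renaming one attribute silently breaks every consumer still coded against the old name (the field names below are made up).

    # Hypothetical example: a "simple" rename in the customer model.
    old_record = {"cust_name": "Acme Corp", "cust_id": "C-100"}
    new_record = {"customer_name": "Acme Corp", "customer_id": "C-100"}  # after the change

    def billing_report(record):
        # downstream application still coded against the old model
        return f"Invoice for {record['cust_name']}"

    print(billing_report(old_record))       # works
    try:
        print(billing_report(new_record))   # the dependency surfaces only at run time
    except KeyError as exc:
        print(f"Report broken by the model change: missing {exc}")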


What is Change Data Capture? Something Business and IT Both Agree on for Mainframe Data Integration

A few days ago, I got a text message from a friend telling me that my favorite company’s stock price was suddenly tanking and that I should dump my holding. So I went to a news portal to get a quote and see where the stock price stood, and found that the stock hadn’t moved much at all. Thinking it might have been a prank text message, I ignored it. To my dismay, the stock quote I saw was delayed by 20 minutes, and the decline wasn’t yet reflected in the news portal. (more…)
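Change data capture addresses exactly that kind of staleness: instead of re-reading an entire source on a schedule, only the changes recorded in the source’s log since the last read are propagated to the target. A minimal conceptual sketch (the in-memory change_log stands in for a real database or mainframe log; names are illustrative):

    # Conceptual change-data-capture sketch: replay only new log entries.
    # The in-memory change_log stands in for a real database/mainframe log.
    change_log = [
        {"lsn": 1, "op": "INSERT", "key": "ACCT-1", "value": {"balance": 100}},
        {"lsn": 2, "op": "UPDATE", "key": "ACCT-1", "value": {"balance": 80}},
        {"lsn": 3, "op": "DELETE", "key": "ACCT-2", "value": None},
    ]

    target = {"ACCT-2": {"balance": 55}}    # replica before the changes arrive
    last_applied_lsn = 0

    for entry in change_log:
        if entry["lsn"] <= last_applied_lsn:
            continue                        # already replicated
        if entry["op"] == "DELETE":
            target.pop(entry["key"], None)
        else:
            target[entry["key"]] = entry["value"]
        last_applied_lsn = entry["lsn"]

    print(target)   # {'ACCT-1': {'balance': 80}} -- in sync without a full reload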


Hadoop Enriches Data Science: Part 2 Of Hadoop Series

Enterprises use Hadoop in data-science applications that improve operational efficiency, grow revenues or reduce risk. Many of these data-intensive applications use Hadoop for log analysis, data mining, machine learning or image processing.

Commercial, open-source or internally developed data-science applications have to tackle a lot of semi-structured, unstructured or raw data. They benefit from Hadoop’s combination of storage and processing on each data node spread across a cluster of cost-effective commodity hardware. Hadoop’s lack of a fixed schema works particularly well for answering ad-hoc queries and exploratory “what if” scenarios.
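As one concrete example of the log-analysis use case, a Hadoop Streaming job can run plain scripts over raw access logs with no schema defined up front. The mapper and reducer below are a minimal, illustrative sketch (the log layout, with the HTTP status as the ninth whitespace-separated field, is an assumption based on the common log format).

    # mapper.py -- emit one count per HTTP status code in a raw access log
    import sys

    for line in sys.stdin:
        fields = line.split()
        if len(fields) > 8:
            print(f"{fields[8]}\t1")

    # reducer.py -- sum the counts per status code (input arrives sorted by key)
    import sys
    from itertools import groupby

    pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
    for status, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{status}\t{sum(int(count) for _, count in group)}")

The same pair can be tested locally with a shell pipeline (cat access.log | python mapper.py | sort | python reducer.py) before handing it to the cluster.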

Hadoop-Enabled Data-Science Use Cases

(more…)


Big Data Meets Sentiment Analysis!

So now you are interested in proposing Big Data projects, but are skeptical about getting the business excited about yet another IT project? Somehow the business did not want to talk about data integration, data quality and master data management, despite all the homework you did to propose a plan of action? Enter sentiment analysis. (more…)
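As a toy illustration of what sentiment analysis adds on top of that plumbing (the word lists and messages below are made up):

    # Toy lexicon-based sentiment scoring; word lists and messages are made up.
    POSITIVE = {"love", "great", "fast", "recommend"}
    NEGATIVE = {"hate", "slow", "broken", "refund"}

    def score(text):
        words = set(text.lower().split())
        return len(words & POSITIVE) - len(words & NEGATIVE)

    messages = [
        "Love the new app, support was great",
        "Checkout is slow and my order arrived broken",
    ]
    for msg in messages:
        print(score(msg), msg)   # positive score = favorable sentiment

Real implementations of course work on far larger text volumes and richer models, which is precisely where the Big Data angle comes in.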
