Tag Archives: customer
As I continue to counsel insurers about master data, they all agree immediately that it is something they need to get their hands around fast. If you ask participants in a workshop at any carrier, no matter whether life, P&C, health or excess, they all raise their hands when I ask, “Do you have a broadband bundle at home for internet, voice and TV as well as wireless voice and data?”, followed by “Would you want your company to be the insurance version of this?”
Now let me be clear: while communication service providers offer very sophisticated bundles, they are also still grappling with a comprehensive view of a client across all services (data, voice, text, residential, business, international, TV, mobile, etc.) and each of their touch points (website, call center, local store). They are also miles away from including any sort of meaningful network data (jitter, dropped calls, failed call setups, etc.).
Similarly, my insurance investigations typically touch most of the frontline consumer (business and personal) contact points, including agencies, marketing (incl. CEM & VOC) and the service center. Across all of these we typically see a significant lack of productivity because policy, billing, payments and claims systems are service-line specific, while supporting functions, from developing leads and underwriting to claims adjudication, often handle more than one type of claim.
This lack of performance is worsened even further by sub-optimal campaign response and conversion rates. Because touchpoint-enabling CRM applications also suffer from incomplete or inconsistent contact preference information, interactions may violate local privacy regulations. In addition, service centers may capture leads only to log them into a black-box AS/400 policy system, where they disappear.
Here again we often hear that the fix could simply be to scrub data before it goes into the data warehouse. However, that data typically does not sync back to the source systems, so any interaction with a client via chat, phone or face-to-face will not have real-time, accurate information to execute a flawless transaction.
On the insurance IT side we also see enormous overhead: from scrubbing every database, from source via staging to the analytical reporting environment, every month or quarter, to one-off clean-up projects for the next acquired book of business. For a mid-sized, regional carrier (ca. $6B net premiums written) we find an average of $13.1 million in annual benefits from a central customer hub. This figure translates into an ROI of 600-900%, depending on requirement complexity, distribution model, IT infrastructure and service lines, and it includes some baseline revenue improvements, productivity gains, and cost avoidance as well as cost reduction.
On the health insurance side, my clients have complained about regional data sources contributing incomplete (often driven by local process and law) and incorrect data (name, address, etc.) to untrusted reports from membership, claims and sales data warehouses. This makes budgeting items like nurse-staffed medical advice lines, sales compensation planning, and even identifying high-risk members (now driven by the Affordable Care Act) a true mission impossible, and it makes life challenging for the pricing teams.
Over in the life insurance category, whole and universal life plans now face a situation where high-value clients first saw lower-than-expected yields due to the low interest rate environment, on top of front-loaded fees and the front-loaded cost of the term component. Now, as bonds are forecast to decrease in value in the near future, publicly traded carriers will likely be forced to sell bonds before maturity to make good on term life commitments and whole life minimum yield commitments in order to keep policies in force.
This means that insurers need a full profile of clients as they experience life changes like a move, a job loss, a promotion or a birth. Each such change requires the proper mitigation strategy to protect a baseline of coverage while maintaining or improving the premium. This can range from splitting term from whole life to using managed investment portfolio yields to temporarily pad premium shortfalls.
Overall, without a true, timely and complete picture of a client and his or her personal and professional relationships over time, and of which strategies were presented, found appealing and ultimately put in force, how will margins improve? Surely social media data can help here, but it should be a second step after mastering what is already available in-house. What are some of your experiences with how carriers have tried to collect and use core customer data?
Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control. Nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.
A good friend of mine’s husband is a sergeant on the Chicago police force. Recently a crime was committed, and a witness insisted that the perpetrator was a woman with blond hair, about five foot nine, weighing 160 pounds. She was wearing a gray pinstriped business suit with an Armani scarf and carrying a Gucci handbag. (more…)
I grabbed my wife’s Harvard Business Review (HBR Jan-Feb 2012) edition before a recent plane ride to a customer meeting. After diving through a bunch of case study-type narratives I ended up in a section titled “Stop Collecting Customer Data” (page 57), which was part of HBR’s “Audacious Ideas” series. This series was aimed at showcasing some proclaimed thought leaders’ very forward-thinking and, in my opinion, also some rather ill-guided ideas full of naïveté. (more…)
Last time I introduced two different approaches for master data models and thought it would be worth examining the differences in greater detail.
The first approach is to use pre-packaged core models provided by a vendor as part of an overall MDM suite of tools. Often these types of products evolved out of industry applications in which a common information model was used to support specific types of enterprise applications. For example, a vendor might have analyzed the property and casualty insurance industry and developed core data models for customer, policy, claim, service, financial products, etc. A set of application layers may have been developed on top of these models to implement common workflows (customer risk rating for establishing premium rates, or initiating a claim). However, there is a perception that aspects of those industry-oriented models can be segregated into a more universal format, which can become the starting point for a prepackaged master domain. (more…)
In contrast to addressing the management and process issues, we might say that the technical issues are actually quite straightforward to address. In my original enumeration from a few posts back, I ordered the data issue categories in the reverse order of the complexity of their solution. Model and information architecture problems are the most challenging, because of the depth to which business applications are inherently dependent on their underlying models. Even simple changes require significant review to make sure that no expected capability is inadvertently broken. (more…)
A few days ago, I got a text message from a friend telling me that my favorite company’s stock price was suddenly tanking and that I should dump my holding. So I went to the news portal to get a stock quote and see where the price stood. I found that the stock hadn’t moved much at all. Thinking that it might’ve been a prank text message, I ignored it. To my dismay, the stock quote I saw was delayed by 20 minutes and the decline wasn’t yet reflected in the news portal. (more…)
Enterprises use Hadoop in data-science applications that improve operational efficiency, grow revenues or reduce risk. Many of these data-intensive applications use Hadoop for log analysis, data mining, machine learning or image processing.
Commercial, open source or internally developed data-science applications have to tackle a lot of semi-structured, unstructured or raw data. They benefit from Hadoop’s combination of storage and processing in each data node spread across a cluster of cost-effective commodity hardware. Hadoop’s lack of a fixed schema works particularly well for answering ad-hoc queries and exploratory “what if” scenarios.
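To make the log-analysis use case concrete, here is a minimal sketch of the map and reduce steps such a job might perform, written as plain Python functions in the style of a Hadoop Streaming job. The log format (`<timestamp> <level> <message>`) and the function names are assumptions for illustration, not part of any real pipeline.

```python
# Sketch of a Hadoop Streaming-style log analysis: count log lines per severity level.
# Assumed (hypothetical) log format per line: "<timestamp> <level> <message>".
import sys
from collections import Counter

def map_levels(lines):
    """Mapper: emit a (log_level, 1) pair for each well-formed log line."""
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:
            yield parts[1], 1

def reduce_counts(pairs):
    """Reducer: sum the emitted counts for each log level."""
    totals = Counter()
    for level, count in pairs:
        totals[level] += count
    return dict(totals)

if __name__ == "__main__":
    # In a streaming job, stdin would carry one log line per record.
    counts = reduce_counts(map_levels(sys.stdin))
    for level, total in sorted(counts.items()):
        print(f"{level}\t{total}")
```

Because the input has no fixed schema, changing the question (say, counting by hour instead of by level) only means swapping the mapper, which is exactly the ad-hoc flexibility described above.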
So now you are interested in proposing Big Data projects, but are skeptical about getting business excited about yet another IT project? Somehow the business did not want to talk about data integration, data quality and master data management despite all the homework you did to propose a plan of action? Enter sentiment analysis. (more…)
Now comes the fun part: inspecting the data. For this step, automated data profiling will help you identify actual problems with the data as they relate to the business clients’ expectations. Here are just a few possible issues:
- Are the phone numbers empty?
- Are the admission dates missing in inpatient hospital claims?
- Are there car loans with durations greater than 10 years?
- Do shipping records lack corresponding billing records?
- Do product descriptions differ only slightly?
- Are you delivering products to many different customers with the same address?
- What business rules are being violated? (more…)
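Checks like the ones above can be expressed as simple rule functions and counted over a record set. The sketch below is an illustrative assumption, not a real profiling tool: field names (`phone`, `claim_type`, `admission_date`, `loan_years`) and the `profile` helper are hypothetical.

```python
# Minimal sketch of automated profiling: count rule violations per record set.
def profile(records, rules):
    """Return, for each named rule, how many records violate it."""
    violations = {name: 0 for name in rules}
    for rec in records:
        for name, is_violation in rules.items():
            if is_violation(rec):
                violations[name] += 1
    return violations

# Hypothetical rules mirroring the checklist above.
rules = {
    "empty_phone": lambda r: not r.get("phone"),
    "missing_admission_date": lambda r: r.get("claim_type") == "inpatient"
                                        and not r.get("admission_date"),
    "loan_over_10_years": lambda r: r.get("loan_years", 0) > 10,
}

records = [
    {"phone": "", "claim_type": "inpatient", "admission_date": None, "loan_years": 5},
    {"phone": "555-0100", "claim_type": "outpatient", "loan_years": 12},
]
print(profile(records, rules))
```

In practice a profiling tool runs hundreds of such rules and reports violation frequencies, which is what surfaces the gap between the data and the business expectations.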