
Reliable, Trusted, and Accurate Data is More Important for Insurance Companies Post-Hurricane Sandy

Like most Americans last week, I was glued to the news for several days before Hurricane Sandy made landfall on the East Coast of the United States, hoping it would pass with minimal damage. Having lived in Hawaii and Florida for most of my life, I have personally experienced three hurricanes and know how devastating these natural disasters can be, both during the storm and in the hardships people endure afterwards. My thoughts are with all those who lost their lives and their belongings in this disaster.

Hurricane Sandy has been described as one of the largest storms on record, both in size and in property damage to homes and businesses. According to the New York Times, the total economic damage from Hurricane Sandy will range between $10 billion and $20 billion, with insurance companies paying $5 billion to $10 billion in claims. At the high end of that range, Sandy would become the third-most expensive storm for insurers in U.S. history. As property, casualty, and flood insurance companies prepare to face a significant wave of calls and claims requests from policyholders, I wonder what the implications and costs will be for companies that lack reliable, trusted, and accurate data, a problem that has plagued the industry for years.

Reliable, trusted, and accurate data is critical in helping insurance companies manage their business, from satisfying regulatory requirements and maintaining and growing customer relationships to combating fraud and reducing the cost of doing business. Unfortunately, many insurance companies, large and small, have long operated on paper-based processes to onboard new customers, manage policy changes, and process claims requests. Though some firms have invested in data quality and governance practices in recent years, much of today’s insurance industry has ignored the importance of managing and governing good quality data and dealing with the root causes of bad data, including:

  • Inadequate verification of data stored in legacy systems
  • Data that leaks into systems without validation, along with data entry errors made by people
  • Inadequate or manual integration of data between systems
  • Redundant data sources/stores that cause data corruption to dependent applications
  • Direct back-end updates with little to no data verification and impact analysis

Because of this, the data in core insurance systems can contain serious quality errors in areas such as:

  • Invalid property addresses
  • Policyholder contact details (Name, Address, Phone numbers)
  • Policy codes and descriptions (e.g. motor or home property)
  • Risk rating codes
  • Flood zone information
  • Property assessment values and codes
  • Loss ratios
  • Claims adjuster estimates and contact information
  • Lack of a comprehensive view of existing policyholder information across different policy coverage categories and lines of business

As firms gear up to deal with the fallout from Hurricane Sandy, the cost of bad data can be measured in areas such as:

  • The number of claims errors multiplied by the time and cost to resolve them (a rough calculation along these lines is sketched after this list)
  • The number of phone calls and emails about claims processing delays multiplied by the average handling time and the cost of the customer service reps or field agents handling those requests
  • The number of fraudulent claims and the funds lost to those criminal activities
  • The number of policy cancellations caused by poor customer service experienced by existing policyholders
  • The reputational damage caused by poor customer service, which is harder to quantify but no less real
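To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a made-up assumption for illustration only, not an industry benchmark; plug in your own counts and rates to see how quickly the total grows.

```python
# Illustrative back-of-the-envelope estimate of the cost of bad data.
# All figures below are made-up assumptions for demonstration only.

claims_errors = 5_000            # claims containing data errors
hours_to_resolve_error = 1.5     # average rework time per errored claim
loaded_hourly_cost = 45.0        # fully loaded cost per hour of a claims handler

delay_calls = 20_000             # calls/emails about delayed claims
minutes_per_call = 12            # average handling time per contact
csr_cost_per_minute = 0.75       # cost per minute of a customer service rep

fraudulent_claims = 150          # fraudulent claims that slip through
avg_fraud_loss = 8_000.0         # average loss per fraudulent claim

cancellations = 400              # policies cancelled over poor service
avg_policy_value = 1_200.0       # average annual premium lost per cancellation

rework_cost = claims_errors * hours_to_resolve_error * loaded_hourly_cost
call_cost = delay_calls * minutes_per_call * csr_cost_per_minute
fraud_cost = fraudulent_claims * avg_fraud_loss
churn_cost = cancellations * avg_policy_value

total = rework_cost + call_cost + fraud_cost + churn_cost
print(f"Estimated cost of bad data: ${total:,.0f}")
```

Even with conservative assumptions, the total adds up quickly, and the reputational damage sits on top of it.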

Building a sound data quality practice requires a well-defined data governance framework consisting of the following elements:

  • Data quality policies that spell out what data are required and how they should be used, managed, updated, and retired. More importantly, these policies should be aligned with the company’s goals and be defined and maintained by the business, not IT.
  • Data quality processes that involve documented steps to implement and enforce the policies described above.
  • Specific roles, including data stewards who represent business organizations or core systems (e.g., an Underwriting Data Steward) and data category stewards who understand the business definition, requirements, and usage of the company’s key data assets.

Finally, in addition to the points listed above, firms must not discount the importance of industry-leading data quality software to enable an effective and sustainable data quality practice, including capabilities such as the following (a small illustrative sketch appears after the list):

  • Data profiling and auditing to identify existing data errors in source systems, during data entry processes and as data is extracted and shared between systems.
  • Data quality and cleansing to build and execute data quality rules to enforce the policies set forth by the business.
  • Address validation solutions to ensure accurate address information for flood zone mapping and loss analysis.
  • Data quality dashboards and monitoring solutions to analyze the performance and quality levels of data and escalate data errors that require immediate attention.
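As a rough illustration of what data profiling and rule-based validation look like in code, here is a minimal Python sketch. The field names, policy codes, and rules are hypothetical assumptions for this example; a production deployment would rely on a dedicated data quality platform and postal or geocoding services for true address validation.

```python
# Minimal sketch of rule-based data quality checks on policyholder records.
# Field names, policy codes, and rules are hypothetical; production systems
# would use dedicated profiling tools and postal/geocoding services.
import re

VALID_POLICY_CODES = {"HOME", "MOTOR", "FLOOD"}      # assumed code set
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")             # US ZIP / ZIP+4 format

def profile_record(record: dict) -> list[str]:
    """Return a list of data quality issues found in one policy record."""
    issues = []
    if not record.get("property_address"):
        issues.append("missing property address")
    if not ZIP_RE.match(record.get("zip_code", "")):
        issues.append("invalid ZIP code")
    if record.get("policy_code") not in VALID_POLICY_CODES:
        issues.append("unknown policy code")
    if record.get("flood_zone") in (None, "", "UNKNOWN"):
        issues.append("missing flood zone designation")
    if record.get("assessed_value", 0) <= 0:
        issues.append("non-positive property assessment value")
    return issues

def profile_batch(records: list[dict]) -> dict:
    """Aggregate issue counts across a batch of records."""
    counts: dict[str, int] = {}
    for rec in records:
        for issue in profile_record(rec):
            counts[issue] = counts.get(issue, 0) + 1
    return counts

if __name__ == "__main__":
    sample = [
        {"property_address": "12 Shore Rd", "zip_code": "07732",
         "policy_code": "FLOOD", "flood_zone": "AE", "assessed_value": 350_000},
        {"property_address": "", "zip_code": "1234",
         "policy_code": "BOAT", "flood_zone": "", "assessed_value": 0},
    ]
    print(profile_batch(sample))
```

The aggregated counts from a batch run are exactly the kind of metric a data quality dashboard would track and escalate when errors spike.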

As cleanup activities progress and people get back on their feet after Hurricane Sandy, insurance companies should take the time to measure how well they are managing their data quality challenges and begin addressing them in preparation for the inevitable events Mother Nature will send our way.

 
