Tag Archives: insurance

What are Incorrect Addresses Costing your Company?

I live in a small town in Maine. Between my town and the surrounding three towns, there are seven Main Streets and three Annis Roads or Lanes (and don’t get me started on the number of Moose Trails). If your insurance company wants to market to or communicate with someone in my town or one of the surrounding towns, how can you ensure that the address you are sending material to is correct? What is the cost if material is sent to an incorrect or outdated address? What is the cost to your insurance company if a provider sends the bill out to the wrong address?

How much is poor address quality costing your business? It doesn’t just impact marketing, where inaccurate address data translates into missed opportunity – it also means significant waste in materials, labor, time and postage. Bills may be delivered late or returned with the sender unknown, meaning additional handling time, possible repackaging, additional postage costs (Address Correction Penalties) and the risk of customer service issues. When mail or packages don’t arrive, pressure on your customer support team can increase and your company’s reputation can suffer. Bills and payments may arrive late or not at all, directly impacting your cash flow. Bad address data creates inefficiencies and raises costs across your entire organization.

The best method for handling address correction is a validation and correction process, such as Informatica AddressDoctor.

What are Incorrect Addresses Costing your Company? Steps to Follow

When trying to standardize member or provider information, one of the first places to look is address data. If you can determine that the John Q Smith who lives at 134 Main St in Northport, Maine 04843 is the same John Q Smith who lives at 134 Maine Street in Lincolnville, Maine 04849, you have provided a link between two members that are probably considered distinct in your systems. Once you can validate that there is no 134 Main St in Northport according to the postal service, and then validate that 04849 is a valid ZIP code for Lincolnville, you can standardize your address format to something along the lines of: 134 MAIN ST LINCOLNVILLE, ME 04849. Now you have a consistent layout for all of your addresses that follows postal service standards. Each member now has a consistent address, which makes the next step of creating a golden record for each member that much simpler.
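
To make that concrete, here is a minimal Python sketch of the validate-then-standardize idea. The ZIP/city lookup and suffix map are tiny stand-ins for a real postal reference, and a production tool would also correct the street name itself (MAINE to MAIN) with fuzzy matching, which this sketch deliberately skips:

```python
# Minimal illustration of the validate-and-standardize flow described above.
# The ZIP/city table and suffix map are stand-ins for a real postal reference
# service; a production tool would also fuzzy-match the street name itself.

VALID_ZIP_CITY = {"04849": "LINCOLNVILLE"}                # illustrative reference data
SUFFIXES = {"STREET": "ST", "ST.": "ST", "ROAD": "RD", "LANE": "LN"}

def standardize(street: str, city: str, state: str, zip_code: str) -> str:
    """Return one postal-style address line, or raise if the ZIP/city pair is invalid."""
    words = street.upper().replace(",", "").split()
    words = [SUFFIXES.get(w, w) for w in words]           # normalize street suffixes
    if VALID_ZIP_CITY.get(zip_code) != city.upper():
        raise ValueError(f"ZIP {zip_code} does not match city {city}")
    return f"{' '.join(words)} {city.upper()}, {state.upper()} {zip_code}"

print(standardize("134 Maine Street", "Lincolnville", "ME", "04849"))
# -> 134 MAINE ST LINCOLNVILLE, ME 04849
```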

Think about your current method of managing addresses. Likely, there are several different systems that capture addresses, each with different standards for what data is allowed into each field – and quite possibly these independent applications are not checking or validating against country postal standards. By improving the quality of address data, you are one step closer to creating high-quality data that can provide the up-to-the-minute, accurate reporting your organization needs to succeed.

Posted in 5 Sales Plays, B2B, Big Data, Business Impact / Benefits, Customer Acquisition & Retention, Customer Services, Customers, Data Quality, Data Synchronization, Healthcare, Master Data Management, Total Customer Relationship

The New Insurance Model

Make it about me
I know I’m not alone in feeling unimportant when I contact large organisations and find they lack the customer view we’re all being told we can expect in a digital, multichannel age. I have to be proactive to get things done. I have to ask my insurance provider, for example, whether my car premium reflects my years of loyalty, or whether I’m due a multi-policy discount.

The time has come for insurers to focus on how they use data for true competitive advantage and customer loyalty. In the current void, with no tailored service, I will continue to shop around for something better. It doesn’t have to be like this.

Know data – no threat
A new report from KPMG, Transforming Insurance: Securing competitive advantage (download the PDF here), explores the viable use of data for predictive analytics in insurance. The report finds that almost two-thirds of insurer respondents to its survey use analytics only for reporting what happened, rather than for driving future customer interactions. This is a process that tends to take place in distinct data silos, focused on an organisation’s internal business divisions rather than on customer engagements.

The report misses a critical point, though. The discussion for insurers is not around data analytics – to an extent they do that already. The focus needs to shift quickly to understanding the data they already have and using it to augment their capabilities. ‘Transformation’ is a huge step. ‘Augmentation’ can be embarked on without delay and at relatively low cost. It will keep insurers ahead of new market threats.

New players have no locked-down idea about how insurance models should work, but they do recognise how to identify customer needs through the data their customers freely provide. Tesco made a smooth transition from Clubcard to insurance provider because it had the data necessary to market the propositions its customers needed. It knew a lot about them. What is there to stop other data-driven organisations like Amazon, Google and Facebook from entering the market? The barrier to entry has never been lower, and those with a data-centric understanding of their customers are poised to scramble over it.

Changing the design point – thinking data first
There is an immediate strategic need for the insurance sector to view data as more than functional – information to define risk categories and set premiums. In the light of competitive threats, the insurance industry has to recognise and harness the business value of the vast amounts of data it has collected and continues to gather. A new design point is needed – one that creates a business architecture which thinks Data First.

To adopt a Data First architecture is to augment the capabilities a company already has. The ‘nirvana’ business model for the insurer is to expand customer propositions beyond the individual (party, car, house, health, annuity) to the household (similar profiles, easier profiling). Based on the intelligent use of data, a policy-centric model grows into customer-centricity, with a viable evolution path to household-centricity, untied from legacy limitations.

Win back the customer
Changing the data architecture is a pragmatic solution to a strategic problem. By putting data first, insurers can find the golden nuggets already sitting in their systems. They can make the connections across each customer’s needs and life-stage. By trusting the data, insurers can elevate the quality of their customer service to a level of real personal care, enabling them to secure the loyalty of their customers before the market starts to rumble as new players make their pitch.

By focusing on the data architecture, the organisation also takes complexity out of the ecosystem and creates headroom for innovation – fresh ideas around cross-sell and up-sell, delivering more complete, loyalty-generating service offerings to customers. Loyalty fosters trust, driving stronger relationships between insurer and client.

Insurers have the power – they have the data – to ensure that the next time someone like me makes contact, they can impress me, sell me more, make me happier and, above all, make me stay.

Nirvana.

Posted in B2B, Business Impact / Benefits, DaaS, Data First, Data Services

Informatica and Hortonworks Talk Analytics in Insurance

On March 25th, Josh Lee, Global Director for Insurance Marketing at Informatica and Cindy Maike, General Manager, Insurance at Hortonworks, will be joining the Insurance Journal in a webinar on “How to Become an Analytics Ready Insurer”.

Register for the Webinar on March 25th at 10am Pacific/ 1pm Eastern

Josh and Cindy have been exchanging perspectives on what “analytics ready” really means for insurers, and today we are sharing some of those views (join the webinar to learn more). Below, they offer their takes on the five questions posed here. Please join Insurance Journal, Informatica and Hortonworks on March 25th for more on this exciting topic.

See the Hortonworks site for a second posting of this blog and more details on exciting innovations in Big Data.

  1. What makes a big data environment attractive to an insurer?

CM: Many insurance companies are using new types of data to create innovative products that better meet their customers’ risk needs. For example, we are seeing insurance for “shared vehicles” and new products for prevention services. Much of this innovation is made possible by the rapid growth in sensor and machine data, which the industry incorporates into predictive analytics for risk assessment and claims management.

Customers who buy personal lines of insurance also expect the same type of personalized service and offers they receive from retailers and telecommunication companies. They expect carriers to have a single view of their business that permeates customer experience, claims handling, pricing and product development. Big data in Hadoop makes that single view possible.

JL: Let’s face it, insurance is all about analytics. Better analytics leads to better pricing, reduced risk and better customer service. But here’s the issue: existing data stores are costly for storing vast amounts of data and too inflexible to adapt to the changing needs of innovative analytics. Imagine kicking off a simulation or modeling routine one evening, only to return in the morning and find it incomplete or lacking data that requires a special request of IT.

This is where big data environments are helping insurers. Larger, more flexible data sets allow longer series of analytics to be run, generating better results. And imagine doing all that at a fraction of the cost and time of traditional data structures. Oh, and heaven forbid you ask a mainframe to do any of this.

  2. So we hear a lot about Big Data being great for unstructured data. What about traditional data types that have been used in insurance forever?

CM: Traditional data types are very important to the industry – they drive our regulatory reporting and much of our performance management reporting. This data will continue to play a very important role for the insurance industry and for individual companies.

However, big data can now enrich that traditional data with new data sources for new insights. In areas such as customer service and product personalization, it can make the difference between cross-selling the right products to meet customer needs and losing the business. For commercial and group carriers, the new data provides the ability to better analyze risk needs, price accordingly and enable superior service in a highly competitive market.

JL: Traditional data will always be around. I doubt that I will outlive a mainframe installation at an insurer, which makes me a little sad. And for many rote tasks like financial reporting, a sales report or a commission statement, those sources are sufficient. However, the business of insurance is changing in leaps and bounds. Innovators in data science are interested in correlating those traditional sources with other, more creative data to find new products or areas to reduce risk. There is just a lot of data that is either ignored or locked in obscure systems and needs to be brought into the light. Whether that data is structured or unstructured doesn’t matter, and Big Data can assist there.

  3. How does this fit into an overall data management function?

JL: At the end of the day, a Hadoop cluster is another source of data for an insurer. More flexible, more cost-effective and higher speed, but yet another data source for an insurer. So that’s one more on top of relational databases, cubes, content repositories, mainframes and whatever else insurers have latched onto over the years. So if it wasn’t completely obvious before, it should be now: data needs to be managed. As data moves around the organization for consumption, it is shaped, cleansed and copied, and we hope there is governance in place. And the Big Data installation is not exempt from any of these routines. In fact, one could argue that it is even more critical to apply good data management practices to Big Data, not only to optimize the environment but also to eventually replace traditional data structures that just aren’t working.

CM: Insurance companies are blending new and old data and looking for the best ways to leverage “all data”. We are witnessing the development of a new generation of advanced analytical applications to take advantage of the volume, velocity, and variety in big data. We can also enhance current predictive models, enriching them with the unstructured information in claim and underwriting notes or diaries along with other external data.

There will be challenges. Insurance companies will still need to make important decisions on how to incorporate the new data into existing data governance and data management processes. The Chief Data Officer or Chief Analytics Officer will need to drive this business change in close partnership with IT.

  4. Tell me a little bit about how Informatica and Hortonworks are working together on this?

JL: For years, Informatica has been helping our clients realize the value in their data and analytics. And while we have enjoyed great success in partnership with our clients, unlocking the full value of data requires new structures, new storage and something that doesn’t break the bank. So Informatica and Hortonworks are on a continuing journey to show that value in analytics comes from a strong relationship between the Hadoop distribution and innovative, market-leading data management technology. As the relationship between Informatica and Hortonworks deepens, expect to see even more vertically relevant solutions and documented ROI for the Informatica/Hortonworks solution stack.

CM: Informatica and Hortonworks optimize the entire big data supply chain on Hadoop, turning data into actionable information to drive business value. By incorporating data management services into the data lake, companies can store and process massive amounts of data across a wide variety of channels including social media, clickstream data, server logs, customer transactions and interactions, videos, and sensor data from equipment in the field.

Matching data from internal sources (e.g. very granular data about customers) with external data (e.g. weather data or driving patterns in specific geographic areas) can unlock new revenue streams.
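
As a rough illustration of that matching idea, here is a small Python/pandas sketch that joins policyholder records to an external, geography-keyed dataset; the column names, ZIP codes and risk scores are invented for the example, not any real data feed:

```python
# A hedged sketch of the internal/external matching idea: join policyholder
# records to external data keyed by geography (here, hypothetical weather-risk
# scores by ZIP) to surface segments for new offers. All values are illustrative.
import pandas as pd

policyholders = pd.DataFrame({
    "policy_id": ["P-1001", "P-1002", "P-1003"],
    "zip": ["04849", "33480", "94105"],
    "annual_premium": [820.0, 2150.0, 1340.0],
})

external_weather = pd.DataFrame({
    "zip": ["04849", "33480", "94105"],
    "hail_risk_score": [0.12, 0.71, 0.08],   # stand-in for purchased weather data
})

enriched = policyholders.merge(external_weather, on="zip", how="left")
# Flag customers who might value a supplemental coverage offer.
enriched["offer_candidate"] = enriched["hail_risk_score"] > 0.5
print(enriched[["policy_id", "zip", "hail_risk_score", "offer_candidate"]])
```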

See this video for a discussion on unlocking those new revenue streams. Sanjay Krishnamurthi, Informatica CTO, and Shaun Connolly, Hortonworks VP of Corporate Strategy, share their perspectives.

  5. Do you have any additional comments on the future of data in this brave new world?

CM: My perspective is that, over time, we will drop the reference to “big” or “small” data and get back to referring simply to “Data”. The term big data has been useful to describe the growing awareness of how the new data types can help insurance companies grow.

We can no longer use “traditional” methods to gain insights from data. Insurers need a modern data architecture to store, process and analyze data—transforming it into insight.

We will see an increase in new market entrants in the insurance industry, and existing insurance companies will improve their products and services based upon the insights they have gained from their data, regardless of whether that was “big” or “small” data.

JL: I’m sure that even now there is someone locked in their mother’s basement playing video games and trying to come up with the next data storage wave. So we have that to look forward to, and I’m sure it will be cool. But, if we are honest with ourselves, we’ll admit that we really don’t know what to do with half the data that we have. So while data storage structures are critical, the future holds even greater promise for new models, better analytical tools and applications that can make sense of all of this and point insurers in new directions. The trend that won’t change anytime soon is the ongoing need for good quality data, data ready at a moment’s notice, safe and secure and governed in a way that insurers can trust what those cool analytics show them.

Please join us for an interactive discussion on March 25th at 10am Pacific Time/ 1pm Eastern Time.

Register for the Webinar on March 25th at 10am Pacific/ 1pm Eastern

Posted in Big Data, Data Quality, Financial Services, Hadoop

How Insurance Companies Benefit from Information Management

A single trusted source of master data fuels applications with clean, consistent and connected customer, policy and claims information

Insurance is a highly competitive business that hinges on providing the right products and industry-leading service to customers. Accurate data is the lifeblood of this business. To overcome its struggle with fragmented data across product lines, functions and channels, a leading US-based insurance company has built a world-class information management practice that enables it to quickly collect and analyze data, whether financial, claims, policy or customer data.

“Our advances in technology and distribution channel diversification, along with increased brand awareness and innovative products and services, are moving us closer to our goal of becoming a top-five personal lines carrier,” said the president of the insurance company.


Delivering real-time access to the Total Customer Relationship across channels, functions and product lines

The insurance company’s data integration and data management initiative included:

  • an Enterprise Data Warehouse (EDW) from Teradata,
  • reporting from MicroStrategy, and
  • data integration and master data management (MDM) technology from Informatica to better manage customer and policy data.

This provides the information infrastructure required to propel the insurance company’s personal insurance business into the top tier of personal insurers in the country.

Business analysts in claims, marketing, policy and finance as well as agents in the field, sales people and claims adjusters now have access to clean, consistent and connected customer information for analytics, performance measurement and decision-making.

Within their business applications, the Information Management team has delivered real-time access to the total customer relationship across product lines, channels and functions.

How did they do it?

The team identified the data sources that contain valuable customer information. They’re accessing that customer information using data integration and bringing it into a central location, the Informatica MDM hub, where it’s managed, mastered and shared with downstream systems on an ongoing basis, again using data integration. The company now has a “golden record” for each customer, policy and claim, and the information is continuously updated.
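
To get a feel for what “managed, mastered and shared” means under the hood, here is a deliberately simplified Python sketch of the match-and-merge step behind a golden record. A real MDM hub uses far richer fuzzy matching and trust/survivorship rules; the match key, field names and rules here are assumptions for illustration only:

```python
# Simplified sketch of match-and-merge for a "golden record": duplicate rows
# from several source systems are grouped on a match key, and survivorship
# rules pick the best value per field. Real MDM matching is fuzzy and trust-based.
from collections import defaultdict

records = [
    {"source": "policy_admin", "first": "John", "last": "Smith", "dob": "1970-05-02",
     "email": None, "updated": "2014-01-10"},
    {"source": "claims", "first": "John Q", "last": "Smith", "dob": "1970-05-02",
     "email": "jqsmith@example.com", "updated": "2014-06-01"},
]

def match_key(r):
    # Naive match rule for the sketch: last name + date of birth.
    return (r["last"].lower(), r["dob"])

groups = defaultdict(list)
for r in records:
    groups[match_key(r)].append(r)

golden = []
for dupes in groups.values():
    merged = {}
    for field in ("first", "last", "dob", "email"):
        # Survivorship rule: take the most recently updated non-null value.
        candidates = [r for r in dupes if r[field] is not None]
        merged[field] = max(candidates, key=lambda r: r["updated"])[field] if candidates else None
    golden.append(merged)

print(golden)
```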

Who benefited?

  1. Claims: The customer information management initiative was instrumental in the successful implementation of a new system to streamline and optimize the company’s claims process. The data integration/data management team leveraged a strong relationship with the claims team to delve into business needs, design the system, and build it from the ground up. Better knowledge of the total customer relationship is accelerating the claims process, leading to increased customer satisfaction and employee performance.
  2. Sales & Customer Service: Now when a customer logs into the company’s website, the system makes a call to the Informatica MDM hub and immediately finds out every policy, every claim, and all other relevant data on the customer and displays it on the screen. Better knowledge of the total customer relationship is leading to better and more relevant insurance products and services, higher customer satisfaction and better sales, marketing and agent performance.

Please join us on Thursday, February 5th at 11am PT for a webinar. You will learn how OneAmerica®, which offers life insurance, retirement services and employee benefits, is shifting from a policy-centric to a customer-centric operation. Nicolas Lance, Vice President of Retirement Income Strategies and Head of Customer Data, will explain how this shift will enable the company to improve customer experience as well as marketing, distribution partner, and call and service center effectiveness.

Register here: http://infa.media/1xWlHop

Posted in Master Data Management, Total Customer Relationship

Magical Data from the Internet of Things? Think again…

I recently read an opinion piece written in an insurance publication online. The author postulated, among other things, that the Internet of Things would magically deliver great data to an insurer. Yes, it was a statement just that glib. Almost as if there is some fantastic device that you just plug into the wall and out streams a flow of unicorns and rainbows. And furthermore that those unicorns and rainbows will subsequently give a magical boost to your business. But hey, you plugged in that fantastic device, so bring on the magic.

Now, let’s come back from the land of fairytales and ground ourselves in reality. Data is important, no doubt about that. Today, financial services firms are able to access data from so many new data sources. One of those new and fancy data sources is the myriad of devices in this thing we call the Internet of Things.

Ever have one of those frustrating days with your smartphone? Dropped calls, slow Internet, Facebook won’t locate you? Well, other devices experience the same wonkiness. Even the most robust devices found on commercial aircraft or military equipment are not lossless in data transmission. And that’s where we are with the Internet of Things: all great devices that serve a number of purposes, but still fallible in communicating with the “mother ship”.

A telematics device in a consumer vehicle can transmit VIN, speed, latitude/longitude, time and other vehicle statuses for use in auto insurance. As with other devices on a network, some of these data elements will not come through reliably. That means that in order to reconstruct or smooth the data set, interpolations need to be made and/or entire entries deleted as useless. That is the first issue. Second, simply receiving this isolated data set does not make sense of it. The data needs to be moved, cleansed and then correlated to other pieces of the puzzle, which eventually turn into a policyholder, an account holder, a client or a risk. And finally, that enhanced data can be used for further analytics. It can be archived, aggregated, warehoused and secured for additional analysis. None of these activities happens magically. And the sheer volume of integration points and data requires a robust, standardized data management infrastructure.
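
Here is a small Python/pandas sketch of that reconstruct-or-smooth step. The field names, sample readings and handling choices are assumptions for illustration, not any vendor’s telematics format:

```python
# Minimal sketch of cleaning a gappy telematics feed: drop records missing
# required identifiers, then interpolate gaps in the numeric signals.
import pandas as pd

readings = pd.DataFrame({
    "vin": ["1HGCM82633A004352"] * 5,
    "ts": pd.to_datetime(["2015-03-01 08:00", "2015-03-01 08:01", "2015-03-01 08:02",
                          "2015-03-01 08:03", "2015-03-01 08:04"]),
    "speed_mph": [31.0, None, 34.5, None, 38.0],   # dropped transmissions
    "lat": [44.28, 44.28, None, 44.29, 44.29],
})

readings = readings.dropna(subset=["vin", "ts"])             # useless without identifiers
readings["speed_mph"] = readings["speed_mph"].interpolate()  # smooth the gaps
readings["lat"] = readings["lat"].interpolate()
print(readings)
```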

So no, just having an open channel to the stream of noise from your local Internet of Things will not magically deliver you great data. Great data comes from market-leading data management solutions from Informatica. So whether you are an insurance company, financial services firm or data provider, being “Insurance Ready” means having great data: ready to use, everywhere… from Informatica.

Posted in Data Integration Platform, Data Quality, Financial Services, Master Data Management

Big Data Helps Insurance Companies Create A More Protected World

One of the most data-driven industries in the world is the insurance industry.  If you think about it, the entire business model of insurance companies is based on pooling the perceived risk of catastrophic events in a way that is viable and profitable for the insurer.  So it’s no surprise that one of the earliest adopters of big data technologies like Hadoop has been the insurance industry.

Insurance companies serve as a fantastic example of big data technology use since data is such a pervasive asset in the business.  From a cost savings and risk mitigation standpoint, insurance companies use data to assess the probable maximum loss of catastrophic events as well as detect the potential for fraudulent claims.  From a revenue growth standpoint, insurance companies use data to intelligently price new insurance offerings and deploy cross-sell offers to customers to maximize their lifetime value.

New data sources are enabling insurance companies to mitigate risk and grow revenues even more effectively.  Location-based data from mobile devices and sensors inside insured properties is being used to proactively detect exposure to catastrophic events and deploy preventive maintenance.  For example, automobile insurance providers are increasingly offering usage-based driving programs, whereby insured individuals install a mobile sensor inside their car to relay the quality of their driving back to their insurance provider in exchange for lower premiums.  Even healthcare insurance providers are starting to analyze the data collected by wearable fitness bands and smart watches to monitor insured individuals and inform them of personalized ways to be healthier.  Devices can also be deployed in the environments that trigger adverse events, such as sensors that monitor earthquake and weather patterns, to help mitigate the costs of potential events.  Claims are increasingly submitted with supporting information in a variety of formats like text files, spreadsheets and PDFs that can be mined for insights as well.  And with the growth of online insurance sales, web log and clickstream data is more important than ever to help drive online revenue.

Beyond the benefits of using new data sources to assess risk and grow revenues, big data technologies are enabling insurance companies to fundamentally rethink the basis of their analytical architecture.  In the past, probable maximum loss modeling could only be performed on statistically aggregated datasets.  But with big data technologies, insurance companies have the opportunity to analyze data at the level of an insured individual or a unique insurance claim.  This increased depth of analysis has the potential to radically improve the quality and accuracy of risk models and market predictions.

Informatica is helping insurance companies accelerate the benefits of big data technologies.  With multiple styles of ingestion available, Informatica enables insurance companies to leverage nearly any source of data.  Informatica Big Data Edition provides comprehensive data transformations for ETL and data quality, so that insurance companies can profile, parse, integrate, cleanse, and refine data using a simple user-friendly visual development environment.  With built-in data lineage tracking and support for data masking, Informatica helps insurance companies ensure regulatory compliance across all data.

To try out the Big Data Edition, download a free trial today in the Informatica Marketplace and get started with big data today!

Posted in Big Data, Data Integration

Master Data and Data Security… It’s Not Complicated

The statement below on master data and data security was well intended.  I can certainly understand the angst around data security.  Especially after Target’s data breach, it is top of mind for all IT and, now, business executives.  But the root of the statement was flawed.  And it got me thinking about master data and data security.

“If I use master data technology to create a 360-degree view of my client and I have a data breach, then someone could steal all the information about my client.”

Um, wait, what?  Insurance companies take personally identifiable information very seriously.  The statement confuses the relationship between client master data and securing your client data.  Let’s dissect the statement and see what master data and data security really mean for insurers.  We’ll start by level-setting a few concepts.

What is your Master Client Record?

Your master client record is your 360-degree view of your client.  It represents everything about your client.  It uses Master Data Management technology to virtually integrate and syndicate all of that data into a single view.  It leverages identifiers to ensure integrity in the view of the client record.  And finally it makes an effort through identifiers to correlate client records for a network effect.

There are benefits to understanding everything about your client.  The shape and view of each client is specific to your business.  As an insurer looks at its policyholders, the view of “client” is based on the relationships and context that the client has with the insurer.  These include policies, claims, family relationships, history of activities and relationships with agency channels.

And what about security?

Naturally there is private data in a client record.  But the consolidated client record contains no more or less personally identifiable information than your source systems already hold.  In fact, most of the data that a malicious party would be searching for can likely be found in just a handful of database locations.  Policy numbers, credit card info, social security numbers and birth dates can be found in fewer than five database tables, and they can be found without a whole lot of intelligence or analysis.  Additionally, breaches happen “on the wire”.

That data should be secured.  That means the data should be encrypted or masked so that it stays protected even in the event of a breach.  Informatica’s data masking technology allows this data to be secured wherever it resides.  It provides access control so that only the right people and applications can see the data in an unmasked format.  You could even go so far as to secure ALL of your client record data fields.  That’s a business and application choice.  Do not confuse field- or database-level security with a decision NOT to assemble your golden policyholder record.
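
To illustrate the idea (this is not Informatica’s masking product, just a toy sketch), here is a short Python example of field-level masking with a simple access check; the field names and formats are assumptions:

```python
# Toy illustration of field-level masking for the handful of sensitive fields
# mentioned above. Production masking is format-preserving, policy-driven and
# enforced close to the data store; this only shows the concept.

def mask_ssn(ssn: str) -> str:
    return "***-**-" + ssn[-4:]

def mask_card(pan: str) -> str:
    return "*" * (len(pan) - 4) + pan[-4:]

def masked_view(record: dict, authorized: bool) -> dict:
    """Return the record as-is for authorized callers, masked otherwise."""
    if authorized:
        return record
    out = dict(record)
    out["ssn"] = mask_ssn(record["ssn"])
    out["card"] = mask_card(record["card"])
    return out

policyholder = {"name": "John Q Smith", "ssn": "123-45-6789", "card": "4111111111111111"}
print(masked_view(policyholder, authorized=False))
# {'name': 'John Q Smith', 'ssn': '***-**-6789', 'card': '************1111'}
```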

What to worry about?  And what not to worry about?

Do not succumb to fear of mastering your policyholder data.  Master Data Management technology can provide a 360-degree view.  But it is only meaningful within your enterprise and applications.  The view of “client” is very contextual and coupled with your business practices, products and workflows.  Even if someone breaches your defenses and grabs data, they’re looking for the simple PII and financial data.  They’re grabbing it and getting out.  If the attacker could see your 360-degree view of a client, they wouldn’t understand it.  So don’t overcomplicate the security of your golden policyholder record.  As long as you have secured the necessary data elements, you’re good to go.  The business opportunity cost of NOT mastering your policyholder data far outweighs any imagined risk of a PII breach.

So what does your Master Policyholder Data allow you to do?

Imagine knowing more about your policyholders.  Let that soak in for a bit.  It feels good to think that you can make it happen.  And you can do it.  For an insurer, Master Data Management provides powerful opportunities across everything from sales, marketing and product development to claims and agency engagement.  Each channel and activity has discrete ROI.  It also has a direct impact on revenue, policyholder satisfaction and market share.  Let’s look at just a few very real examples that insurers are attempting to tackle today.

  1. For a policyholder of a certain demographic with an auto and home policy, what is the next product my agent should discuss?
  2. How many people live in a certain policyholder’s household?  Are there any upcoming teenage drivers?
  3. Does this personal lines policyholder own a small business?  Are they a candidate for a business packaged policy?
  4. What is your policyholders’ claims history?  What about prior carriers and networks of suppliers?
  5. How many touch points have your agents had with your policyholders?  Were they meaningful?
  6. How can you connect with your policyholders in social media settings and make an impact?
  7. What are your policyholders’ mobile usage habits, and what are they doing online that might interest your Marketing team?

These are just some of the examples of very streamlined connections that you can make with your policyholders once you have your 360-degree view. Imagine the heavy lifting required to do these things without a Master Policyholder record.

Fear is the enemy of innovation.  In mastering policyholder data, it is important to have two distinct work streams.  First, secure the necessary data elements using data masking technology.  Once those are secured, gain understanding by mastering your policyholder record.  Only then will you truly be able to take your clients’ experience to the next level.  When that happens, watch your revenue grow by leaps and bounds.

Posted in Data Security, Financial Services, Master Data Management

Conversations on Data Quality in Underwriting – Part 2

Did I really compare data quality to flushing toilet paper?  Yeah, I think I did.  Makes me laugh when I read that, but still true.  And yes, I am still playing with more data.  This time it’s a location schedule for earthquake risk.  I see a 26-story structure with a building value of only $136,000 built in who knows what year.  I’d pull my hair out if it weren’t already shaved off.

So let’s talk about the six steps for data quality competency in underwriting.  These six steps are standard in the enterprise.  What we will discuss is how to tackle them in insurance underwriting and, more importantly, what the business impact of effectively adopting the competency is.  It’s a repeating, self-reinforcing cycle.  And when done correctly, it can be intelligent and adaptive to changing business needs.

Profile – Effectively profile and discover data from multiple sources

We’ll start at the beginning, a very good place to start.  First you need to understand your data.  Where is it from and in what shape does it arrive?  Whether the sources are internal or external, the profile step will help identify the problem areas.  In underwriting, this will involve a lot of external submission data from brokers and MGAs.  This is then combined with internal and service bureau data to get a full picture of the risk.  Identify your key data points for underwriting and a desired state for that data.  Once the data is profiled, you’ll get a very good sense of where your troubles are.  And continually profile as you bring other sources online, using the same standards of measurement.  As an aside, this will also help in remediating brokers that are not meeting the standard.
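
As a rough sketch of what the profile step can surface on a submission schedule, here is a small Python/pandas example that counts nulls and suspicious defaults per critical field; the fields, sample values and checks are illustrative only:

```python
# Minimal profiling sketch for a broker location schedule: percentage of nulls
# and of suspicious default values per critical underwriting field.
import pandas as pd

schedule = pd.DataFrame({
    "building_value": [136000, 2500000, None, 980000],
    "year_built":     [0, 1988, 2004, None],
    "construction":   ["WOOD FRAME", "UNKNOWN", "MASONRY", "UNKNOWN"],
    "stories":        [26, 2, 3, 1],
})

profile = pd.DataFrame({
    "pct_null": schedule.isna().mean().round(2),
    "pct_default": {
        "building_value": (schedule["building_value"] == 0).mean(),
        "year_built":     (schedule["year_built"] == 0).mean(),
        "construction":   (schedule["construction"] == "UNKNOWN").mean(),
        "stories":        0.0,
    },
})
print(profile)
```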

Measure – Establish data quality metrics and targets

As an underwriter, you will need to determine the quality bar for the data you use.  Usually this means flagging your most critical data fields for meeting underwriting guidelines.  See where you are and where you want to be.  Determine how you will measure the quality of the data as well as the desired state.  And by the way, actuarial and risk will likely do the same thing on the same or similar data.  Over time it all comes together as a team.

Design – Quickly build comprehensive data quality rules

This is the meaty part of the cycle, and fun to boot.  First look to your desired future state and your critical underwriting fields.  For each one, determine the rules by which you normally fix errant data.  Like, what do you do when you see a 30-story wood-frame structure?  How do you validate, cleanse and remediate that discrepancy?  This may involve fuzzy logic or supporting data lookups, and it can easily be captured.  Do this, write it down, and catalog it to be codified in your data quality tool.  As you go along you will see a growing library of data quality rules being compiled for broad use.
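
Here is a minimal Python sketch of codifying one such rule, the 30-story wood-frame check, with an optional remediation lookup; the story limits and the reference feed are assumptions for illustration, not underwriting guidance:

```python
# Sketch of one cleansing rule: a structure whose story count is implausible
# for its construction class gets corrected from a reference source if one
# exists, otherwise flagged for manual review.

MAX_STORIES_BY_CONSTRUCTION = {"WOOD FRAME": 6, "JOISTED MASONRY": 8}  # illustrative limits

def check_stories(location: dict, reference_stories: dict) -> dict:
    """Validate stories against construction class; remediate from a reference feed if possible."""
    limit = MAX_STORIES_BY_CONSTRUCTION.get(location["construction"])
    if limit is not None and location["stories"] > limit:
        fixed = reference_stories.get(location["location_id"])  # e.g. an assessor or SOV feed
        location = {**location,
                    "stories": fixed if fixed is not None else location["stories"],
                    "dq_flag": "STORIES_CORRECTED" if fixed is not None
                               else "STORIES_EXCEEDS_CONSTRUCTION_LIMIT"}
    return location

loc = {"location_id": "L-42", "construction": "WOOD FRAME", "stories": 30}
print(check_stories(loc, reference_stories={"L-42": 3}))
```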

Deploy – Native data quality services across the enterprise

Once these rules are compiled and tested, they can be deployed for reuse in the organization.  This is the beautiful magical thing that happens.  Your institutional knowledge of your underwriting criteria can be captured and reused.  This doesn’t mean just once, but reused to cleanse existing data, new data and everything going forward.  Your analysts will love you, your actuaries and risk modelers will love you; you will be a hero.

Review – Assess performance against goals

Remember those goals you set for your quality when you started?  Check and see how you’re doing.  After a few weeks and months, you should be able to profile the data, run the reports and see that the needle has moved.  Remember that as part of the self-reinforcing cycle, you can now identify new issues to tackle and adjust the rules that aren’t working.  Metrics you’ll want to track over time include increased quote flow, better productivity and more competitive premium pricing.

Monitor – Proactively address critical issues

Now monitor constantly.  As you bring new MGAs online, receive new underwriting guidelines or launch into new lines of business, you will repeat this cycle.  You will also utilize the same rule set as portfolios are acquired.  It becomes a good way to sanity-check acquired business against your quality standards.

In case it wasn’t apparent, your data quality plan is now largely automated.  With few manual exceptions, you should not have to remediate data the way you did in the past.  In each of these steps there is obvious business value.  In the end, it all adds up to better risk/cat modeling, more accurate risk pricing, cleaner data (for everyone in the organization) and more time doing the core business of underwriting.  Imagine if you could increase your quote volume simply by not needing to muck around in data.  Imagine if you could improve your quote-to-bind ratio through better quality data and pricing.  The last time I checked, that’s just good insurance business.

And now for something completely different…cats on pianos.  No, just kidding.  But check here to learn more about Informatica’s insurance initiatives.

Posted in Business Impact / Benefits, Data Quality, Enterprise Data Management, Financial Services

Conversations on Data Quality in Underwriting – Part 1

I was just looking at some data I found.  Yes, real data, not fake demo stuff.  Real hurricane location analysis with modeled loss numbers.  At first glance, I thought it looked good.  There are addresses, latitudes/longitudes, values, loss numbers and other goodies like year built and construction codes.  Yes, just the sort of data that an underwriter would look at when writing a risk.  But after skimming through the schedule of locations a few things start jumping out at me.  So I dig deeper.  I see a multi-million dollar structure in Palm Beach, Florida with $0 in modeled loss.  That’s strange.  And wait, some of these geocode resolutions look a little coarse.  Are they tier one or tier two counties?  Who would know?  At least all of the construction and occupancy codes have values, albeit they look like defaults.  Perhaps it’s time to talk about data quality.

This whole concept of data quality is a tricky one.  As the cost of acquiring good data is weighed against the speed of underwriting/quoting and model correctness, I’m sure some tradeoffs are made.  But the impact can be huge.  First, incomplete data will either force defaults in risk models and pricing or add mathematical uncertainty.  Second, massively incomplete data chews up personnel resources to cleanse and enhance it.  And third, if not corrected, the risk profile will be wrong, with potential impact to pricing and portfolio shape.  And that’s just to name a few.

I’ll admit it’s daunting to think about.  Imagine tens of thousands of submissions a month.  Schedules of thousands of locations received every day.  Can there even be a way out of this cave?  The answer is yes, and that answer is a robust enterprise data quality infrastructure.  But wait, you say, enterprise data quality is an IT problem.  Yeah, I guess, just like trying to flush an entire roll of toilet paper in one go is the plumber’s problem.  Data quality in underwriting is a business problem, a business opportunity and has real business impacts.

Join me in Part 2 as I outline the six steps for data quality competency in underwriting with tangible business benefits and enterprise impact.  And now that I have you on the edge of your seats, get smart about the basics of enterprise data quality.

Posted in Business Impact / Benefits, Data Quality, Financial Services

Oh the Data I’ve Seen…

Eighteen months ago, I was sitting in a conference room, nothing remarkable except for the great view down 6th Avenue toward the Empire State Building.  The pre-sales consultant sitting across from me had just given a visually appealing demonstration to the CIO of a multinational insurance corporation.  There were fancy graphics and colorful charts sharply displayed on an iPad and refreshing every few seconds.  The CIO asked how long it had taken to put the presentation together. The consultant excitedly shared that it only took him four to five hours, to which the CIO responded, “Well, if that took you less than five hours, we should be able to get a production version in about two to three weeks, right?”

The facts of the matter were completely different, however. The demo, while running with the firm’s own data, had been running from a spreadsheet housed on the laptop of the consultant and procured after several weeks of scrubbing, formatting and aggregating data from the CIO’s team; and that does not even mention the preceding data procurement process.  And so, as the expert in the room, the voice of reason, the CIO turned to me wanting to know how long it would take to implement the solution.  At least six months was my assessment.  I had seen their data, and it was a mess. I had seen the flow, and there was no model architecture; the sheer volume of data was daunting. If it was not architected correctly, the pretty colors and graphs would take much longer to refresh; this was not the answer he wanted to hear.

The advancement of social media, new web experiences and cutting-edge mobile technology has driven users to expect more of their applications.  As enterprises push to drive value and unlock more potential in their data, insurers of all sizes have attempted to implement analytical and business intelligence systems.  But here’s the truth: by and large, most insurance enterprises are not in a place with their data to make effective use of the new technologies in BI, mobile or social.  The reality is that data cleansing, fitting for purpose, movement and aggregation are being done in the BI layer when they should be done lower down in the stack, so that all applications can take advantage of them.

Let’s face it – quality data is important. Movement and shaping of data in the enterprise is important.  Identification of master data and metadata in the enterprise is important, and data governance is important.  It brings to mind episode 165, “The Apology”, of the mega-hit show Seinfeld.  Therein, George Costanza accuses erstwhile friend Jason Hanky of being a “step skipper”.  What I have seen in enterprise data is “step skipping”, as users clamor for new and better experiences but the underlying infrastructure and data is less than ready for consumption.  So the enterprise bootstraps, duct-tapes and otherwise creates customizations where they don’t architecturally belong.

Clearly this calls for a better solution: a more robust and architecturally sustainable data ecosystem that shepherds the data from acquisition through to consumption and all points in between. It must also be attainable by even modestly sized insurance firms.

First, you need to bring the data under your control.  That may mean external data integration, or just moving it from transactional, web or client-server systems into warehouses, marts or other large data stores and back again.  But remember, the data is in various stages of readiness.  This means that, through out-of-the-box or custom cleansing steps, the data needs to be processed, enhanced and stored in a way that is in line with corporate goals for governing the quality of that data.  And this says nothing of the need to change data normalization between source and target.  When implemented as a “factory” approach, bringing new data streams online, integrating them quickly and maintaining high standards become small incremental changes rather than a ground-up monumental task.  Move your data shaping, cleansing, standardization and aggregation further down in the stack, and many applications will benefit from the architecture.
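
To picture the “factory” approach, here is a small Python sketch in which each new source is just a configuration of reusable cleansing steps applied in order; the step names, lookup table and sample record are illustrative, not a prescribed design:

```python
# Sketch of a "factory" pipeline: reusable cleansing steps, composed per source,
# run before the data lands in a warehouse, mart or downstream application.

STATE_ABBREV = {"MAINE": "ME", "FLORIDA": "FL"}   # illustrative reference lookup

def trim_strings(rec):
    return {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}

def standardize_state(rec):
    s = rec.get("state", "").strip().upper()
    rec["state"] = STATE_ABBREV.get(s, s)
    return rec

def require_policy_id(rec):
    if not rec.get("policy_id"):
        raise ValueError("record rejected: missing policy_id")
    return rec

PIPELINES = {
    "broker_feed": [trim_strings, require_policy_id, standardize_state],
    "web_quotes":  [trim_strings, standardize_state],
}

def run(source_name, record):
    # Apply each configured cleansing step, in order, for the given source.
    for step in PIPELINES[source_name]:
        record = step(record)
    return record

print(run("broker_feed", {"policy_id": " P-77 ", "state": "Maine"}))
# {'policy_id': 'P-77', 'state': 'ME'}
```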

Critical to this process is that insurance enterprises need to ensure the data remains secure, private and is managed in accordance with rules and regulations. They must also govern the archival, retention and other portions of the data lifecycle.

At any point in the life of your information, you are likely sending data to or receiving data from an agent, broker, MGA or service provider, and that data needs to be processed using the robust ecosystem described above. Once an effective data exchange infrastructure is implemented, the steps to process the data can nicely complement your setup as information flows to and from your trading partners.

Finally, as your enterprise determines “how” to implement these solutions, you may look to a cloud-based system for speed to market and cost-effectiveness compared to on-premises solutions.

And don’t forget to register for Informatica World 2014 in Las Vegas, where you can take part in sessions and networking tailored specifically for insurers.

Posted in Business Impact / Benefits, Data Integration Platform, Data Quality, Enterprise Data Management, Financial Services