Tag Archives: risk

Conversations on Data Quality in Underwriting – Part 2

Did I really compare data quality to flushing toilet paper? Yeah, I think I did. Makes me laugh when I read that, but it’s still true. And yes, I am still playing with more data. This time it’s a location schedule for earthquake risk. I see a 26-story structure with a building value of only $136,000, built in who knows what year. I’d pull my hair out if it weren’t already shaved off.

So let’s talk about the six steps for data quality competency in underwriting. These six steps are standard in the enterprise, but what we will discuss here is how to tackle them in insurance underwriting and, more importantly, what business impact effective adoption of the competency delivers. It’s a repeating, self-reinforcing cycle, and when done correctly it can be intelligent and adaptive to changing business needs.

Profile – Effectively profile and discover data from multiple sources

We’ll start at the beginning, a very good place to start. First you need to understand your data. Where does it come from, and in what shape does it arrive? Whether the sources are internal or external, the profile step will help identify the problem areas. In underwriting, this will involve a lot of external submission data from brokers and MGAs, which is then combined with internal and service bureau data to get a full picture of the risk. Identify your key data points for underwriting and a desired state for that data. Once the data is profiled, you’ll get a very good sense of where your troubles are. Then continually profile as you bring other sources online, using the same standards of measurement. As an aside, this will also help in remediating brokers that are not meeting the standard.
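
A minimal sketch of what that profiling step might look like, assuming a pandas DataFrame loaded from a hypothetical broker submission file (the file name and column names are illustrative, not a real schema):

```python
import pandas as pd

def profile_locations(df: pd.DataFrame, key_fields: list) -> pd.DataFrame:
    """Summarize completeness and cardinality for each key underwriting field."""
    rows = []
    for col in key_fields:
        series = df[col]
        rows.append({
            "field": col,
            "populated_pct": round(100 * series.notna().mean(), 1),
            "distinct_values": series.nunique(dropna=True),
        })
    return pd.DataFrame(rows)

# Hypothetical broker submission file and the key fields an underwriter cares about
submissions = pd.read_csv("broker_location_schedule.csv")
print(profile_locations(submissions, ["address", "latitude", "longitude",
                                      "building_value", "year_built", "construction_code"]))
```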

Measure – Establish data quality metrics and targets

As an underwriter, you will need to determine the quality bar for the data you use. Usually this means flagging your most critical data fields for meeting underwriting guidelines. See where you are and where you want to be, and determine how you will measure the quality of the data as well as its desired state. And by the way, actuarial and risk will likely do the same thing on the same or similar data. Over time it all comes together as a team effort.
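
Continuing the sketch above, measuring against targets can be as simple as comparing actual completeness to a target per critical field; the field names and thresholds below are assumptions for illustration only:

```python
import pandas as pd

# Hypothetical completeness targets (percent populated) for critical underwriting fields
TARGETS = {"year_built": 95.0, "construction_code": 98.0, "building_value": 100.0}

def measure_against_targets(df: pd.DataFrame, targets: dict) -> pd.DataFrame:
    """Compare the actual completeness of each critical field to its target."""
    rows = []
    for field, target in targets.items():
        actual = round(100 * df[field].notna().mean(), 1)
        rows.append({"field": field, "actual_pct": actual,
                     "target_pct": target, "meets_target": actual >= target})
    return pd.DataFrame(rows)

# e.g. measure_against_targets(submissions, TARGETS) on the schedule profiled above
```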

Design – Quickly build comprehensive data quality rules

This is the meaty part of the cycle, and fun to boot. First, look to your desired future state and your critical underwriting fields. For each one, determine the rules by which you normally fix errant data. For example, what do you do when you see a 30-story wood-frame structure? How do you validate, cleanse and remediate that discrepancy? This may involve fuzzy logic or supporting data lookups, and it can easily be captured. Do this, write it down, and catalog it to be codified in your data quality tool. As you go along you will see a growing library of data quality rules compiled for broad use.
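
Here is a rough sketch of how a couple of such rules might be expressed in code, using the 30-story wood-frame and $136,000 high-rise examples from above; the column names and thresholds are illustrative assumptions, not underwriting guidelines:

```python
import pandas as pd

# Hypothetical threshold -- an illustrative assumption, not an industry standard
MAX_WOOD_FRAME_STORIES = 6

def rule_wood_frame_height(df: pd.DataFrame) -> pd.Series:
    """Flag wood-frame structures reported with an implausible number of stories."""
    is_wood = df["construction_code"].str.upper().eq("WOOD_FRAME")
    return is_wood & (df["stories"] > MAX_WOOD_FRAME_STORIES)

def rule_value_vs_stories(df: pd.DataFrame) -> pd.Series:
    """Flag suspiciously low building values on tall structures (the $136K 26-story problem)."""
    return (df["stories"] >= 10) & (df["building_value"] < 1_000_000)

# The growing rule library, ready to be codified in a data quality tool
RULE_CATALOG = {
    "wood_frame_height": rule_wood_frame_height,
    "value_vs_stories": rule_value_vs_stories,
}
```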

Deploy – Native data quality services across the enterprise

Once these rules are compiled and tested, they can be deployed for reuse across the organization. This is where the beautiful, magical thing happens: your institutional knowledge of your underwriting criteria is captured and reused. And not just once, but reused to cleanse existing data, new data and everything going forward. Your analysts will love you, your actuaries and risk modelers will love you; you will be a hero.
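
To illustrate the reuse idea, here is a minimal sketch that runs a hypothetical rule catalog (like the one sketched under Design) against any dataset and collects the exceptions for remediation:

```python
import pandas as pd

def apply_rule_catalog(df: pd.DataFrame, catalog: dict) -> pd.DataFrame:
    """Run every catalogued rule against a dataset and collect the exceptions."""
    exceptions = []
    for name, rule in catalog.items():
        flagged = df[rule(df)].copy()
        flagged["failed_rule"] = name
        exceptions.append(flagged)
    return pd.concat(exceptions, ignore_index=True)

# The same catalog can cleanse existing data, new broker submissions and acquired portfolios:
# exceptions = apply_rule_catalog(pd.read_csv("new_mga_schedule.csv"), RULE_CATALOG)
```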

Review – Assess performance against goals

Remember those goals you set for quality when you started? Check and see how you’re doing. After a few weeks and months, you should be able to profile the data, run the reports and see that the needle has moved. Remember that as part of the self-reinforcing cycle, you can now identify new issues to tackle and adjust the rules that aren’t working. Metrics you’ll want to watch over time include increased quote flow, better productivity and more competitive premium pricing.
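
One simple way to see whether the needle has moved is to keep periodic snapshots of a metric and look at the trend; the months and numbers below are invented purely for illustration:

```python
import pandas as pd

# Hypothetical monthly snapshots of one metric: completeness of construction_code
history = pd.DataFrame({
    "month": ["2013-01", "2013-02", "2013-03", "2013-04"],
    "construction_code_pct": [72.0, 78.5, 85.0, 91.2],
})

history["change_vs_prior_month"] = history["construction_code_pct"].diff()
print(history)  # has the needle moved toward the target set in the Measure step?
```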

Monitor – Proactively address critical issues

Now monitor constantly. As you bring new MGAs online, receive new underwriting guidelines or launch into new lines of business, you will repeat this cycle. You will also apply the same rule set as portfolios are acquired; it becomes a good way to sanity-check acquired business against your quality standards.
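
A small sketch of what that ongoing check might look like for a newly onboarded source, reusing the hypothetical targets and rule catalog from the earlier sketches:

```python
import pandas as pd

def monitor_new_source(df: pd.DataFrame, targets: dict, catalog: dict) -> dict:
    """Sanity-check a newly onboarded MGA or an acquired portfolio against existing standards."""
    completeness = {f: round(100 * df[f].notna().mean(), 1) for f in targets}
    return {
        "failed_targets": [f for f, pct in completeness.items() if pct < targets[f]],
        "rule_exceptions": {name: int(rule(df).sum()) for name, rule in catalog.items()},
    }

# e.g. monitor_new_source(pd.read_csv("acquired_portfolio.csv"), TARGETS, RULE_CATALOG)
```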

In case it wasn’t apparent, your data quality plan is now largely automated. With few manual exceptions, you should not have to remediate data the way you did in the past. In each of these steps there is obvious business value. In the end, it all adds up to better risk/cat modeling, more accurate risk pricing, cleaner data (for everyone in the organization) and more time spent on the core business of underwriting. Imagine if you could increase your quote volume simply by not having to muck around in the data. Imagine if you could improve your quote-to-bind ratio through better quality data and pricing. The last time I checked, that’s just good insurance business.

And now for something completely different…cats on pianos.  No, just kidding.  But check here to learn more about Informatica’s insurance initiatives.


Conversations on Data Quality in Underwriting – Part 1

I was just looking at some data I found. Yes, real data, not fake demo stuff. Real hurricane location analysis with modeled loss numbers. At first glance, I thought it looked good. There are addresses, latitudes/longitudes, values, loss numbers and other goodies like year built and construction codes. Yes, just the sort of data that an underwriter would look at when writing a risk. But after skimming through the schedule of locations, a few things start jumping out at me. So I dig deeper. I see a multi-million-dollar structure in Palm Beach, Florida with $0 in modeled loss. That’s strange. And wait, some of these geocode resolutions look a little coarse. Are they tier one or tier two counties? Who would know? At least all of the construction and occupancy codes have values, though they look like defaults. Perhaps it’s time to talk about data quality.
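
To make that concrete, here is a minimal sketch of the kind of sanity checks described above, assuming a hypothetical schedule with columns like building_value, modeled_loss and geocode_resolution (names and thresholds are illustrative):

```python
import pandas as pd

# Hypothetical hurricane location schedule with modeled loss results
schedule = pd.read_csv("hurricane_location_analysis.csv")

# High-value structures with $0 modeled loss deserve a second look
suspicious_loss = schedule[(schedule["building_value"] > 1_000_000)
                           & (schedule["modeled_loss"] == 0)]

# Coarse geocode resolutions can't answer the tier one vs. tier two county question
coarse_geocodes = schedule[schedule["geocode_resolution"].isin(["ZIP", "COUNTY"])]

print(len(suspicious_loss), "high-value locations with zero modeled loss")
print(len(coarse_geocodes), "locations geocoded too coarsely to assign a tier")
```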

This whole concept of data quality is a tricky one. As the cost of acquiring good data is weighed against the speed of underwriting/quoting and model correctness, I’m sure some tradeoffs are made. But the impact can be huge. First, incomplete data will either force defaults in risk models and pricing or add mathematical uncertainty. Second, massively incomplete data chews up personnel resources to cleanse and enhance it. And third, if not corrected, the risk profile will be wrong, with potential impact to pricing and portfolio shape. And that’s just to name a few.

I’ll admit it’s daunting to think about.  Imagine tens of thousands of submissions a month.  Schedules of thousands of locations received every day.  Can there even be a way out of this cave?  The answer is yes, and that answer is a robust enterprise data quality infrastructure.  But wait, you say, enterprise data quality is an IT problem.  Yeah, I guess, just like trying to flush an entire roll of toilet paper in one go is the plumber’s problem.  Data quality in underwriting is a business problem, a business opportunity and has real business impacts.

Join me in Part 2 as I outline the six steps for data quality competency in underwriting with tangible business benefits and enterprise impact.  And now that I have you on the edge of your seats, get smart about the basics of enterprise data quality.


Big Data is Here to Stay – Now We Get to Manage It

It’s official: “big data” is here to stay, and the solutions, concepts, hardware and services to support these massive implementations are going to continue to grow at a rapid pace. However, every organization has its own definition of big data and of how it plays in the organization. One area where we are seeing a lot of activity is “big transaction data” for OLTP and relational databases, because relational databases often lack the management capabilities to scale transactional applications into the higher TBs and PBs. In this post we will explore some ways that your existing OLTP system can scale without crushing IT and your budget in the process. (more…)


What Do “Clouds” Look Like in China?

Current situation

Having lived in China since May 2012, I’ve been fortunate to meet with the leaders of most multinational software companies, leaders of local firms, and industry analysts. My inspiration for this blog post is a conversation I had with a senior leader at a leading “cloud” provider. (more…)


The Next Frontier of Data Security: Minimizing the Threat of an Internal Data Breach

Adam Wilson, General Manager of ILM at Informatica, talks about the next frontier of data security. The more data that is passed around internally, the greater the risk your company runs of a data breach. Find out why auditors are taking a closer look at the number of internal data copies floating around and what that means for your company’s risk of a data leak.


Addressing the Big Data Backup Challenge with Database Archiving

In a recent InformationWeek blog, “Big Data A Big Backup Challenge”, George Crump aptly pointed out the problems of backing up big data and outlined some best practices that should be applied to address them, including:

  • Identifying which data can be re-derived and therefore doesn’t need to be backed up
  • Eliminating redundancy through file de-duplication and data compression
  • Using storage tiering and a combination of online disk and tape to reduce storage cost and optimize performance (see the sketch after this list) (more…)
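
A rough sketch of that triage, with invented dataset names and an assumed one-year threshold for moving untouched data to a cheaper tier, just to make the decision logic concrete:

```python
from datetime import date, timedelta

# Hypothetical catalog of datasets with the attributes the triage needs
datasets = [
    {"name": "derived_risk_scores", "re_derivable": True,  "last_modified": date(2012, 8, 1)},
    {"name": "claims_history",      "re_derivable": False, "last_modified": date(2011, 3, 15)},
    {"name": "policy_master",       "re_derivable": False, "last_modified": date(2012, 9, 20)},
]

ARCHIVE_AFTER = timedelta(days=365)  # assumption: data untouched for a year moves to a cheaper tier

def backup_treatment(ds: dict, today: date = date(2012, 10, 1)) -> str:
    if ds["re_derivable"]:
        return "skip backup (can be re-derived)"
    if today - ds["last_modified"] > ARCHIVE_AFTER:
        return "archive to low-cost tier (de-duplicate and compress)"
    return "standard backup to online disk"

for ds in datasets:
    print(ds["name"], "->", backup_treatment(ds))
```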

Data is the Answer, Now What’s the Question? Hint: It’s The Key to Optimizing Enterprise Applications

Data quality improvement isn’t really anything new; it’s been around for some time now.  Fundamentally the goal of cleansing, standardizing and enriching enterprise data through data quality processes remains the same.  What’s different now, however, is that in an increasingly competitive marketplace and in difficult economic times, a complete enterprise data quality management approach can separate the leaders from the laggards.  With a sound approach to enterprise data quality management, organizations reap the benefits of turning enterprise data into a key strategic asset. This helps to increase revenue, eliminate costs and reduce risks.  Using the right solution, organizations can leverage data in a way never possible before, holistically and proactively, by addressing data quality issues when and where they arise.  Doing so ensures key IT initiatives, like business intelligence, master data management, and enterprise applications, deliver on their promises of better business results. (more…)


Protecting Healthcare Data

Richard Cramer, Chief Healthcare Strategist at Informatica, talks about protecting healthcare data in non-production testing environments.

 


Customer Retention And Monitoring Product/Service Life Cycle Events

In my last blog post, I commented that organizations that monitor for and react to product/service-related life cycle events can proactively manage customer retention, and I suggested that quality data is critical to meeting those retention objectives. I can share a concrete example from a discussion I had with a C-level executive at an office supplies company. His organization religiously tracked paper and ink buying patterns for customers who had purchased a printer. His retention scheme relied on accurate knowledge of the client and the products the client had purchased.

He made a number of assumptions based on knowledge of each printer’s service life cycle. For example, he estimated the size of the business in relation to the printer’s number of pages printed per month. In turn, he was able to anticipate when the client was just about to run out of paper or when it was time to order new ink cartridges. By proactively contacting the customer about three weeks ahead of when he expected them to need more paper or ink, he was able to capture the follow-on sales, increase customer satisfaction and, consequently, extend the customer’s lifetime. (more…)
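
The arithmetic behind that proactive contact is simple; here is a small sketch with invented numbers for one customer (pages per month, case size and order history are assumptions, not real figures):

```python
from datetime import date, timedelta

# Hypothetical figures for one customer: pages/month estimated from the printer's
# service profile, sheets per case of paper, and the size and date of the last paper order
pages_per_month = 6000
sheets_per_case = 5000
cases_ordered = 4
last_order_date = date(2012, 9, 1)

months_of_supply = (cases_ordered * sheets_per_case) / pages_per_month
run_out_date = last_order_date + timedelta(days=round(months_of_supply * 30))
contact_date = run_out_date - timedelta(weeks=3)  # reach out about three weeks ahead

print(f"Expected to run out around {run_out_date}; schedule a proactive call on {contact_date}")
```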
