Tag Archives: insurance

Master Data and Data Security…It’s Not Complicated

The statement below on master data and data security was well intended.  I can certainly understand the angst around data security; especially after Target’s data breach, it is top of mind for IT executives and, now, business executives too.  But the root of the statement was flawed, and it got me thinking about master data and data security.

“If I use master data technology to create a 360-degree view of my client and I have a data breach, then someone could steal all the information about my client.”

Um, wait, what?  Insurance companies take personally identifiable information very seriously.  The flaw in the statement is how it links client master data to the job of securing client data.  Let’s dissect the statement and see what master data and data security really mean for insurers.  We’ll start by level-setting a few concepts.

What is your Master Client Record?

Your master client record is your 360-degree view of your client.  It represents everything about your client.  It uses Master Data Management technology to virtually integrate and syndicate all of that data into a single view.  It leverages identifiers to ensure integrity in the view of the client record.  And finally, it uses those identifiers to correlate client records for a network effect.

There are benefits to understanding everything about your client.  The shape and view of each client is specific to your business.  As an insurer looks at its policyholders, the view of “client” is based on the relationships and context that the client has with the insurer.  These include policies, claims, family relationships, history of activities and relationships with agency channels.

And what about security?

Naturally there is private data in a client record.  But the consolidated client record contains no more (and no less) personally identifiable information than the source systems already hold.  In fact, most of the data that a malicious party would be searching for can likely be found in just a handful of database locations.  Additionally, breaches often happen “on the wire.”  Policy numbers, credit card information, Social Security numbers and birth dates can be found in fewer than five database tables, and without a whole lot of intelligence or analysis.

That data should be secured.  That means the data should be encrypted or masked so that it stays protected even if a breach occurs.  Informatica’s data masking technology allows this data to be secured wherever it lives.  It provides access control so that only the right people and applications can see the data in an unsecured format.  You could even go so far as to secure ALL of your client record data fields; that’s a business and application choice.  Just do not confuse field- or database-level security with a decision NOT to assemble your golden policyholder record.
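
To make that concrete, here is a minimal sketch of field-level masking and role-based access, written in plain Python with hypothetical field names; it is illustrative only and not Informatica’s data masking product:

```python
import hashlib
import re

# Hypothetical sensitive fields on a policyholder record
SENSITIVE_FIELDS = {"ssn", "credit_card", "birth_date", "policy_number"}

def mask_value(field, value):
    """Return a masked representation that preserves only what business users need."""
    if field == "ssn":
        return "***-**-" + value[-4:]               # keep last four digits
    if field == "credit_card":
        return re.sub(r"\d(?=\d{4})", "*", value)   # mask all but last four digits
    if field == "birth_date":
        return value[:4] + "-**-**"                 # keep birth year only
    # irreversible token for anything else flagged sensitive
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def view_record(record, caller_roles):
    """Only privileged roles (e.g. a claims adjuster) see clear-text PII."""
    privileged = "pii_reader" in caller_roles
    return {
        field: (value if privileged or field not in SENSITIVE_FIELDS
                else mask_value(field, value))
        for field, value in record.items()
    }

policyholder = {"name": "Jane Doe", "ssn": "123-45-6789",
                "credit_card": "4111111111111111", "birth_date": "1980-07-14",
                "policy_number": "HO-99812"}

print(view_record(policyholder, caller_roles={"marketing"}))
print(view_record(policyholder, caller_roles={"pii_reader"}))
```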

What to worry about?  And what not to worry about?

Do not succumb to fear of mastering your policyholder data.  Master Data Management technology can provide a 360-degree view, but that view is only meaningful within your enterprise and applications.  The view of “client” is very contextual and coupled with your business practices, products and workflows.  Even if someone breaches your defenses and grabs data, they’re looking for the simple PII and financial data; they grab it and get out.  If an attacker could see your 360-degree view of a client, they wouldn’t understand it.  So don’t overcomplicate the security of your golden policyholder record.  As long as you have secured the necessary data elements, you’re good to go.  The business opportunity cost of NOT mastering your policyholder data far outweighs any imagined incremental risk of a PII breach.

So what does your Master Policyholder Data allow you to do?

Imagine knowing more about your policyholders.  Let that soak in for a bit.  It feels good to think that you can make it happen.  And you can do it.  For an insurer, Master Data Management provides powerful opportunities across everything from sales, marketing and product development to claims and agency engagement.  Each channel and activity has discrete ROI.  It also has a direct-line impact on revenue, policyholder satisfaction and market share.  Let’s look at just a few very real examples that insurers are attempting to tackle today.

  1. For a policyholder of a certain demographic with an auto and home policy, what is the next product my agent should discuss?
  2. How many people live in a certain policyholder’s household?  Are there any upcoming teenage drivers?
  3. Does this personal lines policyholder own a small business?  Are they a candidate for a business packaged policy?
  4. What is your policyholder’s claims history?  What about prior carriers and their network of suppliers?
  5. How many touch points have your agents had with your policyholders?  Were they meaningful?
  6. How can you connect with your policyholders in social media settings and make an impact?
  7. What are your policyholders’ mobile usage patterns, and what are they doing online that might interest your Marketing team?

These are just some of the examples of very streamlined connections that you can make with your policyholders once you have your 360-degree view. Imagine the heavy lifting required to do these things without a Master Policyholder record.

Fear is the enemy of innovation.  In mastering policyholder data it is important to have two distinct work streams.  First, secure the necessary data elements using data masking technology.  Once that is done, gain understanding through the mastering of your policyholder record.  Only then will you truly be able to take your clients’ experience to the next level.  When that happens, watch your revenue grow in leaps and bounds.


Conversations on Data Quality in Underwriting – Part 2

Did I really compare data quality to flushing toilet paper?  Yeah, I think I did.  Makes me laugh when I read that, but still true.  And yes, I am still playing with more data.  This time it’s a location schedule for earthquake risk.  I see a 26-story structure with a building value of only $136,000 built in who knows what year.  I’d pull my hair out if it weren’t already shaved off.

So let’s talk about the six steps for data quality competency in underwriting.  These six steps are standard in the enterprise, but what we will discuss is how to tackle them in insurance underwriting, and more importantly, what the business impact of effectively adopting the competency is.  It’s a repeating, self-reinforcing cycle, and when done correctly it can be intelligent and adaptive to changing business needs.

Profile – Effectively profile and discover data from multiple sources

We’ll start at the beginning, a very good place to start.  First you need to understand your data.  Where is it from, and in what shape does it arrive?  Whether the sources are internal or external, the profile step will help identify the problem areas.  In underwriting, this will involve a lot of external submission data from brokers and MGAs, which is then combined with internal and service bureau data to get a full picture of the risk.  Identify your key data points for underwriting and a desired state for that data.  Once the data is profiled, you’ll get a very good sense of where your troubles are.  Continue to profile as you bring other sources online, using the same standards of measurement.  As an aside, this will also help in remediating brokers that are not meeting the standard.
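
As a rough illustration of what a first-pass profile can look like, here is a small sketch using pandas with made-up column names for a broker location schedule; the completeness and validity checks are assumptions, not any particular profiling tool’s output:

```python
import pandas as pd

# Hypothetical location schedule from a broker submission
locations = pd.DataFrame({
    "address":        ["1 Main St", None, "77 Ocean Ave", "9 Pine Rd"],
    "year_built":     [1995, None, 1880, 2025],
    "construction":   ["masonry", "wood frame", None, "steel"],
    "building_value": [350000, 136000, None, 0],
})

profile = pd.DataFrame({
    "completeness": locations.notna().mean(),   # share of non-null values per column
    "distinct":     locations.nunique(),        # cardinality per column
})

# Simple validity checks against underwriting expectations
profile.loc["year_built", "pct_valid"] = (
    locations["year_built"].between(1800, 2014).mean()   # plausible build-year range
)
profile.loc["building_value", "pct_valid"] = (
    (locations["building_value"] > 0).mean()
)

print(profile)
```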

Measure – Establish data quality metrics and targets

As an underwriter, you will need to determine the quality bar for the data you use.  Usually this means flagging your most critical data fields for meeting underwriting guidelines.  See where you are and where you want to be, and determine how you will measure the quality of the data as well as the desired state.  And by the way, actuarial and risk will likely do the same thing on the same or similar data.  Over time it all comes together as a team.
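
One lightweight way to capture the quality bar, shown here as a hypothetical sketch rather than any tool’s native format, is a simple table of critical fields, the check applied to each, and the target pass rate that underwriting, actuarial and risk can agree on:

```python
# Hypothetical data quality targets for critical underwriting fields.
# "measure" names the check applied during profiling; "target" is the pass-rate goal.
DQ_TARGETS = [
    {"field": "construction_code", "measure": "in ISO construction code list", "target": 0.98},
    {"field": "year_built",        "measure": "between 1800 and current year", "target": 0.95},
    {"field": "building_value",    "measure": "> 0 and < 1e9",                 "target": 0.99},
    {"field": "address",           "measure": "geocodes to rooftop/parcel",    "target": 0.90},
]

def report(measured):
    """Compare measured pass rates (field -> rate) against the agreed targets."""
    for rule in DQ_TARGETS:
        rate = measured.get(rule["field"], 0.0)
        status = "OK" if rate >= rule["target"] else "BELOW TARGET"
        print(f'{rule["field"]:18} {rate:5.1%} vs {rule["target"]:.0%}  {status}')

report({"construction_code": 0.91, "year_built": 0.97, "building_value": 0.99, "address": 0.72})
```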

Design – Quickly build comprehensive data quality rules

This is the meaty part of the cycle, and fun to boot.  First, look to your desired future state and your critical underwriting fields.  For each one, determine the rules by which you normally fix errant data.  What do you do when you see a 30-story wood-frame structure?  How do you validate, cleanse and remediate that discrepancy?  This may involve fuzzy logic or supporting data lookups, and it can easily be captured.  Do this, write it down, and catalog it to be codified in your data quality tool.  As you go along you will see a growing library of data quality rules being compiled for broad use.
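
A sketch of one such codified rule, with hypothetical field names and a made-up height threshold (real data quality tools let analysts express this without writing code):

```python
# Hypothetical rule: wood-frame construction above a height threshold is suspect.
MAX_WOOD_FRAME_STORIES = 6

def check_construction_vs_stories(row):
    """Flag (and suggest a fix for) construction codes that contradict building height."""
    construction = (row.get("construction") or "").strip().lower()
    stories = row.get("stories") or 0
    if construction == "wood frame" and stories > MAX_WOOD_FRAME_STORIES:
        return {
            "status": "suspect",
            "reason": f"{stories}-story wood frame exceeds {MAX_WOOD_FRAME_STORIES} stories",
            # remediation could be a lookup against tax-assessor or prior-year data
            "suggested_fix": "re-derive construction code from assessor data",
        }
    return {"status": "ok"}

print(check_construction_vs_stories({"construction": "Wood Frame", "stories": 30}))
```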

Deploy – Native data quality services across the enterprise

Once these rules are compiled and tested, they can be deployed for reuse in the organization.  This is the beautiful magical thing that happens.  Your institutional knowledge of your underwriting criteria can be captured and reused.  This doesn’t mean just once, but reused to cleanse existing data, new data and everything going forward.  Your analysts will love you, your actuaries and risk modelers will love you; you will be a hero.

Review – Assess performance against goals

Remember those goals you set for your data quality when you started?  Check and see how you’re doing.  After a few weeks and months, you should be able to profile the data, run the reports and see that the needle has moved.  Remember that as part of the self-reinforcing cycle, you can now identify new issues to tackle and adjust the rules that aren’t working.  Metrics you’ll want to track over time include increased quote flow, better productivity and more competitive premium pricing.

Monitor – Proactively address critical issues

Now monitor constantly.  As you bring new MGAs online, receive new underwriting guidelines or launch into new lines of business you will repeat this cycle.  You will also utilize the same rule set as portfolios are acquired.  It becomes a good way to sanity check the acquisition of business against your quality standards.

In case it wasn’t apparent, your data quality plan is now largely automated.  With few manual exceptions, you should not have to be remediating data the way you were in the past.  In each of these steps there is obvious business value.  In the end, it all adds up to better risk/cat modeling, more accurate risk pricing, cleaner data (for everyone in the organization) and more time spent on the core business of underwriting.  Imagine if you could increase your quote volume simply by not needing to muck around in data.  Imagine if you could improve your quote-to-bind ratio through better quality data and pricing.  The last time I checked, that’s just good insurance business.

And now for something completely different…cats on pianos.  No, just kidding.  But check here to learn more about Informatica’s insurance initiatives.


Conversations on Data Quality in Underwriting – Part 1

I was just looking at some data I found.  Yes, real data, not fake demo stuff.  Real hurricane location analysis with modeled loss numbers.  At first glance, I thought it looked good.  There are addresses, latitudes/longitudes, values, loss numbers and other goodies like year built and construction codes.  Yes, just the sort of data that an underwriter would look at when writing a risk.  But after skimming through the schedule of locations a few things start jumping out at me.  So I dig deeper.  I see a multi-million dollar structure in Palm Beach, Florida with $0 in modeled loss.  That’s strange.  And wait, some of these geocode resolutions look a little coarse.  Are they tier one or tier two counties?  Who would know?  At least all of the construction and occupancy codes have values, albeit they look like defaults.  Perhaps it’s time to talk about data quality.

This whole concept of data quality is a tricky one.  As the cost of acquiring good data is weighed against the speed of underwriting/quoting and model correctness, I’m sure some tradeoffs are made.  But the impact can be huge.  First, incomplete data will either force defaults in risk models and pricing or add mathematical uncertainty.  Second, massively incomplete data chews up personnel resources to cleanse and enhance.  And third, if not corrected, the risk profile will be wrong, with potential impact to pricing and portfolio shape.  And that’s just to name a few.

I’ll admit it’s daunting to think about.  Imagine tens of thousands of submissions a month.  Schedules of thousands of locations received every day.  Can there even be a way out of this cave?  The answer is yes, and that answer is a robust enterprise data quality infrastructure.  But wait, you say, enterprise data quality is an IT problem.  Yeah, I guess, just like trying to flush an entire roll of toilet paper in one go is the plumber’s problem.  Data quality in underwriting is a business problem, a business opportunity and has real business impacts.

Join me in Part 2 as I outline the six steps for data quality competency in underwriting with tangible business benefits and enterprise impact.  And now that I have you on the edge of your seats, get smart about the basics of enterprise data quality.


Oh the Data I’ve Seen…

Eighteen months ago, I was sitting in a conference room, nothing remarkable except for the great view down 6th Avenue toward the Empire State Building.  The pre-sales consultant sitting across from me had just given a visually appealing demonstration to the CIO of a multinational insurance corporation.  There were fancy graphics and colorful charts sharply displayed on an iPad and refreshing every few seconds.  The CIO asked how long it had taken to put the presentation together.  The consultant excitedly shared that it only took him four to five hours, to which the CIO responded, “Well, if that took you less than five hours, we should be able to get a production version in about two to three weeks, right?”

The facts of the matter, however, were completely different.  The demo, while using the firm’s own data, had been running from a spreadsheet housed on the consultant’s laptop and procured after several weeks of scrubbing, formatting, and aggregating data from the CIO’s team; and that does not even count the preceding data procurement process.  And so, as the expert in the room, the voice of reason, the CIO turned to me wanting to know how long it would take to implement the solution.  At least six months, was my assessment.  I had seen their data, and it was a mess.  I had seen the data flow, and it was not a model architecture; the sheer volume of data was daunting.  If it was not architected correctly, the pretty colors and graphs would take much longer to refresh; this was not the answer he wanted to hear.

The advancement of social media, new web experiences and cutting-edge mobile technology has driven users to expect more of their applications.  As enterprises push to drive value and unlock more potential in their data, insurers of all sizes have attempted to implement analytical and business intelligence systems.  But here’s the truth: by and large, most insurance enterprises are not in a place with their data to make effective use of the new technologies in BI, mobile or social.  The reality is that data cleansing, fit-for-purpose shaping, movement and aggregation are being done in the BI layer when they should be done lower down in the stack so that all applications can take advantage of the work.

Let’s face it – quality data is important.  Movement and shaping of data in the enterprise is important.  Identification of master data and metadata in the enterprise is important, and data governance is important.  It brings to mind episode 165, “The Apology”, of the mega-hit show Seinfeld, in which George Costanza accuses erstwhile friend Jason Hanky of being a “step skipper”.  What I have seen in enterprise data is “step skipping”: users clamor for new and better experiences, but the underlying infrastructure and data are less than ready for consumption.  So the enterprise bootstraps, duct-tapes and otherwise creates customizations where they don’t architecturally belong.

Clearly this calls for a better solution: a more robust and architecturally sustainable data ecosystem, one that shepherds the data from acquisition through to consumption and all points in between.  It also must be attainable by even modestly sized insurance firms.

First, you need to bring the data under your control.  That may mean external data integration, or just moving data from transactional, web, or client-server systems into warehouses, marts or other large data storage schemes and back again.  But remember, the data is in various stages of readiness.  This means that, through out-of-the-box or custom cleansing steps, the data needs to be processed, enhanced and stored in a way that is in line with corporate goals for governing the quality of that data.  And this says nothing of the need to change a data normalization factor between source and target.  When implemented as a “factory” approach, bringing new data streams online, integrating them quickly and maintaining high standards become small incremental changes rather than a ground-up monumental task.  Move your data shaping, cleansing, standardization and aggregation further down in the stack and many applications will benefit from the architecture.
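
A toy illustration of the “factory” idea in generic Python, with hypothetical source names: the cleansing and standardization steps are written once, and each new data stream is just another small registration rather than a new project:

```python
# Shared cleansing/standardization steps, written once ("the factory").
def standardize_address(rec):
    rec["address"] = " ".join(rec.get("address", "").upper().split())
    return rec

def normalize_currency(rec):
    rec["premium"] = round(float(rec.get("premium", 0)), 2)
    return rec

PIPELINE = [standardize_address, normalize_currency]

# Bringing a new source online is a small incremental change: register it and map its fields.
SOURCES = {
    "policy_admin": lambda raw: {"address": raw["addr"],     "premium": raw["prem"]},
    "broker_feed":  lambda raw: {"address": raw["location"], "premium": raw["annual_premium"]},
}

def ingest(source_name, raw_record):
    record = SOURCES[source_name](raw_record)   # source-specific mapping
    for step in PIPELINE:                       # shared, reusable cleansing
        record = step(record)
    return record

print(ingest("broker_feed", {"location": " 12  main st ", "annual_premium": "1234.567"}))
```

Because the cleansing lives below any one application, a warehouse load, a BI extract or a partner feed all benefit from the same steps.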

Critical to this process is that insurance enterprises need to ensure the data remains secure, private and is managed in accordance with rules and regulations. They must also govern the archival, retention and other portions of the data lifecycle.

At any point in the life of your information, you are likely sending or receiving data from an agent, broker, MGA or service provider, which needs to be processed using the robust ecosystem described above.  Once an effective data exchange infrastructure is implemented, the steps to process the data can nicely complement your setup as information flows to and from your trading partners.

Finally, as your enterprise determines “how” to implement these solutions, you may look to a cloud based system for speed to market and cost effectiveness compared to on-premises solutions.

And don’t forget to register for Informatica World 2014 in Las Vegas, where you can take part in sessions and networking tailored specifically for insurers.


Murphy’s First Law of Bad Data – If You Make A Small Change Without Involving Your Client – You Will Waste Heaps Of Money

I have not used my personal encounter with bad data management for over a year, but a couple of weeks ago I was compelled to revive it.  Why, you ask?  Well, a complete stranger started to receive text messages meant for one of my friends – including messages from me – and it took days for him to detect it; a week later nobody at this North American wireless operator had been able to fix it.  This coincided with a meeting I had with a European telco’s enterprise architecture team.  There was no better way to illustrate to them how a customer reacts, and the risk to their operations, when communication breaks down due to just one tiny thing changing – say, his address (or in the SMS case, some random SIM mapping – another type of address).

Imagine the cost of other bad data (thecodeproject.com)

In my case, I  moved about 250 miles within the United States a couple of years ago and this seemingly common experience triggered a plethora of communication screw ups across every merchant a residential household engages with frequently, e.g. your bank, your insurer, your wireless carrier, your average retail clothing store, etc.

For more than two full years after my move to a new state, the following things continued to pop up on a monthly basis due to my incorrect customer data:

  • In the case of my old satellite TV provider, they got to me (correct person), but with a misspelled last name at my correct, new address.
  • My bank put me in a bit of a pickle as they sent “important tax documentation” that I did not want to open, as my new tenants’ names (in the house I had just vacated) were on the letter but with my new home’s address.
  • My mortgage lender sent me a refinancing offer to my new address (right person & right address) but with both my wife’s name and mine completely butchered.
  • My wife’s airline, where she enjoys the highest level of frequent flyer status, continually mails her offers duplicating her last name as her first name.
  • A high-end furniture retailer sends two 100-page glossy catalogs probably costing $80 each to our address – one for me, one for her.
  • A national health insurer sends “sensitive health information” (disclosed on envelope) to my new residence’s address but for the prior owner.
  • My legacy operator turns on the wrong premium channels on half my set-top boxes.
  • The same operator sent me an SMS the next day thanking me for switching to electronic billing as part of my move, which I had not signed up for, followed by payment notices (as I did not get my invoice in the mail).  When I called this error out over the next three months, phoning their contact center and indicating how much revenue I generate for them across all services, they countered with “sorry, we don’t have access to the wireless account data”, “you will see it change on the next bill cycle” and “you show as paper billing in our system today”.

Ignoring the potential for data privacy lawsuits, you start wondering how long you have to be a customer, and how much money you need to spend with a merchant (and they need to waste), for them to take changes to your data more seriously.  And these are not even merchants to whom I am brand new – these guys have known me and taken my money for years!

One thing I nearly forgot…these mailings all happened at least once a month on average, sometimes twice, over two years.  If I do some pigeon math here, I would estimate the postage and production cost alone to run into the hundreds of dollars.

The most egregious trespass, though, belonged to my homeowner’s insurance carrier (HOI), which was also my mortgage broker.  They had a double whammy in store for me.  First, I received a cancellation notice from the HOI for my old residence indicating they had cancelled my policy because the last payment was not received and that any claims would be denied as a consequence.  Then, my new residence’s HOI advised that they had added my old home’s HOI policy to my account.

After wondering what I could have possibly done to trigger this, I called all four parties (not three as the mortgage firm did not share data with the insurance broker side – surprise, surprise) to find out what had happened.

It turns out that I had to explain and prove to all of them how one party’s data change during my move erroneously exposed me to liability.  It felt like the old days, when seedy telco salespeople needed only your name and phone number, tied to some sort of promotion you never took part in (the back of a raffle card to win a new car), to switch your long-distance carrier and present you with a $400 bill the following month.  Yes, that also happened to me…many years ago.  Here again, the consumer had to do all the legwork when someone (not an automatic process!) switched some entry without any oversight or review, triggering hours of wasted effort on their side and mine.

We can argue all day long about whether these screw-ups are due to bad processes or bad data, but in reality even processes are triggered by some sort of underlying event, which can be something as mundane as a database field’s flag being updated when your last purchase puts you in a new marketing segment.

Now imagine you get married and your wife changes her name.  With all these company-internal (CRM, billing, ERP), free public (property tax), commercial (credit bureaus, mailing lists) and social media data sources out there, you would think such everyday changes could get picked up more quickly and automatically.  If not automatically, then should there not be some sort of trigger to kick off a “governance” process; something along the lines of “email/call the customer if attribute X has changed” or “please log into your account and update your information – we heard you moved”?  If American Express was able to detect ten years ago that someone purchased $500 worth of product with your credit card at a gas station or some lingerie website known for fraudulent activity, why can’t your bank or insurer, who know even more about you?  And yes, that happened to me as well.
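
Something as simple as the following sketch captures the spirit of that trigger; the watched attributes and the notification step are hypothetical, and a real governance process would open a case or route the change for review:

```python
# Attributes whose change should trigger a "please confirm your details" workflow.
WATCHED_ATTRIBUTES = {"last_name", "address", "phone", "sim_id"}

def detect_changes(old_record, new_record):
    """Return the watched attributes that changed between two versions of a customer record."""
    return {
        attr: (old_record.get(attr), new_record.get(attr))
        for attr in WATCHED_ATTRIBUTES
        if old_record.get(attr) != new_record.get(attr)
    }

def governance_trigger(customer_id, old_record, new_record):
    changes = detect_changes(old_record, new_record)
    if changes:
        # In a real process this would open a case, email the customer, or require review.
        print(f"Customer {customer_id}: verify change(s) {sorted(changes)} before downstream sync")

governance_trigger(
    "C-1001",
    {"last_name": "Smith", "address": "10 Elm St, Trenton NJ"},
    {"last_name": "Smith", "address": "42 Oak Ave, Austin TX"},
)
```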

Tell me about one of your “data-driven” horror scenarios.


Where Is My Broadband Insurance Bundle?

As I continue to counsel insurers about master data, they all agree immediately that it is something they need to get their hands around fast.  If you ask participants in a workshop at any carrier – no matter if life, P&C, health or excess – they all raise their hands when I ask, “Do you have a broadband bundle at home for internet, voice and TV as well as wireless voice and data?”, followed by “Would you want your company to be the insurance version of this?”

Buying insurance like broadband

Now let me be clear: while communication service providers offer very sophisticated bundles, they are also still grappling with a comprehensive view of a client across all services (data, voice, text, residential, business, international, TV, mobile, etc.) and each of their touch points (website, call center, local store).  They are also miles away from including any sort of meaningful network data (jitter, dropped calls, failed call setups, etc.).

Similarly, my insurance investigations typically touch most of the frontline consumer (business and personal) contact points, including agencies, marketing (incl. CEM & VOC) and the service center.  Across all of these we typically see a significant lack of productivity, given that policy, billing, payments and claims systems are service-line specific, while supporting functions from developing leads and underwriting to claims adjudication often handle more than one type of claim.

This lack of performance is worsened even more by the fact that campaigns have sub-optimal response and conversion rates.  As touchpoint-enabling CRM applications also suffer from a lack of complete or consistent contact preference information, interactions may violate local privacy regulations.  In addition, service centers may capture leads only to log them into a black-box AS400 policy system, where they disappear.

Here again we often hear that the fix could just happen by scrubbing data before it goes into the data warehouse.  However, the data typically does not sync back to the source systems, so any interaction with a client via chat, phone or face-to-face will not have real-time, accurate information to execute a flawless transaction.

On the insurance IT side we also see enormous overhead: from scrubbing every database from source via staging to the analytical reporting environment every month or quarter, to one-off clean-up projects for the next acquired book of business.  For a mid-sized, regional carrier (ca. $6B net premiums written) we find an average of $13.1 million in annual benefits from a central customer hub.  This figure translates into an ROI of between 600% and 900%, depending on requirement complexity, distribution model, IT infrastructure and service lines.  This number includes some baseline revenue improvements, productivity gains and cost avoidance as well as reduction.

On the health insurance side, my clients have complained about regional data sources contributing incomplete (often driven by local process & law) and incorrect data (name, address, etc.) to untrusted reports from membership, claims and sales data warehouses.  This makes budgeting for items like medical advice lines staffed by nurses, sales compensation planning and even identifying high-risk members (now driven by the Affordable Care Act) a true mission impossible, and it makes the life of the pricing teams challenging.

Over in the life insurance category, whole and universal life plans now encounter a situation where high-value clients first faced lower-than-expected yields due to the low-interest-rate environment, on top of front-loaded fees and the front-loading of the cost of the term component.  Now, as bonds are forecast to decrease in value in the near future, publicly traded carriers will likely be forced to sell bonds before maturity to make good on term life commitments and whole life minimum yield commitments to keep policies in force.

This means that insurers need a full profile of clients as they experience life changes like a move, loss of job, a promotion or birth.   Such changes require the proper mitigation strategy, which can be employed to protect a baseline of coverage in order to maintain or improve the premium.  This can range from splitting term from whole life to using managed investment portfolio yields to temporarily pad premium shortfalls.

Overall, without a true, timely and complete picture of a client and his/her personal and professional relationships over time – and of what strategies were presented, considered appealing and ultimately put in force – how will margins improve?  Surely, social media data can help here, but it should be a second step after mastering what is available in-house already.  What are some of your experiences with how carriers have tried to collect and use core customer data?

Disclaimer:
Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control, and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized; no warranty or representation of success, either express or implied, is made.

Managing Risk and Compliance in Financial Services with Informatica Vibe!

Last week at Informatica World 2013, Informatica introduced Vibe, the industry’s first and only embeddable virtual data machine (VDM), designed to embed data management into the next generation of applications for the integrated information age.  This unique capability offers technology for banks and insurance companies to scale and improve their data management, integration, and governance processes to manage risk and ensure ongoing compliance with a host of industry regulations, from Basel III and Dodd-Frank to Solvency II.  Why is Vibe unique, and how does it help with risk management and regulatory compliance?

The data required for risk and compliance originates from tens if not hundreds of systems across all lines of business, including loan origination systems, loan servicing, credit card processors, deposit servicing, securities trading, brokerage, call center, online banking, and more.  Not to mention external data providers for market, pricing, positions, and corporate actions information.  The volumes are greater than ever, the systems range from legacy mainframe trading systems to mobile banking applications, the formats vary across the board from structured to semi-structured and unstructured, and a wide range of data standards must be dealt with, including MISMO®, FpML®, FIX®, ACORD®, and SWIFT, to name a few.  Take all that into consideration and the data administration, management, governance, and integration work required is massive, multifaceted, and fraught with risk and hidden costs, often caused by custom-coded processes or the use of standalone tools.

The Informatica Platform and Vibe can help by allowing our customers to take advantage of ever-evolving data technologies and innovations without having to recode, and to develop a lean data management process that turns unique works of art into reusable artifacts across the information supply chain.  In other words, Vibe powers the unique “Map Once. Deploy Anywhere.” capabilities of the Informatica Platform, accelerates project delivery by 5x, makes the entire data lifecycle easier to manage and eliminates the risks, costs, and short-lived value associated with hand coding or using standalone tools to do this work.  Here are some examples of Vibe for risk and compliance, with a rough sketch of the “define once, reuse” idea after the list:

  • Build data quality rules to standardize address information, remove or consolidate duplicates, translate or standardize reference data, and cleanse other critical information used to calculate risk, within your ETL process or as a “Data Quality Validation” service in upstream systems.
  • Build rules to standardize wire transfer data to the latest SWIFT formats within your payment hubs, and leverage the same rules when facilitating payment transactions with your counterparties.
  • Build and execute complex parsing and transformation processes, leveraging the power of Hadoop to handle large volumes of structured and unstructured data for analytics, and utilize the same rules in downstream credit, operational, and market risk data warehouses.
  • Define standard data masking rules once, and leverage them when using data with sensitive information for testing and development, as well as for enforcing data access rights for ongoing data privacy compliance.
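
The common thread in these examples is defining a rule once and reusing it in more than one runtime context.  The sketch below is emphatically not Informatica’s API; it is generic Python showing the shape of the “define once, reuse” idea, with one hypothetical standardization rule consumed by both a batch ETL step and an upstream validation service:

```python
# One rule definition, written once...
def standardize_country(value):
    """Normalize free-text country values to ISO-style codes (tiny illustrative mapping)."""
    mapping = {"usa": "US", "united states": "US", "u.s.": "US",
               "deutschland": "DE", "germany": "DE"}
    return mapping.get(value.strip().lower(), value.strip().upper())

# ...reused in a batch ETL flow...
def etl_batch(rows):
    return [{**row, "country": standardize_country(row["country"])} for row in rows]

# ...and reused again as an upstream validation/standardization service.
def validate_on_entry(form_data):
    form_data["country"] = standardize_country(form_data["country"])
    return form_data

print(etl_batch([{"name": "ACME Corp", "country": "United States"}]))
print(validate_on_entry({"name": "Beispiel AG", "country": "Deutschland"}))
```

Whether the reuse happens in an ETL engine, a service, or Hadoop is a deployment detail; the rule definition itself does not change.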

 The “Map Once. Deploy Anywhere.” capabilities inherent to Vibe drive:

  • Faster adoption of new technologies and data – Banks and insurance companies can take rapid advantage of new data and technologies without having to know the details of the underlying platform, or having to hire highly specialized and costly programming resources. 
  • Reduced complexity through insulation from change – When data type, volume, source, platform or users change, financial institutions can simply redeploy their existing data integration instructions without re-specification, redesign or redevelopment on a new integration technology – like Hadoop.

Vibe is NOT a new product offering.  It is a unique capability that Informatica supports through our existing platform, comprising our Data Integration, Data Quality, Master Data Management, and Information Lifecycle Management products.  Whether it is Dodd-Frank, Basel III, FATCA, or Solvency II, with Vibe, banks and insurance companies can ensure they have the right data and increase the potential to improve how they measure risk and ensure regulatory compliance.  Visit the Banking/Capital Markets and Insurance industry solutions section of our website for more information on how we help today’s global financial services industry.


Bankers, Insurers – How Customer Centric Are You?

The need to be more customer-centric in financial services is more important than ever as banks and insurance companies look for ways to reduce churn; those in the industry know that loyal customers spend more on higher-margin products and are likely to refer additional customers.  Bankers and insurers who understand this, and get this right, are in a better position to maintain profitable and lasting customer loyalty and reap significant financial rewards.  Current market conditions are challenging and will be difficult to overcome without the right information management architecture to help companies be truly customer-centric.  Here’s why:

  • Customer satisfaction with retail banks has decreased for four consecutive years, with particularly low scores in customer service.[1] Thirty-seven percent of customers who switched primary relationships cited poor customer service as the main reason in an industry survey.
  • The commoditization of traditional banking and insurance products has rapidly increased client attrition and decreased acquisition rates. Industry reports estimate that banks are losing customers at an average rate of 12.5% per year, while average acquisition rates are at 13.5%, making acquisitions nearly a zero-sum game. Further, the cost of acquiring new customers is estimated at five times the rate of retaining existing ones.
  • Switching is easier than ever before. Customer churn is at an all-time high in most European countries. According to an industry survey, 42 percent of German banking customers had been with their main bank for less than a year. With customer acquisition costs running between €200 and €400, bankers and insurers need to keep their clients at least 5 to 7 years simply to break even.
  • Mergers and acquisitions further increase the complexity and risk of maintaining customer relationships. According to a recent study, 17 percent of respondents who had gone through a merger or acquisition had switched at least one of their accounts to another institution after their bank was acquired, while an additional 31 percent said they were at least somewhat likely to switch over the next year.[2]

Financial services professionals have long recognized the need to manage customer relationships vs. account relationships by shifting away from a product-centric culture toward a customer-centric model to maintain client loyalty and grow their bottom lines organically. Here are some reasons why:

  • A 5% increase in customer retention can increase profitability by 35% in banking, 50% in brokerage, and 125% in the consumer credit card market.[3]
  • Banks can add more than $1 million to the profitability of their commercial banking business line by simply extending 16 of these large corporate relationships by one year, or by saving two such clients from defecting. In the insurance sector, a one percent increase in customer retention results in $1M in revenue.
  • The average company has between a 60% and 70% probability of success selling more services to a current customer, a 20% to 40% probability of selling to a former customer, and a 5% to 20% probability of making a sale to a prospect.[4]
  • Up to 66% of current users of financial institutions’ social media sites engage in receiving information about financial services, 32% use them to retrieve information about offers or promotions and 30% to conduct customer service-related activities.[5]

So what does it take to become more Customer-centric?

Companies that have successful customer-centric business models share similar cultures of placing the customer first, people who are willing to go that extra mile, business processes designed with the customer’s needs in mind, product and marketing strategies designed to meet a customer’s needs, and technology solutions that help access and deliver trusted, timely, and comprehensive information and intelligence across the business.  These technologies include data integration, data quality, and master data management, discussed below.

Why is data integration important?  Customer centricity begins with the ability to access and integrate your data regardless of format, source system, structure, volume, or latency, from any location, including the cloud and social media sites.  The data the business needs originates from many different systems across the organization and beyond it, including new Software-as-a-Service solutions and cloud-based technologies.  Traditional hand-coded methods, one-off tools, and open source data integration tools are not able to scale and perform well enough to effectively and efficiently access, manage, and deliver the right data to the systems and applications on the front lines.  At the same time, we live in the Big Data era: increasing transaction volumes and new channel adoption, including mobile devices and social media, combine to generate petabytes of data.  Supporting a capable and sustainable customer-centric business model requires technology that can handle this complexity and scale with the business, while reducing costs and improving productivity.

Data quality issues must be dealt with proactively and managed by both business and technology stakeholders.  Though technology itself cannot prevent all data quality errors from happening, it is a critical part of your customer information management process, ensuring any issues that exist are identified and dealt with in an expeditious manner.  Specifically, you need a Data Quality solution that can detect data quality errors in any source, allow business users to define data quality rules, support seamless consumption of those rules by developers, provide dashboards and reports for business stakeholders, and offer ongoing quality monitoring to deal with time- and business-sensitive exceptions.  Data quality management can only scale and deliver value if an organization believes in and manages data as an asset.  It also helps to have a data governance framework consisting of processes, policies, standards, and people from business and IT working together in the process.

Lastly, growing your business, improving wallet share, retaining profitable relationships, and lowering the cost of managing customer relationships require a single, trusted, holistic, and authoritative source of customer information.  Customer information has historically been managed in applications across traditional business silos that lacked any common processes to reconcile duplicate and conflicting information across business systems.  Master Data Management solutions are purpose-built to help break down those traditional application and business silos and deliver a single view of the truth that all systems can benefit from.  Master Data Management allows banks and insurance companies to identify unique customer entities, relate accounts to each customer, and extend that relationship view across other customers and employees, from relationship bankers and financial advisors to existing agents and brokers.
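
As a rough sketch of that idea, the snippet below does simplistic fuzzy matching in plain Python to group accounts under a single customer entity; real MDM hubs use far more sophisticated deterministic and probabilistic matching, and every name and threshold here is invented for illustration:

```python
from difflib import SequenceMatcher

def similar(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_customer(rec_a, rec_b, threshold=0.85):
    """Crude match rule: similar name plus identical date of birth."""
    return similar(rec_a["name"], rec_b["name"]) >= threshold and rec_a["dob"] == rec_b["dob"]

# Records from different silos (banking, brokerage, insurance)
records = [
    {"source": "checking",  "account": "CHK-1", "name": "Jonathan Smith",  "dob": "1975-02-11"},
    {"source": "brokerage", "account": "BRK-9", "name": "Jonathon Smith",  "dob": "1975-02-11"},
    {"source": "insurance", "account": "POL-4", "name": "Maria Gonzalez",  "dob": "1982-09-30"},
]

# Group accounts under a single master customer entity
masters = []
for rec in records:
    for master in masters:
        if same_customer(master["golden"], rec):
            master["accounts"].append(rec["account"])
            break
    else:
        masters.append({"golden": rec, "accounts": [rec["account"]]})

for m in masters:
    print(m["golden"]["name"], "->", m["accounts"])
```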

The need to attract and retain customers is a continuous journey for the financial industry, and that need is greater than ever before.  The foundation for successful customer centricity is technology that can help access and deliver trusted, timely, consistent, and comprehensive customer information and insight across all channels, helping you avoid the mistakes of the past, stay ahead of your competition, and maximize value for your shareholders.

[1] 2010 UK Retail Banking Satisfaction Study, J.D. Power and Associates, October 2010.


Maximize the Potential Business Value from New Core Banking/Insurance Application Investments

 

According to the IDC Financial Insights 2013 Predictions report, financial institutions across most regions are getting serious about updating their legacy systems to reduce operating costs, automate labor-intensive processes, improve customer experiences, and avoid costly disruptions.  Transforming a bank’s core systems or an insurance provider’s main business systems is a strategic decision that has far-reaching implications for the firm’s future business strategies and success.  When done right, the capabilities offered in today’s modern banking and insurance platforms can propel a company ahead of its competition; they can also be the nail in the coffin if your data is not migrated correctly, safeguards are not in place to protect against unwanted data breaches, or you are not able to decommission those old systems as planned.

One of the most critical phases of any legacy modernization project is the process of migrating data from the old system to the new.  Migrating data involves:

  • Access existing data in the legacy systems
  • Understand the data structures that need to be migrated
  • Transform and execute one-to-one mapping with the relevant fields in the new system
  • Identify data quality errors and other gaps in the data
  • Validate what is entered into the new system by identifying transformation or mapping errors
  • Seamlessly connect to the target tables and fields in the new system

Sounds easy enough, right?  Not so fast!
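
To give a feel for just the mapping and validation steps, here is a minimal hypothetical sketch with made-up field names on both sides, not a real core-system schema:

```python
# One-to-one field mapping from the legacy policy system to the new core platform.
FIELD_MAP = {
    "POL_NO":   "policy_number",
    "CUST_NM":  "customer_name",
    "PREM_AMT": "annual_premium",
    "EFF_DT":   "effective_date",
}

def migrate_record(legacy):
    """Map legacy fields to the target model and collect obvious quality gaps."""
    target, issues = {}, []
    for old_field, new_field in FIELD_MAP.items():
        value = legacy.get(old_field)
        if value in (None, "", "?"):
            issues.append(f"missing {old_field} -> {new_field}")
        target[new_field] = value
    # Example transformation/validation: premium must parse as a positive number.
    try:
        target["annual_premium"] = float(target["annual_premium"])
        if target["annual_premium"] <= 0:
            issues.append("non-positive premium")
    except (TypeError, ValueError):
        issues.append("premium not numeric")
    return target, issues

record, issues = migrate_record(
    {"POL_NO": "HO-123", "CUST_NM": "A. Jones", "PREM_AMT": "0", "EFF_DT": ""}
)
print(record)
print(issues)
```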


Reliable, Trusted, and Accurate Data is More Important for Insurance Companies Post-Hurricane Sandy

Like most Americans last week, I was glued to the news several days prior to Hurricane Sandy hitting landfall on the East Coast of the United States, hoping it would pass with minimal damage. Having lived in Hawaii and Florida for most of my life, I personally experienced three hurricanes and know how devastating these natural disasters can be during the storm and the hardships people go through afterwards. My thoughts are with all those who lost their lives and their belongings due to this disaster.

Hurricane Sandy has been described as one of the largest storms both in size and in property damage to homes and businesses.  According to the New York Times, the total economic damage from Hurricane Sandy will range between $10 billion and $20 billion, with insurance companies paying $5 to $10 billion in insurance claims.  At the high end of that range, Sandy would become the third-most expensive storm for insurers in U.S. history.  As property, casualty and flood insurance companies prepare to face a significant wave of calls and claims requests from policyholders, I wonder what the implications and costs will be for those companies that lack reliable, trusted and accurate data, a problem that has plagued the industry for years.

Reliable, trusted, and accurate data is critical in helping insurance companies manage their business, from satisfying regulatory requirements, maintaining and growing customer relationships and combating fraud to reducing the cost of doing business.  Unfortunately, many insurance companies, large and small, have long operated on paper-based processes to onboard new customers, manage policy changes and process claim requests.  Though some firms have invested in data quality and governance practices in recent years, the majority of today’s insurance industry has ignored the importance of managing and governing good quality data and dealing with the root causes of bad data, including:

  • Inadequate verification of data stored in legacy systems
  • Non-validated data leaks and data entry errors made by human beings
  • Inadequate or manual integration of data between systems
  • Redundant data sources/stores that cause data corruption to dependent applications
  • Direct back-end updates with little to no data verification and impact analysis

Because of this, the data in core insurance systems can contain serious data quality errors including:

  • Invalid property addresses
  • Policyholder contact details (Name, Address, Phone numbers)
  • Policy codes and descriptions (e.g. motor or home property)
  • Risk rating codes
  • Flood zone information
  • Property assessment values and codes
  • Loss ratios
  • Claims adjuster estimates and contact information
  • Lack of a comprehensive view of existing policyholder information across different policy coverage categories and lines of business

The cost of bad data can be measured in the following areas as firms gear up to deal with the fallout of Hurricane Sandy (a rough back-of-the-envelope illustration follows the list):

  • Number of claims errors multiplied by the time and cost to resolve these errors
  • Number of phone calls and emails concerning claims processing delays multiplied by the time per phone call and the cost per Customer Service Rep or field agents handling those requests
  • Number of fraudulent claims and the loss of funds from those criminal activities
  • Number of policy cancellations caused by poor customer service experienced by existing policy holders
  • Not to mention the reputational damage caused by poor customer service
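
As a purely hypothetical back-of-the-envelope illustration of the first two items, where every figure is an invented assumption rather than an industry benchmark, even modest error rates add up quickly:

```python
# All figures below are illustrative assumptions, not industry benchmarks.
claims_errors        = 2000    # claims with data errors in a surge month
hours_per_error      = 0.75    # rework time per errored claim
loaded_rate_per_hour = 45.00   # fully loaded cost of a claims handler

calls_about_delays   = 5000    # calls/emails about processing delays
minutes_per_call     = 8
csr_rate_per_hour    = 30.00   # fully loaded cost of a customer service rep

rework_cost = claims_errors * hours_per_error * loaded_rate_per_hour
call_cost   = calls_about_delays * (minutes_per_call / 60) * csr_rate_per_hour

print(f"Claims rework cost:      ${rework_cost:,.0f}")
print(f"Delay-related call cost: ${call_cost:,.0f}")
print(f"Total (before fraud losses and attrition): ${rework_cost + call_cost:,.0f}")
```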

Having a sound data quality practice requires a well-defined data governance framework consisting of the following elements:

  • Data quality policies that spell out what data are required, how they should be used, managed, updated and retired. More importantly, these policies should be aligned to the company’s goals, defined and maintained by the business, not IT.
  • Data quality processes that involve documented steps to implement and enforce the policies described above.
  • Specific roles, including data stewards that represent business organizations or core systems (e.g. an Underwriting Data Steward), or Data Category stewards who understand the business definition, requirements and usage of key data assets by the business.

Finally, in addition to the points listed above, firms must not discount or ignore the importance of having industry-leading data quality software solutions to enable an effective and sustainable data quality practice, including:

  • Data profiling and auditing to identify existing data errors in source systems, during data entry processes and as data is extracted and shared between systems.
  • Data quality and cleansing to build and execute data quality rules to enforce the policies set forth by the business.
  • Address Validation solutions to ensure accurate address information for flood zone mapping and loss analysis
  • Data Quality dashboards and monitoring solutions to analyze the performance and quality levels of data and escalate data errors that require immediate attention.

As cleanup activities progress and people get back on their feet from Hurricane Sandy, insurance companies should take the time to measure how well they are managing their data quality challenges and start looking at addressing them in preparation for these inevitable events caused by Mother Nature.

 
