Josh Lee

Josh leads global marketing for Informatica in insurance, engaging directly with the insurance marketplace to drive strategy, vision and innovation for everything data in insurance. Josh is an insurance technology industry professional with experience spanning over 17 years in technology, insurance, financial services and consulting. He has been instrumental in delivering large enterprise products and solutions to the insurance industry through direct engagement, partner ecosystems and industry standards. Josh's experience includes marketing, enablement and strategy roles at Microsoft, where he earned several technology patents. He drove innovation at Risk Management Solutions, delivering value for clients and the marketplace. Josh also ran delivery for insurance and financial services at MicroStrategy, seeing firsthand the value of a robust, strategic enterprise data ecosystem from end to end. Besides enterprise data, Josh is passionate about photography, gourmet cooking and his wonderful family.

A New Dimension on a Data-Fueled World

A Data-Fueled World: Informatica's new view of data in the enterprise. I think we can all agree that technology innovation has changed how we live and view everyday life. But I want to speak about a new aspect of the data-fueled world, one that is evident now and will be shockingly present in the few years to come. I want to address the topic of "information workers".

Information workers deal with information, or in other words, data. They use that data to do their jobs. They make business decisions with that data. They impact the lives of their clients.

Many years ago, I was part of a formative working group researching information worker productivity. The idea was to create an index, similar to labor productivity indexes, aimed specifically at information worker productivity. By this I mean the analysts, accountants, actuaries, underwriters and statisticians: the business information workers. How productive are they? How do you measure their output? How do you calculate the economic cost of more or less productive employees? How do you quantify the "soft" costs of passing work on to information workers? The effort stalled in academia, but I learned a few key things. These points underline the nature of information workers and the factors that impact their productivity.

  1. Information workers need data…and lots of it
  2. Information workers use applications to view and manipulate data to get the job done
  3. Degradation, latency or poor ease of use in either of items 1 and 2 has a direct impact on productivity
  4. Items 1 and 2 correlate directly with training cost, output and (wait for it) employee health and retention

It's time to make a super bold statement. It's time to maximize your investment in DATA, and past time to de-emphasize investments in applications. Stated another way, applications come and go, but data lives forever.

My five-year-old son is addicted to his iPad. He's had one since he was one year old. By about the age of three he had pretty much moved on from Angry Birds. He started reading Wikipedia. He started downloading apps from the App Store. He wanted to learn about string theory, astrophysics and plate tectonics. Now he scares me a little with his knowledge. I call him my little Sheldon Cooper. The apps that he uses for research are so cool. The way they present the data, the speed and the depth are amazing. As soon as he's mastered one, he's on to the next one. It won't be long before he wants to program his own apps. When that day comes, I'll do whatever it takes to make him successful.

And he's not alone. The world of the "selfie generation" is one of rapid speed, of application proliferation and flat-out application "coolness". High school students are learning iOS programming. They are using cloud infrastructure to play games and run experiments. Anyone under the age of 27 has been raised in a mélange of amazing data-fueled computing and mobility.

This is your new workforce. And on the first day of their new career at an insurance company or large bank, they are handed an aging, recycled workstation, followed by an old operating system and mainframe terminal sessions. Then come rich-client and web apps circa 2002. And lastly (heaven forbid) a BlackBerry. Do you wonder whether that employee will feel empowered and productive? I'll tell you now: they won't. All the passion they have for viewing and interacting with information will disappear, because it will not be enabled in their new workday. An outright information worker revolution would not surprise me.

And that is exactly why I say that it's time to focus on data and not on applications. Because data lives on as applications come and go. I am going to coin a new phrase: the Empowered Selfie Formula. The Empowered Selfie Formula is a way in which a focus on data liberates information workers. They become free to be more productive in today's technology ecosystem.

Enable a BYO* Culture

Many organizations have been experimenting with Bring Your Own Device (BYOD) programs: corporate stipends that allow employees to buy the computing hardware of their choice. But let's take that one step further. How about a Bring Your Own Application program? How about a Bring Your Own Codebase program? The idea is not so far-fetched. There are so many great applications for working with information. Today's generation is learning to code applications at a rapid pace. They are keen to implement their own processes and tools to "get the job done". It's time to embrace that change. Allow your information workers to be productive with their chosen devices and applications.

Empower Social Sharing

Your information workers are now empowered with their own flavors of device and application productivity. Let them share it. The ability to share successes, great insights and great apps is ingrained in the mindset of today's technology users. Companies like Tableau have become successful based on the democratization of business intelligence. By enabling social sharing, users can celebrate their successes and cool apps with colleagues. This raises the overall level of productivity as a grassroots movement. Communities of best practice begin to emerge, creating innovation where it was not previously seen.

Measure Productivity

As an organization it is important to measure success.  Find ways to capture key metrics in productivity of this new world of data-fueled information work.  Each information worker will typically be able to track trends in their output.  When they show improvement, celebrate that success.

Invest in “Cool”

With a new BYO* culture, make the investments in cool new things. Allow users to spend a few dollars here and there on training, online or in person, where they can learn new things that will make them more productive. It will also help with employee retention. With a small investment, a larger ROI can be realized in employee health and productivity.

Foster Healthy Competition

Throughout history, civilizations that fostered healthy competition have innovated faster. The enterprise can foster healthy competition on metrics. Other competitions can focus on new ways to look at information, valuable insights, and homegrown applications. It isn't about a "best one wins" contest; it is a continuing round of innovation winners, lessons learned and continued growth. These competitions can also be centered on the social sharing and community aspects. In the end, it leads to a more productive team of information workers.

Revitalize Your Veterans

Naturally those information workers who are a little “longer in the tooth” may feel threatened.  But this doesn’t need to be the case.  Find ways to integrate them into the new culture.  Do this through peer training, knowledge transfer, and the data items listed below.  In the best of cases, they too will crave this new era of innovation.  They will bring a lot of value to the ecosystem.

There is a catch.  In order to realize success in the formula above, you need to overinvest in data and data infrastructure.  Perhaps that means doing things with data that only received lip service in the past.  It is imperative to create a competency or center of excellence for all things data.  Trusting your data centers of excellence activates your Empowered Selfie Formula.

Data Governance

You are going to have users building new apps and processing data and information in new and evolving ways. This means you need to trust your data, and your data governance becomes more important. Everything from metadata, data definitions, standards, policies and glossaries needs to be developed so that the data being looked at can be trusted. Chief Data Officers should put a data governance competency center in place, where all data feeding into and coming from new applications is inspected regularly for adherence to corporate standards. Remember, it's not about the application. It's about what feeds any application and what data is generated.
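To make the idea of regular inspection concrete, here is a minimal sketch of a governance check, assuming a hypothetical business glossary and field standards (the field names, patterns and approved values below are illustrative examples, not a real corporate standard or Informatica's tooling).

```python
import re

# Hypothetical business glossary: required fields, approved values and formats.
GLOSSARY = {
    "policy_number":    {"required": True, "pattern": r"^POL-\d{8}$"},
    "line_of_business": {"required": True, "allowed": {"AUTO", "HOME", "COMMERCIAL"}},
    "effective_date":   {"required": True, "pattern": r"^\d{4}-\d{2}-\d{2}$"},
}

def inspect_record(record):
    """Return the governance violations found in one incoming record."""
    violations = []
    for field, rule in GLOSSARY.items():
        value = record.get(field)
        if rule.get("required") and not value:
            violations.append(f"{field}: missing required value")
            continue
        if "allowed" in rule and value not in rule["allowed"]:
            violations.append(f"{field}: '{value}' is not an approved value")
        if "pattern" in rule and not re.match(rule["pattern"], str(value)):
            violations.append(f"{field}: '{value}' does not match the standard format")
    return violations

feed_record = {"policy_number": "POL-12345678", "line_of_business": "MARINE", "effective_date": "2014-04-01"}
print(inspect_record(feed_record))  # ["line_of_business: 'MARINE' is not an approved value"]
```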

Data Quality

Very much a part of data governance is the quality of data in the organization, which also needs to adhere to corporate standards. These standards should dictate cleanliness, completeness, fuzzy-matching logic and standardization. Nothing frustrates an information worker more than building the coolest app only to have it do nothing because of poor quality data.
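As a small illustration of what such rules can look like in practice, here is a sketch of a completeness check and a fuzzy standardization step. The field names, reference codes and thresholds are assumptions made for the example.

```python
import difflib

# Illustrative quality rules: completeness and fuzzy standardization against a
# reference list. Field names, codes and the cutoff are hypothetical.
CONSTRUCTION_CODES = ["WOOD FRAME", "MASONRY", "STEEL FRAME", "REINFORCED CONCRETE"]

def completeness(records, fields):
    """Percentage of records with every listed field populated."""
    complete = sum(1 for r in records if all(r.get(f) for f in fields))
    return 100.0 * complete / len(records)

def standardize_construction(raw):
    """Map a free-text construction description to the closest standard code."""
    match = difflib.get_close_matches(raw.strip().upper(), CONSTRUCTION_CODES, n=1, cutoff=0.6)
    return match[0] if match else "UNKNOWN"

records = [
    {"address": "1 Main St", "construction": "wood frme", "year_built": 1998},
    {"address": "2 Ocean Dr", "construction": "", "year_built": None},
]
print(completeness(records, ["address", "construction", "year_built"]))  # 50.0
print(standardize_construction("wood frme"))                             # WOOD FRAME
```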

Data Availability

Data needs to be in the right place at the right time.  Any enterprise data takes a journey from many places and to many places.  Movement of data that is governed and has met quality standards needs to happen quickly.  We are in a world of fast computing and massive storage.  There is no excuse for not having data readily available for a multitude of uses.

Data Security

And finally, make sure to secure your data. Regardless of the application consuming your information, there may be people who shouldn't see the data. Access control, data masking and network security need to be in place so that each application, from Microsoft Excel to Informatica Springbok to Tableau to a homegrown iOS app, interacts only with the information it should see.
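Conceptually, masking replaces sensitive values with realistic but unusable ones. The sketch below shows the idea with hypothetical fields; a production deployment would rely on a dedicated masking product rather than hand-rolled code like this.

```python
import hashlib

# A conceptual sketch of persistent (static) masking for non-production copies:
# deterministic but irreversible substitution, so test data stays realistic
# without exposing real identifiers. Field names and salt are hypothetical.

def mask_ssn(ssn, salt="rotate-me"):
    digest = hashlib.sha256((salt + ssn).encode()).hexdigest()
    return f"***-**-{int(digest, 16) % 10000:04d}"

def mask_email(email):
    local, _, domain = email.partition("@")
    return local[0] + "****@" + domain

row = {"name": "Jane Policyholder", "ssn": "123-45-6789", "email": "jane@example.com"}
masked = {"name": row["name"], "ssn": mask_ssn(row["ssn"]), "email": mask_email(row["email"])}
print(masked)
```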

The changing role of the IT group will follow close behind. IT will essentially become the data-fueled enabler, using the principles above. IT will provide the infrastructure necessary to enable the Empowered Selfie Formula. IT will no longer be in the application business, aside from a few core corporate applications that remain a necessary evil.

Once you achieve competency in the items above, you no longer need to worry about the success of the Empowered Selfie Formula. What you will have is a truly data-fueled enterprise, with a new class of information workers enabled by a data-fueled competency. Informatica is thrilled to be an integral part of the role that data can play in your journey. We are energized to see the pervasive use of data by increasing numbers of information workers who are creating new and better ways to do business. Come and join a data-fueled world with Informatica.


Time to Change Passwords…Again

Has everyone just forgotten about data masking?

The information security industry is reporting that more than 1.5 billion (yes, that's with a "B") email addresses and passwords have been hacked. It's hard to tell from the article, but this could be the big one. (And just when we thought James Bond had taken care of the Russian mafia.) Large and small companies alike, nobody is safe. According to the experts, the affected sites ranged from small e-commerce sites to Fortune 500 companies. At this time the experts aren't telling us who the big targets were. We could be very unpleasantly surprised.

Most security experts admit that the bulk of the post-breach activity will be email spamming. Insidious, to be sure. But imagine if the hackers were to get a little more intelligent about what they have. How many individuals reuse passwords? Experts say over 90% of consumers reuse passwords between popular sites. And since email addresses are the most universally used "user name" on those sites, the chance of those 1.5 billion identities translating into millions of pirated activities is fairly high.

According to the recently published Ponemon study, 24% of respondents don't know where their sensitive data is stored. That is a staggering number. Further complicating the issue, the same study notes that 65% of respondents have no comprehensive data forensics capability. That means consumers are more than likely never to hear from their provider that their data has been breached, until it is too late.

So now I guess we all get to go change our passwords again. And we don't know why; we just have to. This is annoying, and having consumers constantly looking over their virtual shoulders is not a permanent fix. Let's talk about enterprise-sized firms first. Ponemon indicates that 57% of respondents would like more trained data security personnel to protect data. The enterprise firm should have the resources to task IT personnel with protecting data, and it has the ability to license best-in-class technology to do so. There is no excuse not to implement an enterprise data masking technology, used hand in hand with network intrusion defenses to protect data from end to end.

Smaller enterprises have similar options. The same data masking technology can be leveraged on a smaller scale by a smaller IT organization, including the personnel to optimize the infrastructure. Additionally, most small enterprises leverage cloud-based systems that should have the same defenses in place. The small enterprise should bias its buying criteria toward data systems that implement data masking technology.

Let me add a little fuel to the fire and talk about a different kind of cost. Insurers cover cyber risk either as part of a Commercial General Liability policy or as a separate policy. In 2013, insurers paid an average approaching $3.5M for each cyber breach claim, and the average per-record cost of claims was over $6,000. Now, imagine your enterprise's slice of those 1.5 billion records. Obviously these are claims, not premiums. Premiums can range up to $40,000 per year for each $1M in coverage, and insurers will typically give discounts to companies that have demonstrated security practices and infrastructure. I won't belabor the point; it's pure math at this point.
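Using only the figures cited above, a quick back-of-the-envelope calculation makes the point. The breached-record count below is a purely hypothetical assumption for illustration.

```python
# Back-of-the-envelope exposure math using the figures cited in this post.
cost_per_record     = 6_000       # average per-record claim cost
avg_claim           = 3_500_000   # average paid per cyber breach claim (2013)
premium_per_million = 40_000      # upper-end annual premium per $1M of coverage

breached_records = 50_000         # hypothetical share of a large breach
exposure = breached_records * cost_per_record

print(f"Potential claim exposure: ${exposure:,}")                    # $300,000,000
print(f"Equivalent average claims: {exposure / avg_claim:,.0f}")     # ~86
print(f"Annual premium for $10M of coverage: ${10 * premium_per_million:,}")
```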

There is plenty of risk and cost to go around, to be sure.  But there is a way to stay protected with Informatica.  And now, let’s all take a few minutes to go change our passwords.  I’ll wait right here.  There, do you feel better?

For more information on Informatica's data masking technology, click here to drill into dynamic and persistent data masking. So yes, you should still change your passwords, but check out the industry's leading data security technology after you do.


Master Data and Data Security…It’s Not Complicated

The statement below on master data and data security was well intended. I can certainly understand the angst around data security; especially after Target's data breach, it is top of mind for IT and now business executives. But the root of the statement was flawed, and it got me thinking about master data and data security.

“If I use master data technology to create a 360-degree view of my client and I have a data breach, then someone could steal all the information about my client.”

Um, wait, what? Insurance companies take personally identifiable information very seriously. The statement is flawed in the relationship it draws between mastering client data and securing client data. Let's dissect the statement and see what master data and data security really mean for insurers. We'll start by level-setting a few concepts.

What is your Master Client Record?

Your master client record is your 360-degree view of your client. It represents everything about your client. It uses Master Data Management technology to virtually integrate and syndicate all of that data into a single view. It leverages identifiers to ensure integrity in the view of the client record. And finally, it makes an effort, through identifiers, to correlate client records for a network effect.
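As a toy illustration of the consolidation idea (not Informatica MDM itself), the sketch below correlates records from three hypothetical source systems on a shared client identifier and assembles them into one view.

```python
# Hypothetical source systems keyed on a shared client identifier.
policy_system = [{"client_id": "C100", "name": "J. Smith", "policy": "AUTO-1"}]
claims_system = [{"client_id": "C100", "claim": "CLM-77", "status": "open"}]
crm_system    = [{"client_id": "C100", "agent": "Agency West", "email": "jsmith@example.com"}]

def build_master_view(*sources):
    """Merge records keyed on client_id into a single consolidated record."""
    master = {}
    for source in sources:
        for rec in source:
            view = master.setdefault(rec["client_id"], {"policies": [], "claims": []})
            if "policy" in rec:
                view["policies"].append(rec["policy"])
            if "claim" in rec:
                view["claims"].append({"claim": rec["claim"], "status": rec["status"]})
            for attr in ("name", "agent", "email"):
                if attr in rec:
                    view[attr] = rec[attr]
    return master

print(build_master_view(policy_system, claims_system, crm_system)["C100"])
```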

There are benefits to understanding everything about your client. The shape and view of each client is specific to your business. As an insurer looks at its policyholders, the view of "client" is based on the relationships and context that the client has with the insurer. These are policies, claims, family relationships, history of activities and relationships with agency channels.

And what about security?

Naturally there is private data in a client record. But the consolidated client record contains no more or less personally identifiable information than the source systems do. In fact, most of the data a malicious party would be searching for (policy numbers, credit card info, social security numbers and birth dates) can likely be found in fewer than five database tables, and without a whole lot of intelligence or analysis. Additionally, many breaches happen "on the wire".

That data should be secured, meaning it should be encrypted or masked so that it remains protected in any breach. Informatica's data masking technology allows this data to be secured wherever it lives, and provides access control so that only the right people and applications can see the data in an unsecured format. You could even go so far as to secure ALL of your client record data fields; that's a business and application choice. But do not confuse field- or database-level security with a decision NOT to assemble your golden policyholder record.
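For the access-control point, here is a minimal sketch of dynamic, role-based masking: the same record is served with sensitive fields obscured unless the caller's role is authorized. The roles and field lists are assumptions made for illustration, not a product API.

```python
# Hypothetical field-level access control with dynamic masking.
SENSITIVE_FIELDS = {"ssn", "credit_card", "birth_date"}
AUTHORIZED_ROLES = {"claims_adjuster", "underwriter"}

def read_client_record(record, role):
    """Return the record, masking sensitive fields for unauthorized roles."""
    if role in AUTHORIZED_ROLES:
        return dict(record)
    return {k: ("***MASKED***" if k in SENSITIVE_FIELDS else v) for k, v in record.items()}

record = {"name": "Jane Policyholder", "ssn": "123-45-6789", "policy": "HOME-42", "birth_date": "1980-02-14"}
print(read_client_record(record, role="marketing_analyst"))  # sensitive fields masked
print(read_client_record(record, role="underwriter"))        # full record
```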

What to worry about?  And what not to worry about?

Do not succumb to fear of mastering your policyholder data. Master Data Management technology can provide a 360-degree view, but that view is only meaningful within your enterprise and applications. The view of "client" is highly contextual and coupled to your business practices, products and workflows. Even if someone breaches your defenses and grabs data, they're looking for the simple PII and financial data; they grab it and get out. If an attacker could see your 360-degree view of a client, they wouldn't understand it. So don't overcomplicate the security of your golden policyholder record. As long as you have secured the necessary data elements, you're good to go. The business opportunity cost of NOT mastering your policyholder data far outweighs any imagined risk of a PII breach.

So what does your Master Policyholder Data allow you to do?

Imagine knowing more about your policyholders. Let that soak in for a bit. It feels good to think that you can make it happen, and you can. For an insurer, Master Data Management provides powerful opportunities across sales, marketing, product development, claims and agency engagement. Each channel and activity has discrete ROI, with direct-line impact on revenue, policyholder satisfaction and market share. Let's look at just a few very real examples that insurers are tackling today.

  1. For a policyholder of a certain demographic with an auto and home policy, what is the next product my agent should discuss?
  2. How many people live in a certain policyholder’s household?  Are there any upcoming teenage drivers?
  3. Does this personal lines policyholder own a small business?  Are they a candidate for a business packaged policy?
  4. What is your policyholder's claims history?  What about prior carriers and networks of suppliers?
  5. How many touch points have your agents had with your policyholders?  Were they meaningful?
  6. How can you connect with your policyholders in social media settings and make an impact?
  7. What is your policyholders' mobile usage, and what are they doing online that might interest your marketing team?

These are just some examples of the very streamlined connections you can make with your policyholders once you have your 360-degree view. Imagine the heavy lifting required to do these things without a master policyholder record.
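To make that concrete, question 2 above becomes a simple query once the consolidated record exists. The household structure and driving-age thresholds below are a hypothetical example.

```python
from datetime import date

# Hypothetical consolidated household view assembled from the master record.
household = {
    "policyholder": "J. Smith",
    "members": [
        {"name": "J. Smith", "birth_date": date(1975, 6, 1)},
        {"name": "A. Smith", "birth_date": date(1999, 9, 15)},
    ],
}

def upcoming_teen_drivers(household, as_of=date(2014, 6, 1), min_age=14, max_age=16):
    """Household members who will reach driving age soon."""
    def age(birth):
        return (as_of - birth).days // 365
    return [m["name"] for m in household["members"] if min_age <= age(m["birth_date"]) < max_age]

print(upcoming_teen_drivers(household))  # ['A. Smith'] -> a cross-sell conversation for the agent
```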

Fear is the enemy of innovation.  In mastering policyholder data it is important to have two distinct work streams.  First, secure the necessary data elements using data masking technology.  Once that is secure, gain understanding through the mastering of your policyholder record.  Only then will you truly be able to take your clients’ experience to the next level.  When that happens watch your revenue grow in leaps and bounds.


Conversations on Data Quality in Underwriting – Part 2

Did I really compare data quality to flushing toilet paper?  Yeah, I think I did.  It makes me laugh when I read that, but it's still true.  And yes, I am still playing with more data.  This time it's a location schedule for earthquake risk.  I see a 26-story structure with a building value of only $136,000, built in who knows what year.  I'd pull my hair out if it weren't already shaved off.

So let's talk about the six steps to data quality competency in underwriting.  These six steps are standard in the enterprise, but what we will discuss is how to tackle them in insurance underwriting and, more importantly, the business impact of effectively adopting the competency.  It's a repeating, self-reinforcing cycle, and when done correctly it can be intelligent and adaptive to changing business needs.

Profile – Effectively profile and discover data from multiple sources

We'll start at the beginning, a very good place to start.  First you need to understand your data: where is it from, and in what shape does it arrive?  Whether the sources are internal or external, the profile step will help identify the problem areas.  In underwriting, this will involve a lot of external submission data from brokers and MGAs, combined with internal and service bureau data to get a full picture of the risk.  Identify your key data points for underwriting and a desired state for that data.  Once the data is profiled, you'll get a very good sense of where your troubles are.  Continue to profile as you bring other sources online, using the same standards of measurement.  As a side benefit, this will also help in remediating brokers that are not meeting the standard.
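A profiling pass can be as simple as counting nulls and distinct values per critical field across a submission schedule. The sketch below assumes hypothetical column names from a broker's location schedule.

```python
# Minimal profiling of a broker's location schedule: null rate and distinct
# count per field, so problem areas surface before underwriting begins.
schedule = [
    {"address": "1 Ocean Blvd", "year_built": None, "building_value": 136000, "stories": 26},
    {"address": "2 Palm Way",   "year_built": 1987, "building_value": 2500000, "stories": 2},
]

def profile(rows, fields):
    report = {}
    for f in fields:
        values = [r.get(f) for r in rows]
        report[f] = {
            "null_pct": 100.0 * sum(v in (None, "") for v in values) / len(values),
            "distinct": len({v for v in values if v not in (None, "")}),
        }
    return report

for field, stats in profile(schedule, ["address", "year_built", "building_value", "stories"]).items():
    print(field, stats)
```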

Measure – Establish data quality metrics and targets

As an underwriter, you will need to determine the quality bar for the data you use.  Usually this means flagging your most critical data fields for meeting underwriting guidelines.  See where you are and where you want to be.  Determine how you will measure the quality of the data as well as the desired state.  And by the way, the actuarial and risk teams will likely do the same thing on the same or similar data.  Over time it all comes together as a team.
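The measurement step can then compare observed quality against the agreed targets. The fields, targets and observed scores below are illustrative assumptions, for example the percentage of records passing the profiling checks above.

```python
# Hypothetical quality targets for critical underwriting fields vs. observed scores.
TARGETS  = {"year_built": 95.0, "construction_code": 98.0, "building_value": 99.0}
OBSERVED = {"year_built": 71.5, "construction_code": 88.0, "building_value": 99.2}

for field, target in TARGETS.items():
    gap = OBSERVED[field] - target
    status = "meets target" if gap >= 0 else f"short by {abs(gap):.1f} points"
    print(f"{field:<20} observed {OBSERVED[field]:5.1f}%  target {target:5.1f}%  -> {status}")
```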

Design – Quickly build comprehensive data quality rules

This is the meaty part of the cycle, and fun to boot.  First, look to your desired future state and your critical underwriting fields.  For each one, determine the rules by which you normally fix errant data.  What do you do when you see a 30-story wood-frame structure?  How do you validate, cleanse and remediate that discrepancy?  This may involve fuzzy logic or supporting data lookups, and it can easily be captured.  Do this, write it down, and catalog it to be codified in your data quality tool.  As you go along you will see a growing library of data quality rules being compiled for broad use.
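Here is one illustrative rule of that kind, written as code: a 30-story "wood frame" structure is almost certainly miscoded, so flag it and propose a correction for review. The codes, thresholds and suggested fix are hypothetical, chosen only to show the shape of such a rule.

```python
# Illustrative cleansing rule: stories inconsistent with wood-frame construction.
MAX_WOOD_FRAME_STORIES = 6

def validate_construction(record):
    issues = []
    if record.get("construction") == "WOOD FRAME" and record.get("stories", 0) > MAX_WOOD_FRAME_STORIES:
        issues.append("stories inconsistent with wood-frame construction")
        record = {**record, "construction_suggested": "STEEL FRAME", "needs_review": True}
    return {**record, "issues": issues}

print(validate_construction({"location_id": 42, "construction": "WOOD FRAME", "stories": 30}))
```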

Deploy – Native data quality services across the enterprise

Once these rules are compiled and tested, they can be deployed for reuse across the organization.  This is where the magic happens: your institutional knowledge of underwriting criteria is captured and reused, not just once, but to cleanse existing data, new data and everything going forward.  Your analysts will love you, your actuaries and risk modelers will love you; you will be a hero.

Review – Assess performance against goals

Remember those goals you set for quality when you started?  Check and see how you're doing.  After a few weeks and months, you should be able to profile the data, run the reports and see that the needle has moved.  Remember that as part of the self-reinforcing cycle, you can now identify new issues to tackle and adjust the rules that aren't working.  Metrics you'll want to watch over time include increases in quote flow, productivity and the competitiveness of your premium pricing.

Monitor – Proactively address critical issues

Now monitor constantly.  As you bring new MGAs online, receive new underwriting guidelines or launch into new lines of business, you will repeat this cycle.  You will also utilize the same rule set as portfolios are acquired; it becomes a good way to sanity-check acquired business against your quality standards.

In case it wasn't apparent, your data quality plan is now largely automated.  With few manual exceptions, you should not have to be remediating data the way you were in the past.  In each of these steps there is obvious business value.  In the end, it all adds up to better risk/cat modeling, more accurate risk pricing, cleaner data (for everyone in the organization) and more time doing the core business of underwriting.  Imagine if you could increase your quote volume simply by not needing to muck around in data.  Imagine if you could improve your quote-to-bind ratio through better quality data and pricing.  The last time I checked, that's just good insurance business.

And now for something completely different…cats on pianos.  No, just kidding.  But check here to learn more about Informatica’s insurance initiatives.


Conversations on Data Quality in Underwriting – Part 1

I was just looking at some data I found.  Yes, real data, not fake demo stuff: real hurricane location analysis with modeled loss numbers.  At first glance, I thought it looked good.  There are addresses, latitudes and longitudes, values, loss numbers and other goodies like year built and construction codes.  Yes, just the sort of data that an underwriter would look at when writing a risk.  But after skimming through the schedule of locations, a few things start jumping out at me.  So I dig deeper.  I see a multi-million-dollar structure in Palm Beach, Florida with $0 in modeled loss.  That's strange.  And wait, some of these geocode resolutions look a little coarse.  Are they tier one or tier two counties?  Who would know?  At least all of the construction and occupancy codes have values, albeit they look like defaults.  Perhaps it's time to talk about data quality.
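A quick sanity check of the kind this schedule begs for might look like the sketch below: flag high-value locations with zero modeled loss and geocodes that look too coarse. The field names and thresholds are assumptions for illustration.

```python
# Hypothetical anomaly checks on a hurricane location schedule.
locations = [
    {"id": 1, "city": "Palm Beach", "building_value": 4_200_000, "modeled_loss": 0.0,      "geocode_res": "postal"},
    {"id": 2, "city": "Tampa",      "building_value": 350_000,   "modeled_loss": 12_500.0, "geocode_res": "rooftop"},
]

def suspicious(loc, value_floor=1_000_000):
    flags = []
    if loc["building_value"] >= value_floor and loc["modeled_loss"] == 0:
        flags.append("zero modeled loss on high-value structure")
    if loc["geocode_res"] not in ("rooftop", "parcel"):
        flags.append(f"coarse geocode resolution: {loc['geocode_res']}")
    return flags

for loc in locations:
    print(loc["id"], suspicious(loc))
```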

This whole concept of data quality is a tricky one.  As the cost of acquiring good data is weighed against the speed of underwriting and quoting and against model correctness, I'm sure some tradeoffs are made.  But the impact can be huge.  First, incomplete data will either force defaults in risk models and pricing or add mathematical uncertainty.  Second, massively incomplete data chews up personnel resources to cleanse and enhance.  And third, if not corrected, the risk profile will be wrong, with potential impact to pricing and portfolio shape.  And that's just to name a few.

I'll admit it's daunting to think about.  Imagine tens of thousands of submissions a month, schedules of thousands of locations received every day.  Can there even be a way out of this cave?  The answer is yes, and that answer is a robust enterprise data quality infrastructure.  But wait, you say, enterprise data quality is an IT problem.  Yeah, I guess, just like trying to flush an entire roll of toilet paper in one go is the plumber's problem.  Data quality in underwriting is a business problem, a business opportunity, and it has real business impacts.

Join me in Part 2 as I outline the six steps for data quality competency in underwriting with tangible business benefits and enterprise impact.  And now that I have you on the edge of your seats, get smart about the basics of enterprise data quality.


Oh the Data I’ve Seen…

Eighteen months ago, I was sitting in a conference room, nothing remarkable except for the great view down 6th Avenue toward the Empire State Building.  The pre-sales consultant sitting across from me had just given a visually appealing demonstration to the CIO of a multinational insurance corporation.  There were fancy graphics and colorful charts sharply displayed on an iPad, refreshing every few seconds.  The CIO asked how long it had taken to put the presentation together.  The consultant excitedly shared that it only took him four to five hours, to which the CIO responded, "Well, if that took you less than five hours, we should be able to get a production version in about two to three weeks, right?"

The facts of the matter were completely different, however.  The demo, while running with the firm's own data, had been running from a spreadsheet housed on the consultant's laptop and procured after several weeks of scrubbing, formatting and aggregating data from the CIO's team; that does not even mention the preceding data procurement process.  And so, as the expert in the room, the voice of reason, the CIO turned to me wanting to know how long it would take to implement the solution.  At least six months was my assessment.  I had seen their data, and it was a mess.  I had seen the data flow, which was not a model architecture, and the sheer volume of data was daunting.  If it was not architected correctly, the pretty colors and graphs would take much longer to refresh.  This was not the answer he wanted to hear.

The advancement of social media, new web experiences and cutting-edge mobile technology has driven users to expect more of their applications.  As enterprises push to drive value and unlock more potential in their data, insurers of all sizes have attempted to implement analytical and business intelligence systems.  But here's the truth: by and large, most insurance enterprises are not in a place with their data to make effective use of the new technologies in BI, mobile or social.  The reality is that data cleansing, fitting for purpose, movement and aggregation are being done in the BI layer when they should be done lower down in the stack, so that all applications can take advantage of them.

Let's face it: quality data is important.  Movement and shaping of data in the enterprise is important.  Identification of master data and metadata in the enterprise is important, and data governance is important.  It brings to mind episode 165, "The Apology", of the mega-hit show Seinfeld, in which George Costanza accuses erstwhile friend Jason Hanky of being a "step skipper".  What I have seen in enterprise data is "step skipping": users clamor for new and better experiences, but the underlying infrastructure and data are less than ready for consumption.  So the enterprise bootstraps, duct-tapes and otherwise creates customizations where they don't architecturally belong.

Clearly this calls for a better solution: a more robust and architecturally sustainable data ecosystem, one that shepherds the data from acquisition through to consumption and all points in between, and that is attainable by even modestly sized insurance firms.

First, you need to bring the data under your control.  That may mean external data integration, or just moving it from transactional, web or client-server systems into warehouses, marts or other large data stores and back again.  But remember, the data arrives in various stages of readiness.  This means that through out-of-the-box or custom cleansing steps, the data needs to be processed, enhanced and stored in a way that is in line with corporate goals for governing the quality of that data.  And this says nothing of the need to change data normalization between source and target.  When implemented as a "factory" approach, bringing new data streams online, integrating them quickly and maintaining high standards become small incremental changes rather than ground-up monumental tasks.  Move your data shaping, cleansing, standardization and aggregation further down in the stack and many applications will benefit from the architecture.
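A minimal sketch of the "factory" idea, under the assumption of a very simple feed: every new data stream passes through the same cleanse, standardize and aggregate steps before any application (BI, mobile or otherwise) consumes it. The step implementations and field names are hypothetical.

```python
# Each new feed reuses the same pipeline steps rather than custom per-app logic.
def cleanse(rows):
    return [r for r in rows if r.get("policy_number")]            # drop unusable records

def standardize(rows):
    return [{**r, "state": r["state"].strip().upper()} for r in rows]

def aggregate(rows):
    totals = {}
    for r in rows:
        totals[r["state"]] = totals.get(r["state"], 0) + r["premium"]
    return totals

PIPELINE = [cleanse, standardize, aggregate]

def run(source_rows):
    data = source_rows
    for step in PIPELINE:
        data = step(data)
    return data

feed = [{"policy_number": "P1", "state": " fl ", "premium": 1200},
        {"policy_number": None, "state": "GA", "premium": 900}]
print(run(feed))  # {'FL': 1200} -- the same pipeline is reused for each new feed
```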

Critical to this process is that insurance enterprises need to ensure the data remains secure, private and is managed in accordance with rules and regulations. They must also govern the archival, retention and other portions of the data lifecycle.

At any point in the life of your information, you are likely sending or receiving data from an agent, broker, MGA or service provider, and that data needs to be processed using the robust ecosystem described above. Once an effective data exchange infrastructure is implemented, the steps to process the data can nicely complement your setup as information flows to and from your trading partners.

Finally, as your enterprise determines “how” to implement these solutions, you may look to a cloud based system for speed to market and cost effectiveness compared to on-premises solutions.

And don’t forget to register for Informatica World 2014 in Las Vegas, where you can take part in sessions and networking tailored specifically for insurers.
