Category Archives: Financial Services

What is the Silver Lining in Cloud for Financial Services?

This was a great week of excitement and innovation here in San Francisco, starting with the San Francisco Giants winning the National League Pennant for the third time in five years on the same day Salesforce's Dreamforce 2014, its largest customer conference yet, wrapped up with more than 140,000 attendees from all over the world talking about the new Customer Success Platform.

Salesforce has come a long way from its humble beginnings as the new kid on the cloud front for CRM. The integration of sales, marketing, support, collaboration, application, and analytics capabilities in the Salesforce Customer Success Platform exemplifies innovation and offers significant business value upside for many industries, and I see it as especially promising for today's financial services industry. Like any new business application, however, the value a business gains from it depends on having the right data available.

The reality is, SaaS adoption by financial institutions has not been as quick as in other industries due to privacy concerns, regulations that govern what data can reside in public infrastructures, the ability to customize solutions to fit business needs, cultural barriers within larger institutions that insist critical business applications reside on-premise for control and management purposes, and the challenges of integrating data between existing systems and SaaS applications. However, experts are optimistic that the industry may have turned the corner. Gartner (NYSE:IT) asserts more than 60 percent of banks worldwide will process the majority of their transactions in the cloud by 2016. Let's take a closer look at some of the challenges and what's required to overcome these obstacles when adopting cloud solutions to power your business.

Challenge #1: Integrating and sharing data between SaaS and on-premise applications must not be taken lightly

Most banks and insurance companies considering new SaaS-based CRM, marketing, and support applications from Salesforce and others must factor the importance of migrating and sharing data between cloud and on-premise applications into their investment decisions. Migrating existing customer, account, and transaction history data is often done by IT staff through custom extracts, scripts, and manual data validations, which can carry over invalid information from legacy systems and, in many cases, render these new application investments useless.

For example, customer type descriptions from one or many existing systems may each be correct in their respective databases, and collapsing them into a common field in the target application seems easy to do. Unfortunately, these transformation rules can be complex, and that complexity increases when dealing with tens if not hundreds of applications during the migration and synchronization phases. Having capable solutions to support the testing, development, quality management, validation, and delivery of existing data from old to new is not only good practice, but a proven way of avoiding costly workarounds and business pain in the future.
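
To make the transformation-rule point concrete, here is a minimal, hypothetical sketch in Python of how customer type descriptions from two legacy systems might be collapsed into one canonical field. The system names, codes, and canonical values are invented for illustration; a real migration would drive this from profiled source data and a governed mapping catalog.

```python
# Hypothetical mapping of legacy customer-type descriptions to one canonical value.
# System names and codes are illustrative only, not taken from any real migration.
CANONICAL_CUSTOMER_TYPE = {
    ("CORE_BANKING", "IND"): "Individual",
    ("CORE_BANKING", "SB"): "Small Business",
    ("LEGACY_CRM", "Person"): "Individual",
    ("LEGACY_CRM", "SmallBiz"): "Small Business",
    ("LEGACY_CRM", "Corp"): "Corporate",
}

def map_customer_type(source_system: str, source_value: str) -> str:
    """Collapse a source-specific customer type into the target application's field.

    Unknown combinations are routed to review instead of being loaded as-is,
    so invalid legacy values do not quietly pollute the new application.
    """
    key = (source_system, source_value.strip())
    if key in CANONICAL_CUSTOMER_TYPE:
        return CANONICAL_CUSTOMER_TYPE[key]
    raise ValueError(f"Unmapped customer type {key!r}; send to data steward review")

if __name__ == "__main__":
    print(map_customer_type("LEGACY_CRM", "SmallBiz"))  # -> Small Business
```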

Challenge #2: Managing and sharing a trusted source of shared business information across the enterprise

As new SaaS applications are adopted, it is critical to understand how best to govern and synchronize common business information, such as customer contact details (e.g. address, phone, email), across the enterprise. Most banks and insurance companies have multiple systems that create and update critical customer contact information, many of which reside on-premise. For example, when an insurance customer updates a phone number or email address while filing a claim, the claims specialist will often enter that change only in the claims system, given the siloed nature of many traditional banking and insurance companies. This is where Master Data Management comes in: it is purpose-built to identify changes to master data, including customer records, in one or many systems, update the master customer record, and share that update with every other system that houses and requires it, which is essential for business continuity and success.
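
As a rough illustration of that flow, the sketch below (plain Python, with invented system names and fields, not any particular MDM product's API) shows a change captured in one source system updating a golden record and fanning out to subscribing systems.

```python
# Minimal sketch of the master-data flow described above: a contact-info change
# detected in one system updates the golden record and is pushed to the other
# systems that subscribe to it. All system names and fields are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CustomerMaster:
    master_id: str
    phone: str
    email: str
    source_ids: Dict[str, str] = field(default_factory=dict)  # cross-references to source systems

class MasterDataHub:
    def __init__(self) -> None:
        self.masters: Dict[str, CustomerMaster] = {}
        self.subscribers: List[Callable[[CustomerMaster], None]] = []

    def subscribe(self, callback: Callable[[CustomerMaster], None]) -> None:
        """Register a downstream system (CRM, billing, policy admin, ...)."""
        self.subscribers.append(callback)

    def apply_change(self, master_id: str, **changes) -> None:
        """Apply a change detected in any source system and fan it out."""
        master = self.masters[master_id]
        for attr, value in changes.items():
            setattr(master, attr, value)
        for notify in self.subscribers:
            notify(master)  # each subscriber updates its own copy

# Usage: a phone number updated during a claims call reaches every other system.
hub = MasterDataHub()
hub.masters["M-1001"] = CustomerMaster("M-1001", "555-0100", "jane@example.com",
                                       {"claims": "C-77", "policy_admin": "P-42"})
hub.subscribe(lambda m: print(f"policy admin sync: {m.master_id} -> {m.phone}"))
hub.apply_change("M-1001", phone="555-0199")
```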

In conclusion, SaaS adoption will continue to grow in financial services and across other industries. The silver lining in the cloud is your data and the technology that supports its consumption and distribution across the enterprise. Banks and insurance companies investing in new SaaS solutions will operate in a hybrid environment made up of cloud applications and core transaction systems that reside on-premise. To ensure these investments yield value for the business, it is important to invest in a capable and scalable data integration platform to integrate, govern, and share data in a hybrid ecosystem. To learn more about how to deal with these challenges, click here and download a complimentary copy of the new "Salesforce Integration for Dummies".

Posted in B2B, Cloud, Financial Services

Keeping Information Governance Relevant

Gartner’s official definition of Information Governance is “…the specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards, and metrics that ensure the effective and efficient use of information in enabling a business to achieve its goals.” It therefore looks to address important considerations that key stakeholders within an enterprise face.

A CIO of a large European bank once asked me – “How long do we need to keep information?”

This bank had to govern, index, search, and provide content to auditors to show it is managing data appropriately to meet Dodd-Frank regulation. In the past, this information was retrieved from a database or email. Now, however, the bank was required to produce voice recordings from phone conversations with customers, show the relevant Reuters feeds coming in, and document all appropriate IMs and social media interactions between employees.

All these were systems the business had never considered before. These environments continued to capture and create data, and with it complex challenges. These islands of information seemingly have nothing to do with each other, yet they impact how the bank governs itself and how it saves any of the records associated with trading or financial information.

Coping with the sheer growth is one issue; what to keep and what to delete is another. There is also the issue of what to do with all the data once you have it. The data is potentially a gold mine for the business, but most businesses just store it and forget about it.

Legislation, in tandem, is becoming more rigorous and there are potentially thousands of pieces of regulation relevant to multinational companies. Businesses operating in the EU, in particular, are affected by increasing regulation. There are a number of different regulations, including Solvency II, Dodd-Frank, HIPAA, Gramm-Leach-Bliley Act (GLBA), Basel III and new tax laws. In addition, companies face the expansion of state-regulated privacy initiatives and new rules relating to disaster recovery, transportation security, value chain transparency, consumer privacy, money laundering, and information security.

Regardless, an enterprise should consider the following 3 core elements before developing and implementing a policy framework.

Whatever your size or type of business, there are several key processes you must undertake in order to create an effective information governance program. As a Business Transformation Architect, I see three foundation stones of an effective program:

Assess Your Business Maturity

Understanding the full scope of requirements on your business is a heavy task. Assess whether your business is mature enough to embrace information governance. Many businesses in EMEA do not yet have an information governance team in place; instead, key stakeholders with responsibility for information assets are spread across their legal, security, and IT teams.

Undertake a Regulatory Compliance Review

Understanding the legal obligations of your business is critical in shaping an information governance program. Every business is subject to numerous compliance regimes managed by multiple regulatory agencies, which can differ across markets. Many compliance requirements depend on the number of employees and/or turnover reaching certain thresholds. For example, certain records may need to be stored for 6 years in Poland, yet the same records may need to be stored for 3 years in France.
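
A compliance review typically ends up encoded somewhere as a retention schedule. The sketch below is a purely illustrative lookup in Python; the periods simply echo the example above and are not legal guidance, and the record type name is invented.

```python
# Illustrative retention lookup only; the periods below echo the example in the
# text and are not legal guidance. Real rules must come from a compliance review.
RETENTION_YEARS = {
    ("PL", "customer_correspondence"): 6,  # e.g. 6 years in Poland
    ("FR", "customer_correspondence"): 3,  # e.g. 3 years in France
}

def retention_years(country_code: str, record_type: str) -> int:
    """Return the retention period, or escalate when no rule has been agreed."""
    try:
        return RETENTION_YEARS[(country_code, record_type)]
    except KeyError:
        raise KeyError(
            f"No retention rule defined for {record_type!r} in {country_code!r}; "
            "escalate to the information governance team"
        )

print(retention_years("PL", "customer_correspondence"))  # -> 6
```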

Establish an Information Governance Team

It is important that a core team be assigned responsibility for the implementation and success of the information governance program. This steering group, led by a nominated information governance lead, can then drive forward operational and practical issues, including agreeing and developing a work program, developing policy and strategy, and planning communication and awareness.

Posted in Data Governance, Financial Services, Governance, Risk and Compliance

Emerging Markets: Does Location Matter?

I recently wrapped up two overseas trips: one to Central America and another to South Africa. On these trips, I had the opportunity to meet with a national bank and a regional retailer. It prompted me to ask the question: does location matter in emerging markets?

I wish I could tell you that there was a common theme in how firms in the same sector or country (or even city) treat data on a philosophical or operational level, but I cannot. Every engagement is unique, as factors like ownership history, regulatory scrutiny, available and affordable skills, and past as well as current financial success create a grey pattern rather than a comfortable black-and-white separation. This is even more obvious when I mix in recent meetings I had with North American organizations in the same sectors.

Banking in Latin vs North America

While a national bank in Latin America may seem lethargic, unimaginative and unpolished at first, you can feel the excitement when they conceive, touch and play with the potential of new paradigms, like becoming data-driven. Decades of public ownership do not seem to have stifled their willingness to learn and improve. On the other side, there is a stock market-listed, regional US bank where half the organization appears to believe in muddling along without expert IT knowledge, which reduced adoption and financial success in past projects. Back-office leadership also firmly believes in "relationship management" over data-driven "value management".

To quote a leader in their finance department, "we don't believe that knowing a few more characteristics about a client creates more profit….the account rep already knows everything about them and what they have and need". Then he said, "Not sure why the other departments told you there are issues. We have all this information but it may not be rolled out to them yet or they have no license to view it to date." This reminded me of the "All Quiet on the Western Front" mentality. If it is all good over here, why are most people saying it is not? Granted, one more attribute may not tip the scale to higher profits, but a few more, and their historical interrelationships, typically do.

As an example, think about the correlation of average account balance fluctuations, property sales, bill pay payee set-ups, credit card late charges and call center interactions over the course of a year.
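
A toy sketch of that kind of cross-attribute analysis is shown below, using made-up monthly figures for a single client; the column names and values are purely illustrative, and in practice the correlation would be computed across the whole book of clients.

```python
# A toy sketch of the cross-attribute correlation mentioned above, using
# invented monthly figures for one client. Column names are illustrative.
import pandas as pd

events = pd.DataFrame({
    "avg_balance_change":   [1200, -300, -2500, 400, 150, -4200],
    "billpay_payees_added": [0,     1,     3,    0,   0,    4],
    "card_late_charges":    [0,     0,     1,    0,   0,    2],
    "call_center_contacts": [1,     0,     2,    1,   0,    3],
})

# Pairwise correlations hint at which signals move together over the year.
print(events.corr().round(2))
```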

The Latin American bankers just said, "We have no idea what we know and don't know…but we know that even long-standing relationships with corporate clients are lacking upsell execution". In this case, the upsell potential centered on transforming wire transfer SWIFT messages to the local standard they report in, and back. Understanding the SWIFT message parameters in full creates an opportunity to approach originating entities and cut out the middleman bank.

Retailing in Africa vs Europe

The African retailer's IT architects indicated that customer information is centralized and complete, and that integration is not an issue as they have done it forever. Also, consumer householding information is not a viable concept due to different regional interpretations; vendor information is brand-specific and therefore not centrally managed; and event-based actions are easily handled in BizTalk. Home delivery and pickup are in their infancy.

The only apparent improvement area is product information enrichment for an omnichannel strategy. This would involve enhancing attribution for merchandise demand planning, inventory and logistics management and marketing.  Attributes could include not only full and standardized capture of style, packaging, shipping instructions, logical groupings, WIP vs finished goods identifiers, units of measure, images and lead times but also regional cultural and climate implications.

However, data-driven retailers are increasingly becoming service and logistics companies to improve wallet share, even in emerging markets.  Look at the successful Russian eTailer Ozon, which is handling 3rd party merchandise for shipping and cash management via a combination of agency-style mom & pop shops and online capabilities.  Having good products at the lowest price alone is not cutting it anymore and it has not for a while.  Only luxury chains may be able to avoid this realization for now. Store size and location come at a premium these days. Hypermarkets are ill-equipped to deal with high-profit specialty items.  Commercial real estate vacancies on British high streets are at a high (Economist, July 13, 2014) and footfall is at a seven-year low.   The Centre for Retail Research predicts that 20% of store locations will close over the next five years.

If specialized, high-end products are the most profitable, I can (test) sell most of them online, or at least through fewer, smaller stores, saving on carrying cost. If my customers can then pick them up and return them however they want (store, home), and I can reduce returns from the normal 30% (per the Economist) to fewer than 10% by educating and servicing them as unbureaucratically as possible, I just won the semifinals. If I can then personalize recommendations based on my customers' preferences, life style events, relationships and real-time location, and reward them in a meaningful way, I just won the cup.

AT Kearney "Seizing Africa's Retail Opportunities" (2014)

AT Kearney “Seizing Africa’s Retail Opportunities” (2014)

Emerging markets may seem a few years behind but companies like Amazon or Ozon have shown that first movers enjoy tremendous long-term advantages.

So what does this mean for IT? Putting your apps into the cloud (maybe even outside your country) may seem like an easy fix. However, it may not only create performance and legal issues but also unexpected cost to support decent SLA terms. Does your data support transactions profitable enough today to absorb this additional cost of going into the cloud? A focus on transactional applications and their management obscures the need for a strong backbone for data management, just like the one you built for your messaging and workflows ten years ago. Then you can tether to it all the fancy apps you want.

Have any emerging markets’ war stories or trends to share?  I would love to hear them.  Stay tuned for future editions of this series.

Posted in Financial Services, Retail, Vertical

Master Data and Data Security …It’s Not Complicated

The statement below on master data and data security was well intended. I can certainly understand the angst around data security; especially after Target's data breach, it is top of mind for all IT and now business executives. But the root of the statement was flawed. And it got me thinking about master data and data security.

“If I use master data technology to create a 360-degree view of my client and I have a data breach, then someone could steal all the information about my client.”

Um, wait, what? Insurance companies take personally identifiable information very seriously. The statement gets the relationship between client master data and securing your client data wrong. Let's dissect it and see what master data and data security really mean for insurers. We'll start by level setting a few concepts.

What is your Master Client Record?

Your master client record is your 360-degree view of your client.  It represents everything about your client.  It uses Master Data Management technology to virtually integrate and syndicate all of that data into a single view.  It leverages identifiers to ensure integrity in the view of the client record.  And finally it makes an effort through identifiers to correlate client records for a network effect.

There are benefits to understanding everything about your client. The shape and view of each client is specific to your business. As an insurer looks at its policyholders, the view of "client" is based on the relationships and context that the client has with the insurer. These are policies, claims, family relationships, history of activities and relationships with agency channels.
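
One possible shape of such a consolidated record is sketched below. Every field and identifier is invented for illustration; the point is simply that the golden view carries cross-references back to each contributing system and to related master records.

```python
# Purely illustrative shape of a consolidated policyholder record: the golden
# view keeps identifiers that tie it back to each contributing system and to
# related master records (the "network effect" via shared identifiers).
master_client_record = {
    "master_id": "PH-000123",
    "name": "Jane Example",
    "source_identifiers": {          # cross-references that preserve integrity
        "policy_admin": "POL-9987",
        "claims": "CLM-4410",
        "agency_crm": "CRM-77-A",
    },
    "policies": ["AUTO-2211", "HOME-8844"],
    "claims": ["CLM-4410"],
    "household": {"spouse": "PH-000124", "dependents": ["PH-000125"]},
    "agency_relationships": ["AGENT-051"],
}

# Household members point at each other's master IDs, which is what lets the
# record correlate across clients.
print(master_client_record["household"]["spouse"])  # -> PH-000124
```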

And what about security?

Naturally there is private data in a client record. But there is nothing about the consolidated client record that contains any more or less personally identifiable information. In fact, most of the data a malicious party would be searching for can likely be found in just a handful of database locations: policy numbers, credit card info, social security numbers, and birth dates can be found in fewer than five database tables, and without a whole lot of intelligence or analysis. Additionally, breaches often happen "on the wire".

That data should be secured. That means the data should be encrypted or masked so that it remains protected in the event of a breach. Informatica's data masking technology allows this data to be secured in whatever location it resides. It provides access control so that only the right people and applications can see the data in an unsecured format. You could even go so far as to secure ALL of your client record data fields; that's a business and application choice. Do not confuse field- or database-level security with a decision NOT to assemble your golden policyholder record.
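
To show the idea rather than any particular product's API, here is a minimal field-level masking sketch in plain Python. The masking policy (everything but the last four characters) and the list of sensitive fields are assumptions for illustration, not Informatica's implementation.

```python
# Minimal sketch of field-level masking: sensitive fields stay unreadable for
# users and applications without clearance. Policy and field list are invented.
SENSITIVE_FIELDS = {"ssn", "credit_card", "policy_number", "birth_date"}

def mask_value(value: str, visible: int = 4) -> str:
    """Mask all but the last `visible` characters of a value."""
    return "*" * max(len(value) - visible, 0) + value[-visible:]

def mask_record(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {
        key: mask_value(str(val)) if key in SENSITIVE_FIELDS else val
        for key, val in record.items()
    }

print(mask_record({"name": "Jane Example", "ssn": "123-45-6789",
                   "policy_number": "POL-9987"}))
```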

What to worry about?  And what not to worry about?

Do not succumb to fear of mastering your policyholder data.  Master Data Management technology can provide a 360-degree view.  But it is only meaningful within your enterprise and applications.  The view of “client” is very contextual and coupled with your business practices, products and workflows.  Even if someone breaches your defenses and grabs data, they’re looking for the simple PII and financial data.  Then they’re grabbing it and getting out. If the attacker could see your 360-degree view of a client, they wouldn’t understand it.  So don’t over complicate the security of your golden policyholder record.  As long as you have secured the necessary data elements, you’re good to go.  The business opportunity cost of NOT mastering your policyholder data far outweighs any imagined risk to PII breach.

So what does your Master Policyholder Data allow you to do?

Imagine knowing more about your policyholders. Let that soak in for a bit. It feels good to think that you can make it happen. And you can do it. For an insurer, Master Data Management provides powerful opportunities across sales, marketing, product development, claims and agency engagement. Each channel and activity has discrete ROI. It also has direct line impact on revenue, policyholder satisfaction and market share. Let's look at just a few very real examples that insurers are attempting to tackle today.

  1. For a policyholder of a certain demographic with an auto and home policy, what is the next product my agent should discuss?
  2. How many people live in a certain policyholder’s household?  Are there any upcoming teenage drivers?
  3. Does this personal lines policyholder own a small business?  Are they a candidate for a business packaged policy?
  4. What is your policyholder claims history?  What about prior carriers and network of suppliers?
  5. How many touch points have your agents had with your policyholders?  Were they meaningful?
  6. How can you connect with your policyholders in social media settings and make an impact?
  7. What is your policyholders' mobile usage and what are they doing online that might interest your Marketing team?

These are just some of the examples of very streamlined connections that you can make with your policyholders once you have your 360-degree view. Imagine the heavy lifting required to do these things without a Master Policyholder record.

Fear is the enemy of innovation.  In mastering policyholder data it is important to have two distinct work streams.  First, secure the necessary data elements using data masking technology.  Once that is secure, gain understanding through the mastering of your policyholder record.  Only then will you truly be able to take your clients’ experience to the next level.  When that happens watch your revenue grow in leaps and bounds.

Posted in Data Security, Financial Services, Master Data Management

The Ones Not Screwing Up with Compliance Will Win

A few weeks ago, a regional US bank asked me to perform some compliance and use case analysis around fixing their data management situation.  This bank prides itself on customer service and an SMB focus, while using large-bank product offerings.  However, they were about a decade behind most banks in modernizing their IT infrastructure to stay operationally on top of things.

Bank Efficiency Ratio per AUM (Assets under Management), bankregdata.com

This included technologies like ESB, BPM, CRM, etc.  They were also a sub-optimal user of EDW and analytics capabilities. Having said all this, there was a commitment to change things up, which is always a needed first step in any recovery program.

THE STAKEHOLDERS

As I conducted my interviews across various departments (list below) it became very apparent that they were not suffering from data poverty (see prior post) but from lack of accessibility and use of data.

  • Compliance
  • Vendor Management & Risk
  • Commercial and Consumer Depository products
  • Credit Risk
  • HR & Compensation
  • Retail
  • Private Banking
  • Finance
  • Customer Solutions

FRESH BREEZE

This lack of use occurred across the board.  The natural reaction was to throw more bodies and more Band-Aid marts at the problem.  Users also started to operate under the assumption that it would never get better.  They just resigned themselves to mediocrity.  When some new players came into the organization from various systemically critical banks, they shook things up.

Here is a list of use cases they want to tackle:

  • The proposition of real-time offers based on customer events as simple as investment banking products for unusually high inflow of cash into a deposit account.
  • The use of all mortgage application information to understand debt/equity ratio to make relevant offers.
  • The capture of true product and customer profitability across all lines of commercial and consumer products including trust, treasury management, deposits, private banking, loans, etc.
  • The agile evaluation, creation, testing and deployment of new terms on existing products and products under development by shortening the product development life cycle.
  • The reduction of wealth management advisors’ time to research clients and prospects.
  • The reduction of unclaimed use tax, insurance premiums and leases being paid on consumables, real estate and requisitions due to the incorrect status and location of the equipment.  This originated from assets no longer owned, scrapped or moved to a different department, etc.
  • The more efficient reconciliation between transactional systems and finance, which often uses multiple party IDs per contract change in accounts receivable, while the operating division uses one based on a contract and its addendums.  An example would be vendor payment consolidation to create a true supplier spend and thus take advantage of volume discounts.
  • The proactive creation of a central compliance footprint (AML, 314, Suspicious Activity, CTR, etc.) allowing for quicker turnaround and fewer audit instances from MRAs (matters requiring attention).

MONEY TO BE MADE – PEOPLE TO SEE

Adding these up came to about $31 to $49 million annually in cost savings, new revenue or increased productivity for this bank with $24 billion total assets.

So now that we know there is money to be made by fixing the data of this organization, how can we realistically roll this out in an organization with many competing IT needs?

The best way to go about this is to attach any kind of data management project to a larger, business-oriented project, like CRM or EDW.  Rather than wait for these to go live without good seed data, why not feed them with better data as a key work stream within their respective project plans?

To summarize my findings, I want to share three quotes from the people I interviewed.  A lady who recently had to struggle through an OCC audit told me she believes that the banks which can remain compliant at the lowest cost will ultimately win the end game; here she meant particularly tier 2 and tier 3 size organizations.  A gentleman from commercial banking left this statement with me: "Knowing what I know now, I would not bank with us".  The same lady also said, "We engage in spreadsheet Kung Fu" to bring data together.

Given all this, what would you suggest?  Have you worked with an organization like this? Did you encounter any similar or different use cases in financial services institutions?

Posted in Banking & Capital Markets, Data Quality, Financial Services, Governance, Risk and Compliance

Conversations on Data Quality in Underwriting – Part 2

Did I really compare data quality to flushing toilet paper?  Yeah, I think I did.  Makes me laugh when I read that, but still true.  And yes, I am still playing with more data.  This time it’s a location schedule for earthquake risk.  I see a 26-story structure with a building value of only $136,000 built in who knows what year.  I’d pull my hair out if it weren’t already shaved off.

So let's talk about the six steps for data quality competency in underwriting.  These six steps are standard in the enterprise, but what we will discuss is how to tackle them in insurance underwriting and, more importantly, the business impact of effectively adopting the competency.  It's a repeating, self-reinforcing cycle, and when done correctly it can be intelligent and adaptive to changing business needs.

Profile – Effectively profile and discover data from multiple sources

We'll start at the beginning, a very good place to start.  First you need to understand your data.  Where is it from and in what shape does it come?  Whether the sources are internal or external, the profile step will help identify the problem areas.  In underwriting, this will involve a lot of external submission data from brokers and MGAs.  This is then combined with internal and service bureau data to get a full picture of the risk.  Identify your key data points for underwriting and a desired state for that data.  Once the data is profiled, you'll get a very good sense of where your troubles are.  And continually profile as you bring other sources online, using the same standards of measurement.  As a side benefit, this will also help in remediating brokers that are not meeting the standard.
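
As a rough illustration of a profiling pass, the sketch below computes per-source completeness for a few critical underwriting fields. The field names, sources and figures are assumptions for illustration, not output from any particular profiling tool.

```python
# Bare-bones profiling sketch (hypothetical field names): per source, how
# complete are the critical underwriting fields in the submitted schedules?
import pandas as pd

submissions = pd.DataFrame({
    "source":         ["broker_a", "broker_a", "mga_b", "mga_b"],
    "building_value": [136000,      None,       2500000, 800000],
    "year_built":     [None,        1978,       1999,    None],
    "construction":   ["wood frame", None,      "steel", "masonry"],
})

critical_fields = ["building_value", "year_built", "construction"]
completeness = submissions.groupby("source")[critical_fields].agg(
    lambda col: col.notna().mean()  # share of populated values per source
)
print(completeness)  # sources below the agreed standard become remediation candidates
```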

Measure – Establish data quality metrics and targets

As an underwriter you will need to determine the quality bar for the data you use.  Usually this means flagging your most critical data fields for meeting underwriting guidelines.  See where you are and where you want to be.  Determine how you will measure the quality of the data as well as the desired state.  And by the way, actuarial and risk will likely do the same thing on the same or similar data.  Over time it all comes together as a team.

Design – Quickly build comprehensive data quality rules

This is the meaty part of the cycle, and fun to boot.  First look to your desired future state and your critical underwriting fields.  For each one, determine the rules by which you normally fix errant data.  For example, what do you do when you see a 30-story wood frame structure?  How do you validate, cleanse and remediate that discrepancy?  This may involve fuzzy logic or supporting data lookups, and can easily be captured.  Do this, write it down, and catalog it to be codified in your data quality tool.  As you go along you will see a growing library of data quality rules being compiled for broad use.
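
Here is one such rule written as a small Python check, just to show the shape of a codified rule; the six-story threshold is an invented plausibility limit, not an underwriting guideline.

```python
# One illustrative data quality rule: a tall building coded as wood frame is
# flagged for verification. The threshold is invented for illustration only.
def check_construction(stories: int, construction: str) -> tuple:
    """Return (is_valid, note) for the story-count vs construction-type rule."""
    if construction.strip().lower() == "wood frame" and stories > 6:
        return False, "Implausible: wood frame structure over 6 stories; verify with source"
    return True, "ok"

print(check_construction(30, "wood frame"))
# -> (False, 'Implausible: wood frame structure over 6 stories; verify with source')
```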

Deploy – Native data quality services across the enterprise

Once these rules are compiled and tested, they can be deployed for reuse in the organization.  This is the beautiful magical thing that happens.  Your institutional knowledge of your underwriting criteria can be captured and reused.  This doesn’t mean just once, but reused to cleanse existing data, new data and everything going forward.  Your analysts will love you, your actuaries and risk modelers will love you; you will be a hero.

Review – Assess performance against goals

Remember those goals you set for quality when you started?  Check and see how you're doing.  After a few weeks and months, you should be able to profile the data, run the reports and see that the needle has moved.  Remember that as part of the self-reinforcing cycle, you can now identify new issues to tackle and adjust the rules that aren't working.  Metrics you'll want to watch over time include increased quote flow, better productivity and more competitive premium pricing.

Monitor – Proactively address critical issues

Now monitor constantly.  As you bring new MGAs online, receive new underwriting guidelines or launch into new lines of business you will repeat this cycle.  You will also utilize the same rule set as portfolios are acquired.  It becomes a good way to sanity check the acquisition of business against your quality standards.

In case it wasn’t apparent your data quality plan is now more automated.  With few manual exceptions you should not have to be remediating data the way you were in the past.  In each of these steps there is obvious business value.  In the end, it all adds up to better risk/cat modeling, more accurate risk pricing, cleaner data (for everyone in the organization) and more time doing the core business of underwriting.  Imagine if you can increase your quote volume simply by not needing to muck around in data.  Imagine if you can improve your quote to bind ratio through better quality data and pricing.  The last time I checked, that’s just good insurance business.

And now for something completely different…cats on pianos.  No, just kidding.  But check here to learn more about Informatica’s insurance initiatives.

Posted in Business Impact / Benefits, Data Quality, Enterprise Data Management, Financial Services

Conversations on Data Quality in Underwriting – Part 1

I was just looking at some data I found.  Yes, real data, not fake demo stuff.  Real hurricane location analysis with modeled loss numbers.  At first glance, I thought it looked good.  There are addresses, latitudes/longitudes, values, loss numbers and other goodies like year built and construction codes.  Yes, just the sort of data that an underwriter would look at when writing a risk.  But after skimming through the schedule of locations a few things start jumping out at me.  So I dig deeper.  I see a multi-million dollar structure in Palm Beach, Florida with $0 in modeled loss.  That’s strange.  And wait, some of these geocode resolutions look a little coarse.  Are they tier one or tier two counties?  Who would know?  At least all of the construction and occupancy codes have values, albeit they look like defaults.  Perhaps it’s time to talk about data quality.

This whole concept of data quality is a tricky one.  As the cost of acquiring good data is weighed against the speed of underwriting/quoting and model correctness, I'm sure some tradeoffs are made.  But the impact can be huge.  First, incomplete data will either force defaults in risk models and pricing or add mathematical uncertainty.  Second, massively incomplete data chews up personnel resources to cleanse and enhance.  And third, if not corrected, the risk profile will be wrong, with potential impact to pricing and portfolio shape.  And that's just to name a few.

I’ll admit it’s daunting to think about.  Imagine tens of thousands of submissions a month.  Schedules of thousands of locations received every day.  Can there even be a way out of this cave?  The answer is yes, and that answer is a robust enterprise data quality infrastructure.  But wait, you say, enterprise data quality is an IT problem.  Yeah, I guess, just like trying to flush an entire roll of toilet paper in one go is the plumber’s problem.  Data quality in underwriting is a business problem, a business opportunity and has real business impacts.

Join me in Part 2 as I outline the six steps for data quality competency in underwriting with tangible business benefits and enterprise impact.  And now that I have you on the edge of your seats, get smart about the basics of enterprise data quality.

Posted in Business Impact / Benefits, Data Quality, Financial Services

Regulatory Compliance is an Opportunity, not a Cost!

Regardless of the industry, new regulatory compliance requirements are more often than not treated like the introduction of a new tax.  A few may be supportive, some will see the benefits, but most will focus on the negatives: the cost, the effort, the intrusion into private matters.  There will more than likely be a lot of grumbling.

Across many industries there is currently a lot of grumbling, as new regulation seems to be springing up all over the place.  Pharmaceutical companies have to deal with IDMP in Europe and UDI in the USA, hot on the heels of the US Sunshine Act, which is being followed in Europe by Aggregate Spend requirements.  Consumer goods companies in Europe are looking at the consequences of beefed-up 1169 requirements.  Financial institutions are mulling over compliance with BCBS 239.  Behind the grumbling, most organisations across all verticals appear to take a similar approach to regulatory compliance.  The pattern seems to go like this:

  1. Delay (The requirements may change)
  2. Scramble (They want it when?  Why didn’t we get more time?)
  3. Code to Spec (Provide exactly what they want, and only what they want)

No wonder these requirements are seen as purely a cost and an annoyance.  But it doesn't have to be that way, and in fact, it should not.  Just as I have seen a pattern in the response to compliance, I see a pattern in the requirements themselves:

  1. The regulators want data
  2. Their requirements will change
  3. When they do change, regulators will be wanting even more data!

Now read the last 3 bullet points again, but use 'executives' or 'management' or 'the business people' instead of 'regulators'.  The pattern still holds true.  The irony is that execs will quickly sign off on budget to meet regulatory requirements, but find it hard to see the value in "infrastructure" projects, even though those are the projects that will deliver this same data to their internal teams.

This is where the opportunity comes in.  pwc's 2013 State of Compliance Report[i] shows that over 42% of central compliance budgets are in excess of $1m, a significant figure, and efforts outside of the compliance team imply an even higher actual cost.  Large budgets are not surprising in multi-national companies, which often have to satisfy multiple regulators in a number of countries.  As an alternative to multiple overlapping compliance projects, what if this significant budget were repurposed to create a flexible data management platform?  This approach could deliver compliance, but provide even more value internally.

Almost all internal teams are currently clamouring for additional data to drive their newest applications.  Pharma and CG sales & marketing teams would love ready access to detailed product information; so would consumer and patient support staff, as well as down-stream partners.  Trading desks and client managers within financial institutions should really have real-time access to their risk profiles to guide daily decision making.  These data needs will not be going away.  Why should regulators be prioritised over the people who drive your bottom line and who are guardians of your brand?

A flexible data management platform will serve everyone equally.  Foundational tools for such a platform exist today, including Data Quality, MDM, PIM and VIBE, Informatica's Virtual Data Machine.  Each of them plays a significant role in easing regulatory compliance, and as a bonus they deliver measurable business value in their own right.  Implemented correctly, you will get enhanced data agility and visibility across the entire organisation as part of your compliance efforts.  Sounds like 'Buy One Get One Free', or BOGOF in retail terms.

Unlike taxes, BOGOF opportunities are normally embraced with open arms.  Regulatory compliance should receive a similar welcome – an opportunity to build the foundations for universal delivery of data which is safe, clean and connected.  A 2011 study by The Economist found that effective regulatory compliance benefits businesses across a wide range of performance metrics[ii].

Is it time to get your free performance boost?


[i] Deeper Insight for Greater Strategic Value, pwc 2013
[ii] Compliance and competitiveness, The Economist 2011
Posted in Financial Services, Governance, Risk and Compliance, Healthcare

Major Financial Services Institution uses technology to improve your teller experience

Like many American men, I judge my banking experience by the efficiency of my transaction time. However, my wife often still likes to go into the bank and see her favorite teller.

For her, banking is a bit more of a social experience. And every once in a while, my wife even drags me into her bank as well.  But like many of my male counterparts, I still judge the quality of the experience by the operational efficiency of her teller. And the thing I hate the most is when our visit to the bank is lengthened because the teller can't do something and has to get the bank manager's approval.

Now, a major financial institution has decided to make my life, and even my wife's life, better. Using Informatica RulePoint, they have come up with a way to improve teller operational efficiency and customer experience while actually decreasing operational business risk. Amazing!

How has this bank done this magic? They make use of the data they already have to create a better banking experience. They already capture historical transaction data and team member performance against each transaction in multiple databases. What they are doing now is using this information to make better decisions. With it, the bank is able to create and update a risk assessment score for each team member at a branch location. Then, using Informatica RulePoint, they have created approximately 100 rules that can change a teller's authority based upon the new transaction, the teller's transaction history, and the teller's risk assessment score. This means that if my wife carefully picks the right teller, she is sped through the line without waiting for management approval.
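
To make the idea concrete, here is a simplified, hypothetical version of such a rule written in plain Python rather than the RulePoint rule language; the thresholds, limits and score ranges are all invented for illustration.

```python
# Hypothetical authority rule: whether a transaction needs manager approval is
# derived from the amount, the teller's history and a risk assessment score.
# All thresholds below are invented; this is not the bank's actual rule set.
def requires_manager_approval(amount: float,
                              teller_risk_score: float,
                              clean_transactions: int) -> bool:
    """Low-risk, experienced tellers earn a higher self-approval limit."""
    if teller_risk_score < 0.2 and clean_transactions > 1000:
        limit = 25_000
    elif teller_risk_score < 0.5:
        limit = 10_000
    else:
        limit = 2_500
    return amount > limit

print(requires_manager_approval(12_000, teller_risk_score=0.15, clean_transactions=4_000))  # False
print(requires_manager_approval(12_000, teller_risk_score=0.60, clean_transactions=4_000))  # True
```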

So the message at this bank is that the fastest teller is the best teller. To me this is really using data to improve customer experience and allow for less time in line. Maybe I should get this bank to talk to my auto mechanic next!

Posted in Financial Services, Real-Time

5 Data Challenges That Frustrate Chief Risk Officers

It has never been a more challenging time to be a Chief Risk Officer at a financial services firm. New regulations (CCAR, Basel III, BCBS 239, Solvency II, EMIR) have increased the complexity of the role. Today, risk management organizations must use precise data to measure risk, allocate capital and cover exposure. In addition, they must equip compliance groups to explain these decisions to industry regulators.

The challenges facing a Chief Risk Officer are even more daunting when the data at the heart of each decision is incomplete, inaccessible or inaccurate.  Unless the data is complete, trustworthy, timely, authoritative and auditable, success will be hard to come by.

When data issues arise, most CROs lay blame at the feet of their risk applications. Next, they blame IT and the people and processes responsible for providing data to their risk modeling and analysis applications.  However, in most situations, the issue with the data is the fault of neither the applications nor the IT groups. The reality is, most business users are unfamiliar with the root causes of data issues. More importantly, they are unfamiliar with the data and information management solutions available to resolve and prevent similar issues.  The root causes of existing data issues stem from the processes and tools IT development teams use to deliver data to risk management groups.  Regrettably, ongoing budget constraints, a lack of capable tools, and fear of becoming obsolete have resulted in CIO leaders "throwing bodies at the problem."  This approach consumes IT worker cycles as they manually access, transform, cleanse, validate, and deliver data into risk and compliance applications.

So what are the data issues impacting risk management organizations today? What should your organization consider and invest in if these situations exist?  Here is a list of issues I have heard through my own conversations with risk and technology executives in the global financial markets, along with the solutions that can help Chief Risk Officers, VPs of Risk Management, and risk analysts address them.

Challenge #1:  I don't have the right data in my risk applications, models, and analytic applications. It is either incomplete, out of date, or plain incorrect.

 Root Causes:

  • The data required to manage and monitor risk across the business originates from hundreds of systems and applications. Data volumes continue to grow every day with new systems and new types of data in today's digital landscape
  • Due to the lack of proper tools to integrate the required data, IT developers manually extract data from internal and external systems, which can number in the hundreds and arrive in various formats
  • Raw data from source systems is transformed and validated using custom-coded methods written in COBOL, PL/SQL, Java, Perl, etc.

Solutions that can help:

Consider investing in industry-proven data integration and data quality software designed to reduce manual extraction, transformation, and validation and to streamline the process of identifying and fixing upstream data quality errors. Data integration tools not only reduce the risk of errors; they help IT professionals streamline these complex steps and reuse transformation and data quality rules across the risk data management process, enabling repeatability, consistency, and efficiencies that require fewer resources to support current and future data needs from risk and other parts of the business.

Challenge #2:  We do not have a comprehensive view of risk to satisfy systemic risk requirements

 Root Causes:

  • Too many silos or standalone data marts or data warehouses containing segmented views of risk information
  • Creating a single enterprise risk data warehouse takes too long, is too complex and too expensive, and involves too much data to process in one system

 Solutions that can help:

  • Data virtualization solutions can tie existing data together to deliver a consolidated view of risk for business users without having to bring that data into an existing data warehouse.
  • Long term, look at consolidating and simplifying existing data warehouses into an enterprise data warehouse leveraging high performing data processing technologies like Hadoop.

Challenge #3:  I don’t trust the data being delivered into my risk applications and modeling solutions

 Root Causes:

  • Data quality checks and validations are performed after the fact or often not at all.
  • The business believes IT is performing the required data quality checks and corrections; however, the business lacks visibility into how IT is fixing data errors and whether these errors are being addressed at all.

 Solutions that can help:

  • Data quality solutions that allow business and IT to enforce data policies and standards to ensure business applications have accurate data for modeling and reporting purposes.
  • Data quality scorecards accessible by business users to showcase the performance of the ongoing data quality rules used to validate and fix data quality errors before they reach downstream risk systems (a simplified sketch of such a scorecard follows).
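
The sketch below shows one way a scorecard figure could be computed: per rule, the share of records that passed the latest load. Rule names, counts and the pass-rate threshold are all invented for illustration.

```python
# Hypothetical scorecard computation: per data quality rule, the share of
# records passing in the latest load, so business users can see whether data
# feeding the risk systems meets the agreed standard. All figures are invented.
rule_results = {
    "counterparty_id_present": {"passed": 9820, "failed": 180},
    "trade_date_valid":        {"passed": 9990, "failed": 10},
    "notional_non_negative":   {"passed": 9400, "failed": 600},
}

for rule, counts in rule_results.items():
    total = counts["passed"] + counts["failed"]
    pass_rate = counts["passed"] / total
    status = "OK" if pass_rate >= 0.98 else "INVESTIGATE"  # illustrative threshold
    print(f"{rule:<26} {pass_rate:6.1%}  {status}")
```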

Challenge #4:  Unable to explain how risk is measured and reported to external regulators

Root Causes:

  • Because IT manually manages data integration processes, organizations lack up-to-date, detailed, and accurate documentation of these processes from beginning to end.  As a result, IT cannot produce data lineage reports and information, leading to audit failures, regulatory penalties, and higher capital allocations than required.
  • Lack of agreed-upon documentation of business terms and definitions explaining what data is available, how it is used, and who has the domain knowledge to answer questions

Solutions that can help:

  • Metadata management solutions that can capture upstream and downstream details of what data is collected, how it is processed, where it is used, and who uses it can help solve this requirement.
  • Business glossary for data stewards and owners to manage definitions of your data and provide seamless access by business users from their desktops

Challenge #5:  Unable to identify and measure risk exposures between counterparties and securities instruments

 Root Causes:

  • No single source of the truth – Existing counterparty/legal entity master data resides in systems across traditional business silos.
  • External identifiers including the proposed Global Legal Entity Identifier will never replace identifiers across existing systems
  • Lack of insight into how each legal entity is related to each other both from a legal hierarchy standpoint and their exposure to existing securities instruments.

 Solutions that can help:

  • Master Data Management for counterparty and securities master data can help provide a single, connected, and authoritative source of counterparty information, including legal hierarchy relationships and rules to identify the role and relationship between counterparties and existing securities instruments. It also eliminates the business confusion of having different identifiers for the same legal entity by creating a "master" record and a cross-reference to the existing records and identifiers for that entity (a simplified sketch follows).
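
The toy sketch below illustrates only the cross-reference idea: several source-system identifiers resolving to one master counterparty record that also carries the legal-hierarchy link. All identifiers and names are invented, and this is not any particular MDM product's data model.

```python
# Toy cross-reference resolver (all identifiers invented): different source-level
# IDs resolve to one master counterparty record, which carries the legal hierarchy
# and links to securities instruments for exposure roll-ups.
XREF = {
    ("trading_system", "CPTY-889"): "MASTER-001",
    ("loan_system", "LN-00042"):    "MASTER-001",
    ("lei", "EXAMPLE-LEI-0001"):    "MASTER-001",
}

MASTERS = {
    "MASTER-001": {
        "legal_name": "Example Holdings PLC",
        "parent": None,                       # top of the legal hierarchy
        "children": ["MASTER-002"],           # subsidiaries roll up for exposure
        "instruments": ["ISIN:XS0000000001"],
    },
}

def resolve(source: str, identifier: str) -> dict:
    """Look up the single authoritative record behind any source-level identifier."""
    return MASTERS[XREF[(source, identifier)]]

print(resolve("loan_system", "LN-00042")["legal_name"])  # -> Example Holdings PLC
```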

In summary, Chief Risk Officers and their organizations are investing to improve existing business processes, people, and business applications to satisfy industry regulations and gain better visibility into their risk conditions.  Though these are important investments, it is also critical that you invest in the technologies to ensure IT has what it needs to access and deliver comprehensive, timely, trusted, and authoritative data. 

At the same time, CIOs can no longer afford to waste precious resources supporting manual works of art. As you take this opportunity to invest in your data strategies and requirements, it is important that both business and IT realize the importance of investing in a scalable and proven information and data architecture, not only to satisfy upcoming regulatory requirements but to put in place a solution that meets the needs of the future across all lines of business.  Click here to learn more about Informatica's solutions for banking and capital markets.

 

Posted in Data Quality, Financial Services