Peter Ku

5 Data Challenges That Frustrate Chief Risk Officers

It has never been a more challenging time to be a Chief Risk Officer at a financial services firm. New regulations (CCAR, Basel III, BCBS 239, Solvency II, EMIR) have increased the complexity of the role. Today, risk management organizations must use precise data to measure risk, allocate capital and cover exposure. In addition, they must equip compliance groups to explain these decisions to industry regulators.

The challenges facing a Chief Risk Officer are even more daunting when the data at the heart of each decision is incomplete, inaccessible or inaccurate.  Unless the data is complete, trustworthy, timely, authoritative and auditable, success will be hard to come by.

When data issues arise, most CROs lay blame at the feet of their risk applications. Next, they blame IT and the people and processes responsible for providing data to their risk modeling and analysis applications. However, in most situations, the issue with the data is the fault of neither the applications nor the IT groups. The reality is that most business users are unfamiliar with the root causes of data issues, and they are equally unfamiliar with the data and information management solutions available to resolve and prevent them. The root causes of existing data issues stem from the processes and tools IT development teams use to deliver data to risk management groups. Regrettably, ongoing budget constraints, a lack of capable tools, and fear of becoming obsolete have led CIOs to "throw bodies at the problem." This approach consumes IT worker cycles as staff manually access, transform, cleanse, validate, and deliver data into risk and compliance applications.

So what are the data issues impacting risk management organizations today? What should your organization consider and invest in if these situations exist? Here is a list of issues I have heard in my own conversations with risk and technology executives in the global financial markets, along with the solutions that can help Chief Risk Officers, VPs of Risk Management, and Risk Analysts address them.

Challenge #1:  I don’t have the right data in my risk applications, models, and analytics. It is either incomplete, out of date, or plain incorrect.

 Root Causes:

  • The data required to manage and monitor risk across the business originates from hundreds of systems and applications, and data volumes continue to grow every day with new systems and new types of data in today’s digital landscape
  • Lacking proper tools to integrate the required data, IT developers manually extract it from internal and external systems, which can number in the hundreds and deliver data in a variety of formats
  • Raw data from source systems is transformed and validated using custom-coded methods written in COBOL, PL/SQL, Java, Perl, etc.

Solutions that can help:

Consider investing in industry-proven data integration and data quality software designed to reduce manual extraction, transformation, and validation and to streamline the process of identifying and fixing upstream data quality errors. Data integration tools not only reduce the risk of errors, they help IT professionals streamline these complex steps and reuse transformation and data quality rules across the risk data management process, enabling repeatability, consistency, and efficiency with fewer resources to support current and future data needs from risk and other parts of the business. A rough sketch of what a reusable rule looks like follows.
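
As a loose illustration of what reusable validation rules can mean in practice, here is a minimal Python sketch under assumptions of my own: the field names, the date threshold, and the rules themselves are hypothetical, and a commercial data integration tool would manage such rules declaratively rather than in hand-written code.

```python
from datetime import date, datetime

# Hypothetical, reusable validation rules: each rule takes a record (dict)
# and returns an error message or None. The same rules are applied to every
# feed instead of re-coding the checks per source system.
RULES = [
    lambda r: "missing counterparty_id" if not r.get("counterparty_id") else None,
    lambda r: "missing exposure_amount" if r.get("exposure_amount") is None else None,
    lambda r: (
        "stale as_of_date"  # illustrative cut-off date, not a real policy
        if datetime.strptime(r["as_of_date"], "%Y-%m-%d").date() < date(2014, 1, 1)
        else None
    ),
]

def validate(record: dict) -> list[str]:
    """Run every shared rule against one record and collect the failures."""
    return [err for rule in RULES if (err := rule(record))]

# The same rule set applied to records from two different (made-up) feeds.
loan_feed = {"counterparty_id": "C-001", "exposure_amount": 1_250_000, "as_of_date": "2014-03-31"}
trade_feed = {"counterparty_id": "", "exposure_amount": None, "as_of_date": "2012-12-31"}

print(validate(loan_feed))   # []
print(validate(trade_feed))  # ['missing counterparty_id', 'missing exposure_amount', 'stale as_of_date']
```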

Challenge #2:  We do not have a comprehensive view of risk to satisfy systemic risk requirements

 Root Causes:

  • Too many silos, standalone data marts, and data warehouses containing segmented views of risk information
  • A single enterprise risk data warehouse takes too long to build and is too complex and expensive, with too much data to process in one system

 Solutions that can help:

  • Data virtualization solutions can tie existing data together to deliver a consolidated view of risk to business users without having to move that data into an existing data warehouse (see the sketch after this list).
  • Longer term, look at consolidating and simplifying existing data warehouses into an enterprise data warehouse, leveraging high-performance data processing technologies such as Hadoop.
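
A data virtualization layer essentially federates queries across stores that already exist. As a rough, product-agnostic sketch of the idea, the following Python joins figures from two hypothetical risk silos on demand rather than copying them into yet another warehouse; the silo contents and field names are invented.

```python
# Hypothetical stand-in for a virtualized view: risk figures live in two
# separate "silos" and are joined at query time instead of being replicated.
credit_risk_mart = {"C-001": {"credit_exposure": 5_000_000},
                    "C-002": {"credit_exposure": 750_000}}
market_risk_mart = {"C-001": {"var_95": 120_000},
                    "C-003": {"var_95": 40_000}}

def consolidated_risk_view(counterparty_id: str) -> dict:
    """Federate the two silos into one logical record for a counterparty."""
    view = {"counterparty_id": counterparty_id}
    view.update(credit_risk_mart.get(counterparty_id, {}))
    view.update(market_risk_mart.get(counterparty_id, {}))
    return view

print(consolidated_risk_view("C-001"))
# {'counterparty_id': 'C-001', 'credit_exposure': 5000000, 'var_95': 120000}
```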

Challenge #3:  I don’t trust the data being delivered into my risk applications and modeling solutions

 Root Causes:

  • Data quality checks and validations are performed after the fact or often not at all.
  • The business believes IT is performing the required data quality checks and corrections; however, it lacks visibility into how IT is fixing data errors and whether these errors are being addressed at all.

 Solutions that can help:

  • Data quality solutions that allow business and IT to enforce data policies and standards, ensuring business applications have accurate data for modeling and reporting purposes.
  • Data quality scorecards, accessible by business users, that show how well ongoing data quality rules are validating and fixing errors before data flows into downstream risk systems (see the sketch after this list).
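
To make the scorecard idea concrete, here is a minimal sketch that computes a pass rate per rule over a batch of records; the rule names, thresholds, and sample data are illustrative only, not taken from any product.

```python
# Hypothetical scorecard: percentage of records passing each rule in a batch.
records = [
    {"ltv": 0.80, "country": "US"},
    {"ltv": 1.70, "country": "US"},   # fails the LTV range rule
    {"ltv": 0.65, "country": ""},     # fails the country completeness rule
]

rules = {
    "ltv_within_range": lambda r: 0 <= r["ltv"] <= 1.25,
    "country_populated": lambda r: bool(r["country"]),
}

scorecard = {
    name: f"{100 * sum(rule(r) for r in records) / len(records):.0f}% pass"
    for name, rule in rules.items()
}
print(scorecard)  # {'ltv_within_range': '67% pass', 'country_populated': '67% pass'}
```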

Challenge #4:  Unable to explain how risk is measured and reported to external regulators

Root Causes:

  • Because IT manages data integration processes manually, organizations lack up-to-date, detailed, and accurate documentation of these processes from beginning to end. As a result, IT cannot produce data lineage reports, which leads to audit failures, regulatory penalties, and higher capital allocations than required.
  • Lack of agreed-upon documentation of business terms and definitions explaining what data is available, how it is used, and who has the domain knowledge to answer questions

Solutions that can help:

  • Metadata management solutions that capture upstream and downstream details of what data is collected, how it is processed, where it is used, and who uses it (a toy illustration of lineage follows this list).
  • A business glossary for data stewards and owners to manage definitions of your data and provide seamless access for business users from their desktops
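
Lineage is, at its core, a graph of "this output field was derived from these inputs by this process." The following toy sketch (system, field, and process names are all made up) shows the kind of chain a metadata repository lets an auditor walk backwards.

```python
# Hypothetical lineage store: each target field records where it came from
# and which process produced it.
lineage = {
    "risk_dw.exposure_usd": {
        "derived_from": ["loan_sys.balance", "fx_feed.usd_rate"],
        "process": "nightly_exposure_load",
    },
    "loan_sys.balance": {"derived_from": [], "process": "source system of record"},
    "fx_feed.usd_rate": {"derived_from": [], "process": "vendor market data feed"},
}

def trace(field: str, depth: int = 0) -> None:
    """Print the upstream lineage of a field, one hop per indented line."""
    entry = lineage.get(field, {"derived_from": [], "process": "unknown"})
    print("  " * depth + f"{field}  <- {entry['process']}")
    for parent in entry["derived_from"]:
        trace(parent, depth + 1)

trace("risk_dw.exposure_usd")
```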

Challenge #5:  Unable to identify and measure risk exposures between counterparties and securities instruments

 Root Causes:

  • No single source of the truth – Existing counterparty/legal entity master data resides in systems across traditional business silos.
  • External identifiers, including the proposed Global Legal Entity Identifier, will not replace the identifiers already used across existing systems
  • Lack of insight into how legal entities are related to one another, both in terms of legal hierarchy and in terms of their exposure to existing securities instruments

 Solutions that can help:

  • Master Data Management for counterparty and securities master data can provide a single, connected, and authoritative source of counterparty information, including legal hierarchy relationships and rules that identify the role of and relationship between counterparties and existing securities instruments. It also eliminates the business confusion of having different identifiers for the same legal entity by creating a “master” record that cross-references the existing records and identifiers for that entity (see the sketch after this list).
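
As a simplified, hypothetical picture of the cross-reference idea (the entity, systems, and identifiers below are invented, and real MDM hubs add survivorship and matching logic on top), a golden counterparty record might look like this:

```python
# Hypothetical cross-reference: several source-system records describe the
# same legal entity; the "master" record keeps one golden identifier and
# remembers every source identifier it consolidates.
source_records = [
    {"system": "loan_sys",    "id": "LN-4471", "name": "Acme Holdings Ltd"},
    {"system": "trading_sys", "id": "TRD-902", "name": "ACME HOLDINGS LIMITED"},
    {"system": "crm",         "id": "CRM-33",  "name": "Acme Holdings"},
]

master_record = {
    "master_id": "CP-000123",            # golden identifier used downstream
    "legal_name": "Acme Holdings Ltd",   # surviving value after match/merge
    "cross_reference": {r["system"]: r["id"] for r in source_records},
    "parent_master_id": "CP-000100",     # legal-hierarchy link to the parent entity
}
print(master_record["cross_reference"])
# {'loan_sys': 'LN-4471', 'trading_sys': 'TRD-902', 'crm': 'CRM-33'}
```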

In summary, Chief Risk Officers and their organizations are investing to improve existing business processes, people, and business applications to satisfy industry regulations and gain better visibility into their risk conditions.  Though these are important investments, it is also critical that you invest in the technologies to ensure IT has what it needs to access and deliver comprehensive, timely, trusted, and authoritative data. 

At the same time, CIOs can no longer afford to waste precious resources supporting manual works of art. As you take this opportunity to invest in your data strategies and requirements, it is important that both business and IT recognize the value of investing in a scalable and proven information and data architecture, not only to satisfy upcoming regulatory requirements but to put in place a solution that meets future needs across all lines of business. Click here to learn more about Informatica’s solutions for banking and capital markets.

 


Banking and Insurance Sessions at Informatica World 2014

Financial services is one of the most data-centric industries in the world.  Clean, connected, and secure data is critical to satisfy regulatory requirements, improve customer experience, grow revenue, avoid fines, and ultimately change the world of banking and insurance. Data management improvements have been made, and several of the leading companies are empowered by Informatica.

Who are these companies and what are they doing with Informatica?

To find out more, register and attend Informatica World 2014, May 12-15 at the Cosmopolitan Hotel, in Las Vegas.

Fifteen of the top financial services companies will share their stories and success leveraging Informatica for their most critical business needs. These include:

Informatica World 2014 will have over 100 breakout sessions covering a wide range of topics for Line of Business Executives, IT decision makers, Architects, Developers, and Data Administrators. Our great keynote lineup includes Informatica executives Sohaib Abbasi (Chief Executive Officer), Ivan Chong (Chief Strategy Officer), Marge Breya (Chief Marketing Officer) and Anil Chakravarthy (Chief Product Officer). Our speakers will share Informatica’s vision for this new data-centric world and explain innovations that will propel the concept of a data platform to an entirely new level.

Register today so you don’t miss out.

We look forward to seeing you in May!


Customer Centric Financial Services

The business of financial services is transforming before our eyes. Traditional banking and insurance products have become commoditized. As each day passes, consumers demand increasingly personalized products and services. Social and mobile channels continue to overthrow traditional communication methods. To survive and grow in this complex environment, financial institutions must do three things:

  1. Attract and retain the best customers
  2. Grow wallet share
  3. Deliver top-notch customer experience across all channels and touch points

The finance industry is traditionally either product centric or account centric. However, to succeed in the future, financial institutions must become customer centric. Becoming customer-centric requires changes to your people, process, technology, and culture. You must offer the right product or service to the right customer, at the right time, via the right channel. To achieve this, you must ensure alignment between business and technology leaders. It will require targeted investments to grow the business, particularly to modernize legacy systems.

To become customer-centric, business executives are investing in Big Data and in legacy modernization initiatives. These investments are helping Marketing, Sales and Support organizations to:

  • Improve conversion rates on new marketing campaigns and on cross-sell and up-sell activities
  • Measure customer sentiment on particular marketing and sales promotions or on the financial institution as a whole
  • Improve sales productivity ratios by targeting the right customers with the right product at the right time
  • Identify key indicators that determine and predict profitable and unprofitable customers
  • Deliver an omni-channel experience across all lines of business, devices, and locations

At Informatica, we want to help you succeed. We want you to maximize the value in these investments. For this reason, we’ve written a new eBook titled: “Potential Unlocked – Improving revenue and customer experience in financial services”. In the eBook, you will learn:

  • The role customer information plays in taking customer experience to the next level
  • Best practices for shifting account-centric operations to customer-centric operations
  • Common barriers and pitfalls to avoid
  • Key considerations and best practices for success
  • Strategies and experiences from best-in-class companies

Take a giant step toward Customer-Centricity: Download the eBook now.


Managing Risk and Compliance in Financial Services with Informatica Vibe!

Last week at Informatica World 2013, Informatica introduced Vibe, the industry’s first and only embeddable virtual data machine (VDM), designed to embed data management into the next generation of applications for the integrated information age. This unique capability gives banks and insurance companies technology to scale and improve their data management, integration, and governance processes to manage risk and ensure ongoing compliance with a host of industry regulations, from Basel III and Dodd-Frank to Solvency II. Why is Vibe unique, and how does it help with risk management and regulatory compliance?

The data required for risk and compliance originates from tens if not hundreds of systems across all lines of business, including loan origination systems, loan servicing, credit card processors, deposit servicing, securities trading, brokerage, call center, online banking, and more. Not to mention external data providers for market, pricing, positions, and corporate actions information.  The volumes are greater than ever, the systems range from legacy mainframe trading systems to mobile banking applications, the formats vary across the board from structured to semi-structured to unstructured, and a wide range of data standards must be dealt with, including MISMO®, FpML®, FIX®, ACORD®, and SWIFT, to name a few.  Take all of that into consideration and the data administration, management, governance, and integration work required is massive, multifaceted, and fraught with risk and hidden costs, often caused by custom-coded processes or the use of standalone tools.

The Informatica Platform and Vibe can help by allowing our customers to take advantage of ever-evolving data technologies and innovations without recoding, and to develop a lean data management process that turns unique works of art into reusable artifacts across the information supply chain. In other words, Vibe powers the unique “Map Once. Deploy Anywhere.” capabilities of the Informatica Platform, which accelerate project delivery by 5x, make the entire data lifecycle easier to manage, and eliminate the risks, costs, and short-lived value associated with hand coding or using standalone tools to do this work.  Here are some examples of Vibe for risk and compliance:

  • Build data quality rules to standardize address information, remove or consolidate duplicates, translate or standardize reference data, and validate other information critical to calculating risk, within your ETL process or as a “Data Quality Validation” service in upstream systems
  • Build rules to standardize wire transfer data to the latest SWIFT formats within your payment hubs, and leverage the same rules when facilitating payment transactions with your counterparties.
  • Build and execute complex parsing and transformation processes, leveraging the power of Hadoop to handle large volumes of structured and unstructured data for analytics, and utilize the same rules in downstream credit, operational, and market risk data warehouses.
  • Define standard data masking rules once, and leverage them when using sensitive data for testing and development as well as when enforcing data access rights for ongoing data privacy compliance (see the sketch after this list).
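
To illustrate the "define once, reuse everywhere" idea behind the last bullet, here is a minimal masking sketch. It is not Vibe or Informatica code, just a plain Python stand-in for a rule written once and applied in two contexts; the account numbers are fabricated.

```python
# Hypothetical masking rule defined once ...
def mask_account_number(value: str) -> str:
    """Keep the last four characters and mask everything else."""
    return "*" * (len(value) - 4) + value[-4:]

# ... and reused in two different contexts without re-coding it.
test_data_copy = {"account": mask_account_number("4532981277021945"), "balance": 1000}
reporting_extract = [mask_account_number(a) for a in ("4532981277021945", "6011236744998812")]

print(test_data_copy["account"])   # ************1945
print(reporting_extract)           # ['************1945', '************8812']
```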

 The “Map Once. Deploy Anywhere.” capabilities inherent to Vibe drive:

  • Faster adoption of new technologies and data – Banks and insurance companies can take rapid advantage of new data and technologies without having to know the details of the underlying platform, or having to hire highly specialized and costly programming resources. 
  • Reduced complexity through insulation from change – When data type, volume, source, platform or users change, financial institutions can simply redeploy their existing data integration instructions without re-specification, redesign or redevelopment on a new integration technology – like Hadoop.

Vibe is NOT a new product offering. It is a unique capability that Informatica supports through our existing platform, comprising our Data Integration, Data Quality, Master Data Management, and Informatica Life Cycle Management products.  Whether it is Dodd-Frank, Basel III, FATCA, or Solvency II, with Vibe, banks and insurance companies can ensure they have the right data and increase their potential to improve how they measure risk and ensure regulatory compliance. Visit the Banking/Capital Markets and Insurance industry solutions section of our website for more information on how we help today’s global financial services industry.


Financial Services Sessions at Informatica World 2013

Data is one of the most important and valuable assets banks and insurance companies across the globe have to help comply with industry regulations, improve customer experience, find new revenue opportunities, and reduce the cost of doing business. These are universal needs and challenges, and Informatica’s industry-leading solutions have helped over 780 financial services institutions increase their potential to achieve business success.

At Informatica World 2013, June 4-7 at the Aria Resort and Casino in Las Vegas, Nevada, we will be showcasing a wealth of valuable information to maximize value from your data assets and technology investments. The event includes over 100 interactive and informative breakout sessions across 6 dedicated tracks (Platform & Products, Architecture, Best Practices, Big Data, Hybrid IT and Tech Talk).

There will also be a financial services path featuring guest speakers from the banking and insurance industries as well as our Financial Services experts, including:

  • Morgan Stanley Wealth Management: Accelerating Business Growth While Protecting Sensitive Data: Find out how Morgan Stanley built one of the largest Informatica platforms to mask and process over 150 thousand objects used by more than 1,000 applications globally and comply with today’s data privacy regulations.
  • Wells Fargo Bank’s Data Governance Journey with Informatica: Hear about Wells Fargo’s data governance strategy and program, and how Informatica is used to deliver actionable, transparent, and trusted data to the business.
  • Liberty Mutual Insurance: Architecture and Best Practices with Informatica Data Integration: Learn how Informatica Data Integration’s metadata-driven architecture scales to support large data volumes and meets Liberty Mutual’s enterprise demands for performance and compliance.
  • Addressing Top Business Priorities in Banking and Insurance with MDM: Peter Ku, Senior Director of Financial Services Industry Solutions, shares how Master Data Management is being used in banking and insurance to address top business imperatives, from regulatory compliance to finding new revenue opportunities.

Register today at www.informaticaworld.com and I look forward to seeing you there!


Bankers, Insurers – How Customer Centric Are You?

The need to be customer-centric in financial services is more important than ever as banks and insurance companies look for ways to reduce churn; those in the industry know that loyal customers spend more on higher-margin products and are more likely to refer additional customers. Bankers and insurers who understand this, and get it right, are in a better position to maintain profitable and lasting customer loyalty and to reap significant financial rewards. Current market conditions are challenging and will be difficult to overcome without the right information management architecture to help companies become truly customer centric. Here’s why:

  • Customer satisfaction with retail banks has decreased for four consecutive years, with particularly low scores in customer service.[1] In one industry survey, thirty-seven percent of customers who switched primary relationships cited poor customer service as the main reason.
  • The commoditization of traditional banking and insurance products has rapidly increased client attrition and decreased acquisition rates. Industry reports estimate that banks are losing customers at an average rate of 12.5% per year, while average acquisition rates are at 13.5%, making acquisitions nearly a zero-sum game. Further, the cost of acquiring new customers is estimated at five times the rate of retaining existing ones.
  • Switching is easier than ever before. Customer churn is at an all-time high in most European countries. According to an industry survey, 42 percent of German banking customers had been with their main bank for less than a year. With customer acquisition costs running between €200 and €400, bankers and insurers need to keep their clients for at least 5 to 7 years simply to break even.
  • Mergers and acquisitions further increase the complexity and risk of maintaining customer relationships. According to a recent study, 17 percent of respondents who had gone through a merger or acquisition had switched at least one of their accounts to another institution after their bank was acquired, while an additional 31 percent said they were at least somewhat likely to switch over the next year.[2]

Financial services professionals have long recognized the need to manage customer relationships vs. account relationships by shifting away from a product-centric culture toward a customer-centric model to maintain client loyalty and grow their bottom lines organically. Here are some reasons why:

  • A 5% increase in customer retention can increase profitability by 35% in banking, 50% in brokerage, and 125% in the consumer credit card market.[3]
  • Banks can add more than $1 million to the profitability of their commercial banking business line simply by extending 16 large corporate relationships by one year, or by saving two such clients from defecting. In the insurance sector, a one percent increase in customer retention results in $1M in revenue.
  • The average company has between a 60% and 70% probability of success selling more services to a current customer, a 20% to 40% probability of selling to a former customer, and a 5% to 20% probability of making a sale to a prospect.[4]
  • Up to 66% of current users of financial institutions’ social media sites engage in receiving information about financial services, 32% use it to retrieve information about offers or promotions and 30% to conduct customer service related activities.[5]

So what does it take to become more Customer-centric?

Companies with successful customer-centric business models share similar traits: cultures that place the customer first, people who are willing to go the extra mile, business processes designed with the customer’s needs in mind, product and marketing strategies designed to meet those needs, and technology solutions that help access and deliver trusted, timely, and comprehensive information and intelligence across the business. These technologies include data integration, data quality, and master data management.

Why is data integration important? Customer centricity begins with the ability to access and integrate your data regardless of format, source system, structure, volume, or latency, from any location including the cloud and social media sites. The data the business needs originates from many different systems across the organization and beyond, including new Software-as-a-Service solutions and cloud-based technologies. Traditional hand-coded methods, one-off tools, and open source data integration tools cannot scale and perform well enough to access, manage, and deliver the right data to the systems and applications on the front lines. At the same time, we live in the Big Data era: increasing transaction volumes and new channels, including mobile devices and social media, together generate petabytes of data. Supporting a capable and sustainable customer-centric business model therefore requires technology that can handle this complexity and scale with the business while reducing costs and improving productivity.

Data quality issues must be dealt with proactively and managed by both business and technology stakeholders. Though technology itself cannot prevent every data quality error, it is a critical part of your customer information management process, ensuring that any issues are identified and dealt with expeditiously. Specifically, a data quality solution should detect data quality errors in any source, allow business users to define data quality rules, let developers seamlessly consume and execute those rules, provide dashboards and reports for business stakeholders, and monitor quality on an ongoing basis to deal with time- and business-sensitive exceptions. Data quality management can only scale and deliver value if an organization manages data as an asset. It also helps to have a data governance framework consisting of processes, policies, standards, and people from business and IT working together.

Lastly, growing your business, improving wallet share, retaining profitable relationships, and lowering the cost of managing customer relationships requires a single, trusted, holistic, and authoritative source of customer information. Customer information has historically been managed in applications across traditional business silos that lack common processes to reconcile duplicate and conflicting information across business systems. Master Data Management solutions are purpose-built to break down these application and business silos and deliver a single view of the truth for all systems to benefit from. Master Data Management allows banks and insurance companies to identify unique customer entities, relate accounts to each customer, and extend that relationship view across other customers and employees, from relationship bankers and financial advisors to existing agents and brokers.

The need to attract and retain customers is a continuous journey for the financial industry; however, that need is greater than ever before. The foundation for successful customer centricity is technology that can access and deliver trusted, timely, consistent, and comprehensive customer information and insight across all channels, helping you avoid the mistakes of the past, stay ahead of your competition, and maximize value for your shareholders.

[1] 2010 UK Retail Banking Satisfaction Study, J.D. Power and Associates, October 2010.


Maximize the Potential Business Value from New Core Banking/Insurance Application Investments

 

According to the IDC Financial Insights 2013 Predictions report, financial institutions across most regions are getting serious about updating their legacy systems to reduce operating costs, automate labor-intensive processes, improve customer experiences, and avoid costly disruptions. Transforming a bank’s core systems or an insurance provider’s main business systems is a strategic decision with far-reaching implications for the firm’s future business strategies and success. When done right, the capabilities offered by today’s modern banking and insurance platforms can propel a company ahead of its competition. They can also be the nail in the coffin if your data is not migrated correctly, if safeguards are not in place to protect against unwanted data breaches, or if you are not able to decommission the old systems as planned.

One of the most critical phases of any legacy modernization project is the process of migrating data from the old systems to the new.  Migrating data involves:

  • Accessing existing data in the legacy systems
  • Understanding the data structures that need to be migrated
  • Transforming the data and executing one-to-one mappings to the relevant fields in the new system
  • Identifying data quality errors and other gaps in the data
  • Validating what is entered into the new system by catching transformation or mapping errors (see the sketch after this list)
  • Seamlessly connecting to the target tables and fields in the new system
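
As a simplified illustration of the mapping and validation steps above, a migration routine might look like the Python sketch below. The legacy and target field names are made up, and a real migration would also handle type conversion, lookups, and reconciliation counts.

```python
# Hypothetical one-to-one field mapping from a legacy schema to a new core system.
FIELD_MAP = {
    "CUST_NM":  "customer_name",
    "ACCT_BAL": "account_balance",
    "OPEN_DT":  "account_open_date",
}

def migrate_record(legacy: dict) -> tuple[dict, list[str]]:
    """Map a legacy record to the target schema and report any gaps found."""
    target, issues = {}, []
    for old_field, new_field in FIELD_MAP.items():
        value = legacy.get(old_field)
        if value in (None, ""):
            issues.append(f"missing {old_field}")
        target[new_field] = value
    return target, issues

new_record, problems = migrate_record({"CUST_NM": "J. Smith", "ACCT_BAL": "1520.75", "OPEN_DT": ""})
print(new_record)  # {'customer_name': 'J. Smith', 'account_balance': '1520.75', 'account_open_date': ''}
print(problems)    # ['missing OPEN_DT']
```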

Sounds easy enough, right?  Not so fast! (more…)


Key Data Challenges to Overcome for FATCA Compliance

While Dodd-Frank received most of the media attention after the great financial crisis, during that period the U.S. government also signed into law the Foreign Account Tax Compliance Act (FATCA) in March 2010, which requires Foreign Financial Institutions (FFIs) to report the names of U.S. persons and owners of companies who hold accounts at foreign institutions, for tax reporting and withholding purposes.

The law was set to go into effect on January 1, 2013; however, on October 24, 2012, the U.S. Internal Revenue Service (IRS) announced a one-year extension to January 1, 2014 to give FFIs more time to implement procedures for meeting the FATCA reporting requirements. Banks that elect not to comply or fail to meet these deadlines will be tagged as ‘non-participating FFIs’ and subject to a 30% withholding tax on all U.S.-sourced income paid to them by a U.S. financial institution. Ouch!!

The reasons for FATCA are fairly straightforward. The United States Internal Revenue Service wants to collect its share of tax revenue from individuals who have financial accounts and assets in overseas banks. According to industry studies, it is estimated that of the seven million U.S. citizens and green card holders who live or work outside the U.S., less than seven percent file tax returns. Officially, the intention of FATCA is not to raise additional tax revenue but to trace its missing, non-compliant taxpayers and return them to the U.S. tax system. Once FATCA goes into effect, the IRS expects to collect an additional $8.7 billion in tax revenue.

Satisfying FATCA reporting requirements will require banks to identify:

  • Any customer who may have an existing U.S. tax status.
  • Customers who hold a U.S. citizenship or green card.
  • Country of birth and residency.
  • U.S.-based addresses associated with accounts – incoming and outgoing payments.
  • Customers who have re-occurring payments to the U.S. including electronic transfers and recipient banks located in the U.S.
  • Customers who have payments coming from the U.S. to banks abroad.
  • Customers with high balances across retail banking, wealth management, asset management, Investment and Commercial Banking business lines.

Although these requirements sound simple enough, there are many data challenges to overcome including:

  • Access to account information from core banking systems, customer management and relationship systems, payment systems, databases and desktops across multiple lines of business which can range into the hundreds, if not thousands of individual data sources.
  • Data in varying formats and structures, including unstructured documents such as scanned images, PDFs, etc.
  • Data quality errors, including:
      • Incomplete records: Data that is missing or unusable from the source system or file yet required for FATCA identification.
      • Non-conforming record types: Data available in a non-standard format that does not integrate with data from other systems.
      • Inconsistent values: Data values that give conflicting information or carry different definitions for similar values.
      • Inaccuracy: Data that is incorrect or out of date.
      • Duplicates: Data records or attributes that are repeated.
      • Lack of integrity: Data that is missing or not referenced in any system.

Most modern core banking systems have built in data validation checks to ensure that the right values are entered. Unfortunately, many banks continue to operate 20-30 year-old systems, many of which were custom built and lack upstream validation capabilities. In many cases, these data errors arise when combining ‘like’ data and information from multiple systems. Given the number of data sources and the volume of data that banks deal with, it will be important for FFIs to have capable technology to expedite and accurately profile FATCA source data to identify errors at the source as well as errors that occur as data is being combined and transformed for reporting purposes.

Another data quality challenge facing FFIs will be identifying unique account holders while dealing with the following data anomalies:

  • Deciphering names across different languages (山田太郎 vs. Taro Yamada)
  • Use of Nicknames (e.g. John, Jonathan, Johnny)
  • Concatenation (e.g. Mary Anne vs. Maryanne)
  • Prefix / Suffix (e.g. MacDonald vs. McDonald)
  • Spelling error (e.g. Potter vs. Porter)
  • Typographical error (e.g. Beth vs. Beht)
  • Transcription error (e.g. Hannah vs. Hamah)
  • Localization (e.g. Stanislav Milosovich vs. Stan Milo)
  • Phonetic variations (e.g. Edinburgh – Edinborough)
  • Transliteration (e.g. Kang vs. Kwang)

Attempting to perform these intricate data validations and matching processes requires technology purpose-built for the task: identity matching and resolution technology that applies proven probabilistic, deterministic, and fuzzy matching algorithms to data in any language, can process large data sets in a timely manner, and is designed to be used by business analysts rather than IT developers. Most importantly, it must deliver the end results into the bank’s FATCA reporting systems and applications where the business needs them most.
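
Commercial identity resolution engines use far more sophisticated probabilistic and phonetic techniques than this, but a rough feel for fuzzy matching against the anomalies listed above can be had from the Python standard library alone. The threshold below is an arbitrary illustration, not a recommended setting.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Simple string similarity in [0, 1]; real engines add phonetic and probabilistic scoring."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

candidate_pairs = [
    ("MacDonald", "McDonald"),   # prefix variation
    ("Potter", "Porter"),        # spelling error
    ("Hannah", "Hamah"),         # transcription error
    ("Mary Anne", "Maryanne"),   # concatenation
]

THRESHOLD = 0.85  # hypothetical cut-off for flagging a probable match
for a, b in candidate_pairs:
    score = similarity(a, b)
    flag = "-> probable match" if score >= THRESHOLD else ""
    print(f"{a!r} vs {b!r}: {score:.2f} {flag}")
```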

As I stated earlier, FATCA impacts both U.S. and non-U.S. banks and is as important to U.S. tax collectors as it is to the health of the global financial and economic markets. Even with the extended deadlines, those who lack capable data quality management processes, policies, standards, and enabling technologies to deal with these data quality issues must act now or face the penalties defined by Uncle Sam.


Reliable, Trusted, and Accurate Data is More Important for Insurance Companies Post-Hurricane Sandy

Like most Americans last week, I was glued to the news several days prior to Hurricane Sandy hitting landfall on the East Coast of the United States, hoping it would pass with minimal damage. Having lived in Hawaii and Florida for most of my life, I personally experienced three hurricanes and know how devastating these natural disasters can be during the storm and the hardships people go through afterwards. My thoughts are with all those who lost their lives and their belongings due to this disaster.

Hurricane Sandy has been described as one of the largest storms both in size and in property damage to homes and businesses. According to the New York Times, the total economic damage from Hurricane Sandy will range between $10 and $20 billion, with insurance companies paying $5 to $10 billion in insurance claims. At the high end of that range, Sandy would become the third-most expensive storm for insurers in U.S. history. As property, casualty, and flood insurance companies prepare to face a significant wave of calls and claims requests from policyholders, I wonder what the implications and costs will be for those companies that lack reliable, trusted, and accurate data, a problem that has plagued the industry for years.

Reliable, trusted, and accurate data is critical in helping insurance companies manage their business, from satisfying regulatory requirements, maintaining and growing customer relationships, and combating fraud to reducing the cost of doing business. Unfortunately, many insurance companies, large and small, have long operated on paper-based processes to onboard new customers, manage policy changes, and process claim requests. Though some firms have invested in data quality and governance practices in recent years, the majority of today’s insurance industry has ignored the importance of managing and governing good quality data and dealing with the root causes of bad data, including:

  • Inadequate verification of data stored in legacy systems
  • Non-validated data leaks and data entry errors made by human beings
  • Inadequate or manual integration of data between systems
  • Redundant data sources/stores that cause data corruption to dependent applications
  • Direct back-end updates with little to no data verification and impact analysis

Because of this, the data in core insurance systems can contain serious data quality errors including:

  • Invalid property addresses
  • Policyholder contact details (Name, Address, Phone numbers)
  • Policy codes and descriptions (e.g. motor or home property)
  • Risk rating codes
  • Flood zone information
  • Property assessment values and codes
  • Loss ratios
  • Claims adjuster estimates and contact information
  • Lack of a comprehensive view of existing policyholder information across different policy coverage categories and lines of business

The cost of bad data can be measured in the following areas as firms gear up to deal with the fallout of Hurricane Sandy:

  • The number of claims errors multiplied by the time and cost to resolve them
  • The number of phone calls and emails about claims processing delays multiplied by the time per call and the cost per customer service rep or field agent handling those requests
  • The number of fraudulent claims and the funds lost to those criminal activities
  • The number of policy cancellations caused by poor customer service experienced by existing policyholders
  • Not to mention the reputational damage caused by poor customer service (a back-of-the-envelope calculation follows this list)
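
These bullets amount to simple multiplications. With purely illustrative figures (none of the numbers below come from the article or any insurer), the estimate might be computed as follows:

```python
# Purely illustrative inputs for estimating the cost of bad claims data;
# every number below is an assumption for the sake of the example.
claims_errors          = 4_000
hours_to_fix_per_error = 1.5
loaded_cost_per_hour   = 45.00

extra_calls            = 12_000
minutes_per_call       = 8
csr_cost_per_hour      = 30.00

rework_cost = claims_errors * hours_to_fix_per_error * loaded_cost_per_hour
call_cost   = extra_calls * (minutes_per_call / 60) * csr_cost_per_hour

print(f"Claims rework cost: ${rework_cost:,.0f}")              # $270,000
print(f"Call handling cost: ${call_cost:,.0f}")                # $48,000
print(f"Estimated total:    ${rework_cost + call_cost:,.0f}")
```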

Having a sound data quality practice requires a well-defined data governance framework consisting of the following elements:

  • Data quality policies that spell out what data are required, how they should be used, managed, updated and retired. More importantly, these policies should be aligned to the company’s goals, defined and maintained by the business, not IT.
  • Data quality processes that involve documented steps to implement and enforce the policies described above.
  • Specific roles, including data stewards who represent business organizations or core systems (e.g. an Underwriting Data Steward) and data category stewards who understand the business definition, requirements, and usage of key data assets by the business.

Finally, in addition to the points listed above, firms must not discount or ignore the importance of industry-leading data quality software solutions to enable an effective and sustainable data quality practice, including:

  • Data profiling and auditing to identify existing data errors in source systems, during data entry, and as data is extracted and shared between systems (see the sketch after this list).
  • Data quality and cleansing to build and execute data quality rules that enforce the policies set forth by the business.
  • Address validation solutions to ensure accurate address information for flood zone mapping and loss analysis.
  • Data quality dashboards and monitoring solutions to analyze the performance and quality levels of data and escalate data errors that require immediate attention.
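
As a toy illustration of the profiling step in the first bullet, a simple audit might count missing and invalid values per field before the data ever reaches a claims system. The field names, the sample policies, and the flood-zone code list below are invented for the example.

```python
# Hypothetical profiling pass over policy records: count missing values and
# out-of-range codes per field before the data feeds claims processing.
policies = [
    {"policy_id": "P-1", "property_address": "12 Shore Rd, NY", "flood_zone": "AE"},
    {"policy_id": "P-2", "property_address": "",                "flood_zone": "ZZ"},
    {"policy_id": "P-3", "property_address": "9 Bay St, NJ",    "flood_zone": None},
]
VALID_FLOOD_ZONES = {"A", "AE", "VE", "X"}   # invented code list for the example

profile = {
    "missing_address": sum(1 for p in policies if not p["property_address"]),
    "missing_flood_zone": sum(1 for p in policies if not p["flood_zone"]),
    "invalid_flood_zone": sum(
        1 for p in policies if p["flood_zone"] and p["flood_zone"] not in VALID_FLOOD_ZONES
    ),
}
print(profile)  # {'missing_address': 1, 'missing_flood_zone': 1, 'invalid_flood_zone': 1}
```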

As cleanup activities progress and people get back on their feet from Hurricane Sandy, insurance companies should take the time to measure how well they are managing their data quality challenges and start looking at addressing them in preparation for these inevitable events caused by Mother Nature.

 


Financial Stability Board Pushes Legal Entity Identifier to the G20 – Vote Expected this Month – What’s Next?

Hot off the press! The Financial Stability Board (FSB) today (June 8, 2012) published a report entitled “A Global Legal Entity Identifier for Financial Markets” for G20 supervisors to consider and respond to, in line with the mandate issued by the G20 at the Cannes Summit, ahead of a final vote at the end of the month in Mexico. It sets out 35 recommendations for the development and implementation of the global LEI system. These recommendations are guided by a set of “High Level Principles” which outline the objectives that a global LEI system should meet.

The proposed global Legal Entity Identifier (LEI) is expected to help regulators identify unique counterparties across the financial system and monitor the impact of risky counterparties holding positions with banks. Assuming the LEI is approved by the G20 this month, it will be the first of these infrastructure standards to be implemented globally, requiring firms to integrate, reconcile, and cross-reference the new LEI with existing counterparty identifiers and information, as well as manage accurate and current legal hierarchies.   (more…)
