Category Archives: Banking & Capital Markets

If Data Projects Weather, Why Not Corporate Revenue?

Every fall, Informatica sales leadership puts together its strategy for the following year.  The revenue target is typically a function of the number of sellers, the addressable market size and key accounts in a given territory, average spend and conversion rates from prior years' experience, and so on.  This straightforward math has not changed in decades, but it assumes that the underlying data are 100% correct. This data includes:

  • Number of accounts with a decision-making location in a territory
  • Related IT spend and prioritization
  • Organizational characteristics like legal ownership, industry code, credit score, annual report figures, etc.
  • Key contacts, roles and sentiment
  • Prior interaction (campaign response, etc.) and transaction (quotes, orders, payments, products, etc.) history with the firm

Every organization, whether it is a life insurer, a pharmaceutical manufacturer, a fashion retailer or a construction company, knows this math and plans on achieving somewhere above 85% of the resulting target.  Office locations, support infrastructure spend, compensation and hiring plans are based on it and communicated accordingly.
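To make that arithmetic concrete, here is a minimal sketch of the territory math. Every figure is invented purely for illustration; none come from Informatica's actual planning.

```java
public class TerritoryTargetSketch {
    public static void main(String[] args) {
        // Illustrative inputs only -- none of these figures come from the post.
        int sellers = 8;                    // quota-carrying reps in the territory
        int accountsPerSeller = 40;         // addressable accounts assigned to each rep
        double conversionRate = 0.15;       // share of accounts expected to buy, from prior years
        double avgAnnualSpend = 250_000.0;  // average deal size per converted account

        double target = sellers * accountsPerSeller * conversionRate * avgAnnualSpend;
        double plannedAttainment = 0.85;    // the "above 85% achievement" planning floor

        System.out.printf("Territory revenue target: $%,.0f%n", target);
        System.out.printf("Planning floor at 85%% attainment: $%,.0f%n", target * plannedAttainment);
        // If the account list, spend or conversion data feeding these inputs is wrong,
        // every number downstream (headcount, offices, comp plans) is wrong with it.
    }
}
```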


We Are Not Modeling the Global Climate Here

So why is it that, when it is an open secret that the underlying data is far from perfect (accurate, current and useful) and corrupts outcomes, so few believe that fixing it has any revenue impact?  After all, we are not projecting the climate for the next hundred years with a thousand-plus variables.

If corporate hierarchies are incorrect, your spend projections will be off because they rest on incorrect territory targets, credit terms and discount strategies.  If every client touch point does not have a complete picture of cross-departmental purchases and campaign responses, your customer acquisition cost will be too high because you will contact the wrong prospects with irrelevant offers.  If billing, tax or product codes are incorrect, your billing will be off; this is a classic telecommunications example worth millions every month.  If your equipment location and configuration data is wrong, maintenance schedules will be incorrect, and every hour of production interruption will cost an industrial manufacturer of wood pellets or oil millions.

Also, if industry leaders enjoy an upsell ratio of 17% and you experience 3%, data will have a lot to do with it (assuming you have no formal upsell policy because it would violate your independent middleman relationships).

The challenge is not whether data can create revenue improvements but how much, given the other factors: people and process.

Every industry laggard can identify a few FTEs who spend 25% of their time putting together one-off data repositories for some compliance, M&A, customer or marketing analytics effort.  Organic revenue growth from net-new or previously unrealized revenue should be the focus of any data management initiative.  Don't get me wrong; purposeful recruitment (people), comp plans and training (processes) are important as well.  Few people doubt that people and process drive revenue growth.  However, few believe the data being fed into these processes has an impact.

This is a head scratcher for me. An IT manager at a US upstream oil firm once told me that it would be ludicrous to think data has a revenue impact.  They just fixed data because it is important, so his consumers would know where all the wells are and which ones made a good profit.  Isn't that assuming data drives production revenue? (Rhetorical question.)

A CFO at a smaller retail bank said during a call that his account managers know their clients' needs and history, so there is nothing more good data can add in terms of value.  And this came after twenty other people at his bank, including his own team, had delivered more than ten use cases, three of which were based on revenue.

Hard cost (materials and FTE) reduction is easy to quantify, and cost avoidance is a leap of faith to a degree, but revenue is not any less concrete.  Otherwise, why not just throw the dice and see what revenue looks like next year without a central customer database?  Let every account executive in every department gather their own data, structure it the way they want, put it on paper and mail hard copies to HQ.  The point is not paper versus electronic but the inability to reconcile data from many sources, a problem that paper only compounds.

Have you ever heard of an organization moving back to the Fifties and competing today?  That would be a fun exercise.  Thoughts or suggestions?  I would be glad to hear them.


Has Hadoop Crossed The Chasm? Thoughts About Strata 2014

Well, it's been a little over a week since the Strata conference, so I thought I should give some perspective on what I learned.  I think it was summed up at my first meeting, on the first morning of the conference, with a financial services company that has significant experience with Hadoop. The first words out of their mouths were, "Hadoop is hard."

Later in the conference, after a Western Union representative spoke about their Hadoop deployment, they were mobbed with questions and comments from end users. The audience was thrilled to hear about an actual operational deployment: not just a sandbox, but a production Hadoop deployment from a company that is over 160 years old.

The market is crossing the chasm from early adopters who love to hand code (and the macho culture of proving they can do the hard stuff) to more mainstream companies that want to use technology to solve real problems. These mainstream companies aren’t afraid to admit that it is still hard. For the early adopters, nothing is ever hard. They love hard. But the mainstream market doesn’t view it that way.  They don’t want to mess around in the bowels of enabling technology.  They want to use the technology to solve real problems.  The comment from the financial services company represents the perspective of the vast majority of organizations. It is a sign Hadoop is hitting the mainstream market.

More proof we have moved to a new phase?  Cloudera announced they were going from shipping six versions a year down to just three.  I have been saying for a while that we will know Hadoop is real when the distribution vendors stop shipping every two months and move to a more typical enterprise software release schedule.  It isn't that Hadoop engineering efforts have slowed down; the platform is still evolving very rapidly.  It is just that real customers are telling the Hadoop suppliers that they won't upgrade that fast, because they have real business projects running and can't afford the churn.  So for those of you who are disappointed by the "slow down," don't be.  To me, this is news that Hadoop is reaching critical mass.

Technology is closing the gap to allow organizations to use Hadoop as a platform without having to field an army of Hadoop experts.  That is what Informatica does for data parsing, data integration, data quality and data lineage (recent product announcement).  In fact, the number one demo at the Informatica booth at Strata was the demonstration of end-to-end data lineage, from the original source all the way to how the data was loaded and then transformed within Hadoop.  This is purely an enterprise-class capability that becomes more interesting and important when you actually go into true production.

Informatica's goal is to hide the complexity of Hadoop so companies can get on with the work of using the platform with the skills they already have in-house.  And judging from all of the start-up companies doing similar things for data exploration and analytics, and all the talk about the need for governance, we are finally hitting the early majority of the market.  So, for those of you who still drop down to the underlying UNIX OS that powers a Mac, the rest of us will keep using the GUI.  To the extent that there are "fit for purpose" GUIs on top of Hadoop, the technology will get used by a much larger market.

So congratulations Hadoop, you have officially crossed the chasm!

P.S. See me on theCUBE talking about a similar topic at: youtu.be/oC0_5u_0h2Q


BCBS 239 – What Are Banks Talking About?

I recently participated in an EDM Council panel on BCBS 239, held earlier this month in London and New York. The panel consisted of Chief Risk Officers, Chief Data Officers, and information management experts from the financial industry. BCBS 239 sets out 14 key principles requiring banks to aggregate their risk data so that banking regulators can help avoid another 2008-style crisis, with a deadline of January 1, 2016.  Earlier this year, the Basel Committee on Banking Supervision released the findings of a self-assessment by the Global Systemically Important Banks (G-SIBs) of their readiness against 11 of the 14 BCBS 239 principles.

Given all of the investments the banking industry has made to improve data management and governance practices for ongoing risk measurement and management, I was expecting to hear signs of significant progress. Unfortunately, there is still much work to be done to satisfy BCBS 239. Here is what we discussed in London and New York.

  • It was clear that the "data agenda" has shifted quite considerably from IT to the business, as evidenced by the number of risk, compliance, and data governance executives in the room.  Though it is a good sign that the business is taking more ownership of data requirements, there was limited discussion of the importance of having capable data management technology, infrastructure, and architecture to support a successful data governance practice: specifically, capable data integration, data quality and validation, master and reference data management, metadata to support data lineage and transparency, and business glossary and data ontology solutions to govern the terms and definitions of required data across the enterprise.
  • Accessing, aggregating, and streamlining the delivery of risk data from disparate systems across the enterprise remains difficult. Today's point-to-point integrations access the same data from the same systems over and over again, creating points of failure and increasing the cost of maintaining the current state.  The idea of replacing those point-to-point integrations with a centralized, scalable, and flexible data hub was clearly recognized as a need; however, it is difficult to envision given the enormous work required to modernize the current state.
  • Data accuracy and integrity continue to be a concern when it comes to generating accurate and reliable risk data that meets both normal and stress/crisis reporting requirements. Many in the room acknowledged a heavy reliance on manual methods implemented over the years. Automating the integration and onboarding of risk data from disparate systems across the enterprise is important as part of Principle 3; however, much of what is in place today was built as one-off projects against the same systems, accessing the same data and delivering it to hundreds if not thousands of downstream applications in an inconsistent and costly way.
  • Data transparency and auditability was a popular conversation point. The need to provide comprehensive data lineage reports that explain how data is captured, from where, how it is transformed, and how it is used remains a concern, despite advancements in technical metadata solutions, because those solutions are not integrated with existing risk management data infrastructure.
  • Lastly, there were big concerns about the ability to capture and aggregate all material risk data across the banking group and deliver it by business line, legal entity, asset type, industry, region and other groupings, to support identifying and reporting risk exposures, concentrations and emerging risks.  Unfortunately, this master and reference data challenge cannot be solved by external data utility providers, because banks hold legal entity, client, counterparty, and securities instrument data in existing systems and need the ability to cross-reference any external identifier for consistent reporting and risk measurement.

To sum it up, most banks admit they have a lot of work to do, specifically to address gaps across their data governance and technology infrastructure. BCBS 239 is the latest and biggest data challenge facing the banking industry, and not just for the G-SIBs: mid-size firms in the next tier down will also be required to provide similar transparency to regional regulators who are adopting BCBS 239 as a framework for their local markets.  BCBS 239 is not just a deadline; the principles it sets forth are a key requirement for banks to ensure they have the right data to manage risk and to give industry regulators the transparency they need to monitor systemic risk across the global markets. How ready are you?


The Ones Not Screwing Up with Compliance Will Win

A few weeks ago, a regional US bank asked me to perform some compliance and use case analysis to help fix their data management situation.  This bank prides itself on customer service and SMB focus while using large-bank product offerings.  However, it was about a decade behind most banks in modernizing its IT infrastructure to stay operationally on top of things.

Bank Efficiency Ratio per AUM (Assets under Management), bankregdata.com

This included technologies like ESB, BPM, CRM, etc.  They were also a sub-optimal user of EDW and analytics capabilities. Having said all this, there was a commitment to change things up, which is always a necessary first step in any recovery program.

THE STAKEHOLDERS

As I conducted my interviews across various departments (list below), it became very apparent that they were not suffering from data poverty (see prior post) but from a lack of accessibility and use of data.

  • Compliance
  • Vendor Management & Risk
  • Commercial and Consumer Depository products
  • Credit Risk
  • HR & Compensation
  • Retail
  • Private Banking
  • Finance
  • Customer Solutions

FRESH BREEZE

This lack of use occurred across the board.  The natural reaction was to throw more bodies and more Band-Aid marts at the problem.  Users also started to operate under the assumption that it would never get better; they simply resigned themselves to mediocrity.  When some new players came into the organization from various systemically critical banks, they shook things up.

Here is a list of use cases they want to tackle:

  • The proposition of real-time offers based on customer events, as simple as offering investment banking products when an unusually high inflow of cash hits a deposit account.
  • The use of all mortgage application information to understand debt-to-equity ratios and make relevant offers.
  • The capture of true product and customer profitability across all lines of commercial and consumer products including trust, treasury management, deposits, private banking, loans, etc.
  • The agile evaluation, creation, testing and deployment of new terms on existing products and products under development, shortening the product development life cycle.
  • The reduction of wealth management advisors’ time to research clients and prospects.
  • The reduction of unclaimed use tax, insurance premiums and leases paid on consumables, real estate and requisitions due to incorrect equipment status and location, originating from assets no longer owned, scrapped or moved to a different department, etc.
  • More efficient reconciliation between transactional systems and finance, where accounts receivable often uses multiple party IDs per contract change while the operating division uses one ID based on a contract and its addendums.  An example would be vendor payment consolidation to create a true supplier spend and thus take advantage of volume discounts.
  • The proactive creation of a central compliance footprint (AML, 314, suspicious activity, CTR, etc.), allowing for quicker turnaround and fewer audit instances arising from MRAs (matters requiring attention).

MONEY TO BE MADE – PEOPLE TO SEE

Adding these up came to about $31 to $49 million annually in cost savings, new revenue or increased productivity for this bank, which has $24 billion in total assets.

So now that we know there is money to be made by fixing the data of this organization, how can we realistically roll this out in an organization with many competing IT needs?

The best way to go about this is to attach any kind of data management project to a larger, business-oriented project, like CRM or EDW.  Rather than wait for these to go live without good seed data, why not feed them with better data as a key work stream within their respective project plans?

To summarize my findings, I want to quote three people I interviewed.  A lady who recently had to struggle through an OCC audit told me she believes that the banks that can remain compliant at the lowest cost will ultimately win the end game; here she meant particularly tier 2 and tier 3 organizations.  A gentleman from commercial banking left me with this statement: "Knowing what I know now, I would not bank with us."  The same lady also said, "We engage in spreadsheet Kung Fu" to bring data together.

Given all this, what would you suggest?  Have you worked with an organization like this? Did you encounter any similar or different use cases in financial services institutions?


Customer Centric Financial Services

The business of financial services is transforming before our eyes. Traditional banking and insurance products have become commoditized. As each day passes, consumers demand increasingly personalized products and services. Social and mobile channels continue to overthrow traditional communication methods. To survive and grow in this complex environment, financial institutions must do three things:

  1. Attract and retain the best customers
  2. Grow wallet share
  3. Deliver top-notch customer experience across all channels and touch points

The finance industry is traditionally either product-centric or account-centric. However, to succeed in the future, financial institutions must become customer-centric. Becoming customer-centric requires changes to your people, processes, technology, and culture. You must offer the right product or service to the right customer, at the right time, via the right channel. To achieve this, you must ensure alignment between business and technology leaders. It will also require targeted investments to grow the business, particularly to modernize legacy systems.

To become customer-centric, business executives are investing in Big Data and in legacy modernization initiatives. These investments are helping Marketing, Sales and Support organizations to:

  • Improve conversion rates on new marketing campaigns and on cross-sell and up-sell activities
  • Measure customer sentiment on particular marketing and sales promotions or on the financial institution as a whole
  • Improve sales productivity ratios by targeting the right customers with the right product at the right time
  • Identify key indicators that determine and predict profitable and unprofitable customers
  • Deliver an omni-channel experience across all lines of business, devices, and locations

At Informatica, we want to help you succeed. We want you to maximize the value in these investments. For this reason, we’ve written a new eBook titled: “Potential Unlocked – Improving revenue and customer experience in financial services”. In the eBook, you will learn:

  • The role customer information plays in taking customer experience to the next level
  • Best practices for shifting account-centric operations to customer-centric operations
  • Common barriers and pitfalls to avoid
  • Key considerations and best practices for success
  • Strategies and experiences from best-in-class companies

Take a giant step toward Customer-Centricity: Download the eBook now.


Murphy’s First Law of Bad Data – If You Make A Small Change Without Involving Your Client – You Will Waste Heaps Of Money

I had not used my personal encounter with bad data management for over a year, but a couple of weeks ago I was compelled to revive it.  Why, you ask? Well, a complete stranger started to receive one of my friends' text messages, including mine; it took him days to detect it, and a week later nobody at this North American wireless operator had been able to fix it.  This coincided with a meeting I had with a European telco's enterprise architecture team.  There was no better way to illustrate to them how a customer reacts, and the risk to their operations, when communication breaks down because just one tiny thing changes, say, his address (or, in the SMS case, some random SIM mapping, another type of address).

Imagine the cost of other bad data (thecodeproject.com)

In my case, I moved about 250 miles within the United States a couple of years ago, and this seemingly common experience triggered a plethora of communication screw-ups across every merchant a residential household engages with frequently: your bank, your insurer, your wireless carrier, your average retail clothing store, and so on.

For more than two full years after my move to a new state, the following things continued to pop up on a monthly basis due to my incorrect customer data:

  • My old satellite TV provider got to me (correct person) but with a misspelled last name at my correct, new address.
  • My bank put me in a bit of a pickle by sending "important tax documentation" that I did not want to open, as my new tenants' names (for the house I had just vacated) were on the letter but with my new home's address.
  • My mortgage lender sent a refinancing offer to my new address (right person and right address) but with my wife's name as well as my own completely butchered.
  • My wife’s airline, where she enjoys the highest level of frequent flyer status, continually mails her offers duplicating her last name as her first name.
  • A high-end furniture retailer sends two 100-page glossy catalogs probably costing $80 each to our address – one for me, one for her.
  • A national health insurer sends “sensitive health information” (disclosed on envelope) to my new residence’s address but for the prior owner.
  • My legacy operator turns on the wrong premium channels on half my set-top boxes.
  • The same operator sent me an SMS the next day thanking me for switching to electronic billing as part of my move, which I had not signed up for, followed by payment notices (as I did not get my invoice in the mail).  When I called this error out over the next three months, phoning their contact center and pointing out how much revenue I generate for them across all services, they countered with "sorry, we don't have access to the wireless account data," "you will see it change on the next bill cycle," and "you show as paper billing in our system today."

Ignoring the potential for data privacy lawsuits, you start wondering how long you have to be a customer, and how much money you need to spend with a merchant (and they need to waste), before they take changes to your data seriously.  And these are not even merchants to whom I am brand new; these companies have known me and taken my money for years!

One thing I nearly forgot: these mailings all happened at least once a month on average, sometimes twice, over two years.  Doing some pigeon math, I would estimate the postage and production cost alone ran into the hundreds of dollars.

The most egregious trespass, however, belonged to my homeowner's insurance carrier (HOI), which was also my mortgage broker.  They had a double whammy in store for me.  First, I received a cancellation notice from the HOI for my old residence indicating they had cancelled my policy because the last payment was not received, and that any claims would be denied as a consequence.  Then, my new residence's HOI advised that they had added my old home's policy to my account.

After wondering what I could possibly have done to trigger this, I called all four parties (not three, as the mortgage firm did not share data with the insurance broker side – surprise, surprise) to find out what had happened.

It turns out that I had to explain, and prove, to all of them how one party's data change during my move erroneously exposed me to liability.  It felt like the old days, when seedy telco salespeople needed only your name and phone number, associated with some promotion you never took part in (the back of a raffle card to win a new car), to switch your long distance carrier and present you with a $400 bill the following month.  Yes, that also happened to me, many years ago.  Here again, the consumer had to do all the legwork after someone (not an automatic process!) switched some entry without any oversight or review, triggering hours of wasted effort on their side and mine.

We can argue all day long about whether these screw-ups are due to bad processes or bad data, but in reality even processes are triggered by some sort of underlying event, which can be something as mundane as a database field's flag being updated when your last purchase puts you in a new marketing segment.

Now imagine you get married and your wife changes her name. With all the company-internal (CRM, billing, ERP), free public (property tax), commercial (credit bureaus, mailing lists) and social media data sources out there, you would think such everyday changes could get picked up more quickly and automatically.  If not automatically, then should there not be some sort of trigger to kick off a "governance" process, something along the lines of "email/call the customer if attribute X has changed" or "please log into your account and update your information – we heard you moved"?  If American Express was able to detect ten years ago that someone purchased $500 worth of product with your credit card at a gas station or some lingerie website known for fraudulent activity, why not your bank or insurer, who know even more about you? And yes, that happened to me as well.
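For what it's worth, the kind of trigger described above needs no exotic technology. Below is a minimal, hypothetical sketch (the attribute names and the outreach step are mine, not any vendor's API) of a rule that flags a change to a watched attribute for confirmation before it propagates:

```java
import java.util.Map;
import java.util.Objects;

/** Minimal sketch of an attribute-change "governance" trigger -- illustrative only. */
public class CustomerChangeMonitor {

    /** Compare the old and new customer record and kick off outreach when a watched attribute changes. */
    public void onCustomerUpdate(Map<String, String> before, Map<String, String> after) {
        for (String watched : new String[] {"postalAddress", "lastName", "phoneNumber"}) {
            if (!Objects.equals(before.get(watched), after.get(watched))) {
                openGovernanceTask(after.get("customerId"), watched, before.get(watched), after.get(watched));
            }
        }
    }

    private void openGovernanceTask(String customerId, String attribute, String oldValue, String newValue) {
        // In a real system this would create a case, email the customer, or route to a data steward.
        System.out.printf("Customer %s: '%s' changed from '%s' to '%s' -- confirm with the customer before propagating.%n",
                customerId, attribute, oldValue, newValue);
    }
}
```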

Tell me about one of your "data-driven" horror scenarios.


Open Source, Next Generation Data Encoding

Today is an exciting day for technology in high performance electronic trading. By the time you read this, the CME Group, Real Logic Ltd., and Informatica will have announced a new open source initiative. I’ve been collaborating on this work for a few months and I feel it is some great technology. I hope you will agree.

Simple Binary Encoding (SBE) is an encoding for FIX being developed by the FIX protocol community as part of its High Performance Working Group. The goal is to produce a binary encoding suitable for low-latency financial trading. The CME Group, Real Logic, and Informatica have sponsored the development of an open source implementation of an early version of the SBE specification, undertaken by Martin Thompson (of Real Logic, formerly of LMAX) and myself, Todd Montgomery (of Informatica). The result is a very high performance encoding/decoding mechanism for data layout, tailored not just to the demands of high performance, low-latency trading applications, but with implications for all manner of serialization and marshaling in use cases from Big Data analytics to device data capture.

Financial institutions, and other businesses, need to serialize data structures for transmission over networks as well as for storage. SBE is a developing standard for encoding and decoding FIX data structures over a binary medium at high speed with low latency. The SBE project is most similar to Google Protocol Buffers. However, looks are quite deceiving: SBE is an order of magnitude faster and immensely more efficient for encoding and decoding. This focus on performance means application developers can turn their attention to the application logic instead of the details of serialization. There are a number of advantages to SBE beyond speed, although speed is of primary concern.

  • SBE provides a strong typing mechanism in the form of schemas for data objects
  • SBE only generates the overhead of versioning if the schema needs to handle versioning and if so, only on decode
  • SBE uses an Intermediate Representation (IR) for decoupling schema specification, optimization, and code generation
  • SBE's use of IR will allow it to provide various data layout optimizations in the near future
  • SBE initially provides Java, C++98, and C# code generators with more on the way
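To give a feel for the style of code SBE's generators produce, here is a hand-written sketch of fixed-offset, flyweight-style encoding over a buffer. It is not the actual SBE-generated API, and the message layout is invented; it simply illustrates why reading and writing fields in place, with no intermediate objects and no per-field metadata on the wire, is so cheap:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/** Hand-rolled illustration of fixed-offset, flyweight-style binary encoding (not the real SBE codec API). */
public class PriceFlyweight {
    // Field offsets are fixed by the (hypothetical) schema, so no per-message metadata is written.
    private static final int INSTRUMENT_ID_OFFSET = 0;   // uint32
    private static final int PRICE_OFFSET         = 4;   // int64, price in ticks
    private static final int QUANTITY_OFFSET      = 12;  // uint32
    public  static final int ENCODED_LENGTH       = 16;

    private ByteBuffer buffer;
    private int offset;

    public PriceFlyweight wrap(ByteBuffer buffer, int offset) {
        this.buffer = buffer.order(ByteOrder.LITTLE_ENDIAN);
        this.offset = offset;
        return this;
    }

    public PriceFlyweight instrumentId(int id)   { buffer.putInt(offset + INSTRUMENT_ID_OFFSET, id); return this; }
    public PriceFlyweight priceTicks(long ticks) { buffer.putLong(offset + PRICE_OFFSET, ticks);     return this; }
    public PriceFlyweight quantity(int qty)      { buffer.putInt(offset + QUANTITY_OFFSET, qty);     return this; }

    public int  instrumentId() { return buffer.getInt(offset + INSTRUMENT_ID_OFFSET); }
    public long priceTicks()   { return buffer.getLong(offset + PRICE_OFFSET); }
    public int  quantity()     { return buffer.getInt(offset + QUANTITY_OFFSET); }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(ENCODED_LENGTH);
        PriceFlyweight msg = new PriceFlyweight().wrap(buf, 0);
        msg.instrumentId(42).priceTicks(1_234_500L).quantity(100);   // encode directly into the buffer
        System.out.println(msg.instrumentId() + " @ " + msg.priceTicks() + " x " + msg.quantity()); // decode in place
    }
}
```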

What breakthrough has led to SBE being so fast?

It isn't new, nor is it a breakthrough. SBE has been designed and implemented according to the concepts and tenets of Mechanical Sympathy. Most software is developed with abstractions that mask the details of CPU architecture, disk access, OS concepts, and so on. Not so for SBE. Martin and I designed it using everything we know about how CPUs, memory, compilers, managed runtimes, etc. actually work, making it very fast by working _with_ the hardware instead of against it.
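For a flavor of what working with the hardware means in practice, here is a tiny, self-contained experiment of my own (not part of SBE): summing the same 2-D array in row-major versus column-major order. The arithmetic is identical; only the memory access pattern changes, and on most CPUs the cache-friendly order is several times faster.

```java
public class MechanicalSympathyDemo {
    private static final int N = 4_096;
    private static final int[][] grid = new int[N][N];

    public static void main(String[] args) {
        long sum = 0;

        long t0 = System.nanoTime();
        for (int row = 0; row < N; row++)            // row-major: walks memory sequentially,
            for (int col = 0; col < N; col++)        // so the hardware prefetcher keeps caches warm
                sum += grid[row][col];
        long rowMajorNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int col = 0; col < N; col++)            // column-major: strided access,
            for (int row = 0; row < N; row++)        // so most reads miss the cache
                sum += grid[row][col];
        long colMajorNanos = System.nanoTime() - t1;

        System.out.printf("row-major: %d ms, column-major: %d ms (sum=%d)%n",
                rowMajorNanos / 1_000_000, colMajorNanos / 1_000_000, sum);
    }
}
```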

Martin's blog will have a more detailed, technical discussion of SBE sometime later, but I encourage you to look at the project and try it out. The work is open to the public under an Apache license.

Find out more about the FIX/SBE specification and SBE on GitHub.

———————————————–

Todd Montgomery

Todd L. Montgomery is a Vice President of Architecture for Informatica and the chief designer and implementer of the 29West low latency messaging products. The Ultra Messaging product family (formerly known as LBM) has over 190 production deployments within electronic trading across many asset classes and pioneered the broker-less messaging paradigm. In the past, Todd has held architecture positions at TIBCO and Talarian as well as lecture positions at West Virginia University, contributed to the IETF, and performed research for NASA in various software fields. With a deep background in messaging systems, high performance systems, reliable multicast, network security, congestion control, and software assurance, Todd brings a unique perspective tempered by over 20 years of practical development experience.


There Is A Silver Bullet – Really!

Last month, in The Biggest Dirty Little Secret in IT, I highlighted a disturbing phenomenon: in highly data-driven organizations with large IT departments, the larger the department gets, the less efficient it becomes.  In short, diseconomies of scale creep in, slowing down processes and driving up costs. The article went on to identify the root cause as a high degree of manual IT processes, which don't scale well. The question I will address in this article is what we can do to tackle the problem, and what it is worth. (more…)


Managing Risk and Compliance in Financial Services with Informatica Vibe!

Last week at Informatica World 2013, Informatica introduced Vibe, the industry's first and only embeddable virtual data machine (VDM), designed to embed data management into the next generation of applications for the integrated information age. This unique capability gives banks and insurance companies technology to scale and improve their data management, integration, and governance processes so they can manage risk and ensure ongoing compliance with a host of industry regulations, from Basel III and Dodd-Frank to Solvency II. Why is Vibe unique, and how does it help with risk management and regulatory compliance?

The data required for risk and compliance originates from tens if not hundreds of systems across all lines of business, including loan origination, loan servicing, credit card processing, deposit servicing, securities trading, brokerage, call center, online banking, and more, not to mention external data providers for market, pricing, positions, and corporate actions information.  The volumes are greater than ever, the systems range from legacy mainframe trading systems to mobile banking applications, the formats vary from structured to semi-structured to unstructured, and a wide range of data standards must be dealt with, including MISMO®, FpML®, FIX®, ACORD®, and SWIFT, to name a few.  Take all that into consideration and the data administration, management, governance, and integration work required is massive, multifaceted, and fraught with risk and hidden costs, often caused by custom-coded processes or the use of standalone tools.

The Informatica Platform and Vibe help by allowing our customers to take advantage of ever-evolving data technologies and innovations without having to recode, and to develop a lean data management process that turns unique works of art into reusable artifacts across the information supply chain. In other words, Vibe powers the unique "Map Once. Deploy Anywhere." capabilities of the Informatica Platform, accelerating project delivery by 5x, making the entire data lifecycle easier to manage, and eliminating the risks, costs, and short-lived value associated with hand coding or using standalone tools to do this work.  Here are some examples of Vibe for risk and compliance:

  • Build data quality rules to standardize address information, remove or consolidate duplicates, and translate or standardize reference data and other critical information used to calculate risk, within your ETL process or as a "Data Quality Validation" service in upstream systems.
  • Build rules to standardize wire transfer data to the latest SWIFT formats within your payment hubs, and leverage the same rules when facilitating payment transactions with your counterparties.
  • Build and execute complex parsing and transformation processes that leverage the power of Hadoop to handle large volumes of structured and unstructured data for analytics, and utilize the same rules in downstream credit, operational, and market risk data warehouses.
  • Define standard data masking rules once, and leverage them both when using data with sensitive information for testing and development and when enforcing data access rights for ongoing data privacy compliance.
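Informatica expresses such rules declaratively in the platform rather than in hand-written code, but the "define once, reuse everywhere" idea behind the masking example can be sketched in a few lines. Everything below (the rule and the two usage contexts) is a hypothetical illustration, not an Informatica API, and it assumes Java 11+ for String.repeat:

```java
import java.util.function.UnaryOperator;

/** Hypothetical illustration of a single masking rule reused in two contexts -- not an Informatica API. */
public class MaskingRuleSketch {

    // Define the rule once: keep the last four digits of an account number, mask the rest.
    static final UnaryOperator<String> MASK_ACCOUNT =
            value -> value == null || value.length() <= 4
                    ? "****"
                    : "*".repeat(value.length() - 4) + value.substring(value.length() - 4);

    public static void main(String[] args) {
        // Reuse 1: provisioning de-identified test data for development.
        String testRecord = MASK_ACCOUNT.apply("4929871234567890");

        // Reuse 2: enforcing access rights at read time for a user without clearance.
        String displayed = MASK_ACCOUNT.apply("4929871234567890");

        System.out.println(testRecord + " / " + displayed);
    }
}
```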

 The “Map Once. Deploy Anywhere.” capabilities inherent to Vibe drive:

  • Faster adoption of new technologies and data – Banks and insurance companies can take rapid advantage of new data and technologies without having to know the details of the underlying platform, or having to hire highly specialized and costly programming resources. 
  • Reduced complexity through insulation from change – When data type, volume, source, platform or users change, financial institutions can simply redeploy their existing data integration instructions without re-specification, redesign or redevelopment on a new integration technology – like Hadoop.

Vibe is NOT a new product offering. It is a capability that Informatica supports through our existing platform, comprising our Data Integration, Data Quality, Master Data Management, and Informatica Lifecycle Management products.  Whether it is Dodd-Frank, Basel III, FATCA, or Solvency II, with Vibe, banks and insurance companies can ensure they have the right data and increase their potential to improve how they measure risk and ensure regulatory compliance. Visit the Banking/Capital Markets and Insurance industry solutions section of our website for more information on how we help today's global financial services industry.


Sub-100 Nanosecond Pub/Sub Messaging: What Does It Matter?

Our announcement last week was an exciting milestone for those of us who started at 29West supporting the early high-frequency traders from 2004 to 2006. Last week, we announced the next step in a 10-year effort that has now seen us set the bar for low-latency messaging lower by six orders of magnitude, in Version 6.1 of Informatica Ultra Messaging with Shared Memory Acceleration (SMX). The really cool thing is that we have helped early customers like Intercontinental Exchange and Credit Suisse take advantage of the reduction from 2.5 million nanoseconds (ns) of latency down to as low as 37 ns on commodity hardware and networks, without having to switch products or do major rewrites of their code.

But as I said in the title, what does it matter? Does being able to send messages to multiple receivers within a single box trading system or order matching engine in 90 ns as opposed to one microsecond really make a difference?

Well, according to a recent article by Scott Appleby on TabbFORUM, "The Death of Alpha on Wall Street,"* the only way for investment banks to find alpha, or excess returns, is "to find valuation correlations among markets to extract microstructure alpha."  He states, "Getco, Tradebot and Renaissance use technology to find valuation correlations among markets to extract microstructure alpha; this still works, but requires significant capital."  The extra hundreds of nanoseconds that SMX frees up allow a company to make its matching algorithms or order routers that much smarter, by doing dozens of additional complex calculations before the computer makes a decision. Furthermore, by letting the messaging layer take over the integration of software components that may be less critical to producing alpha (but very important for operational risk control, such as guaranteeing that messages can be captured off the single-box trading system for compliance and disaster recovery), busy software developers can focus on changes in the microstructure of the markets.

The key SMX innovation is another "less is more" style engineering feat from our team. Basically, SMX eliminates any copying of messages from the message delivery path. And if the processes in your trading system happen to be running within the same CPU, on the same or different cores, messages are effectively being passed within the memory cache of the core or CPU.  The other reason this matters is that the product uniquely (as far as I know) allows zero-copy shared memory communication between Java, C, and Microsoft .NET applications, so developers can fully leverage the best features and the knowledge of their teams to deploy complex high-performance applications. For example, this allows third-party feed handlers built in C to communicate at extremely low latencies with algo engines written in Java.
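SMX itself is proprietary, but the underlying principle, namely two processes sharing the same memory so that "sending" is just a write plus a readiness flag with no copy on the delivery path, can be illustrated with a plain memory-mapped file. This is a sketch of the principle only, not the Ultra Messaging API, and it collapses publisher and subscriber into one process for brevity:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

/** Illustration of zero-copy IPC via a shared mapping -- not the Ultra Messaging SMX API. */
public class SharedMemorySketch {
    private static final int FLAG_OFFSET = 0;     // 0 = empty, 1 = message ready
    private static final int LENGTH_OFFSET = 4;
    private static final int PAYLOAD_OFFSET = 8;

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("/tmp/smx-sketch", "rw");
             FileChannel channel = file.getChannel()) {
            MappedByteBuffer shm = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            // "Publisher": write the payload directly into shared memory, then publish the flag.
            byte[] payload = "order:42 px:101.25".getBytes();
            shm.position(PAYLOAD_OFFSET);
            shm.put(payload);
            shm.putInt(LENGTH_OFFSET, payload.length);
            shm.putInt(FLAG_OFFSET, 1);           // a real transport would use an ordered/volatile store here

            // "Subscriber" (normally another process mapping the same file): read the bytes in place.
            if (shm.getInt(FLAG_OFFSET) == 1) {
                int len = shm.getInt(LENGTH_OFFSET);
                byte[] view = new byte[len];
                shm.position(PAYLOAD_OFFSET);
                shm.get(view);                    // the only copy made here is for printing
                System.out.println("received: " + new String(view));
            }
        }
    }
}
```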

So congrats to the UM development team for achieving this important milestone, and thanks to our customers for continuing to push us to provide you with that "lagniappe" of extra time that can make all the difference in the success of your trading strategies and your businesses.

 

*- http://tabbforum.com/opinions/the-death-of-alpha-on-wall-street?utm_source=TabbFORUM+Alerts&utm_campaign=1c01537e42-UA-12160392-1&utm_medium=email&utm_term=0_29f4b8f8f1-1c01537e42-270859141
