Tag Archives: Data Quality

Time to Celebrate! Informatica is Once Again Positioned as a Leader in Gartner’s Magic Quadrant for Data Quality Tools!

It’s holiday season once again at Informatica, and this one feels particularly special because we just received an early present from Gartner: Informatica has been positioned as a leader in Gartner’s 2014 Magic Quadrant for Data Quality Tools report! Click here to download the full report.

Gartner’s Magic Quadrant for Data Quality Tools, 2014

And as it turns out, this is a gift that keeps on giving. For eight years in a row, Informatica has been ranked as a leader in Gartner’s Magic Quadrant for Data Quality Tools. In fact, for the past two years running, Informatica has been positioned highest and best for ability to execute and completeness of vision, the two dimensions Gartner measures in its report. These results once again validate our operational excellence as well as our prescience with our data quality product offerings. Yes folks, some days it’s hard to be humble.

Consistency and leadership are becoming hallmarks for Informatica in these and other analyst reports, and it’s hardly an accident. Those milestones are the result of our deep understanding of the market, continued innovation in product design, seamless execution on sales and marketing, and relentless dedication to customer success. Our customer loyalty has never been stronger with those essential elements in place. However, while celebrating our achievements, we are equally excited about the success our customers have achieved using our data quality products.

Managing and producing quality data is indispensable in today’s data-centric world. Gaining access to clean, trusted information should be one of a company’s most important tasks, and has previously been shown to be directly linked to growth and continued innovation.

We are truly living in a digital world – a world revolving around the Internet, gadgets and apps – all of which generate data, and lots of it.  Should your organization take advantage of its increasing masses of data? You bet. But remember: only clean, trusted data has real value.  Informatica’s mission is to help you excel by turning your data into valuable information assets that you can put to good use.

To see for yourself what the industry-leading data quality tool can do, click here.

And from all of our team at Informatica, Happy holidays to you and yours.

Happy Holidays!


Take These Steps to Avoid Wasting Your Marketing Technology Budget

Don’t Waste Your Marketing Tech Budget

This year, the irresistible pull of digital marketing met an unstoppable force: Girl Scout cookies. It’s an $800 million-a-year fundraiser that is only expected to increase with a newly announced addition of digital sales.

The New York Times reports that beginning this month and continuing into January, for the first time, the Girl Scouts of America will be able to sell Thin Mints and other favorites online through invite-only websites. The websites will be accompanied by a mobile app, giving customers new digital options.

As the Girl Scouts update from a door-to-door approach to include a newly introduced digital program, it’s just one more sign of where marketing trends are heading.

From digital cookies to digital marketing technology:

If 2014 is the year of the digital cookie, then 2015 will be the year of marketing technology. Here are just a few of the strongest indicators:

  • A study found that 67% of marketing departments plan to increase spending on technology over the next two years, according to the Harvard Business Review.
  • Gartner predicts that by 2017, CMOs will outspend CIOs on IT-related expenses.
  • Also by 2017, one-third of the total marketing budget will be dedicated to digital marketing, according to survey results from Teradata.
  • A new LinkedIn/Salesforce survey found that 56% of marketers see their relationships with the CIO as very important or critical.
  • Social media is a mainstream channel for marketers, making technology for measuring and managing this channel of paramount importance. This is not just true of B2C companies. Of high level executive B2B buyers, 75% used social media to make purchasing decisions, according to a 2014 survey by market research firm IDC.

From social to analytics to email marketing, much of what marketers see in technology offerings is often labeled as “cloud-based.” While cloud technology has many features and benefits, what are we really saying when we talk about the cloud?

What the cloud means… to marketers.

Beginning around 2012, multitudes of businesses in many industries began adopting “the cloud” as a feature or a benefit of their products or services. Whether or not the business truly was cloud-based was not always clear, which led to the term “cloudwashing.” We hear so much about the cloud that it is easy to overlook what it really means and what the benefits really are.

The cloud is more than a buzzword – and in particular, marketers need to know what it truly means to them.

For marketers, “the cloud” has many benefits. A service that is cloud-based gives you amazing flexibility and choices over the way you use a product or service:

  • A cloud-enabled product or service can be integrated into your existing systems. For marketers, this can range from integration into websites, marketing automation systems, CRMs, point-of-sale platforms, and any other business application.
  • You don’t have to learn a new system, the way you might when adopting a new application, software package, or other enterprise system. You won’t have to set aside a lot of time and effort for new training for you or your staff.
  • Due to the flexibility that lets you integrate anywhere, you can deploy a cloud-based product or service across all of your organization’s applications or processes, increasing efficiencies and ensuring that all of your employees have access to the same technology tools at the same time.
  • There’s no need to worry about ongoing system updates, as those happen automatically behind the scenes.

In 2015, marketers should embrace the convenience of cloud-based services, as they help put the focus on benefits instead of spending time managing the technology.

Are you using data quality in the cloud?

If you are planning to move data out of an on-premise application or software to a cloud-based service, you can take advantage of this ideal time to ensure these data quality best practices are in place.

Verify and cleanse your data first, before it is moved to the cloud. Since it’s likely that your move to the cloud will make this data available across your organization — within marketing, sales, customer service, and other departments — applying data quality best practices first will increase operational efficiency and bring down costs from invalid or unusable data.

There may be more to add to this list, depending on the nature of your own business. Make sure that:

  • Postal addresses are valid, accurate, current and complete
  • Email addresses are valid
  • Telephone numbers are valid, accurate, and current
  • All data fields are consistent and every individual data element is clearly defined, which increases the effectiveness of future data analysis
  • Missing data is filled in
  • Duplicate contact and customer records are removed

Once you have cleansed and verified your existing data and moved it to the cloud, use a real-time verification and cleansing solution at the point of entry or point of collection to ensure good data quality across your organization on an ongoing basis.
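To make the checklist above concrete, here is a minimal sketch of what a batch cleansing pass might look like before a migration. It is purely illustrative: the field names, the simple patterns, and the cleanse_contacts helper are assumptions, not part of any particular product; a real verification solution relies on postal reference data, mailbox and domain checks, and telephony libraries rather than regular expressions alone.

```python
import re

# Illustrative patterns only; real verification uses reference data, not regexes alone.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PHONE_RE = re.compile(r"^\+?\d{7,15}$")


def cleanse_contacts(records):
    """Standardize, validate, and de-duplicate a list of contact dicts."""
    seen = set()
    clean, rejected = [], []
    for rec in records:
        email = rec.get("email", "").strip().lower()
        phone = re.sub(r"[^\d+]", "", rec.get("phone", ""))
        name = " ".join(rec.get("name", "").split()).title()

        problems = []
        if not EMAIL_RE.match(email):
            problems.append("invalid email")
        if not PHONE_RE.match(phone):
            problems.append("invalid phone")
        if not rec.get("postal_address"):
            problems.append("missing postal address")

        key = (email, name)  # naive duplicate key, good enough for a sketch
        if key in seen:
            problems.append("duplicate record")
        seen.add(key)

        cleaned = {**rec, "email": email, "phone": phone, "name": name,
                   "problems": problems}
        (rejected if problems else clean).append(cleaned)
    return clean, rejected
```

A pass like this produces two piles: records ready to load into the cloud service, and records routed back for correction before they can pollute downstream systems.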

The biggest roadblock to effective marketing technology is: Bad data.

Budgeting for marketing technology is going to become a bigger and bigger piece of the pie (or cookie, if you prefer) for B2C and B2B organizations alike. The first step all marketers need to take to make sure those investments fully pay off and don’t go to waste is ensuring great customer data.

Marketing technology is fueled by data. A recent Harvard Business Review article listed some of the most important marketing technologies. They included tools for analytics, conversion, email, search engine marketing, remarketing, mobile, and marketing automation.

What do they all have in common? These tools all drive customer communication, engagement, and relationships, all of which require valid and actionable customer data to work at all.

You can’t plan your marketing strategy based on data that tells you the wrong things about who your customers are, how they prefer to be contacted, and what messages work the best. Make data quality a major part of your 2015 marketing technology planning to get the most from your investment.

Marketing technology is going to be big in 2015 — where do you start?

With all of this in mind, how can marketers prepare for their technology needs in 2015? Get started with this free virtual conference from MarketingProfs that is totally focused on marketing technology.

This great event includes a keynote from Teradata’s CMO, Lisa Arthur, on “Using Data to Build Strong Marketing Strategies.” Register here for the December 12 Marketing Technology Virtual Conference from MarketingProfs.

Even if you can’t make it live that day at the virtual conference, it’s still smart to sign up so you receive on-demand recordings from the sessions when the event ends. Register now!


When Data Integration Saves Lives


In an article published in Health Informatics, author Gabriel Perna argues that data integration could save lives as we learn more about illnesses and their causal relationships.

According to the article, in Hamilton County, Ohio, it’s not unusual to see kids from the same neighborhoods coming to the hospital for asthma attacks.  Thus, researchers wanted to know whether it was fact or mistaken perception that an unusually high number of children in the same neighborhood were experiencing asthma attacks.  The next step was to review existing data to determine the extent of the issue, and perhaps how to solve the problem altogether.

“The researchers studied 4,355 children between the ages of 1 and 16 who visited the emergency department or were hospitalized for asthma at Cincinnati Children’s between January 2009 and December 2012. They tracked those kids for 12 months to see if they returned to the ED or were readmitted for asthma.”

Not only were the researchers able to determine a sound correlation between the two data sets, but they were also able to advance the research to predict which kids were at high risk based upon where they live.  Thus, some of the causes and effects have been determined.

This came about when researchers began thinking outside the box about traditional and non-traditional medical data.  In this case, they integrated housing and census data with data from the diagnosis and treatment of the patients.  These are data sets unlikely to find their way to each other, but together they have a meaning that is much more valuable than if they just stayed in their respective silos.

“Non-traditional medical data integration has begun to take place in some medical collaborative environments already. The New York-Presbyterian Regional Health Collaborative created a medical village, which ‘goes beyond the established patient-centered medical home mode.’ It not only connects an academic medical center with a large ambulatory network, medical homes, and other providers with each other, but community resources such as school-based clinics and specialty-care centers (the ones that are a part of NYP’s network).”

The fact of the matter is that data is the key to understanding what the heck is going on when clusters of sick people begin to emerge.  While researchers and doctors can treat the individual patients, there is not a good understanding of the larger issues that may be at play; in this case, poor air quality in poor neighborhoods.  Once that is understood, they know what problem needs to be corrected.

The universal sharing of data is really the larger solution here, but one that won’t happen without a common understanding of its value, and without funding.  As we pass laws around the administration of health care, as well as how data is to be handled, perhaps it’s time we look at what the data actually means.  This requires a massive deployment of data integration technology, and a fundamental push to share data with a central data repository, as well as with health care providers.


Homeland = Your Average Life Insurance Company?

Or in other words: Did the agency model kill data quality?  When you watch the TV series “Homeland”, you quickly realize the interdependence between field operatives and the command center.  This is a classic agency model.  One arm gathers, filters and synthesizes information and prepares a plan but the guys on the ground use this intel to guide their sometimes ad hoc next moves.

Over the last few months I have worked a lot – and I mean A LOT – with a variety of mid-sized life insurers (<$1B annual revenue) on fixing their legacy-inherited data quality problems.  Their IT departments, functioning like Operations Command Centers (intel acquisition, filtering and synthesis), were inundated with requests to fix up and serve a coherent, true, enriched central view of a participant (the target) and all his policies and related entities, from and to all relevant business lines (planning), so each could achieve its respective mission (service, retain, upsell, mitigate risk): employee benefits, broker/dealer, retirement services, etc.

The captive and often independent agents (execution), however, often run into an operation (sales cycle) with little useful data, as the Ops Center is short on timely and complete information.  Imagine Carrie directing her strike team to just wing it based on their experience and dated intel from a raid a few months ago, without real-time drone video feeds.  Would she be saying, “Guys, it’s always been your neck, you’ll figure it out”?  I think not.


This became apparent when talking to actuaries, claims operations, marketing, sales, agency operations, audit, finance, strategic planning, underwriting and customer service; common denominators appeared quickly:

  • Every insurer saw the need to become customer-centric instead of policy-centric. That’s the good news.
  • Every insurer knew their data was majorly sub-standard in terms of quality and richness.
  • Every insurer agreed that they are not using existing data capture tools (web portals for agents and policy holders, CRM applications, policy and claims mgmt systems) to their true potential.
  • New books-of-business were generally managed as separate entities from a commercial as well as IT systems perspective, even if they were not net-new products, like trust products.  Cross-selling as such, even if desired, becomes a major infrastructure hurdle.
  • As in every industry, the knee-jerk reaction was to throw the IT folks at data quality problems and make it a back-office function.  A Pyrrhic victory.
  • Upsell scenarios, if at all strategically targeted, are squarely in the hands of the independent agents.  The insurer will, at most, support customer insights around lapse avoidance or 401k roll-off indicators for retiring or terminated plan participants.  This may be derived from a plan sponsor (employer) census file, which may have incorrect address information.
  • Prospect and participant e-mail addresses are either not captured (process enforcement or system capability) or not validated (domain, e-mail verification), so the vast majority of customer correspondence, like scenarios, statements, privacy notices and policy changes, travels via snail mail (and typically per policy). Overall, only 15-50% of contacts have an (unverified) e-mail address on file today, and of these, less than a third have subscribed to exclusive electronic statement delivery.
  • Postal address information is still not 99% correct, complete or current, resulting in high returned mail ($120,000-$750,000 every quarter), priority mail upgrades, statement reprints, manual change capture and shredding cost as well as the occasional USPS fine.
  • Data quality, as unbelievable as it sounds, took a back-burner to implementing a new customer data warehouse, a new claims system, a new risk data mart, etc.  They all just get filled with the same old, bad data as business users were – and I quote –“used to the quality problem already”.
  • Premium underpricing runs at 2-4% annually, foregoing millions in additional revenue, due to lack of a full risk profile.
  • Customer cost-of-acquisition (CAC) is unknown or incorrect, as there is no direct, realistic tracking of agency campaign/education dollars spent against new policies written.
  • Agency historic production and projections are unclear as dynamic enforcement of hierarchies is not available, resulting in orphaned policies generating excess tax burdens.  Often this is the case when agents move to states where they are not licensed, pass away or retire.

What does a cutting-edge insurer look like instead?  Ask Carrie Mathison and Saul Berenson.  They already have a risk and customer EDW as well as a modern (cloud based?) CRM and claims mgmt system.  They have considered, as part of their original implementation or upgrade, the capabilities required to fix the initial seed data in their analytics platforms.  Now, they are looking into pushing that cleansed data back into operational systems like CRM and avoiding bad source system entries from the get-go.

They are also beyond just using data to avoid throwing more bodies in every department at “flavor-of-the-month” clean-up projects, e.g. yet another state unclaimed-property matching exercise, or a review of total annual premium revenue written in state X for the state tax authority.

So what causes this drastic segmentation of leading versus laggard life insurers?  In my humble opinion, it is the lack of a strategic refocusing on what the insurer can do for an agent by touching the prospects and customers directly.  Direct interaction (even limited) improves branding, shortens the sales cycle and improves rates, based on better insights from better data quality.

Agents (and insurers) need to understand that the wealth of data (demographic, interactions, transactions) the corporation already possesses, both native and inherited via M&A, can be a powerful competitive differentiator.  Imagine if they start tapping into external sources beyond the standard credit bureaus and consumer databases; dare I say social media?

Competing based on immediate instead of long-term needs (in insurance: lifetime earnings-potential replacement), price (fees) and commission cannot be the sole answer.


A New Dimension on a Data-Fueled World


A Data-Fueled World is Informatica’s new view on data in the enterprise.  I think we can all agree that technology innovation has changed how we live and view everyday life.  But I want to speak about a new aspect of the data-fueled world, one that is evident now and will be shockingly present in the few years to come.  I want to address the topic of “information workers”.

Information workers deal with information, or in other words, data.  They use that data to do their jobs.  They make decisions in business with that data.  They impact the lives of their clients.

Many years ago, I was part of a formative working group researching information worker productivity.  The idea was to create an index like Labor Productivity indexes.  It was to be aimed at information worker productivity.  By this I mean the analysts, accountants, actuaries, underwriters and statisticians.  These are business information workers.  How productive are they?  How do you measure their output?  How do you calculate an economic cost of more or less productive employees?  How do you quantify the “soft” costs of passing work on to information workers?  The effort stalled in academia, but I learned a few key things.  These points underline the nature of an information worker and impacts to their productivity.

  1. Information workers need data…and lots of it
  2. Information workers use applications to view and manipulate data to get the job done
  3. Degradation, latency or poor ease of use in either of items 1 and 2 has a direct impact on productivity
  4. Items 1 and 2 have a direct correlation to training cost, output and (wait for it) employee health and retention

It’s time to make a super bold statement.  It’s time to maximize your investment in DATA. And past time to de-emphasize investments in applications!  Stated another way, applications come and go, but data lives forever.

My five-year old son is addicted to his iPad.  He’s had one since he was one-year old.  At about the age of three he had pretty much left off playing Angry Birds.  He started reading Wikipedia.  He started downloading apps from the App Store.  He wanted to learn about string theory, astrophysics and plate tectonics.  Now, he scares me a little with his knowledge.  I call him my little Sheldon Cooper.  The apps that he uses for research are so cool.  The way that they present the data, the speed and depth are amazing.  As soon as he’s mastered one, he’s on to the next one.  It won’t be long before he’s going to want to program his own apps.  When that day comes, I’ll do whatever it takes to make him successful.

And he’s not alone.  The world of the “selfie-generation” is one of rapid speed.  It is one of application proliferation and flat out application “coolness”.  High school students are learning iOS programming.  They are using cloud infrastructure to play games and run experiments.  Anyone under the age of 27 has been raised in a mélange of amazing data-fueled computing and mobility.

This is your new workforce.  And on the first day of their new career at an insurance company or large bank, they are handed an aging, recycled workstation.  An old operating system follows, along with mainframe terminal sessions.  Then come rich-client and web apps circa 2002.  And lastly (heaven forbid) a BlackBerry.  Now do you wonder if that employee will feel empowered and productive?  I’ll tell you now, they won’t.  All that passion they have for viewing and interacting with information will disappear.  It will not be enabled in their new work day.  An outright information worker revolution would not surprise me.

And that is exactly why I say that it’s time to focus on data and not on applications.  Because data lives on as applications come and go.  I am going to coin a new phrase.  I call this the Empowered Selfie Formula.  The Empowered Selfie Formula is a way in which the focus on data liberates information workers.  They become free to be more productive in today’s technology ecosystem.

Enable a BYO* Culture

Many organizations have been experimenting with Bring Your Own Device (BYOD) programs.  Corporate stipends that allow employees to buy the computing hardware of their choice.  But let’s take that one step further.  How about a Bring Your Own Application program?  How about a Bring Your Own Codebase program?  The idea is not so far-fetched.  There are so many great applications for working with information.  Today’s generation is learning about coding applications at a rapid pace.  They are keen to implement their own processes and tools to “get the job done”.  It’s time to embrace that change.  Allow your information workers to be productive with their chosen devices and applications.

Empower Social Sharing

Your information workers are now empowered with their own flavors of device and application productivity.  Let them share it.  The ability to share success, great insights and great apps is ingrained in the mindset of today’s technology users.  Companies like Tableau have become successful based on the democratization of business intelligence.  Through enabling social sharing, users can celebrate their successes and cool apps with colleagues.  This raises the overall level of productivity as a grassroots movement.  Communities of best practices begin to emerge, creating innovation where it was not previously seen.

Measure Productivity

As an organization it is important to measure success.  Find ways to capture key metrics in productivity of this new world of data-fueled information work.  Each information worker will typically be able to track trends in their output.  When they show improvement, celebrate that success.

Invest in “Cool”

With a new BYO* culture, make the investments in cool new things.  Allow users to spend a few dollars here and there for training online or in person.  There they can learn new things that will make them more productive.  It will also help with employee retention.  With a small investment, a larger ROI can be realized in employee health and productivity.

Foster Healthy Competition

Throughout history, civilizations that fostered healthy competition have innovated faster.  The enterprise can foster healthy competition on metrics.  Other competition can be focused on new ways to look at information, valuable insights, and homegrown applications.  It isn’t about a “best one wins” competition.  It is a continuing round of innovation winners with lessons learned and continued growth.  These can also be centered on the social sharing and community aspects.  In the end it leads to a more productive team of information workers.

Revitalize Your Veterans

Naturally those information workers who are a little “longer in the tooth” may feel threatened.  But this doesn’t need to be the case.  Find ways to integrate them into the new culture.  Do this through peer training, knowledge transfer, and the data items listed below.  In the best of cases, they too will crave this new era of innovation.  They will bring a lot of value to the ecosystem.

There is a catch.  In order to realize success in the formula above, you need to overinvest in data and data infrastructure.  Perhaps that means doing things with data that only received lip service in the past.  It is imperative to create a competency or center of excellence for all things data.  Trusting your data centers of excellence activates your Empowered Selfie Formula.

Data Governance

You are going to have users using and building new apps and processing data and information in new and developing ways.  This means you need to trust your data, and your data governance becomes more important.  Everything from metadata, data definitions, standards, policies and glossaries needs to be developed, so that the data being looked at can be trusted.  Chief Data Officers should put into place a data governance competency center.  All data feeding into and coming from new applications should be inspected regularly for adherence to corporate standards.  Remember, it’s not about the application.  It’s about what feeds any application and what data is generated.

Data Quality

Very much a part of data governance is the quality of data in the organization, which must also adhere to corporate standards.  These standards should dictate cleanliness, completeness, fuzzy logic and standardization.  Nothing frustrates an information worker more than building the coolest app only to have it do nothing because of poor quality data.
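As one hedged illustration of the “fuzzy logic” piece, duplicate records rarely match character for character, so a similarity score is used to catch near-misses that strict equality would ignore. The record fields and the 0.85 threshold below are arbitrary choices for the sketch, not recommendations from this post.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Normalized string similarity between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


def likely_duplicates(records, threshold=0.85):
    """Flag pairs of customer records whose name and address look alike."""
    flagged = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            a, b = records[i], records[j]
            score = similarity(a["name"] + " " + a["address"],
                               b["name"] + " " + b["address"])
            if score >= threshold:
                flagged.append((a["id"], b["id"], round(score, 2)))
    return flagged
```

In practice, flagged pairs would typically go to a data steward for review rather than being merged automatically.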

Data Availability

Data needs to be in the right place at the right time.  Any enterprise data takes a journey from many places and to many places.  Movement of data that is governed and has met quality standards needs to happen quickly.  We are in a world of fast computing and massive storage.  There is no excuse for not having data readily available for a multitude of uses.

Data Security

And finally, make sure to secure your data.  Regardless of the application consuming your information, there may be people who shouldn’t see the data.  Access control, data masking and network security need to be in place so that each application, from Microsoft Excel to Informatica Springbok to Tableau to an iOS application developed in-house, only interacts with the information it should see.
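The following sketch shows the masking idea in miniature: before a record reaches a viewer who should not see full values, sensitive fields are replaced with non-identifying stand-ins. The roles, field names, and masking rules are invented for illustration; real deployments enforce this in the data layer and through access-control policy, not in ad hoc application code.

```python
def mask_record(record, role):
    """Return a copy of a customer record masked according to the viewer's role."""
    masked = dict(record)
    if role != "underwriter":
        # Keep only the last four digits of the account number.
        masked["account_no"] = "*" * 8 + record["account_no"][-4:]
    if role not in ("underwriter", "claims"):
        # Hide the date of birth entirely and blur the email local part.
        masked["date_of_birth"] = None
        local, _, domain = record["email"].partition("@")
        masked["email"] = local[:1] + "***@" + domain
    return masked
```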

The changing role of the IT group will follow close behind.  IT will essentially become the data-fueled enabler, using the principles above.  IT will provide the infrastructure necessary to enable the Empowered Selfie Formula.  IT will no longer be in the application business, aside from a few core corporate applications as a necessary evil.

Once you achieve competency in the items above, you no longer need to worry about the success of the Empowered Selfie Formula.  What you will have is a truly data-fueled enterprise.  There will be a new class of information workers enabled by a data-fueled competency.  Informatica is thrilled to be an integral part of the role that data can play in your journey.  We are energized to see the pervasive use of data by increasing numbers of information workers.  They are creating new and better ways to do business.  Come and join a data-fueled world with Informatica.


Gaining a Data-First Perspective with Salesforce Wave

Data-First with Salesforce Wave

Salesforce.com made waves (pardon the pun) at last month’s Dreamforce conference when it unveiled the Salesforce Wave Analytics Cloud. You know Big Data has reached prime-time when Salesforce, which has a history of knowing when to enter new markets, decides to release a major analytics service.

Why now? Because companies need help making sense of the data deluge, Salesforce’s CEO Marc Benioff said at Dreamforce: “Did you know 90% of the world’s data was created in the last two years? There’s going to be 10 times more mobile data by 2020, 19 times more unstructured data, and 50 times more product data by 2020.” Average business users want to understand what that data is telling them, he said. Given Salesforce’s marketing expertise, this could be the spark that gets mainstream businesses to adopt the Data-First perspective I’ve been talking about.

As I’ve said before, a Data First POV shines a light on important interactions so that everyone inside a company can see and understand what matters. As a trained process engineer, I can tell you, though, that good decisions depend on great data — and great data doesn’t just happen: At the most basic level, you have to clean it, relate it, connect and secure it  — so that information from, say, SAP, can be viewed in the same context as data from Salesforce. Informatica obviously plays a role in this. If you want to find out more, click on this link to download our Salesforce Integration for Dummies brochure.

But that’s the basics for getting started. The bigger issue — and the one so many people seem to have trouble with — is deciding which metrics to explore. Say, for example, that the sales team keeps complaining about your marketing leads. Chances are, it’s a familiar complaint. How do you discover what’s really the problem?

One obvious place to start is to look at the conversion rates for every sales rep and group. Next, explore the marketing leads they do accept, by attributes such as deal size, product type or customer category. Now take it deeper. Examine which sales reps like to hunt for new customers and which prefer to mine their current base. That will tell you if you’re sending opportunities to the right profiles.

The key is never looking at the sales organization as a whole. If it’s EMEA, for instance, have a look to see how France is doing selling to emerging markets vs. the team in Germany. These metrics are digital trails of human behavior. Data First allows you to explore that behavior and either optimize it or change it.

But for this exploration to pay off, you actually have to do some of the work. You can’t just job it out to an analyst. This exercise doesn’t become meaningful until you are mentally engaged in the process. And that’s how it should be: If you are a Data First company, you have to be a Data First leader.


Analytics Stories: A Banking Case Study

As I have shared in other posts within this series, businesses are using analytics to improve their internal and external facing business processes and to strengthen their “right to win” within the markets in which they operate. In banking, the right to win increasingly comes from improving two core sets of business capabilities: risk management and customer service.

Significant change has occurred in risk management over the last few years following the subprime crisis and the subsequent credit crunch. These environmental changes have put increased regulatory pressure on banks around the world. Among other things, banks need to comply with measures aimed at limiting the overvaluation of real estate assets and at preventing money laundering. A key element of handling these is ensuring that go-forward business decisions are made consistently using the most accurate business data available. It seems clear that data consistency can determine the quality of business operations, especially business risk.

At the same time as banks need to strengthen their business capabilities around operations, and in particular risk management, they also need to use better data to improve the loyalty of their existing customer base.

Banco Popular launches itself into the banking vanguard

Banco Popular is an early responder to the need for better banking data consistency. Its leadership created a Quality of Information Office (uniquely, the Office is based not within IT but within the Office of the President) with the mandate of delivering on two business objectives:

  1. Ensuring compliance with governmental regulations
  2. Improving customer satisfaction based on accurate and up-to-date information

Part of the second objective is aimed at ensuring that each of Banco Popular’s customers is offered the ideal products for their specific circumstances. This is interesting because by its nature it assists in obtainment of the first objective. To validate that it achieves both mandates, the Office started by creating an “Information Quality Index”. The Index is created using many different types of data relating to each of the bank’s six million customers, including addresses, contact details, socioeconomic data, occupation data, and banking activity data. The index is expressed in percentage terms, reflecting the quality of the information collected for each individual customer. The overarching target set for the organization is a score of 90 percent; presently, the figure sits at 75 percent. There is room to grow and improve!
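The article does not describe how Banco Popular actually computes its index, but a simple weighted-completeness score conveys the general idea: each attribute contributes to a per-customer percentage according to its importance. The field names and weights below are assumptions made for illustration only.

```python
# Hypothetical field weights; Banco Popular's real variables and weights are not public.
FIELD_WEIGHTS = {
    "postal_address": 3,      # essential
    "phone": 3,               # essential
    "email": 2,               # required
    "occupation": 2,          # required
    "socioeconomic_band": 1,  # desirable
}


def information_quality_index(customer: dict) -> float:
    """Percentage of weighted fields that are present and non-empty for one customer."""
    total = sum(FIELD_WEIGHTS.values())
    earned = sum(w for field, w in FIELD_WEIGHTS.items() if customer.get(field))
    return round(100.0 * earned / total, 1)


def portfolio_index(customers) -> float:
    """Average index across the book of customers, comparable to the cited 75 percent."""
    scores = [information_quality_index(c) for c in customers]
    return round(sum(scores) / len(scores), 1) if scores else 0.0
```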

Current data management systems limit obtainment of its business goals

Unfortunately, the millions of records needed by the Quality of Information Office are spread across different tables in the organization’s central computing system and must be combined into one information file for each customer to be useful to business users. The problem was that the bank had depended on third parties to manually pull and clean up this data, an approach that, given the above mandates, proved too slow to execute in a timely fashion. This, in turn, impacted the quality of its business capabilities for risk and customer service. The old approach did not allow the bank to create the index and other analyses “with the frequency that we wanted and examining the variables of interest to us,” explains Federico Solana, an analyst at the Banco Popular Quality of Information Office.

Creating the Quality Index was just too time consuming and costly. But not improving data delivery performance had a direct impact on decision making.

Automation proves key to better business processes

To speed up delivery of its Quality Index, Banco Popular determined it needed to automate the creation of great data, data which is trustworthy and timely. According to Tom Davenport, “you can’t be analytical without data and you can’t be really good at analytics without really good data”. (Analytics at Work, 2010, Harvard Business Press, page 23). Banco Popular felt that automating the tasks of analyzing and comparing variables would increase the value of data at lower cost while ensuring a faster return on data.

In addition to fixing the Quality Index, Banco Popular needed to improve its business capabilities around risk and customer service automation. This aimed at improving the analysis of mortgages while reducing the cost of data, accelerating the return on data, and boosting business and IT productivity.

Everything, however, needed to start with the Quality Index. After the Quality Index was created for individuals, Banco Popular created a Quality of Information Index for Legal Entities and is planning to extend the return on data by creating indexes for Products and Activities. For the Quality Index related to legal entities, the bank included variables that aimed at preventing the consumption of capital as well as other variables used to calculate the probability of underpayments and Basel models. Variables are classified as essential, required, and desirable. This evaluation of data quality allows for the subsequent definition of new policies and initiatives for transactions, the network of branches, and internal processes, among other aspects. In addition, the bank is also working on the in-depth analysis of quality variables for improving its critical business processes including mortgages.

Some Parting Remarks

In the end, Banco Popular has shown the way forward for analytics. In banking, the measures of performance are often known; what is problematic is ensuring the consistency of decision making across branches and locations. By working first on data quality, Banco Popular ensured that its data quality measures are consistent, and it can now focus its attention on improving underlying business effectiveness and efficiency.

Related links

Related Blogs

Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform

Author Twitter: @MylesSuer

10 Insights From The Road To Data Governance


I routinely have the pleasure of working with Terri Mikol, Director of Data Governance, UPMC. Terri has been spearheading data governance for three years at UPMC. As a result, she has a wealth of insights to offer on this hot topic. Enjoy her top 10 lessons learned from UPMC’s data governance journey:

1. You already have data stewards.

Commonly, health systems think they can’t staff data governance the way UPMC has because of a lack of funding. In reality, people are already doing data governance everywhere, across your organization! You don’t have to secure headcount; you locate these people within the business, formalize data governance as part of their job, and provide them tools to improve and manage their efforts.

2. Multiple types of data stewards ensure all governance needs are being met.

Three types of data stewards were identified and tasked across the enterprise:

I. Data Steward. Create and maintain data/business definitions. Assist with defining data and mappings along with rule definition and data integrity improvement.

II. Application Steward. One steward is named per application sourcing enterprise analytics. Populate and maintain inventory, assist with data definition and prioritize data integrity issues.

III. Analytics Steward. Named for each team providing analytics. Populate and maintain inventory, reduce duplication and define rules and self-service guidelines.

3. Establish IT as an enabler.

IT, instead of taking action on data governance or being the data governor, has become an enabler of data governance by investing in and administering tools that support metadata definition and master data management.

4. Form a governance council.

UPMC formed a governance council of 29 executives—yes, that’s a big number but UPMC is a big organization. The council is clinically led. It is co-chaired by two CMIOs and includes Marketing, Strategic Planning, Finance, Human Resources, the Health Plan, and Research. The council signs off on and prioritizes policies. Decision-making must be provided from somewhere.

5. Avoid slowing progress with process.

In these still-early days, only 15 minutes of monthly council meetings are spent on policy and guidelines; discussion and direction take priority. For example, a recent agenda item was “Length of Stay.” The council agreed a single owner would coordinate across Finance, Quality and Care Management to define and document an enterprise definition for “Length of Stay.”

6. Use examples.

Struggling to get buy-in from the business about the importance of data governance? An example everyone can relate to is “Test Patient.” For years, in her business intelligence role, Terri worked with “Test Patient.” Investigation revealed that these fake patients end up in places they should not. There was no standard for creation or removal of test patients, which meant that test patients and their costs, outcomes, etc., were included in analysis and reporting that drove decisions inside and external to UPMC. The governance program created a policy for testing in production should the need arise.

7. Make governance personal through marketing.

Terri holds monthly round tables with business and clinical constituents. These have been a game changer: Once a month, for two hours, ten business invitees meet and talk about the program. Each attendee shares a data challenge, and Terri educates them on the program and illustrates how the program will address each challenge.

8. Deliver self-service.

Providing self-service empowers your users to gain access to and control over the data they need to improve their processes. The only way to deliver self-service business intelligence is to make metadata, master data, and data quality transparent and accessible across the enterprise.

9. IT can’t do it alone.

Initially, IT was resistant to giving up control, but now the team understands that it doesn’t have the knowledge or the time to effectively do data governance alone.

10. Don’t quit!

Governance can be complicated, and it may seem like little progress is being made. Terri keeps spirits high by reminding folks that the only failure is quitting.

Getting started? Assess the data governance maturity of your organization here: http://governyourdata.com/


Raised Expectations and New Discoveries with Great Customer Data

New Discoveries with Great Customer Data

We have a landing! On November 12, the Rosetta probe arrived at its destination, a comet 300 million miles away from Earth.

Fulfilling its duty after a 10-year journey, Rosetta dropped its lander, Philae, to gather data from the comet below.

Everything about the comet so far is both challenging and fascinating, from its advanced age – 4.6 billion years old – to its hard-to-pronounce name, Churyumov-Gerasimenko.

The size of Gerasimenko? Roughly that of lower Manhattan. The shape wasn’t the potato-like image some anticipated of a typical comet. Instead it had a form that was compared to that of a “rubber-duck,” making landing trickier than expected.

To add one more challenging feature, the comet was flying at 38,000 mph. The feat of landing the probe onto the comet has been compared to hitting a speeding bullet with another speeding bullet.

All of this would have been impossible if the ESA didn’t have serious data on the lofty goal they set forth.

As this was happening, on the same day there was a quieter landing: Salesforce and LinkedIn paired up to publish research they conducted on marketing strategies by surveying 900+ senior-level B2B and B2C marketers through their social networks about their roles, marketing trends, and challenges they face.

This one finding stood out to me: “Only 17% of respondents said their company had fully integrated their customer data across all areas of the organization. However, 97% of those ‘fully integrated’ marketing leaders said they were at least somewhat effective at creating a cohesive customer journey across all touchpoints and channels.”

While not as many companies were integrating customer data as they should, those who did felt the strong impact of the benefits. It’s like knowing the difference between interacting with a potato-shaped company, or a B2C, vs. interacting with a rubber-duck-shaped company, or a B2B, for example.

Efficient customer data could help you learn how to land each one properly. While the methods for dealing with both might be similar, they’re not identical, and taking the wrong approach could mean a failed landing. One of the conclusions from the survey showed there is a “direct link between how well a marketer integrated customer data and the self-reported successes of that brand’s strategy.”

When interviewed by MSNBC on the comet landing, Bill Nye, also known as “the Science Guy,” had many positive things to say on the historic event. One question he answered was why do programs like the ESA exist – or basically, why do we go to space?

Nye had two replies: “It raises the expectations of your society,” and “You’re going to make discoveries.”

Marketers armed with insights from powerful customer data can have their own “landing on a comet” moment. Properly integrated customer data means you’ll be making new discoveries about your own clientele while simultaneously raising the expectations of your business.

The world couldn’t progress forward without quality data, whether in the realm of retail or planetary science. We put a strong emphasis on validating and cleansing your customer data at the point of entry or the point of collection.

Check out a quick video demo here of three data quality solutions: Email Verification, Address Verification, and Phone Validation.

How Email Marketers Can Keep Up With Changes to Their Industry

Keep Up With Changes

Email has come a long way since its beginning. In 1971, two years after the U.S. launched a rocket to the moon, programmer Raymond Tomlinson sent the first email, a message that read “QWERTYUIOP.”

In 1991, when the World Wide Web was created, email then had the opportunity to evolve into the mainstream form of communication it is today.

The statistics for modern day email are staggering. Worldwide, there are 2.2 billion email users as of 2012, according to MarketingProfs.

With all these messages flying about, you know there’s going to be some email overload. Google’s Gmail alone has 425 million users worldwide. ESPs know people have too much email to deal with, and there’s a lot of noise out there. More than 75% of the world’s email is spam.

Gmail is one of the applications that recently responded to this problem, and all email marketers need to be aware.

On October 22, Google announced Inbox.

Google’s Inbox takes several steps to bring structure to the abundant world of email with these features:

  • Categorizes and bundles emails.
  • Highlights important content within the body of the email.
  • Allows users to customize messages by adding their own reminders.

This latest update to Gmail is just one small way that the landscape of email marketing and audience preferences is changing all the time.

As we integrate more technology into our daily lives, it only makes sense that we use digital messages as a means of communication more often. What will this mean for email in the future? How will marketers adjust to the new challenges email marketing will present at larger volumes, with audiences wanting more segmentation and personalization?

All About eMail Virtual Conference

One easy way to stay on top of these and other changes to the e-mail landscape is talking to your peers and experts in the industry. Luckily, an opportunity is coming up — and it’s free.

Make sure you check out the All About eMail Virtual Conference & Expo on November 13. It’s a virtual event, which means you can attend without leaving your desk!

It’s a one-day event with the busy email marketer in mind. Register for free and be sure to join us for our presentation, “Maximizing Email Campaign Performance Through Data Quality.”

Other strategic sessions include email marketing innovations presented by Forrester Research, mobile email, ROI, content tips, email sending frequency, and much more. (See the agenda here.)

This conference is just one indication that email marketing is still relevant, but only if email marketers adjust to changing audience preferences. With humble beginnings in 1971, email has come a long way. Email marketing has the best ROI in the business and will continue to have value long into the future.
