Category Archives: Data Quality

Four Steps Email Marketers Should Take Before Using Old Customer Data


Using Old Customer Data?

A crowd of people gathered around a cornerstone of a building in Brooklyn. They could barely contain their excitement that October day in 2014.

A time capsule from 1950 was about to be opened, revealing mysterious contents. No one knew what to expect. Historical treasures? Letters from the past? Maybe even gold bars or valuable collectibles? The crowd strained to hear what officials would announce as they opened the container.

“It’s all full of mud!” the city officials exclaimed as the box was unearthed and opened. Photographers clicked away at the mottled contents. While underground, water and dirt had crept into the metal box, leaving the crowd in 2014 with not much to see but oxidized orange sludge.

A few illegible newspapers, ruined microfilm, and a coin were found, but the contents were roundly considered a bust.

Email marketers must feel much the same mix of excitement and uncertainty when they receive old contact lists to open up and add to their email lists. It is a scenario many of them eventually face.

This kind of legacy data could come from various circumstances:

  • One company joins another and hands over legacy data that could be fruitful, but most likely needs attention and management.
  • Unused legacy data starts looking more appealing to re-use around peak sales times, when every email that goes out could spell more sales.
  • A new member of the sales team submits new contacts that have never been added to your database.
  • An existing business unit’s data is combined with yours due to new processes that remove data silos for full integration across your organization.

It doesn’t happen all the time, but eventually in email marketing you may have to decide how to handle legacy data.

Whatever the source, legacy data needs your attention to derive value. Just as the archivists who opened the 1950s time capsule remarked that a little waterproofing could have gone a long way, your data needs care and management so it won’t put your existing email marketing programs at risk. Contacts on one of these legacy lists could easily have changed their email addresses, or forgotten they were ever on these lists in the first place.

Step One

Here is the one part of this blog post that you should not skip over: the first thing you should do is explore the actual sources of this legacy data. If the sources indicate that these email addresses were originally opted in, then you can proceed with the next steps listed below.

If you cannot find any evidence that these contacts ever opted in to email communication (perhaps the list was purchased before you were ever involved), definitely leave that data alone.

Moving forward with any kind of contact to these email addresses could be a violation of the U.S. CAN-SPAM Act or the newer Canadian anti-spam law, CASL. Sending to even one of those old email addresses could greatly hurt your email marketing efforts as a whole.

Once you know the opt-in status of your legacy list, the most important things to consider with legacy data are validation, cleansing, and opting in these contacts anew.

Step Two

Email list cleaning helps remove invalid email addresses (especially important when 30% of email addresses on most contact lists change year over year). This is done through email validation, a 4-step process that checks that an address conforms to email formatting standards and determines whether both the domain name and the username exist. By validating your contact lists, you avoid a high bounce rate, which can hurt your overall email marketing efforts by bringing down your sender reputation.
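
To make the idea concrete, here is a minimal sketch of the first two kinds of checks (syntax and domain) in Python. It is illustrative only, not the full 4-step validation a commercial service performs; it assumes the third-party dnspython package for the MX lookup, and verifying that the username (mailbox) actually exists requires deeper SMTP-level checks that this sketch omits.

```python
import re
import dns.resolver, dns.exception  # third-party: pip install dnspython

# Loose syntax pattern; real validators follow the email RFCs much more closely.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(address: str) -> dict:
    """Lightweight checks on a single address: syntax first, then an MX lookup on the domain."""
    result = {"address": address, "syntax_ok": False, "domain_ok": False}

    if not EMAIL_RE.match(address):
        return result
    result["syntax_ok"] = True

    # Does the domain publish mail-exchanger records at all?
    domain = address.rsplit("@", 1)[1]
    try:
        answers = dns.resolver.resolve(domain, "MX")
        result["domain_ok"] = len(answers) > 0
    except dns.exception.DNSException:
        result["domain_ok"] = False
    return result

if __name__ == "__main__":
    for addr in ["jane.doe@example.com", "not-an-email"]:
        print(validate_email(addr))
```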

Step Three

Using email hygiene to perform list cleaning will remove any possible threats to your sender reputation. Email hygiene works as a forensics unit for your email contact lists. It lets you find suspicious email addresses so you can remove them – before contacting them. This kind of bad contact data is especially likely to show up in legacy data.
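
What does “suspicious” look like in practice? Here is a toy screen, purely as an illustration and not how any particular hygiene service works; the domain and role-account lists below are placeholder assumptions standing in for the large, continuously updated databases real services maintain.

```python
# Illustrative hygiene screen: flag risky-looking addresses before any send.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com"}   # placeholder list
ROLE_ACCOUNTS = {"abuse", "postmaster", "noreply", "spam"}      # placeholder list

def hygiene_flags(address: str) -> list[str]:
    """Return the reasons an address looks suspicious (an empty list means no flags)."""
    local, _, domain = address.lower().partition("@")
    flags = []
    if domain in DISPOSABLE_DOMAINS:
        flags.append("disposable_domain")
    if local in ROLE_ACCOUNTS:
        flags.append("role_account")
    return flags

# Keep only unflagged addresses; send the rest to review or remove them outright.
legacy_list = ["ceo@example.com", "abuse@example.org", "someone@mailinator.com"]
clean = [a for a in legacy_list if not hygiene_flags(a)]
```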

You should do these two steps – email list cleaning and email hygiene – first, before you ever email these legacy contacts.

Step Four

Finally, use some creativity to send these contacts an email asking them to opt back in. It is best to send these out in small batches over time, instead of all at once. Any contacts who do not respond should not be contacted any longer.
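
One simple way to stage that re-permission effort is to release the cleaned list in small batches over several days rather than all at once. The sketch below assumes a hypothetical send_reoptin_email function standing in for whatever your email service provider offers; the batch size and pacing are arbitrary examples, not recommendations.

```python
import time

def send_reoptin_email(address: str) -> None:
    # Hypothetical placeholder: call your email service provider's API here.
    print(f"Sending re-opt-in request to {address}")

def drip_reoptin(addresses: list[str], batch_size: int = 200, pause_seconds: int = 86400) -> None:
    """Send re-opt-in requests in small batches, pausing (here, a day) between batches."""
    for start in range(0, len(addresses), batch_size):
        for address in addresses[start:start + batch_size]:
            send_reoptin_email(address)
        if start + batch_size < len(addresses):
            time.sleep(pause_seconds)
```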

Obviously, all of these steps will drastically reduce the number of contacts on your legacy data list. However, contacting these previously opted-in email addresses without taking any of these steps is not wise. Many of them are likely to report you, because they will not have opted in to your communications, or won’t remember doing so.

Don’t be disappointed like the openers of the time capsule, who were let down when nothing too exciting was found. Follow these steps to reduce the risks that legacy data can create, and you can still find value in it.

Every email marketer needs to know how to protect their organization’s sender reputation and reduce the bounce rate for better customer communication. Phone numbers and mailing addresses are also key pieces of the contact record that need to be verified before they are used and on an ongoing basis.

How does that work? To learn more about validating your contact data in real time and using it to communicate with customers in the channels they prefer, attend the live webinar coming up on May 21. Informatica Data as a Service representatives will walk you through each step to reduce costs, reach more customers, and manage your data quality.

Webinar May 21 at 2 pm EST: Register Now
Achieving Great Customer Communication with Data Quality and Mobile Services

 


Great Customer Experiences Start with Great Customer Data


Are your customer-facing teams empowered with the great customer data they need to deliver great customer experiences?

On Saturday, I got a call from my broadband company on my mobile phone. The sales rep pitched a great limited-time offer for new customers. I asked him whether I could take advantage of this great offer as well, even though I am an existing customer. He was surprised. “Oh, you’re an existing customer,” he said dismissively. “No, this offer doesn’t apply to you. It’s for new customers only. Sorry.” You can imagine my annoyance.

If this company had built a solid foundation of customer data, the sales rep would have had a customer profile rich with clean, consistent, and connected information as a reference. If he had visibility into my total customer relationship with his company, he’d know that I’m a loyal customer with two current service subscriptions. He’d know that my husband and I have been customers for 10 years at our current address. On top of that, he’d know we both subscribed to their services while living at separate addresses before we were married.

Unfortunately, his company didn’t arm him with the great customer data he needs to be successful. If they had, he could have taken the opportunity to offer me one of the four services I currently don’t subscribe to—or even a bundle of services. And I could have shared a very different customer experience.

Every customer interaction counts

Executives at companies of all sizes talk about being customer-centric, but it’s difficult to execute on that vision if you don’t manage your customer data like a strategic asset. If delivering seamless, integrated, and consistent customer experiences across channels and touch points is one of your top priorities, every customer interaction counts. But without knowing exactly who your customers are, you cannot begin to deliver the types of experiences that retain existing customers, grow customer relationships and spend, and attract new customers.

How would you rate your current ability to identify your customers across lines of business, channels and touch points?

Many businesses, however, have anything but an integrated and connected customer-centric view—they have a siloed and fragmented channel-centric view. In fact, sales, marketing, and call center teams often identify siloed and fragmented customer data as key obstacles preventing them from delivering great customer experiences.


Many companies are struggling to deliver great customer experiences across channels because their siloed systems give them a channel-centric view of customers.

According to Retail Systems Research, creating a consistent customer experience remains the most valued capability for retailers, but 55% of those surveyed indicated their biggest inhibitor was not having a single view of the customer across channels.

Retailers are not alone. An SVP of marketing at a mortgage company admitted in an Argyle CMO Journal article that, now that his team needs to deliver consistent customer experiences across channels and touch points, they realize they are not as customer-centric as they thought they were.

Customer complexity knows no bounds

The fact is, businesses are complicated, with customer information fragmented across divisions, business units, channels, and functions.

Citrix, for instance, is bringing together valuable customer information from 4 systems. At Hyatt Hotels & Resorts, it’s about 25 systems. At MetLife, it’s 70 systems.

How many applications and systems would you estimate contain valuable customer information at your company?

Based on our experience working with customers across many industries, we know that a total customer relationship view allows:

  • Marketing to boost response rates by better segmenting their database of contacts for personalized marketing offers.
  • Sales to more efficiently and effectively cross-sell and up-sell the most relevant offers.
  • Customer service teams to resolve customers’ issues immediately, instead of placing them on hold to hunt for information in a separate system.

If your marketing, sales, and customer service teams are struggling with inaccurate, inconsistent, and disconnected customer information, it is costing your company revenue, growth, and success.

Transforming customer data into total customer relationships

Informatica’s Total Customer Relationship Solution fuels business and analytical applications with clean, consistent and connected customer information, giving your marketing, sales, e-commerce and call center teams access to that elusive total customer relationship. It not only brings all the pieces of fragmented customer information together in one place where it’s centrally managed on an ongoing basis, but also:

  • Reconciles customer data: Your customer information should be the same across systems, but often isn’t. Assess its accuracy, fixing and completing it as needed; in my case, that would mean merging duplicate profiles under “Jakki” and “Jacqueline” (see the matching sketch after this list).
  • Reveals valuable relationships between customers: Map critical connections. Are individuals members of the same household or influencer network? Are two companies part of the same corporate hierarchy? You can even link customers to personal shoppers, insurance brokers, sales people, or channel partners.
  • Tracks thorough customer histories: Identify customers’ preferred locations; channels, such as stores, e-commerce, and catalogs; or channel partners.
  • Validates contact information: Ensure email addresses, phone numbers, and physical addresses are complete and accurate so invoices, offers, or messages actually reach customers.
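
Reconciling duplicates like the “Jakki”/“Jacqueline” profiles above usually comes down to matching on strong keys (such as an identical email address) and fuzzy matching on weaker ones (such as names). The toy sketch below uses Python’s standard library to illustrate the idea; it is not Informatica’s matching engine, and real MDM match rules are far richer.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude fuzzy score between two names; real matching engines use much richer rules."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def looks_like_same_customer(rec_a: dict, rec_b: dict) -> bool:
    # Strong key first: an identical email address is near-certain evidence of a duplicate.
    if rec_a.get("email") and rec_a["email"] == rec_b.get("email"):
        return True
    # Otherwise require similar names at the same postal code.
    return (name_similarity(rec_a["name"], rec_b["name"]) > 0.8
            and rec_a.get("postal_code") == rec_b.get("postal_code"))

crm     = {"name": "Jakki Smith", "email": "j.smith@example.com", "postal_code": "94105"}
billing = {"name": "Jacqueline Smith", "email": "j.smith@example.com", "postal_code": "94105"}
print(looks_like_same_customer(crm, billing))  # True: merge into one golden profile
```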

With a view of the Total Customer Relationship, teams are empowered to deliver great customer experiences

This is just the beginning. From here, imagine enriching your customer profiles with third-party data. What types of information help you better understand, sell to, and serve your customers? What are your plans for incorporating social media insights into your customer profiles? What could you do with this additional customer information that you can’t do today?

We’ve helped hundreds of companies across numerous industries build a total customer relationship view. Merrill Lynch boosted marketing campaign effectiveness by 30%. Citrix boosted conversion rates by 20%. A $60 billion global manufacturer improved cross-sell and up-sell success by 5%. A hospitality company boosted cross-sell and up-sell success by 60%. And Logitech increased sales across channels, including their online site, retail stores, and distributors.

Informatica’s Total Customer Relationship Solution empowers your people with confidence, knowing that they have access to the kind of great customer data that allows them to surpass customer acquisition and retention goals by providing consistent, integrated, and seamless customer experiences across channels. The end result? Great experiences that customers are inspired to share with their family and friends at dinner parties and on social media.

Do you have a terrible or great customer experience to share? If so, please share it with us and other readers using the Comment option below.


5 Best Practices for Effective Supplier Information Management

This article was originally posted in Supply and Demand Chain Executive.


Supplier data management is costing organizations millions of dollars each year. According to AMR Research/Gartner, supplier management organizations have increased their employee headcount and system resources by 35%, and are spending up to $1,000 per supplier annually, to manage their supplier information across the enterprise.

Why is clean, consistent and connected supplier information important for effective supplier management?

In larger organizations, supplier information is typically fragmented across departmental, line-of-business and/or regional applications. In most cases, this means it’s inaccurate, inconsistent, incomplete and disconnected across the different siloed systems. Adding, changing or correcting information in one system doesn’t automatically update it in other systems. As a result, supply chain, sourcing and buying teams struggle to get access to a single view of the supplier so they can understand the total supplier relationship across the business, which makes it difficult to achieve goals such as:

  • Quickly and accurately assessing supplier risk management and compliance
  • Accelerating supplier onboarding to get products to market quickly
  • Improving supplier collaboration or supplier relationship management
  • Quickly and accurately evaluating supplier spend management
  • Monitoring supplier performance management

To effectively manage global supply chain, sourcing and procurement activities, organizations must be able to quickly and easily answer questions about their suppliers’ or vendors’ companies, contacts, raw materials or products and performance.

Quite often, organizations have separate procurement teams in different regions, each with its own regional applications. As a result, the same supplier may exist several times in the company’s supplier management systems, captured under different company names (see the name-normalization sketch below). The buying teams don’t have a single trusted view of their global supplier information, so different teams purchase the same product from the same supplier – each with different pricing and payment terms – without even knowing it. They have no clear understanding of the total relationship with that vendor and cannot benefit from negotiated corporate discounts.
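
One reason those duplicates slip through is that the same company can be keyed as, say, “ACME Industrial, Inc.”, “Acme Industrial Corp” and “ACME INDUSTRIAL” in different regional systems (a made-up example). A rough illustration of the normalization step that surfaces them is below; the suffix list is a small assumption, not the complete, locale-aware rule set a real supplier-mastering solution uses.

```python
import re

# Common legal-form suffixes; real supplier-mastering rules are much longer and locale-aware.
SUFFIXES = r"\b(incorporated|inc|corporation|corp|company|co|ltd|llc|gmbh)\b\.?"

def normalize_supplier_name(name: str) -> str:
    """Reduce a supplier name to a canonical key for duplicate detection."""
    name = name.lower()
    name = re.sub(SUFFIXES, "", name)           # drop legal-form suffixes
    name = re.sub(r"[^a-z0-9 ]", " ", name)     # drop punctuation
    return " ".join(name.split())               # collapse whitespace

names = ["ACME Industrial, Inc.", "Acme Industrial Corp", "ACME INDUSTRIAL"]
print({normalize_supplier_name(n) for n in names})  # one canonical key: {'acme industrial'}
```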

If the data fueling operational and analytical systems isn’t clean, consistent and connected, these teams cannot quickly and easily access the information they need to answer those questions or to manage their supply chain, sourcing and procurement operations efficiently and effectively. The result is missed procurement opportunities and high administration costs.

For example, a fashion retailer needs to ensure that all compliance documents of its global vendors are current and complete. If the documents are incomplete or expired, the retailer could face serious problems, risking penalties, safety issues and costs related to its suppliers’ noncompliance.

A Business Value Assessment recently conducted by Informatica among companies across all major industries, business models and sizes, revealed that by leveraging trusted data quality, annual supplier spend could be reduced by an average of $6M. Breaking this down by industry, the possible annual reduction in supplier spending, thanks to improved supplier information and data quality, could be $9.02M for Consumer Packaged Goods (CPG), $4.2M for retail and $2.76M for industrial manufacturing companies.

To reduce costs related to poor supplier information quality, here are five best practices that will help your company achieve more effective supplier relationship management:

  1. Make it strategic
    Get senior management buy-in and stakeholder support. Make the business case and get the time, money and resources you’ll need.
  2. Leave your data where it is
    Effective supplier relationship management lets you leave your data where it naturally lives: in the apps and data stores your business users depend on. You just need to identify these places, so you can access the data and share clean, consistent and connected supplier data back.
  3. Apply Data Quality at the application level
    It’s far better to clean your data at the source, before trying to combine it with other data.
    Apply standards and practices at the application level, to ensure you’re working with the most accurate and complete data, and your mastering job will be much, much easier and deliver far better results.
  4. Use specialized Master Data Management and Data Integration platforms
    It takes specialized technology that’s optimized for collecting, reconciling, managing, and linking diverse data sets (as well as resolving duplicates and managing hierarchies) to achieve total supplier information management. Your program is going to have to relate to the supplier domain, as well as to other, equally important data types. Attempting this with homemade integration tools or point-to-point integrations is a major mistake. This wheel has already been invented.
  5. Share clean data with key supplier apps on an ongoing basis
    It’s no good having clean, consistent and connected supplier data if you can’t deploy it to the point of use: to the applications and analytics teams and tools that can turn it into insight (and money)—including your enterprise data warehouse, where spend analytics happen.

These best practices may seem basic or obvious, but failing to apply them is the main reason global supplier data programs stumble. They will help fuel operational and analytical applications with clean, consistent and connected supplier information for a more accurate view of suppliers’ performance, compliance and risk. As a result, supply chain, sourcing and buying teams will be empowered to cut costs by negotiating more favorable pricing and payment terms and streamlining their processes.

Related blog:

At Valspar Data Management is Key to Controlling Purchasing Costs


What are Incorrect Addresses Costing your Company?

I live in a small town in Maine. Between my town and the surrounding three towns, there are seven Main Streets and three Annis Roads or Lanes (and don’t get me started on the number of Moose Trails). If your insurance company wants to market to or communicate with someone in my town or one of the surrounding towns, how can you ensure that the address you are sending material to is correct? What is the cost if material is sent to an incorrect or outdated address? What is the cost to your insurance company if a provider sends the bill out to the wrong address?

How much is poor address quality costing your business? It doesn’t just impact marketing, where inaccurate address data translates into missed opportunity – it also means significant waste in materials, labor, time and postage. Bills may be delivered late or returned as undeliverable, meaning additional handling time, possible repackaging, additional postage costs (Address Correction Penalties) and the risk of customer service issues. When mail or packages don’t arrive, pressure on your customer support team can increase and your company’s reputation can be negatively impacted. Bills and payments may arrive late or not at all, directly impacting your cash flow. Bad address data causes inefficiencies and raises costs across your entire organization.

The best method for handling address correction is through a validation and correction process:


When trying to standardize member or provider information, one of the first places to look is address data. If you can determine that the John Q Smith who lives at 134 Main St in Northport, Maine 04843 is the same John Q Smith who lives at 134 Maine Street in Lincolnville, Maine 04849, you have provided a link between two members that are probably considered distinct in your systems. Once you validate that there is no 134 Main St in Northport according to the postal service, and that 04849 is a valid ZIP code for Lincolnville, you can standardize your address format to something along the lines of: 134 MAIN ST LINCOLNVILLE, ME 04849. Now you have a consistent layout for all of your addresses that follows postal service standards. Each member now has a consistent address, which makes the next step of creating a golden record for each member that much simpler.
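
In code, that validate-then-standardize flow boils down to normalizing the street line and checking it against postal reference data. The sketch below is a deliberately tiny stand-in: the POSTAL_REFERENCE table and abbreviation map are made-up assumptions, and a real service backed by the full postal database would also correct spelling variants such as “Maine Street” to “MAIN ST” with fuzzy matching, which this sketch omits.

```python
# Toy postal reference: ZIP code -> (city, state, valid street names in that ZIP).
POSTAL_REFERENCE = {
    "04849": ("LINCOLNVILLE", "ME", {"MAIN ST"}),
}
STREET_ABBREVIATIONS = {"STREET": "ST", "ROAD": "RD", "LANE": "LN"}

def standardize(street: str, zip_code: str) -> str | None:
    """Return a standardized address line, or None if the address fails validation."""
    words = street.upper().replace(".", "").split()
    street_std = " ".join(STREET_ABBREVIATIONS.get(w, w) for w in words)

    ref = POSTAL_REFERENCE.get(zip_code)
    if ref is None:
        return None                                   # unknown ZIP code
    city, state, valid_streets = ref
    if street_std.lstrip("0123456789 ") not in valid_streets:
        return None                                   # street not deliverable in this ZIP
    return f"{street_std} {city}, {state} {zip_code}"

print(standardize("134 Main Street", "04849"))  # 134 MAIN ST LINCOLNVILLE, ME 04849
```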

Think about your current method of managing addresses. Likely, there are several different systems that capture addresses, with different standards for what data is allowed into each field – and quite possibly these independent applications are not checking or validating against country postal standards. By improving the quality of address data, you are one step closer to creating high-quality data that can provide the up-to-the-minute, accurate reporting your organization needs to succeed.


Master Data Management in Oil and Gas Industry


The Oil and Gas (O&G) industry is an important backbone of every economy. It is also an industry that has weathered the storm of constantly changing economic trends, regulations and technological innovations. O&G companies by nature have complex and data-intensive processes. To function profitably under changing trends, policies and guidelines, they need to manage these data processes really well.

The industry is subject to pricing volatility based on patterns of supply and demand, shaped by geopolitical developments, economic downturns and public scrutiny. Competition from other sources such as cheap natural gas, along with low margins, adds fuel to the fire, making it hard for O&G companies to achieve sustainable and predictable outcomes.

A recent PWC survey of oil and gas CEOs similarly concluded that “energy CEOs can’t control market factors such as the world’s economic health or global oil supply, but can change how they respond to market conditions, such as getting the most out of technology investments, more effective use of partnerships and diversity strategies.”  The survey also revealed that nearly 80% of respondents agreed that digital technologies are creating value for their companies when it comes to data analysis and operational efficiency.

O&G firms run three distinct business operations: upstream exploration & production (E&P), midstream (storage & transportation) and downstream (refining & distribution). All of these operations need a few core data domains to be standardized for every major business process. However, a key challenge faced by O&G companies is that this critical core information is often spread across multiple disparate systems, making it hard to take timely decisions. To ensure effective operations and to grow their asset base, it is vital for these companies to capture and manage critical data related to these domains.

E&P core data domains include the wellhead and materials (asset), geospatial location data and engineers/technicians (associate). Midstream’s core domains include trading partners and distribution, and downstream’s include commercial and residential customers. Classic distribution use cases emerge around shipping locations, from large-scale clients like airlines and other logistics providers buying millions of gallons of fuel and industrial lube products down to gas station customers. The industry also relies heavily on reference data and chart of accounts for financial cost and revenue roll-ups.

The main E&P asset, the well, goes through its life cycle and changes characteristics (location, ID, name, physical characterization, depth, crew, ownership, etc.), which are all master data aspects to consider for this baseline entity. If we master this data and create a consistent representation across the organization, it can then be linked to transaction and interaction data (a minimal data-model sketch follows the list below) so that O&G companies can drive their investment decisions and split cost and revenue through reporting and real-time processes around:

  • Crew allocation
  • Royalty payments
  • Safety and environmental inspections
  • Maintenance and overall production planning
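
To make that linkage concrete, here is a minimal, assumed data-model sketch: a mastered well record with a stable identifier that transactions such as royalty payments can reference. The classes and field names are illustrative assumptions, not an industry-standard schema or an Informatica data model.

```python
from dataclasses import dataclass, field

@dataclass
class WellMaster:
    """Illustrative golden record for a well; all field names are assumptions."""
    well_id: str
    name: str
    latitude: float
    longitude: float
    depth_m: float
    crew: list[str] = field(default_factory=list)
    ownership: dict[str, float] = field(default_factory=dict)  # owner -> working-interest share

@dataclass
class RoyaltyPayment:
    well_id: str        # links the transaction back to the mastered well
    payee: str
    amount_usd: float

# Once every system references the same well_id, cost and revenue roll up cleanly.
well = WellMaster("W-1001", "Example 12H", 28.9, -98.1, 3200.0,
                  crew=["J. Ortiz"], ownership={"Operator A": 0.6, "Partner B": 0.4})
payment = RoyaltyPayment(well.well_id, "Partner B", 15000.0)
```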

E&P firms need a solution that allows them to:

  • Have a flexible multidomain platform that permits easier management of different entities under one solution
  • Create a single, cross-enterprise instance of a wellhead master
  • Capture and master the relationships between the well, equipment, associates, land and location
  • Govern end-to-end management of assets, facilities, equipment and sites throughout their life cycle

The upstream O&G industry is uniquely positioned to take advantage of vast amounts of data from its operations. Thousands of sensors at the wellhead, millions of parts in the supply chain, global capital projects and many highly trained staff create a data-rich environment. A well-implemented MDM program brings a strong foundation to this data-driven industry, providing clean, consistent and connected core master data so these companies can cut material costs, reduce IT maintenance and increase margins.

To learn more about how you can achieve upstream operational excellence with Informatica Master Data Management, check out this recorded webinar with @OilAndGasData.

~Prash
@MDMGeek
www.mdmgeek.com


What’s Driving Core Banking Modernization?


When’s the last time you visited your local bank branch and spoke to a human being? How about talking to your banker over the phone? Can’t remember? Well, you’re not alone, and don’t worry, it’s not a bad thing. The days of operating physical branches with expensive staff to greet and serve customers are being replaced by more modern, customer-friendly mobile banking applications that let consumers deposit checks from their phones, apply for a mortgage and sign closing documents electronically, and even skip the ATM entirely by paying with mobile payment solutions like Apple Pay. In fact, a new report titled ‘Bricks + Clicks: Building the Digital Branch,’ from Jeanne Capachin and Jim Marous, takes an in-depth look at how banks and credit unions are changing their branch and customer channel strategies to meet the demands of today’s digital banking customer.

Why am I talking about this? These market trends are dominating the CEO and CIO agenda in today’s banking industry. I just returned from the 2015 IDC Asian Financial Congress event in Singapore, where the digital journey for the next-generation bank was a major agenda item. According to IDC Financial Insights, global banks will invest $31.5B USD in core banking modernization to enable these services, improve operational efficiency, and position themselves to better compete on technology and convenience across markets. Core banking modernization initiatives are complex, costly, and fraught with risks. Let’s take a closer look.


Startup Winners of the Informatica Data Mania Connect-a-Thon

Last week was Informatica’s first ever Data Mania event, held at the Contemporary Jewish Museum in San Francisco. We had an A-list lineup of speakers from leading cloud and data companies, such as Salesforce, Amazon Web Services (AWS), Tableau, Dun & Bradstreet, Marketo, AppDynamics, Birst, Adobe, and Qlik. The event and speakers covered a range of topics all related to data, including Big Data processing in the cloud, data-driven customer success, and cloud analytics.

While these companies are giants today in the world of cloud and have created their own unique ecosystems, we also wanted to take a peek at and hear from the leaders of tomorrow. Before startups can become market leaders in their own realm, they face the challenge of ramping up a stellar roster of customers so that they can get to subsequent rounds of venture funding. But what gets in their way are the numerous data integration challenges of onboarding customer data onto their software platform. When these challenges remain unaddressed, R&D resources are spent on professional services instead of building value-differentiating IP.  Bugs also continue to mount, and technical debt increases.

Enter the Informatica Cloud Connector SDK. Built entirely in Java and able to browse through any cloud application’s API, the Cloud Connector SDK parses the metadata behind each data object and presents it in the context of what a business user should see. We had four startups build a native connector to their application in less than two weeks: BigML, Databricks, FollowAnalytics, and ThoughtSpot. Let’s take a look at each one of them.

BigML

With predictive analytics becoming a growing imperative, machine-learning algorithms that can predict outcomes with higher accuracy are also becoming increasingly important. BigML provides an intuitive yet powerful machine-learning platform for actionable and consumable predictive analytics. Watch their demo on how they used Informatica Cloud’s Connector SDK to help them better predict customer churn.

Can’t play the video? Click here, http://youtu.be/lop7m9IH2aw

Databricks

Databricks was founded out of the UC Berkeley AMPLab by the creators of Apache Spark. Databricks Cloud is a hosted end-to-end data platform powered by Spark. It enables organizations to unlock the value of their data, seamlessly transitioning from data ingest through exploration and production. Watch their demo, which showcases how the Informatica Cloud connector for Databricks Cloud was used to analyze lead contact rates in Salesforce and to perform machine learning on a dataset built using either Scala or Python.

Can’t play the video? Click here, http://youtu.be/607ugvhzVnY

FollowAnalytics

With mobile usage growing by leaps and bounds, the area of customer engagement on a mobile app has become a fertile area for marketers. Marketers are charged with acquiring new customers, increasing customer loyalty and driving new revenue streams. But without the technological infrastructure to back them up, their efforts are in vain. FollowAnalytics is a mobile analytics and marketing automation platform for the enterprise that helps companies better understand audience engagement on their mobile apps. Watch this demo where FollowAnalytics first builds a completely native connector to its mobile analytics platform using the Informatica Cloud Connector SDK and then connects it to Microsoft Dynamics CRM Online using Informatica Cloud’s prebuilt connector for it. Then, see FollowAnalytics go one step further by performing even deeper analytics on their engagement data using Informatica Cloud’s prebuilt connector for Salesforce Wave Analytics Cloud.

Can’t play the video? Click here, http://youtu.be/E568vxZ2LAg

ThoughtSpot

Analytics has taken center stage this year due to the rise in cloud applications, but most of the existing BI tools out there still stick to the old way of doing BI. ThoughtSpot brings a consumer-like simplicity to the world of BI by allowing users to search for the information they’re looking for just as if they were using a search engine like Google. Watch this demo where ThoughtSpot uses Informatica Cloud’s vast library of over 100 native connectors to move data into the ThoughtSpot appliance.

Can’t play the video? Click here, http://youtu.be/6gJD6hRD9h4


Informatica and Hortonworks Talk Analytics in Insurance


On March 25th, Josh Lee, Global Director for Insurance Marketing at Informatica and Cindy Maike, General Manager, Insurance at Hortonworks, will be joining the Insurance Journal in a webinar on “How to Become an Analytics Ready Insurer”.

Register for the Webinar on March 25th at 10am Pacific/ 1pm Eastern

Josh and Cindy exchange perspectives on what “analytics ready” really means for insurers, and today we are sharing some of our views on the five questions posed here (join the webinar to learn more). Please join Insurance Journal, Informatica and Hortonworks on March 25th for more on this exciting topic.

See the Hortonworks site for a second posting of this blog and more details on exciting innovations in Big Data.

  1. What makes a big data environment attractive to an insurer?

CM: Many insurance companies are using new types of data to create innovative products that better meet their customers’ risk needs. For example, we are seeing insurance for “shared vehicles” and new products for prevention services. Much of this innovation is made possible by the rapid growth in sensor and machine data, which the industry incorporates into predictive analytics for risk assessment and claims management.

Customers who buy personal lines of insurance also expect the same type of personalized service and offers they receive from retailers and telecommunication companies. They expect carriers to have a single view of their business that permeates customer experience, claims handling, pricing and product development. Big data in Hadoop makes that single view possible.

JL: Let’s face it, insurance is all about analytics. Better analytics leads to better pricing, reduced risk and better customer service. But here’s the issue. Existing data platforms are costly for storing vast amounts of data and too inflexible to adapt to the changing needs of innovative analytics. Imagine kicking off a simulation or modeling routine one evening only to return in the morning and find it incomplete or lacking data that requires a special request to IT.

This is where big data environments are helping insurers. Larger, more flexible data sets allow longer series of analytics to be run, generating better results. And imagine doing all that at a fraction of the cost and time of traditional data structures. Oh, and heaven forbid you ask a mainframe to do any of this.

  2. So we hear a lot about Big Data being great for unstructured data. What about traditional data types that have been used in insurance forever?

CM: Traditional data types are very important to the industry – they drive our regulatory reporting and much of the performance management reporting. This data will continue to play a very important role in the insurance industry and for companies.

However, big data can now enrich that traditional data with new data sources for new insights. In areas such as customer service and product personalization, it can make the difference between cross-selling the right products to meet customer needs and losing the business. For commercial and group carriers, the new data provides the ability to better analyze risk needs, price accordingly and enable superior service in a highly competitive market.

JL: Traditional data will always be around. I doubt that I will outlive a mainframe installation at an insurer, which makes me a little sad. And for many rote tasks like financial reporting, a sales report, or a commission statement, those sources are sufficient. However, the business of insurance is changing in leaps and bounds. Innovators in data science are interested in correlating those traditional sources with other, more creative data to find new products or areas to reduce risk. There is just a lot of data that is either ignored or locked in obscure systems that needs to be brought into the light. This data could be structured or unstructured; it doesn’t matter, and Big Data can assist there.

  3. How does this fit into an overall data management function?

JL: At the end of the day, a Hadoop cluster is another source of data for an insurer. More flexible, more cost effective and higher speed; but yet another data source for an insurer. So that’s one more on top of relational, cubes, content repositories, mainframes and whatever else insurers have latched onto over the years. So if it wasn’t completely obvious before, it should be now. Data needs to be managed. As data moves around the organization for consumption, it is shaped, cleaned, copied and we hope there is governance in place. And the Big Data installation is not exempt from any of these routines. In fact, one could argue that it is more critical to leverage good data management practices with Big Data not only to optimize the environment but also to eventually replace traditional data structures that just aren’t working.

CM: Insurance companies are blending new and old data and looking for the best ways to leverage “all data”. We are witnessing the development of a new generation of advanced analytical applications to take advantage of the volume, velocity, and variety in big data. We can also enhance current predictive models, enriching them with the unstructured information in claim and underwriting notes or diaries along with other external data.

There will be challenges. Insurance companies will still need to make important decisions on how to incorporate the new data into existing data governance and data management processes. The Chief Data or Chief Analytics officer will need to drive this business change in close partnership with IT.

  4. Tell me a little bit about how Informatica and Hortonworks are working together on this?

JL: For years Informatica has been helping our clients to realize the value in their data and analytics. And while enjoying great success in partnership with our clients, unlocking the full value of data requires new structures, new storage and something that doesn’t break the bank for our clients. So Informatica and Hortonworks are on a continuing journey to show that value in analytics comes with strong relationships between the Hadoop distribution and innovative market leading data management technology. As the relationship between Informatica and Hortonworks deepens, expect to see even more vertically relevant solutions and documented ROI for the Informatica/Hortonworks solution stack.

CM: Informatica and Hortonworks optimize the entire big data supply chain on Hadoop, turning data into actionable information to drive business value. By incorporating data management services into the data lake, companies can store and process massive amounts of data across a wide variety of channels including social media, clickstream data, server logs, customer transactions and interactions, videos, and sensor data from equipment in the field.

Matching data from internal sources (e.g. very granular data about customers) with external data (e.g. weather data or driving patterns in specific geographic areas) can unlock new revenue streams.

See this video for a discussion on unlocking those new revenue streams. Sanjay Krishnamurthi, Informatica CTO, and Shaun Connolly, Hortonworks VP of Corporate Strategy, share their perspectives.

  5. Do you have any additional comments on the future of data in this brave new world?

CM: My perspective is that, over time, we will drop the reference to “big” or ”small” data and get back to referring simply to “Data”. The term big data has been useful to describe the growing awareness on how the new data types can help insurance companies grow.

We can no longer use “traditional” methods to gain insights from data. Insurers need a modern data architecture to store, process and analyze data—transforming it into insight.

We will see an increase in new market entrants in the insurance industry, and existing insurance companies will improve their products and services based upon the insights they have gained from their data, regardless of whether that was “big” or “small” data.

JL: I’m sure that even now there is someone locked in their mother’s basement playing video games and trying to come up with the next data storage wave. So we have that to look forward to, and I’m sure it will be cool. But, if we are honest with ourselves, we’ll admit that we really don’t know what to do with half the data that we have. So while data storage structures are critical, the future holds even greater promise for new models, better analytical tools and applications that can make sense of all of this and point insurers in new directions. The trend that won’t change anytime soon is the ongoing need for good quality data, data ready at a moment’s notice, safe and secure and governed in a way that insurers can trust what those cool analytics show them.

Please join us for an interactive discussion on March 25th at 10am Pacific Time/ 1pm Eastern Time.

Register for the Webinar on March 25th at 10am Pacific/ 1pm Eastern


Building an Impactful Data Governance – One Step at a Time

Let’s face it, building a Data Governance program is no overnight task. As one CDO puts it: “data governance is a marathon, not a sprint.” Why? Because data governance is a complex business function that encompasses technology, people and process, all of which have to work together effectively to ensure the success of the initiative. Because of the scope of the program, Data Governance often calls for participants from different business units within an organization, and it can be disruptive at first.

Why bother, then, given that data governance is complex, disruptive, and could potentially introduce additional cost to a company? Well, the drivers for data governance can vary for different organizations. Let’s take a close look at some of the motivations behind a data governance program.

For companies in heavily regulated industries, establishing a formal data governance program is a mandate. When a company is not compliant, consequences can be severe. Penalties could include hefty fines, brand damage, loss in revenue, and even potential jail time for the person who is held accountable for noncompliance. To meet ongoing regulatory requirements and adhere to data security policies and standards, companies need to rely on clean, connected and trusted data to enable transparency and auditability in their reporting and to answer critical questions from auditors. Without a dedicated data governance program in place, the compliance initiative can become an ongoing nightmare for companies in regulated industries.

A data governance program can also be established to support a customer centricity initiative. To make effective cross-sells and up-sells to your customers and grow your business, you need clear visibility into customer purchasing behaviors across multiple shopping channels and touch points. Customers’ shopping behaviors and attributes are captured in the data; therefore, to gain a thorough understanding of your customers and boost your sales, a holistic data governance program is essential.

Other reasons for companies to start a data governance program include improving efficiency and reducing operational cost, supporting better analytics and driving more innovation. As long as data is at the core of a business-critical process and the business case is loud and sound, there is a compelling reason for launching a data governance program.

Now that we have identified the drivers for data governance, how do we start? This rather loaded question really gets into the details of the implementation. A few critical elements come into consideration, including: identifying and establishing the various task forces, such as the steering committee, the data governance team and business sponsors; identifying roles and responsibilities for the stakeholders involved in the program; and defining metrics for tracking results. And soon you will find that, on top of everything, communications, communications and more communications is probably the most important tactic of all for driving the initial success of the program.

A rule of thumb? Start small, take one step at a time and focus on producing something tangible.

Sounds easy, right? Well, let’s hear what real-world practitioners have to say. Join us at this Informatica webinar to hear Michael Wodzinski, Director of Information Architecture; Lisa Bemis, Director of Master Data; and Fabian Torres, Director of Project Management at Houghton Mifflin Harcourt, a global leader in publishing, as well as David Lyle, VP of Product Strategy at Informatica, discuss how to implement a successful data governance practice that brings business impact to an enterprise organization.

If you are currently kicking the tires on setting up a data governance practice in your organization, I’d like to invite you to visit a member-only website dedicated to data governance: http://governyourdata.com/. This site currently has over 1,000 members and is designed to foster open communications on everything data governance. There you will find conversations on best practices, methodologies, frameworks, tools and metrics. I would also encourage you to take a data governance maturity assessment to see where you currently stand on the data governance maturity curve, and compare the result against industry benchmarks. More than 200 members have taken the assessment to gain a better understanding of their current data governance program, so why not give it a shot?


Data Governance is a journey, likely a never-ending one. We wish you the best of luck on this effort and a joyful ride! We would love to hear your stories.
