Category Archives: Data Services
Everyone nods deferentially when that old adage is uttered about people being the most valuable asset in the business. Being one myself, I’d like to agree. The problem is I don’t think it’s the case anymore. In many recent conversations with business leaders, there’s growing agreement that ‘people’ have been knocked off the ‘number one importance’ pedestal – the most valuable asset any business has is now its data. The Chief Data Officer (CDO) is in the ascendant.
The rise of the CDO highlights the ever-sharpening focus that organisations, particularly in the Financial Services sector, are placing on their data. CDOs are on the increase. There is wide recognition that data is indeed an asset. CDOs are redefining ways in which the organisation drives itself, its relationships with the outside world, its products, services and future success. A recent Capgemini Report[i] suggests that ‘FS firms need to become information-centric enterprises’ not just because of proliferating regulatory reporting demands, but also due to ‘unparalleled competition for customer assets and allegiance.’
Given that data is an asset, it must be treated like one. It has to be protected, maintained, and sustained. Adhering to these disciplines will increase its value and produce the return for the business that well curated and managed data is eminently capable of doing. The only challenge then to be overcome is how quickly the business can unlock the return. The CDO has control of the most valuable asset in the business. At Informatica we believe that he or she will only be in control if a governance and management ecosystem is built on three fundamental pillars:
- Securing the data: This is about more than simply the value of the information asset, and the statutes and legal responsibilities that surround it – it’s about the immeasurable damage to an organisation that can come about if its data is accessed by outsiders;
- Knowing where the data is: It sounds like an obvious requirement but I encounter many organisations that cannot locate all their data, all the time. It has often been accumulated by different departments in numerous locations and divergent formats. It is often widely dispersed with no central view or control because it has grown organically. As a result it is under-utilised. Retrieving it wastes time and money and creates inefficiencies;
- Refining it: Data needs careful and diligent attention to ensure it is always up-to-date and relevant. It needs always to tell the full story. Users must have confidence and trust in the data – knowing they can simply draw down on it and put it to profitable use, in the knowledge that it is optimal, constantly. Like many other assets, data is a raw material which must be refined to be of value.
In a world where technology simplification has become a rite of passage, how will CDOs rise above the status quo and seize this opportunity with a firm grasp? The question is, how evolved are your foundational pillars?
I mentioned that the CDO is in the ascendancy. Watch this space. I predict the rise in significance of the role within firms is going to be stratospheric. It will rise above all others as organisations recognise that they are data-driven, data-defined, data-centric and ultimately data-dependent. I see every reason why CDOs will not only be at the top table but will get the comfiest seat alongside their CEO.
In my last blog, The New Insurance Model, I argued the case for a new design point for company systems and capabilities, where IT architecture should be wrapped around Data First principles. I suggest now that this approach needs to go further, deeper, and wider. The winning FS firm of the future will be one that adopts a Data First Business Architecture – full recognition that the business thrives on the basis of its data. The CDO has arrived, and long may he or she reign.
[i] The Role of the Chief Data Officer in Financial Services
On Saturday, I got a call from my broadband company on my mobile phone. The sales rep pitched a great limited-time offer for new customers. I asked him whether I could take advantage of this great offer as well, even though I am an existing customer. He was surprised. “Oh, you’re an existing customer,” he said, dismissively. “No, this offer doesn’t apply to you. It’s for new customers only. Sorry.” You can imagine my annoyance.
If this company had built a solid foundation of customer data, the sales rep would have had a customer profile rich with clean, consistent, and connected information as reference. If he had visibility into my total customer relationship with his company, he’d know that I’m a loyal customer with two current service subscriptions. He’d know that my husband and I have been customers for 10 years at our current address. On top of that, he’d know we both subscribed to their services while living at separate addresses before we were married.
Unfortunately, his company didn’t arm him with the great customer data he needs to be successful. If they had, he could have taken the opportunity to offer me one of the four services I currently don’t subscribe to—or even a bundle of services. And I could have shared a very different customer experience.
Every customer interaction counts
Executives at companies of all sizes talk about being customer-centric, but it’s difficult to execute on that vision if you don’t manage your customer data like a strategic asset. If delivering seamless, integrated, and consistent customer experiences across channels and touch points is one of your top priorities, every customer interaction counts. But without knowing exactly who your customers are, you cannot begin to deliver the types of experiences that retain existing customers, grow customer relationships and spend, and attract new customers.
How would you rate your current ability to identify your customers across lines of business, channels and touch points?
Many businesses, however, have anything but an integrated and connected customer-centric view—they have a siloed and fragmented channel-centric view. In fact, sales, marketing, and call center teams often identify siloed and fragmented customer data as key obstacles preventing them from delivering great customer experiences.
According to Retail Systems Research, creating a consistent customer experience remains the most valued capability for retailers, but 55% of those surveyed indicated their biggest inhibitor was not having a single view of the customer across channels.
Retailers are not alone. An SVP of marketing at a mortgage company admitted in an Argyle CMO Journal article that, now that his team needs to deliver consistent customer experiences across channels and touch points, they realize they are not as customer-centric as they thought they were.
Customer complexity knows no bounds
The fact is, businesses are complicated, with customer information fragmented across divisions, business units, channels, and functions.
Citrix, for instance, is bringing together valuable customer information from 4 systems. At Hyatt Hotels & Resorts, it’s about 25 systems. At MetLife, it’s 70 systems.
How many applications and systems would you estimate contain valuable customer information at your company?
Based on our experience working with customers across many industries, we know the total customer relationship allows:
- Marketing to boost response rates by better segmenting their database of contacts for personalized marketing offers.
- Sales to more efficiently and effectively cross-sell and up-sell the most relevant offers.
- Customer service teams to resolve customers’ issues immediately, instead of placing them on hold to hunt for information in a separate system.
If your marketing, sales, and customer service teams are struggling with inaccurate, inconsistent, and disconnected customer information, it is costing your company revenue, growth, and success.
Transforming customer data into total customer relationships
Informatica’s Total Customer Relationship Solution fuels business and analytical applications with clean, consistent and connected customer information, giving your marketing, sales, e-commerce and call center teams access to that elusive total customer relationship. It not only brings all the pieces of fragmented customer information together in one place where it’s centrally managed on an ongoing basis, but also:
- Reconciles customer data: Your customer information should be the same across systems, but often isn’t. Assess its accuracy, fixing and completing it as needed—for instance, in my case merging duplicate profiles under “Jakki” and “Jacqueline.”
- Reveals valuable relationships between customers: Map critical connections—Are individuals members of the same household or influencer network? Are two companies part of the same corporate hierarchy? Even link customers to personal shoppers or insurance brokers or to sales people or channel partners.
- Tracks thorough customer histories: Identify customers’ preferred locations; channels, such as stores, e-commerce, and catalogs; or channel partners.
- Validates contact information: Ensure email addresses, phone numbers, and physical addresses are complete and accurate so invoices, offers, or messages actually reach customers.
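As a rough illustration of the reconciliation step above, here is a minimal sketch of duplicate-profile matching and merging. The field names, nickname table, and “most recent wins” survivorship rule are illustrative assumptions, not any vendor’s actual matching logic:

```python
# Minimal sketch of duplicate-profile reconciliation. The nickname table,
# field names, and survivorship rule below are illustrative assumptions.
NICKNAMES = {"jakki": "jacqueline", "liz": "elizabeth", "bob": "robert"}

def normalize_first_name(name):
    name = name.strip().lower()
    return NICKNAMES.get(name, name)

def match_key(profile):
    # Match on normalized first name, last name, and email address.
    return (normalize_first_name(profile["first"]),
            profile["last"].strip().lower(),
            profile["email"].strip().lower())

def merge_profiles(profiles):
    merged = {}
    for profile in profiles:
        key = match_key(profile)
        best = merged.get(key)
        # Survivorship: keep the most recently updated record for each key.
        if best is None or profile["updated"] > best["updated"]:
            merged[key] = profile
    return list(merged.values())

profiles = [
    {"first": "Jakki", "last": "Smith", "email": "js@example.com", "updated": 2014},
    {"first": "Jacqueline", "last": "Smith", "email": "js@example.com", "updated": 2015},
]
golden = merge_profiles(profiles)
print(len(golden), golden[0]["first"])  # 1 Jacqueline
```

Real master data management match engines use fuzzy matching and configurable survivorship rules rather than a hard-coded nickname table, but the principle is the same: resolve many fragmented records down to one golden profile.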
This is just the beginning. From here, imagine enriching your customer profiles with third-party data. What types of information help you better understand, sell to, and serve your customers? What are your plans for incorporating social media insights into your customer profiles? What could you do with this additional customer information that you can’t do today?
We’ve helped hundreds of companies across numerous industries build a total customer relationship view. Merrill Lynch boosted marketing campaign effectiveness by 30 percent. Citrix boosted conversion rates by 20%. A $60 billion global manufacturer improved cross-sell and up-sell success by 5%. A hospitality company boosted cross-sell and up-sell success by 60%. And Logitech increased sales across channels, including their online site, retail stores, and distributors.
Informatica’s Total Customer Relationship Solution empowers your people with confidence, knowing that they have access to the kind of great customer data that allows them to surpass customer acquisition and retention goals by providing consistent, integrated, and seamless customer experiences across channels. The end result? Great experiences that customers are inspired to share with their family and friends at dinner parties and on social media.
Do you have a terrible or great customer experience to share? If so, please share it with us and other readers using the Comment option below.
First – let’s start off with a description of what exactly Big Data is…simply put: lots and lots of data. According to Wikipedia: “Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and information privacy. The term often refers simply to the use of predictive analytics or other certain advanced methods to extract value from data, and seldom to a particular size of data set.”
There are many different sources of data (claims systems, enrollment systems, benefits administration systems, survey results, consumer data, social media, personal health devices like Fitbit), and each source generates an amazing amount of data. These data sets grow in size because they are being gathered by readily available and numerous information-sensing mobile devices, aerial sensors (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers, and wireless sensor networks. The world’s technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s; as of 2012, 2.5 exabytes (2.5×10^18 bytes) of data were created every day. To make sense of all of this data, we need to be able to organize it, create linkages between the data sets, and then analyze the data to drive meaningful actions.
In 2000, Seisint Inc. developed a C++-based distributed file-sharing framework for data storage and querying, to support the vast amount of storage that is necessary for this data. With this framework, structured, semi-structured and/or unstructured data can be stored and distributed across multiple servers.
In 2004, Google published a paper on a process called MapReduce that builds on this kind of distributed file-sharing framework. MapReduce provides a parallel processing model and an associated implementation to process huge amounts of data. With MapReduce, queries are split and distributed across parallel nodes, where they are processed in parallel (the Map step). The results are then gathered and combined (the Reduce step). The model proved so successful that others wanted to replicate it, and an open-source implementation of the MapReduce framework was adopted by the Apache project named Hadoop.
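The Map and Reduce steps can be illustrated with a toy word count, the canonical MapReduce example. This is a single-process sketch; Hadoop runs the same steps in parallel across many nodes:

```python
from collections import defaultdict

def map_step(document):
    # Map: emit an intermediate (key, value) pair for every word.
    return [(word, 1) for word in document.split()]

def shuffle(mapped_pairs):
    # Shuffle: group intermediate values by key (the framework does this).
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_step(groups):
    # Reduce: aggregate the values gathered for each key.
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data big insight", "big analytics"]
# In Hadoop, each document would be mapped on a different node in parallel.
mapped = [pair for doc in documents for pair in map_step(doc)]
counts = reduce_step(shuffle(mapped))
print(counts)  # {'big': 3, 'data': 1, 'insight': 1, 'analytics': 1}
```

The value of the model is that the Map and Reduce functions contain no distribution logic at all; the framework handles splitting, shuffling and fault tolerance.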
With Hadoop, payers have the ability to store a vast amount of data at a fairly inexpensive price point. Because storage and processing are distributed, the data can be accessed in a timely manner, and payers are able to interact effectively with their distributed data.
Within the Healthcare Payer market, there are many potential use cases for Hadoop and big data. Once the data is stored and linked, and relationships between the data sets are created, some of the benefits we anticipate include:
- Re-Admission Risk Analysis – One of the key predictors of re-admission rates is whether or not the patient has someone to help them at home. The ability to determine household information (through relationships in member data – for example, addresses and care-team relationships available within a master data management solution populated with data from a Hadoop cluster) would be very helpful to identify at-risk patients and provide targeted care post discharge. Data from social media outlets can provide quite a bit of household information.
- STARS Rating Improvement – In addition to missed care management plans/drug adherence, another interesting thing that could be better aligned is the member/provider link. Perhaps one specific provider is more successful at getting patients to adhere to diabetes management protocols, while another provider is not very successful at getting hip replacement patients to complete physical therapy. Being able to link the patient to the provider, along with the clinical data, can help identify where to focus remediation efforts for modifying provider or member behavior.
- Member Engagement – Taking householding further and putting the re-admission risk analysis to work: once payers are able to group members into a household and link the household to a specific address, they might be able to better predict how a new member at the same physical location will behave, and then target outreach to new members from the beginning, using engagement methodologies that have been successful for that location in the past.
In order to create the household, or determine how a member feels about a provider (which can then impact how they adhere to treatment plans) or understand how neighborhoods (which are groupings of households) may engage with their providers, payers need access to a vast amount of data. They also need to be able to sift through this data efficiently to create the relationship links as quickly as possible. Sifting through the data is enabled with Hadoop and Big Data. Relating the data can be done with master data management (which I will talk about next).
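To make the householding idea concrete, here is a deliberately simplified sketch that groups members by a normalized address. A real master data management match engine uses far more sophisticated fuzzy matching, and the field names here are illustrative:

```python
import re
from collections import defaultdict

def normalize_address(addr):
    # Very simplified normalization; real MDM match engines handle
    # misspellings, abbreviations and transpositions with fuzzy matching.
    addr = addr.lower().strip()
    addr = re.sub(r"\bstreet\b", "st", addr)   # canonicalize a common suffix
    addr = re.sub(r"\s+", " ", addr)           # collapse repeated whitespace
    return addr

def build_households(members):
    # Group members whose addresses normalize to the same key.
    households = defaultdict(list)
    for member in members:
        households[normalize_address(member["address"])].append(member["name"])
    return dict(households)

members = [
    {"name": "A. Jones", "address": "12 Main Street"},
    {"name": "B. Jones", "address": "12  Main St"},
    {"name": "C. Lee",  "address": "99 Oak St"},
]
households = build_households(members)
print(len(households))  # 2
```

The two Jones records land in the same household even though their addresses are written differently, which is exactly the kind of relationship link the use cases above depend on.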
Where is the best place to get started on a Big Data solution? The Big, Big Data Workbook addresses:
- How to choose the right big data project and make it bulletproof from the start – setting clear business and IT objectives, defining metrics that prove your project’s value, and being strategic about datasets, tools and hand-coding.
- What to consider when building your team and data governance framework – making the most of existing skills, thinking strategically about the composition of the team, and ensuring effective communication and alignment of the project goals.
- How to ensure your big data supply chain is lean and effective – establishing clear, repeatable, scalable, and continuously improving processes, and a blueprint for building the ideal big data technology and process architecture.
At the recent Bosch Connected World conference in Berlin, Stefan Bungart, Software Leader Europe at GE, presented a very interesting keynote, “How Data Eats the World” – a nod, I assume, to Marc Andreessen’s statement that “software is eating the world”. One of the key points he addressed was that generating actionable insight from Big Data – securely, in real-time, at every level from local to global, and at industrial scale – will be the key to survival. Companies that do not invest in data now will eventually end up like the consumer companies that missed the Internet: it will be too late.
As software and the value of data are becoming a larger part of the business value chain, the lines between different industries become more vague, or as GE’s Chairman and CEO Jeff Immelt once stated: “If you went to bed last night as an industrial company, you’re going to wake up today as a software and analytics company.” This is not only true for an industrial company, but for many companies that produce “things”: cars, jet engines, boats, trains, lawn-mowers, tooth-brushes, nut-runners, computers, network equipment, etc. GE, Bosch, Technicolor and Cisco are just a few of the industrial companies that offer an Internet of Things (IoT) platform. By offering an IoT platform, they enter the domain of companies such as Amazon (AWS), Google, etc. As Google and Apple move into new areas such as manufacturing cars and watches and offering insurance, industry lines are becoming blurred and service becomes the key differentiator. The best service offerings will be contingent upon the best analytics, and the best analytics require a complete and reliable data platform. Only companies that can leverage data will be able to compete and thrive in the future.
The idea of this “servitization” is that instead of selling assets, companies offer a service that utilises those assets. For example, Siemens offers hospitals a body-scanning service instead of selling them the MRI scanner, and Philips sells lighting services to cities and large companies, not light bulbs. These business models give suppliers an incentive to minimise disruption and repairs, as downtime now costs them money. It is also more attractive to put as much device functionality as possible in software, so that upgrades or adjustments can be made without replacing physical components. All of this is made possible by the fact that the devices are connected, generate data, and can be monitored and managed from another location. The data is used to analyse functionality, power consumption and usage, but can also be utilised to predict malfunctions, plan proactive maintenance, and so on.
So what impact does this have on data and on IT? First of all, the volumes are immense. Whereas the total global volume of, for example, Twitter messages is around 150 GB a day, ONE gas turbine with around 200 sensors generates close to 600 GB per day! Yet according to IDC, only 3% of potentially useful data is tagged and less than 1% is currently analysed. Secondly, the structure of the data is not always straightforward, and even similar devices produce different content (messages), as they can be on different software levels. This has an impact on the backend processing and on the reliability of the analysis of the data.
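It is worth doing the arithmetic on that turbine figure, because it shows why per-device data rates matter for backend design:

```python
GB = 1024 ** 3  # bytes in a (binary) gigabyte

# Figures cited above: ~200 sensors producing ~600 GB per day in total.
sensors = 200
daily_total_bytes = 600 * GB

per_sensor_per_day = daily_total_bytes / sensors          # bytes per sensor per day
per_sensor_per_second = per_sensor_per_day / (24 * 3600)  # sustained byte rate

print(per_sensor_per_day / GB)               # 3.0 GB per sensor per day
print(round(per_sensor_per_second / 1024))   # 36 (KB/s, sustained, per sensor)
```

Roughly 3 GB per sensor per day, or a sustained ~36 KB/s per sensor around the clock; multiply that by a fleet of turbines and the storage and ingestion problem becomes clear.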
The data also often needs to be put into context with other master data – such as assets, locations or customers – for real-time decision making. This is a non-trivial task. Next, governance is an aspect that needs top-level support. Who owns the data? Who may see or use the data? What data needs to be kept or archived, and for how long? These questions need to be answered and governed in IoT projects with the same priority as for the data in more traditional applications.
To summarize, managing data and mastering data governance is becoming one of the most important pillars for companies that want to lead in the digital age. Companies that fail to do so risk becoming the next Blockbuster or Kodak: companies that didn’t adapt quickly enough. To avoid this, companies need to evaluate whether their data platform can support a comprehensive data strategy – one that encapsulates scalability, quality, governance, security, ease of use and flexibility, and that enables them to choose the most appropriate data processing infrastructure, whether that is on premises, in the cloud, or most likely a hybrid combination of the two.
The emergence of the business cloud is making the need for data integration ever more pressing. Whatever your business, if your role is in the sales, marketing or service departments, chances are your productivity depends a great deal on the ability to move data quickly in and out of Salesforce and its ecosphere of applications.
With its in-built data transformation intelligence, the Data Wizard (click here to try the Beta version) changes the landscape of what traditional data loaders can do. The Data Wizard takes care of the following aspects, so that you don’t have to:
- Data Transformations: We built in over 300 standard data transformations so you don’t have to format the data before bringing it in (e.g., combining first and last names into full names, adding numeric columns for totals, splitting address fields into their separate components).
- Built-in intelligence: We automate the mapping of data into Salesforce for a range of common use cases (e.g., automatically mapping matching fields, intelligently auto-generating date format conversions, concatenating multiple fields).
- App-to-app integration: We incorporated pre-built integration templates to encapsulate the logic required for integrating Salesforce with other applications (e.g., single-click update of customer addresses in a Cloud ERP application based on Account addresses in Salesforce).
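As a rough illustration of the kinds of transformations listed above, here is a sketch in plain Python. The function names and the assumed input formats are mine, not the Data Wizard’s API:

```python
from datetime import datetime

def full_name(first, last):
    # Combine first and last names into a single full-name field.
    return f"{first.strip()} {last.strip()}"

def split_address(address):
    # Split "street, city, zip" into components (assumes comma-separated input).
    street, city, zip_code = [part.strip() for part in address.split(",")]
    return {"street": street, "city": city, "zip": zip_code}

def to_salesforce_date(value, source_format="%d/%m/%Y"):
    # Convert a source date string to the YYYY-MM-DD format Salesforce expects.
    return datetime.strptime(value, source_format).strftime("%Y-%m-%d")

print(full_name(" Ada ", "Lovelace"))                               # Ada Lovelace
print(split_address("1 Market St, San Francisco, 94105")["city"])   # San Francisco
print(to_salesforce_date("31/12/2015"))                             # 2015-12-31
```

The point of a tool like the Data Wizard is that transformations of this sort are pre-built and applied automatically during mapping, rather than hand-coded for every load.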
Unlike the other data loading apps out there, the Data Wizard doesn’t presuppose any technical ability on the part of the user. It was purpose-built to solve the needs of every type of user, from the Salesforce administrator to the business analyst.
Despite the simplicity the Data Wizard offers, it is built on the robust Informatica Cloud integration platform, providing the same reliability and performance that is key to the success of Informatica Cloud’s enterprise customers, who integrate over 5 billion rows of data per day. We invite you to try the Data Wizard for free, and contribute to the Beta process by providing us with your feedback.
In case you haven’t noticed, data integration is all the rage right now. Why? There are three major reasons for this trend that we’ll explore below, but a recent USA Today story focused on corporate data as a much more valuable asset than it was just a few years ago. Moreover, the sheer volume of data is exploding.
For instance, in a report published by the research company IDC, analysts estimated that the total amount of data created or replicated worldwide in 2012 would add up to 2.8 zettabytes (ZB). By 2020, IDC expects the annual data-creation total to reach 40 ZB, which would amount to a 50-fold increase from where things stood at the start of 2010.
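Those IDC figures imply a steep compound annual growth rate, which is easy to check:

```python
def cagr(start_value, end_value, years):
    # Compound annual growth rate implied by two data points.
    return (end_value / start_value) ** (1 / years) - 1

# IDC figures cited above: 2.8 ZB in 2012, 40 ZB forecast for 2020.
rate = cagr(2.8, 40, 2020 - 2012)
print(f"{rate:.0%}")  # 39%
```

Growing 2.8 ZB into 40 ZB over eight years works out to roughly 39% per year: data volume compounding at well over a third annually.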
But the growth of data is only a part of the story. Indeed, I see three things happening that drive interest in data integration.
First, the growth of cloud computing. The link between data integration and cloud computing is logical, considering that we’re relocating data to public clouds, and that this data must be synced with systems that remain on-premises.
Data integration providers, such as Informatica, have stepped up. They provide data integration technology that spans enterprises, managed service providers, and clouds, and that deals with the special needs of cloud-based systems. At the same time, data integration improves the way we do data governance and data quality.
Second, the growth of big data. A recent IDC forecast shows that the big data technology and services market will grow at a 26.4% compound annual growth rate to $41.5 billion through 2018 – about six times the growth rate of the overall information technology market. Additionally, by 2020, IDC believes that line-of-business buyers will help drive analytics beyond its historical sweet spot of relational data to the double-digit growth rates of real-time intelligence and the exploration and discovery of the unstructured world.
The world of big data revolves around data integration. The more that enterprises rely on big data, and the more that data needs to move from place to place, the more a core data integration strategy and technology is needed. That means you can’t talk about big data without talking about big data integration.
Data integration technology providers have responded with technology that keeps up with the volume of data that moves from place to place. As linked to the growth of cloud computing above, providers also create technology with the understanding that data now moves within enterprises, between enterprises and clouds, and even from cloud to cloud. Finally, data integration providers know how to deal with both structured and unstructured data these days.
Third, better understanding around the value of information. Enterprise managers always knew their data was valuable, but perhaps they did not understand the true value that it can bring.
With the growth of big data, we now have access to information that helps us drive our business in the right directions. Predictive analytics, for instance, allows us to take years of historical data and determine patterns that allow us to predict the future. Mashing up our business data with external data sources makes our data even more valuable.
Of course, data integration drives much of this growth. Thus the refocus on data integration approaches and tech. There are years and years of evolution still ahead of us, and much to be learned from the data we maintain.
Most marketers know that great email marketing has the best ROI in the business: the return on investment can be nearly $40 for every $1 spent, according to Adobe Systems.
Despite an onslaught of new marketing technologies, including the growth of social media marketing and mobile applications, the big news from a just-released report is that the importance of email marketing continues to grow.
60% of marketers in a new survey said that email is a critical enabler of products and services, and 20% said it was the primary revenue source for their business. These findings come from the 2015 State of Marketing report from Salesforce Marketing Cloud.
In the same survey, nearly 3 out of 4 marketers agreed that email marketing is core to their business. However, as any data-driven marketer knows, the real proof is in the measurement of performance.
An example of measured email marketing success is the astonishing revenue growth at global women’s apparel retailer BCBG. In the last year, the major global brand re-focused its email marketing efforts with tactics that improve the customer experience both on digital platforms and in-store.
By putting the customer at the center of its strategy, BCBG’s revenue from email marketing grew 20% in just one quarter, according to Direct Marketing News.
Email marketers are eager to build templates and opted-in contact lists, and to establish campaigns, content, frequency, timing, and metrics goals. All of these are important to an email marketer.
BCBG does all of this too – but they also know the value of being customer-ready in their engagement and communication.
What email marketers may not know is that the customer journey begins with great contact data. If you aren’t measuring the quality of your contact data, your email campaigns will not reach their potential.
Here are a few ‘secrets’ about email marketing and contact data that you don’t hear about often, but every data-driven marketer should know.
Up to 30 percent of your contacts’ email addresses change each year. What’s that hissing noise? That’s the air being let out of your email marketing plan. Everyone talks about the best practices for acquiring new contacts for your email lists, but what about taking care of the lists you have? It’s essential for marketers to validate their lists on a regular basis.
This can be done via email verification software and asking your list to opt back in on a regular basis. Asking for an opt-in after someone has been on your list for as long as a year (or more, depending on your audience) helps you know your message is welcome (reducing the chance it will be marked as spam) and re-establishes you in your customers’ minds.
Poor sender reputation can affect anyone and everyone who sends emails in bulk. Having a low sender reputation is similar to having a low credit score. If your credit score is low, you will be limited on what you can do in the financial world. In the same way, a low sender reputation limits what you can do with your email marketing campaigns.
This is a threat that you want to deal with before it happens. Many marketers only learn about the consequences after it happens – and they can no longer reach their audiences. Sales and customer satisfaction suffer as a result, and it can be a costly and time-consuming process to repair your sender reputation.
What are some of the things that affect sender reputation the most?
- Sending to email addresses that are no longer in use.
- Complaint rates (being marked as spam).
- Spam traps (email addresses created specifically to catch spammers). As Return Path has found, sending to even one spam trap can destroy your sender reputation.
You can check email lists before you send to them – in fact, that is the best time to do it. But it’s not enough to verify a list once. Verify email addresses repeatedly over time, as an address that was fine one day can later become invalid or even be repurposed as a spam trap.
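As a sketch of what regular re-verification might look like in practice (the syntax check and field names are simplifying assumptions of mine; commercial email verification services also check MX records, mailbox existence, and known spam-trap lists):

```python
import re

# Minimal syntax-level check only; real verification services go much
# further (MX records, mailbox pings, spam-trap databases).
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def needs_reverification(contact, current_year, max_age_years=1):
    # Flag a contact whose address fails the syntax check, or whose last
    # verification is older than the allowed age.
    if not EMAIL_RE.match(contact["email"]):
        return True
    return current_year - contact["last_verified"] >= max_age_years

contacts = [
    {"email": "ok@example.com", "last_verified": 2015},
    {"email": "stale@example.com", "last_verified": 2013},
    {"email": "broken@@example", "last_verified": 2015},
]
flagged = [c["email"] for c in contacts if needs_reverification(c, current_year=2015)]
print(flagged)  # ['stale@example.com', 'broken@@example']
```

Running a filter like this before every campaign, paired with periodic opt-in requests, keeps stale and malformed addresses from dragging down your sender reputation.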
Urgent changes are ahead for email marketers. A major new prediction from Gartner is that, by 2016, companies in all industries will have to compete primarily on customer experience.
Email marketing holds great potential as a revenue driver, and will continue to be an important channel for providing a great customer experience. 33% of customers surveyed said that email is the best method for building brand loyalty, according to Salesforce Marketing Cloud.
Read more about how customer experience is becoming a bigger part of marketers’ jobs in this new white paper, “The Secret to a Successful Customer Journey.”
Last fall, at a large industry conference, I had the opportunity to conduct a series of discussions with industry leaders in a portable video studio set up in the middle of the conference floor. As part of our exercise, we had a visual artist do freeform storyboarding of the discussion on large swaths of five-foot by five-foot paper, which we then reviewed at the end of the session. For example, in a discussion of cloud computing, the artist drew a rendering of clouds, raining data on a landscape below, illustrated by sketches of office buildings. At a glance, one could get a good read of where the discussion went, and the points that were being made.
Data visualization is one of those up-and-coming areas that has just begun to break into the mainstream. There are some powerful front-end tools that help users see, at a glance, trends and outliers through graphical representations – be they scattergrams, histograms, 3D diagrams or something else eye-catching. The “infographic” that has become so popular in recent years is an amalgamation of data visualization and storytelling. The bottom line is that technology is making it possible to generate these representations almost instantly, enabling relatively quick understanding of what the data may be saying.
The power that data visualization is bringing organizations was recently explored by Benedict Carey in The New York Times, who discussed how data visualization is emerging as the natural solution to “big data overload.”
This is much more than a front-end technology fix, however. Rather, Carey cites a growing body of knowledge emphasizing the development of “perceptual learning,” in which people working with large data sets learn to “see” patterns and interesting variations in the information they are exploring. It’s almost a return of the “gut” feel for answers, but developed for the big data era.
As Carey explains it:
“Scientists working in a little-known branch of psychology called perceptual learning have shown that it is possible to fast-forward a person’s gut instincts both in physical fields, like flying an airplane, and more academic ones, like deciphering advanced chemical notation. The idea is to train specific visual skills, usually with computer-game-like modules that require split-second decisions. Over time, a person develops a ‘good eye’ for the material, and with it an ability to extract meaningful patterns instantaneously.”
Video games may be leading the way in this – Carey cites the work of Dr. Philip Kellman, who developed a video-game-like approach to training pilots to instantly “read” instrument panels as a whole, versus pondering every gauge and dial. He reportedly was able to enable pilots to absorb within one hour what normally took 1,000 hours of training. Such perceptual-learning based training is now employed in medical schools to help prospective doctors become familiar with complicated procedures.
There are interesting applications for business, bringing together a range of talent to help decision-makers better understand the information they are looking at. In Carey’s article, an artist was brought into a medical research center to help scientists look at data in many different ways – to get out of their comfort zones. For businesses, it means getting away from staring at bars and graphs on their screens and perhaps turning data upside down or inside-out to get a different picture.
For those hoping to push through a hard-hitting analytics effort that will serve as a beacon of light within an otherwise calcified organization, you probably have your work cut out for you. Evolving into an organization that fully grasps the power and opportunities of data analytics requires cultural change, and this is a challenge organizations have only begun to grasp.
“Sitting down with pizza and coffee could get you around most of the technical challenges,” explained Sam Ransbotham, Ph.D., associate professor at Boston College, at a recent panel webcast hosted by MIT Sloan Management Review, “but the cultural problems are much larger.”
That’s one of the key takeaways from the panel, in which Ransbotham was joined by Tuck Rickards, head of the digital transformation practice at Russell Reynolds Associates, a digital recruiting firm, and Denis Arnaud, senior data scientist at Amadeus Travel Intelligence. The panel, which examined the impact of corporate culture on data analytics, was led by Michael Fitzgerald, contributing editor at MIT Sloan Management Review.
The path to becoming an analytics-driven company is a journey that requires transformation across most or all departments, the panelists agreed. “It’s fundamentally different to be a data-driven decision company than kind of a gut-feel decision-making company,” said Rickards. “Acquiring this capability to do things differently usually requires a massive culture shift.”
That’s because the cultural aspects of the organization – “the values, the behaviors, the decision making norms and the outcomes go hand in hand with data analytics,” said Ransbotham. “It doesn’t do any good to have a whole bunch of data processes if your company doesn’t have the culture to act on them and do something with them.” Rickards adds that bringing this all together requires an agile, open source mindset, with frequent, open communication across the organization.
So how does one go about building and promoting a culture that is conducive to getting the maximum benefit from data analytics? The most important piece is bringing aboard people who are aware of and skilled in analytics – both from within the enterprise and from outside, the panelists urged. Ransbotham points out that it may seem daunting, but it’s not. “This is not some gee-whiz thing,” he said. “We have to get rid of this mindset that these things are impossible. Everybody who has figured it out has figured it out somehow. We’re a lot more able to pick up on these things than we think — the technology is getting easier, it doesn’t require quite as much as it used to.”
The key to evolving corporate culture to becoming more analytics-driven is to identify or recruit enlightened and skilled individuals who can provide the vision and build a collaborative environment. “The most challenging part is looking for someone who can see the business more broadly, and can interface with the various business functions –ideally, someone who can manage change and transformation throughout the organization,” Rickards said.
Arnaud described how his organization – an online travel service – went about building an esprit de corps between data analytics staff and business staff to ensure the success of the company’s analytics efforts. “Every month all the teams would do a hands-on workshop, together in some place in Europe [Amadeus is headquartered in Madrid, Spain].” For example, a workshop might focus on a market analysis for a specific customer, and the participants would explore the entire end-to-end process for working with that customer, “from the data collection all the way through data acquisition, data crunching and so on. The one knowing the data analysis techniques would explain them, and the one knowing the business would explain that, and so on.” As a result of these monthly workshops, business and analytics team members have found it “much easier to collaborate,” he added.
Web-oriented companies such as Amadeus – or Amazon and eBay for that matter — may be paving the way with analytics-driven operations, but companies in most other industries are not at this stage yet, both Rickards and Ransbotham point out. The more advanced web companies have built “an end-to-end supply chain, wrapped around customer interaction,” said Rickards. “If you think of most traditional businesses, financial services or automotive or healthcare are a million miles away from that. It starts with having analytic capabilities, but it’s a real journey to take that capability across the company.”
The analytics-driven business of the near future – regardless of industry – will likely be staffed with roles not seen today. “If you are looking to re-architect the business, you may be imagining roles that you don’t have in the company today,” said Rickards. Along with the need for chief analytics officers, data scientists, and data analysts, there will be many new roles created. “If you are on the analytics side of this, you can be in an analytics group or a marketing group, with more of a CRM or customer insights title. You can be in planning or business functions. In a similar way on the technology side, there are people very focused on architecture and security.”
Ultimately, the demand will be for leaders and professionals who understand both the business and technology sides of the opportunity, Rickards continued. “You can have good people building a platform, and you can have good data scientists. But you better have someone on the top of that organization knowing the business purpose.”
Make it about me
I know I’m not alone in feeling unimportant when I contact large organisations and find they lack the customer view we’re all being told we can expect in a digital, multichannel age. I have to be proactive to get things done. I have to ask my insurance provider, for example, if my car premium reflects my years of loyalty, or if I’m due a multi-policy discount.
The time has come for insurers to focus on how they use data for true competitive advantage and customer loyalty. In this void, with a lack of tailored service, I will continue to shop around for something better. It doesn’t have to be like this.
Know data – no threat
A new report from KPMG, Transforming Insurance: Securing competitive advantage (download the pdf here), explores the viable use of data for predictive analytics in insurance. The report finds that almost two thirds of insurer respondents to its survey only use analytics for reporting what happened, rather than for driving future customer interactions. This is a process that tends to take place in distinct data silos, focused on an organisation’s internal business divisions, rather than on customer engagements.
The report missed a critical point. The discussion for insurers is not around data analytics – to an extent they do that already. The focus needs to shift quickly to understanding the data they already have and using it to augment their capabilities. ‘Transformation’ is a huge step. ‘Augmentation’ can be embarked on with no delay and at relatively low costs. It will keep insurers ahead of new market threats.
New players have no locked-down idea about how insurance models should work, but they do recognise how to identify customer needs through the data their customers freely provide. Tesco made a smooth transition from Club Card to insurance provider because it had the data necessary to market the propositions its customers needed. It knew a lot about them. What is there to stop other data-driven organisations like Amazon, Google, and Facebook from entering the market? The barrier to entry has never been lower, and those with a data-centric understanding of their customers are poised to scramble over it.
Changing the design point – thinking data first
There is an immediate strategic need for the insurance sector to view data as more than functional – information to define risk categories and set premiums. In the light of competitive threats, the insurance industry has to recognise and harness the business value of the vast amounts of data it has collected and continues to gather. A new design point is needed – one that creates a business architecture which thinks Data First.
To adopt a data first architecture is to augment the capabilities a company already has. The ‘nirvana’ business model for the insurer is to expand customer propositions beyond the individual (party, car, house, health, annuity) to the household (similar profiles, easier profiling). Based on the intelligent use of data, a policy-centric model grows into customer-centricity, with a viable evolution path to household-centricity, untied to legacy limitations.
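As a minimal sketch of that evolution path, the snippet below regroups a flat, policy-centric list of records into a household-centric view. The records, field names and the use of a shared address as the household key are all illustrative assumptions; a real insurer would need proper household matching across line-of-business systems.

```python
from collections import defaultdict

# Hypothetical flattened policy records; in practice these would come
# from separate line-of-business systems.
policies = [
    {"holder": "A. Smith", "address": "1 High St", "product": "car"},
    {"holder": "B. Smith", "address": "1 High St", "product": "home"},
    {"holder": "C. Jones", "address": "9 Low Rd",  "product": "car"},
]

def by_household(records, key="address"):
    """Group policy-centric records into a household-centric view."""
    households = defaultdict(list)
    for rec in records:
        households[rec[key]].append(rec)
    return dict(households)

households = by_household(policies)
# Households holding only one product line are cross-sell candidates.
cross_sell = {addr: recs for addr, recs in households.items()
              if len({r["product"] for r in recs}) == 1}
print(sorted(cross_sell))
```

Once records are keyed by household rather than by policy, the cross-sell and up-sell connections discussed below fall out of a simple grouping rather than requiring a wholesale transformation.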
Win back the customer
Changing the data architecture is a pragmatic solution to a strategic problem. By putting data first, insurers can find the golden nuggets already sitting in their systems. They can make the connections across each customer’s needs and life-stage. By trusting the data, insurers can elevate the quality of their customer service to a level of real personal care, enabling them to secure the loyalty of their customers before the market starts to rumble as new players make their pitch.
Focusing on a data architecture, the organisation also takes complexity out of the eco-system and creates headroom for innovation – fresh ideas around cross-sell and up-sell, delivering more complete and loyalty-generating service offerings to customers. Loyalty fosters trust, driving stronger relationships between insurer and client.
Insurers have the power – they have the data – to ensure that the next time someone like me makes contact, they can impress me, sell me more, make me happier and, above all, make me stay.