Category Archives: Data Aggregation

Can You Find True Love Using A Wearable Device?

How do you know if you have found ‘true love’?

Biologists and psychologists tell us that when we are struck by Cupid's arrow, our body is reacting to a set of chemicals released in the brain that evoke feelings of lust, attraction and attachment.[1] When those chemicals are released, our bodies respond. Our hearts race, blood pumps through our veins, faces flush, body temperatures rise. Some say it feels like electricity conducting all over the skin. The flood of emotions may cloud our judgment and may even cause us to make choices that others consider unreasonable. Sound familiar?

But what causes our brains to react to one person and not another?  Are we predisposed to how certain people look or smell?  Do our genes play a role in determining an affinity toward a body type or shape?

Pheromone research has shown how sensors in our nose can smell whether or not someone's immune system complements our own, based on the scent of urine and sweat. Meaning, if someone's immune system is too similar to our own, that individual won't smell good to us. We are more likely to prefer the smell of someone whose immune system is different. Is our genetic code programming our instincts to preselect who we should mate with so our offspring have a higher chance of surviving?

It is probably not surprising that most men are attracted to women with symmetrical faces and hourglass figures. Genetic research hints that men's predispositions are also based on a genetic code. There is a correlation between asymmetric facial characteristics and genetic disorders, as well as between waist-to-hip ratios and fertility. Depending on your stage in life, these characteristics could carry a weighting factor in how your brain responds to the smell of the perfect pheromone and to how someone appears. And some argue it is all influenced by body language, voice tone and the actual words used in dialogue.[2]

Psychologists report it takes only two to four minutes to decide if you are falling in love with someone. Even if you dismiss some or accept all of the possibilities I am presenting, experiencing love is shaped by a variety and intensity of senses, interpretations and emotions combined in a short period of time. If you are a data nerd like me, variety, volume and velocity of 'signals' begin to sound like a Big Data marketing pitch. This really is an application of predictive analytics using different data types, large volumes of data and real-time decision-making algorithms. But I'm actually more interested in how affective computing, wearable devices and analytics could help determine whether what you feel is actually 'true love' or just a bad case of indigestion.

Affective computing, according to researcher Rosalind Picard,[3] gives a computer the ability to recognize and express emotions, to develop that ability, and to regulate and make use of emotions. When applied to wearable devices that can listen to how you talk, measure blood pressure, detect changes in heart and respiration rate and even measure electrodermal responses, is it possible that technology could sense when your body is responding to the chemicals of love?

What about mood rings, you may ask? Mood rings, the original affective wearable device that grew popular in the 1970s, changed color based on your mood. Unfortunately, mood rings respond only to body temperature, and researchers[4] have shown that physiological patterns cannot be determined by body temperature alone. To truly differentiate an emotion such as 'true love,' you need to collect multiple physiological signals and detect a pattern using multivariate pattern recognition algorithms. And if you only have two to four minutes, it pretty much needs to calculate the chances of 'true love' in real time to prevent a life-altering mistake.
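For the data nerds, here is a rough sketch of what that kind of multivariate, multi-signal pattern recognition could look like. It is purely illustrative: the feature set (heart rate, electrodermal activity, skin temperature, respiration rate), the sample values and the simple logistic-regression model are all assumptions made for the sake of the example, not anything a real wearable vendor ships.

```python
# Illustrative sketch only: classifying a short window of wearable signals with a
# multivariate model, instead of relying on a single reading the way a mood ring does.
# Feature names, sample data and the model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [mean heart rate (bpm), electrodermal activity (microsiemens),
#            skin temperature (deg C), respiration rate (breaths/min)]
training_windows = np.array([
    [72, 0.8, 33.1, 14],   # labeled "baseline"
    [76, 1.1, 33.4, 15],   # baseline
    [98, 4.2, 34.0, 19],   # labeled "aroused/attracted"
    [104, 5.0, 34.3, 21],  # aroused/attracted
])
labels = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(training_windows, labels)

# A new two-to-four-minute window streamed from the device, summarized into the
# same four features and scored "in real time".
new_window = np.array([[101, 4.6, 34.1, 20]])
probability = model.predict_proba(new_window)[0, 1]
print(f"Estimated probability of an 'attraction' pattern: {probability:.2f}")
```

The point is that the decision comes from the combination of signals, not from any single reading.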

The evolution of wearable technology has reached medical grade, allowing parents to detect when their children are about to have an epileptic seizure or are experiencing acute levels of stress. When tuned to love-seekers' cues, is it possible that this same technology could send an audio or visual signal to your smartphone alerting you as to whether this person is a 'true love' candidate? Or glow red when you are in the proximity of someone who is experiencing similar physiological changes? Maybe this is the next application for match-making companies such as eHarmony or Match.com.

The reality is this: assuming the data is clean and accurate, safe from violating any data privacy concerns and truly connected to your physiological signals, wearable device technology that could detect the close proximity of 'true love' is probably five years out. It is more likely to show up in a popular science fiction film than at an Apple store in the near term. But when it does arrive, picture your smartphone telling you how close a potential candidate is and where the nearest flower shop is, cross-referencing facial recognition with Facebook photos and relationship status (assuming it is accurate), and queuing up an iTunes recommendation of 'Love Is In The Air' by John Paul Young. 'True love' would be only two to four minutes away.

[1] http://www.bbc.co.uk/science/hottopics/love/

[2] http://www.youramazingbrain.org/lovesex/sciencelove.htm

[3] R. Picard, Affective Computing, MIT Press, 2000, pp. 227-239

[4] Cacioppo and Tassinary (1990)


The Quality of the Ingredients Makes the Dish: The Same Applies to Data Quality

Data Quality Leads to Other Integrated Benefits

In a previous life, I was a pastry chef in a now-defunct restaurant. One of the things I noticed while working there (and frankly while cooking at home) is that the better the ingredients, the better the final result. If we used poor quality apples in the apple tart, we ended up with a soupy, flavorless mess with a chewy crust.

The same analogy can be applied to data analytics. With poor quality data, you get poor results from your analytics projects. The companies that implement strong analytic solutions with near real-time access to consumer trends are the same companies that can run successful, up-to-the-minute targeted marketing campaigns. The Data Warehousing Institute estimates that data quality problems cost U.S. businesses more than $600 billion a year.

The business impact of poor data quality should not be underestimated. If not identified and corrected early on, defective data can contaminate all downstream systems and information assets, jacking up costs, jeopardizing customer relationships, and causing imprecise forecasts and poor decisions.

  • To help you quantify: Let's say your company receives 2 million claims per month with 377 data elements per claim. Even at an error rate of 0.001, the claims data contains 754,000 errors per month and more than 9 million errors per year! If you determine that 10 percent of the data elements are critical to your business decisions and processes, you still must fix almost 1 million errors each year.
  • What is your exposure to these errors? Let's estimate the risk at $10 per error (including the staff time required to fix the error downstream after a customer discovers it, the loss of customer trust and loyalty, and erroneous payouts). Your company's risk exposure to poor quality claims data is roughly $10 million a year; the arithmetic is spelled out in the sketch below.
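Here is that arithmetic spelled out in a few lines of Python. Every figure comes straight from the example above; only the variable names are mine.

```python
# Worked version of the claims-data example above; all figures come from the text.
claims_per_month = 2_000_000
elements_per_claim = 377
error_rate = 0.001
critical_fraction = 0.10
cost_per_error = 10  # dollars: rework, lost trust and loyalty, erroneous payouts

elements_per_month = claims_per_month * elements_per_claim      # 754,000,000
errors_per_month = elements_per_month * error_rate               # 754,000
errors_per_year = errors_per_month * 12                          # ~9.05 million
critical_errors_per_year = errors_per_year * critical_fraction   # ~905,000
risk_exposure = critical_errors_per_year * cost_per_error        # roughly $9-10 million

print(f"Errors per year: {errors_per_year:,.0f}")
print(f"Critical errors to fix per year: {critical_errors_per_year:,.0f}")
print(f"Annual risk exposure: ${risk_exposure:,.0f}")
```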

Once your company values quality data as a critical resource, it becomes much easier to perform high-value analytics that have an impact on your bottom line. Start by creating a data quality program. Data is a critical asset in the information economy, and the quality of a company's data is a good predictor of its future success.


How to Get the Biggest Returns from Your Hadoop and Big Data Investments in 2015

2014 was the year that Big Data went mainstream, from conversations asking "What is Big Data?" to "How do we harness the power of Big Data to solve real business problems?" It seemed like everyone jumped on the Big Data bandwagon, from new software start-ups offering the "next generation" of predictive analytic applications to traditional database, data quality, business intelligence and data integration vendors, all calling themselves Big Data providers. The truth is, they all play a role in this Big Data movement.

Earlier in 2014, Wikibon estimated that the Big Data market is on pace to top $50 billion in 2017, which translates to a 38% compound annual growth rate over the six-year period from 2011 (the first year Wikibon sized the Big Data market) to 2017. Most of the excitement around Big Data has centered on Hadoop, as early adopters who experimented with open source versions quickly grew to adopt enterprise-class solutions from companies like Cloudera™, Hortonworks™, MapR™ and Amazon's Redshift™ to address real-world business problems including: (more…)


Remembering Big Data Gravity – PART 2

I ended my previous blog wondering if awareness of Data Gravity should change our behavior. While Data Gravity adds Value to Big Data, I find that the application of that Value is underexplained.

Exponential growth of data has naturally led us to want to categorize it into facts, relationships, entities, etc. This sounds very elementary. While this happens so quickly in our subconscious minds as humans, it takes significant effort to teach this to a machine.

A friend tweeted this to me last week: "I paddled out today, now I look like a lobster." Since this tweet, Twitter has inundated my friend and me with promotions from Red Lobster. It is because the machine deconstructed the tweet: paddled <PROPULSION>, today <TIME>, like <PREFERENCE> and lobster <CRUSTACEANS>. Putting these together, the machine decided that the keyword was lobster. You and I both know that my friend was not talking about lobsters.

You may think this is just a funny edge case. You can confuse any computer system if you try hard enough, right? Unfortunately, this isn't an edge case. The 140-character limit has not just changed people's tweets; it has changed how people talk on the web. More and more information is communicated in smaller and smaller amounts of language, and this trend is only going to continue.

When will the machine understand that “I look like a lobster” means I am sunburned?

I believe the reason there are not hundreds of companies exploiting machine-learning techniques to generate a truly semantic web is the lack of weighted edges in publicly available ontologies. Keep reading; it will all make sense in about five sentences. Lobster and Sunscreen are seven hops away from each other in DBpedia – way too many to draw any correlation between the two. For that matter, any article in Wikipedia is connected to any other article within about 14 hops, and that's the extreme. Completely unrelated concepts are often just a few hops from each other.

But by analyzing massive amounts of written and spoken English text from articles, books, social media and television, it is possible for a machine to automatically draw a correlation and create a weighted edge between the Lobster and Sunscreen nodes that effectively short-circuits the seven hops in between. Many organizations are dumping massive amounts of facts, without weights, into our repositories of total human knowledge, naïvely attempting to categorize everything without realizing that those repositories need to mimic how humans actually use knowledge.

For example: if you hear the name Babe Ruth, what is the first thing that pops to mind? A Roman Catholic from Maryland born in the 1800s, or a famous baseball player?

If you look at Wikipedia today, he is listed under 28 categories, each of them with the same level of attachment: 1895 births | 1948 deaths | American League All-Stars | American League batting champions | American League ERA champions | American League home run champions | American League RBI champions | American people of German descent | American Roman Catholics | Babe Ruth | Baltimore Orioles (IL) players | Baseball players from Maryland | Boston Braves players | Boston Red Sox players | Brooklyn Dodgers coaches | Burials at Gate of Heaven Cemetery | Cancer deaths in New York | Deaths from esophageal cancer | Major League Baseball first base coaches | Major League Baseball left fielders | Major League Baseball pitchers | Major League Baseball players with retired numbers | Major League Baseball right fielders | National Baseball Hall of Fame inductees | New York Yankees players | Providence Grays (minor league) players | Sportspeople from Baltimore | Maryland | Vaudeville performers.

Now imagine how confused a machine would get when the distance of unweighted edges between nodes is used as a scoring mechanism for relevancy.

If I were to design an algorithm that uses weighted edges (on a scale of 1-5, with 5 being the highest), the same search would yield a much more obvious result.

1895 births [2]| 1948 deaths [2]| American League All-Stars [4]| American League batting champions [4]| American League ERA champions [4]| American League home run champions [4]| American League RBI champions [4]| American people of German descent [2]| American Roman Catholics [2]| Babe Ruth [5]| Baltimore Orioles (IL) players [4]| Baseball players from Maryland [3]| Boston Braves players [4]| Boston Red Sox players [5]| Brooklyn Dodgers coaches [4]| Burials at Gate of Heaven Cemetery [2]| Cancer deaths in New York [2]| Deaths from esophageal cancer [1]| Major League Baseball first base coaches [4]| Major League Baseball left fielders [3]| Major League Baseball pitchers [5]| Major League Baseball players with retired numbers [4]| Major League Baseball right fielders [3]| National Baseball Hall of Fame inductees [5]| New York Yankees players [5]| Providence Grays (minor league) players [3]| Sportspeople from Baltimore [1]| Maryland [1]| Vaudeville performers [1].
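To make that concrete, here is a small, purely hypothetical sketch of how a relevancy ranking changes once edges carry weights. The weights are the illustrative 1-5 values above; the scoring is simply "sort by weight," which is far simpler than anything a real semantic search engine would do.

```python
# Illustrative only: ranking a few of Babe Ruth's Wikipedia categories by edge weight
# (values from the 1-5 scale above) versus treating every edge equally.
weighted_categories = {
    "National Baseball Hall of Fame inductees": 5,
    "New York Yankees players": 5,
    "Major League Baseball pitchers": 5,
    "American League home run champions": 4,
    "American Roman Catholics": 2,
    "Sportspeople from Baltimore": 1,
    "Vaudeville performers": 1,
}

# Unweighted view: every category counts the same, so the machine has no reason
# to prefer one category over another and the "top" results are arbitrary.
unweighted = {category: 1 for category in weighted_categories}

def top_categories(scores, n=3):
    return sorted(scores, key=scores.get, reverse=True)[:n]

print("Unweighted top 3:", top_categories(unweighted))
print("Weighted top 3:  ", top_categories(weighted_categories))
```

With equal weights, "Vaudeville performers" is just as relevant as "New York Yankees players"; with weights, the baseball categories float to the top, which is roughly how a human would answer.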

Now the machine starts to think more like a human. The above example forces us to ask ourselves about the relevancy, a.k.a. the Value, of the response. This is where I think Data Gravity's Value becomes relevant.

You can contact me on Twitter @bigdatabeat with your comments.


BCBS 239 – What Are Banks Talking About?

I participated in an EDM Council panel on BCBS 239 earlier this month in London and New York. The panel consisted of Chief Risk Officers, Chief Data Officers, and information management experts from the financial industry. BCBS 239 sets out 14 key principles requiring banks to aggregate their risk data, to allow banking regulators to help avoid another 2008-style crisis, with a deadline of January 1, 2016. Earlier this year, the Basel Committee on Banking Supervision released the findings from a self-assessment by the Global Systemically Important Banks (G-SIBs) of their readiness against 11 of the 14 BCBS 239 principles.

Given all of the investments the banking industry has made to improve data management and governance practices for ongoing risk measurement and management, I was expecting to hear signs of significant progress. Unfortunately, based on what I heard, there is still much work to be done to satisfy BCBS 239. Here is what we discussed in London and New York.

  • It was clear that the "Data Agenda" has shifted quite considerably from IT to the business, as evidenced by the number of risk, compliance, and data governance executives in the room. Though it's a good sign that the business is taking more ownership of data requirements, there was limited discussion of the importance of having capable data management technology, infrastructure, and architecture to support a successful data governance practice: specifically, data integration; data quality and validation; master and reference data management; metadata to support data lineage and transparency; and business glossary and data ontology solutions to govern the terms and definitions of required data across the enterprise.
  • On accessing, aggregating, and streamlining the delivery of risk data from disparate systems across the enterprise: much of today's complexity comes from point-to-point integrations that access the same data from the same systems over and over again, creating points of failure and increasing the cost of maintaining the current state. The idea of replacing those point-to-point integrations with a centralized, scalable, and flexible data hub was clearly recognized as a need; however, it is difficult to envision given the enormous work required to modernize the current state.
  • Data accuracy and integrity continue to be a concern for generating accurate and reliable risk data that meets normal and stress/crisis reporting requirements. Many in the room acknowledged heavy reliance on manual methods implemented over the years. Automating data integration and the onboarding of risk data from disparate systems across the enterprise is important as part of Principle 3; however, much of what's in place today was built as one-off projects against the same systems, accessing the same data and delivering it to hundreds if not thousands of downstream applications in an inconsistent and costly way.
  • Data transparency and auditability was a popular conversation point in the room. The need to provide comprehensive data lineage reports that explain how data is captured, from where, how it is transformed, and how it is used remains a concern, despite advancements in technical metadata solutions, because those solutions are not integrated with existing risk management data infrastructure.
  • Lastly, there were big concerns about the ability to capture and aggregate all material risk data across the banking group and deliver it by business line, legal entity, asset type, industry, region and other groupings, to support identifying and reporting risk exposures, concentrations and emerging risks. This master and reference data challenge unfortunately cannot be solved by external data utility providers alone, because banks have legal entity, client, counterparty, and securities instrument data residing in existing systems that must be able to cross-reference any external identifier for consistent reporting and risk measurement.

To sum it up, most banks admit they have a lot of work to do. Specifically, they must address gaps across their data governance and technology infrastructure. BCBS 239 is the latest and biggest data challenge facing the banking industry, and not just for the G-SIBs: mid-size firms will also be required to provide similar transparency to regional regulators who are adopting BCBS 239 as a framework for their local markets. BCBS 239 is not just a deadline; the principles it sets forth are a key requirement for banks to ensure they have the right data to manage risk and to give industry regulators the transparency they need to monitor systemic risk across the global markets. How ready are you?


The Pros and Cons: Data Integration from the Bottom-Up and the Top-Down

Data Integration from the Bottom-Up and the Top-Down

What are the first steps of a data integration project?  Most are at a loss.  There are several ways to approach data integration, and your approach depends largely upon the size and complexity of your problem domain.

With that said, the basic approaches to consider are from the top-down, or the bottom-up.  You can be successful with either approach.  However, there are certain efficiencies you’ll gain with a specific choice, and it could significantly reduce the risk and cost.  Let’s explore the pros and cons of each approach.

Top-Down

Approaching data integration from the top-down means moving from the high-level integration flows down to the data semantics. Thus, you define an approach, perhaps even a tool set (using requirements), and then define the flows, which are decomposed down to the raw data.

The advantages of this approach include:

The ability to spend time defining the higher levels of abstraction without being limited by the underlying integration details. Those charged with designing the integration flows don't have to deal with the specifics of the underlying sources and targets until later, as they break down the flows.

The disadvantages of this approach include:

The data integration architect does not consider the specific needs of the source or target systems, in many instances, and thus some rework around the higher level flows may have to occur later.  That causes inefficiencies, and could add risk and cost to the final design and implementation.

Bottom-Up

For the most part, this is the approach that most choose for data integration.  Indeed, I use this approach about 75 percent of the time.  The process is to start from the native data in the sources and targets, and work your way up to the integration flows.  This typically means that those charged with designing the integration flows are more concerned with the underlying data semantic mediation than the flows.

The advantages of this approach include:

It’s typically a more natural and traditional way of approaching data integration.  Called “data-driven” integration design in many circles, this initially deals with the details, so by the time you get up to the integration flows there are few surprises, and there’s not much rework to be done.  It’s a bit less risky and less expensive, in most cases.

The disadvantages of this approach include:

Starting with the details means that you could get so involved in them that you miss the larger picture, and the end state of your architecture may appear poorly planned when all is said and done. Of course, that depends on the types of data integration problems you're looking to solve.

No matter which approach you leverage, with some planning and some strategic thinking, you’ll be fine.  However, there are different paths to the same destination, and some paths are longer and less efficient than others.  As you pick an approach, learn as you go, and adjust as needed.


Once Again, Data Integration Proves Critical to Data Analytics

When it comes to cloud-based data analytics, a recent study by Ventana Research (as found in Loraine Lawson’s recent blog post) provides a few interesting data points.  The study reveals that 40 percent of respondents cited lowered costs as a top benefit, improved efficiency was a close second at 39 percent, and better communication and knowledge sharing also ranked highly at 34 percent.

Ventana Research also found that organizations cite a unique and more complex reason to avoid cloud analytics and BI.  Legacy integration work can be a major hindrance, particularly when BI tools are already integrated with other applications.  In other words, it’s the same old story:

You can’t make sense of data that you can’t see.

Data Integration is Critical to Data Analytics

The ability to deal with existing legacy systems when moving to concepts such as big data or cloud-based analytics is critical to the success of any enterprise data analytics strategy. However, most enterprises don't focus on data integration as much as they should, hoping they can solve the problem with ad hoc approaches.

These approaches rarely work as well as they should, if at all. Thus, any investment made in data analytics technology is often diminished because the BI tools or applications that leverage analytics can't see all of the relevant data. As a result, the available data tells only part of the story, those who leverage data analytics stop trusting the information, and that means failure.

What's frustrating to me about this issue is that the problem is easily solved. Those in the enterprise charged with standing up data analytics should put a plan in place to integrate new and legacy systems. As part of that plan, there should be a common understanding of business concepts/entities such as customer, sale and inventory, and all of the data related to these concepts/entities must be visible to the data analytics engines and tools. This requires a data integration strategy, and technology.

As enterprises embark on a new day of more advanced and valuable data analytics technology, largely built upon the cloud and big data, the data integration strategy should be systemic. This means mapping a path for the data from the source legacy systems to the views that the data analytics systems should include. What's more, this data should be available in real operational time, because data analytics loses value as the data becomes older and out of date. We operate in a real-time world now.

So, the work ahead requires planning to occur at both the conceptual and physical levels to define how data analytics will work for your enterprise.  This includes what you need to see, when you need to see it, and then mapping a path for the data back to the business-critical and, typically, legacy systems.  Data integration should be first and foremost when planning the strategy, technology, and deployments.


The Apple Watch – the Newest Data-First Device

The Data-First Consumer

I have to admit it: I'm intrigued by the new Apple Watch. I'm not going to go into all the bells and whistles, which Apple CEO Tim Cook describes as a "mile long." Suffice it to say, Apple has once again pushed the boundaries of what an existing category can do.

The way I see it, the biggest impact of the Apple Watch will come from how it will finally make data fashionable. For starters, the three Apple Watch models and interchangeable bands will actually make it hip to wear a watch again. But I think the ramifications of this genuinely good-looking watch go well beyond skin deep. The Cupertino company has engineered its watch and its mobile software to recognize related data and seamlessly share it across relevant apps. Those capabilities allow it to, for instance, monitor our fitness and health, show us where we parked the car, open the door to our hotel room and control our entertainment centers.

Think what this could mean for any company with a Data-First point of view. I like to say that a data-first POV changes everything. With it, companies can unleash the killer app, killer marketing campaign and killer sales organization.

The Apple Watch

The Apple Watch finally gives people a reason to have that killer app with them at all times, wherever they are and whatever they’re doing. Looked at a different way, it could unleash a new culture of Data-Only consumers: People who rely on being told what they need to know, in the right context.

But while Apple may be the first to push this Data-First POV in unexpected ways, history has shown it won't be the last. It's time for every company to tap into the newest fashion accessory and make data their first priority.


In a Data First World, Knowledge Really Is Power!

Knowledge Really IS Power!

I have two quick questions for you. First, can you name the top three factors that will increase your sales or boost your profit? And second, are you sure about that?

That second question is a killer because most people — no matter if they’re in marketing, sales or manufacturing — rely on incomplete, inaccurate or just plain wrong information. Regardless of industry, we’ve been fixated on historic transactions because that’s what our systems are designed to provide us.

“Moneyball: The Art of Winning an Unfair Game” gives a great example of what I mean. The book (not the movie) describes Billy Beane hiring MBAs to map out the factors that would win a baseball game. They discovered something completely unexpected: That getting more batters on base would tire out pitchers. It didn’t matter if batters had multi-base hits, and it didn’t even matter if they walked. What mattered was forcing pitchers to throw ball after ball as they faced an unrelenting string of batters. Beane stopped looking at RBIs, ERAs and even home runs, and started hiring batters who consistently reached first base. To me, the book illustrates that the most useful knowledge isn’t always what we’ve been programmed to depend on or what is delivered to us via one app or another.

For years, people across industries have turned to ERP, CRM and web analytics systems to forecast sales and acquire new customers. By their nature, such systems are transactional, forcing us to rely on history as the best predictor of the future. Sure it might be helpful for retailers to identify last year’s biggest customers, but that doesn’t tell them whose blogs, posts or Tweets influenced additional sales. Isn’t it time for all businesses, regardless of industry, to adopt a different point of view — one that we at Informatica call “Data-First”? Instead of relying solely on transactions, a data-first POV shines a light on interactions. It’s like having a high knowledge IQ about relationships and connections that matter.

A data-first POV changes everything. With it, companies can unleash the killer app, the killer sales organization and the killer marketing campaign. Imagine, for example, if a salesperson meeting a new customer knew that person's concerns, interests and business connections ahead of time. Couldn't that knowledge — gleaned from Tweets, blogs, LinkedIn connections, online posts and transactional data — provide a window into the problems the prospect wants to solve?

That’s the premise of two startups I know about, and it illustrates how a data-first POV can fuel innovation for developers and their customers. Today, we’re awash in data-fueled things that are somehow attached to the Internet. Our cars, phones, thermostats and even our wristbands are generating and gleaning data in new and exciting ways. That’s knowledge begging to be put to good use. The winners will be the ones who figure out that knowledge truly is power, and wield that power to their advantage.


3 Barriers to Delivering Omnichannel Experiences

 

This blog post initially appeared on CMSwire.com and is reblogged here with their consent.


I was recently searching for fishing rods for my 5-year-old son and his friends to use at our neighborhood pond. I know nothing about fishing, so I needed to get educated. First up, a Google search on my laptop at home. Then, I jostled between my phone, tablet and laptop visiting websites, reading descriptions, looking at photos and reading reviews. Offline, I talked to friends and visited local stores.

The product descriptions weren’t very helpful. What is a “practice casting plug”? Turns out, this was a great feature! Instead of a hook, the rod had a rubber fish to practice casting safely. What a missed opportunity for the retailers who didn’t share this information. I bought the fishing rods from the retailer that educated me with valuable product information and offered free three to five day shipping.

What does this mean for companies who sell products across multiple channels?

Virtually everyone is a cross-channel shopper: 95 percent of consumers frequently or at least occasionally shop a retailer’s website and store, according to the “Omni-Channel Insights” study by CFI Group. In the report, “The Omnichannel Opportunity: Unlocking the Power of the Connected Customer,” Deloitte predicts more than 50 percent of in-store purchases will be influenced digitally by the end of 2014.

Because of all this cross-channel activity, a new term is trending: omnichannel.

What Does Omnichannel Mean?

Let’s take a look back in time. Retailers started with one channel — the brick-and-mortar store. Then they introduced the catalog and call center. Then they built another channel — e-Commerce. Instead of making it an extension of the brick-and-mortar experience, many implemented an independent strategy, including operations, resources, technology and inventory. Retailers recently started integrating brick-and-mortar and e-Commerce channels, but it’s not always consistent. And now they are building another channel — mobile sites and apps.

Multichannel is a retailer-centric, transaction-focused view of operations. Each channel operates and aims to boost sales independently. Omnichannel is a customer-centric view. The goal is to understand through which channels customers want to engage at each stage of the shopping journey and enable a seamless, integrated and consistent brand experience across channels and devices.

Shoppers expect an omnichannel experience, but delivering it efficiently isn’t easy. Those responsible for enabling an omnichannel experience are encountering barriers. Let’s look at the three barriers most relevant for marketing, merchandising, sales, customer experience and information management leaders.

Barrier #1: Shift from product-centric to customer-centric view

Many retailers focus on how many products are sold by channel. Three key questions are:

  1. How can we drive store sales growth?
  2. How can we drive online sales growth?
  3. What’s our mobile strategy?

This is the old way of running a retail business. The new way is analyzing customer data to understand how customers are engaging and transacting across channels.

Why is this difficult? At the Argyle eCommerce Leadership Forum, Jason Allen, Vice President of Multichannel at GameStop Corp., shared the $8.8 billion video game retailer's approach to overcoming this barrier. While online represented 3 percent of sales, no one had measured how much the online channel was influencing the overall business.

They started by collecting customer data for analytics to find out who their customers were and how they interacted with GameStop online and in 6,600 stores across 15 countries. The analysis revealed customers used multiple channels: 60 percent engaged on the web, and 26 percent of web visitors who didn't buy online bought in-store within 48 hours.

This insight changed the perception of the online channel as a small contributor. Now they use two metrics to measure performance. While the online channel delivers 3 percent of sales, it influences 22 percent of overall business.

Take Action: Start collecting customer data. Analyze it. Learn who your customers are. Find out how they engage and transact with your business across channels.

Barrier #2: Shift from fragmented customer data to centralized customer data everyone can use

Nikki Baird, Managing Partner at Retail Systems Research (RSR), told me she believes the fundamentals of retail are changing from “right product, right price, right place, right time” to:

  1. Who is my customer?
  2. What are they trying to accomplish?
  3. How can we help?

According to RSR, creating a consistent customer experience remains the most valued capability for retailers, but 54 percent indicated their biggest inhibitor was not having a single view of the customer across channels.

Why is this difficult? A $12 billion specialty retailer known for its relentless focus on customer experience, with 200 stores and an online channel, had to overcome this barrier. To deliver a high-touch omnichannel experience, they needed to replace the many views of the customer with one unified customer view. They invested in master data management (MDM) technology and competencies.


Now they bring together customer, employee and product data scattered across 30 applications (e.g., e-Commerce, POS, clienteling, customer service, order management) into a central location, where it’s managed and shared on an ongoing basis. Employees’ applications are fueled with clean, consistent and connected customer data. They are able to deliver a high-touch omnichannel experience because they can answer important questions about customers and their valuable relationships, such as:

  • Who is this customer and who’s in their household?
  • Who do they buy for, what do they buy, where do they buy?
  • Which employees do they typically buy from in store?

Take Action: Think of the valuable information customers share when they interact with different parts of your business. Tap into it by bridging customer information silos. Bring fragmented customer information together in one central location. Make it universally accessible. Don’t let it remain locked up in departmental applications. Keep it up-to-date. Automate the process of updating customer information across departmental applications.
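To make the "one central location" idea a little more concrete, here is a deliberately simplified sketch of consolidating customer records from a few hypothetical departmental systems into a single view. The field names, the match key (email address) and the survivorship rule (most recently updated value wins) are all assumptions for illustration; real MDM matching and survivorship are far more sophisticated.

```python
# Hypothetical sketch: consolidating customer records from departmental systems
# (e-Commerce, POS, customer service) into one "golden" record. Matching here is
# on email only and the most recent non-empty value wins; this is a toy rule.
from collections import defaultdict
from datetime import date

source_records = [
    {"source": "ecommerce", "email": "jane@example.com", "name": "Jane Doe",
     "phone": None, "updated": date(2014, 5, 1)},
    {"source": "pos", "email": "jane@example.com", "name": "J. Doe",
     "phone": "555-0100", "updated": date(2014, 6, 12)},
    {"source": "customer_service", "email": "jane@example.com", "name": "Jane Doe",
     "phone": "555-0199", "updated": date(2014, 3, 2)},
]

golden = defaultdict(dict)
for record in sorted(source_records, key=lambda r: r["updated"]):
    key = record["email"].lower()
    for field in ("name", "phone"):
        if record[field] is not None:
            golden[key][field] = record[field]  # later updates overwrite earlier ones

print(dict(golden))
# {'jane@example.com': {'name': 'J. Doe', 'phone': '555-0100'}}
```

The point is not the merge rule itself but that every application reads from, and writes back to, the same consolidated record instead of its own departmental copy.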

Barrier #3: Shift from fragmented product data to centralized product data everyone can use

Two-thirds of purchase journeys start with a Google search. To have a fighting chance, retailers need rich and high quality product information to rank higher than the competition.

Take a look at the image on the left. Would you buy this product? Probably not. One-third of shoppers who don't make a purchase didn't have enough information to make a purchase decision. What product information does a shopper need to convert in the moment? Rich, high-quality information has conversion power.

Consumers return about 40 percent of all fashion and 15 percent of electronics purchases. That’s not good for retailers or shoppers. Minimize costly returns with complete product information so shoppers can make more informed purchase decisions. Jason Allen’s advice is, “Focus less on the cart and check out. Focus more on search, product information and your store locator. Eighty percent of customers are coming to the web for research.”

Why is this difficult? Crestline is a multichannel direct marketing firm selling promotional products through direct mail and e-Commerce. The barrier to quickly bringing products to market and updating product information across channels was fragmented and complex product information. To replace the manual, time-consuming spreadsheet process for managing product information, they invested in product information management (PIM) technology.


Now Crestline’s product introduction and update process is 300 percent more efficient. Because they are 100 percent current on top products and over 50 percent current for all products, the company is boosting margins and customer service.

Take Action: Think about all the product information shoppers need to research and make a decision. Tap into it by bridging product information silos. Bring fragmented product information together in one central location. Make it universally usable, not channel-specific. Keep it up-to-date. Automate the process of publishing product information across channels, including the applications used by customer service and store associates.

Key Takeaways

Delivering an omnichannel experience efficiently isn't easy. The GameStop team collected and analyzed customer data to learn more about who their customers are and how they interact with the company. A specialty retailer centralized fragmented customer data. Crestline centralized product information to accelerate its ability to bring products to market and make updates across channels. Which of these barriers are holding you back from delivering an omnichannel experience?

Title image by Lars Plougmann (Flickr) via a CC BY-SA 2.0 license

 

 
