Every company wants to see a “time to market improvement.” The wisest companies know this is only possible once you’ve mastered your internal data. One such company is Saint-Gobain, a Netherlands-based distributor of building materials. Saint-Gobain has accelerated and enhanced its customers’ multichannel experience using Informatica Product Information Management (PIM). Using Informatica PIM, Saint-Gobain has unleashed the potential of its information in the following ways:
- Ecommerce product introduction: Before using Informatica PIM, it took about one week to publish a product update to the website – now it is done within a few minutes.
- Everywhere commerce: The mobile app helps construction workers, on-site, to learn the details and stock availability of nearly 100,000 products and parts.
- Cross-selling: In addition to selling roof tiles, Saint-Gobain also offers additional materials and tools as a cross-sell.
- Retail stores: In addition to direct distribution, Saint-Gobain also sells through retailers. These specialty retailers need to create individual customer quotes containing potential cross-sell and up-sell items. With Informatica PIM, the retailers can create these custom quotes more effectively.
In the video below, Ron Kessels, Saint-Gobain’s Deputy Director of E-Business, talks about how they bring products to market more quickly while creating more opportunities for up-selling building supplies.
If you’d like to learn how your retail business can bring products to market more quickly, consider attending the Informatica World 2014 RETAIL PATH. This collection of sessions will show how to create unique customer experiences with relevant information, analytics, and relationships between data and people. In addition, the pre-conference MDM Day offers a track on “Omnichannel Commerce and Supplier Optimization”.
Arkady, you recently came back from the National Retail Federation conference. What are some of the issues that retailers are struggling with these days?
Arkady Kleyner: There are some interesting trends happening right now in retail. Amazon’s presence is creating a lot of disruption which is pushing traditional retailers to modernize their customer experience strategies. For example, most Brick and Mortar retailers have a web presence, but they’re realizing that web presence can’t just be a second arm to their business. To succeed, they need to integrate their web presence with their stores in a very intimate way. To make that happen, they really have to peel back the onion down to the fundamentals of how product data is shared and managed.
In the good old days, Brick and Mortar retailers could live with a somewhat disconnected product catalog, because they were always ultimately picking from physical goods. In an integrated Web and Brick & Mortar environment, however, retailers must be far more accurate in their product catalog. The customer’s entire product selection process may happen online, with the product then picked up at the store. So you can see why retailers need to be far more disciplined with their product data. This is where a Product Information Management tool is critical: with so many SKUs to manage, retailers need a process that makes sense from end to end, from onboarding a product through communicating it to the customer. And that is the foundation of building an integrated customer experience.
In times of the digital customer, being online and connected always, we announced “commerce relevancy” as the next era of omnichannel and tailoring sales and marketing better to customers. What information are you seeing to be important when creating better customer shopping experience?
Arkady Kleyner: This is another paradigm in the integrated customer experience that retailers are trying to get their heads around. To appreciate how involved this is, just consider what a company like Amazon is doing. They have millions of customers, millions of products and thousands of partners. It’s literally a many-to-many-to-many relationship. And this is why Amazon is eating everybody alive. They know what products their customers like, they know how to reach those customers with those products, and they make it easy to buy when you do. This isn’t something Amazon created overnight, but the requirements are no different for the rest of the retail industry. They need to ramp up the same type of capacity and reach. For example, if I sell jewelry, I may sell it on my own company store, but I may also have 5 other partnering sites, including Amazon. Additionally, I may be using a dozen different advertising methods to drive demand. Now multiply that by the number of jewelry products I sell and you have a massive hairball of complexity. This is what we mean when we say that retailers need to be far more disciplined with their product data. Having a Product Information Management process that spans the onboarding of products all the way through to the digital communication of those products is critical to a retailer staying relevant.
In which businesses do you see the need for more efficient product catalog management and channel convergence?
Arkady Kleyner: There is a huge opportunity out there for the existing Brick & Mortar retailers that embrace an integrated customer experience. Amazon is not the de facto winner. We see a future where the store near you actually IS the online store. But to make that happen, Brick and Mortar retailers need to take a serious step back and treat their product data with the same reverence as they treat the product itself. This means a well-managed process for onboarding, de-duping, and categorizing their product catalog, because all the customer marketing efforts are ultimately an extension of that catalog.
Which performance indicators are important? How can retailers profit from it?
Arkady Kleyner: There are two layers of performance indicators that are important. The first is Operational Intelligence. This is the intelligence that determines what product should be shown to whom, based on customer profiling of purchase history. The second is Strategic Intelligence. This type of intelligence helps you make overarching decisions on things like:
- Maximizing the product margin by analyzing shipping and warehousing options
- Understanding product performance by demographics and regions
- Providing Flash Reports for Sales and Marketing
Which tools are needed to streamline product introduction but also achieve sales numbers?
Arkady Kleyner: Informatica is one of the few vendors that cares about data the same way retailers care about their products. So if you’re a retailer and you want to treat your product data with the same reverence as your physical products, then you should consider leveraging Informatica as a partner. Their platform for managing product data is designed to encapsulate the entire process of onboarding, de-duping, categorizing, and syndicating product data. Additionally, Informatica PIM provides a platform for managing all the digital media assets, so Marketing teams are able to focus on strategy rather than tactics. We’ve also worked with Informatica’s data integration products to bring the performance data from Point of Sale systems into both strategic and tactical uses. On the tactical side, we’ve used this to integrate inventories between Web and Brick & Mortar so customers can have an integrated experience. On the strategic side, we’ve integrated Warehouse Management Systems with labor cost tracking systems to provide a 360-degree view of product costing, including shipping and storage, to drive higher per-unit margins.
You can hear more from Arkady in our webinar “The Streamlined SKU: Using Analytics for Quick Product Introductions” on Tuesday, March 4, 2014.
Inspired by the fact I was coming home from a business trip on Valentine’s Day.
Money makes the world go round
In the UK, Valentine’s Day ranks behind Halloween, Mother’s Day, Easter and Christmas. British men spend 622m GBP, while women spend 354m GBP, but the average purchase is 119 GBP. Germans, for example, spend only 59 GBP per person. According to a survey, 53 per cent of US women would dump a boyfriend who gives them nothing on this day. China invented Singles’ Day, on which 3.5b GBP was spent in 2013. Many Americans spend money on pet gifts, generating 227m in sales on Valentine’s Day.
All you need is love?
No, all you need is the right product to sell. Retailers sell a wide range of eclectic products around this day, from flowers to insurance and e-cigarettes. IKEA Australia made furniture relevant for love by offering a free purchase for every child born nine months after Valentine’s Day.
What GfK and Google research say
In February, Google searches show a peak for recipes and poems. According to GfK, 81 per cent use coupons when making their Valentine’s Day purchase.
Where and what to shop
Supermarkets have positioned themselves as the one-stop shop for lovers in a rush. Sainsbury’s reports a 12 per cent growth in sales of condoms. But did you know the top 8 gift ranking?
1. Cards and eCards
3. Romantic dinners at restaurants
4. Romantic dinners at home (condoms and candles could be the perfect cross-sell to the wine and the recipe – or you could aim for the IKEA pay-back mentioned above)
6. Jewellery
8. Weekends away
Sorry, but I will not tell you what I am bringing home for my wife. But did you know Informatica World offers a retail path this year? Long tail, ecommerce, from retail to me-tail, supply chain optimization, customer centricity and interesting company speakers are on the agenda.
Our blog frequently provides best practice stories of our customers using product information management (PIM) for their business model. This case is about the “long tail strategy” at Kramp.
Tines, hand tools, spare parts for agricultural machines and hydraulic motors are the order of the day at Kramp. Kramp, based in the Netherlands, is Europe’s largest wholesaler of accessories and spare parts for motorized equipment, agricultural and construction machines. The company’s business model and e-commerce strategy is exemplary. Kramp is using product information management (PIM) for their long tail strategy in e-commerce.
Kramp’s Value Proposition: “It’s that easy”
“We want to make it easier for our customers, partners and suppliers. We believe in the future and the power of e-commerce”, said CEO Eddie Perdok. Kramp grew the product assortment from about 200,000 to 1,000,000+ items from about 2,000 suppliers.
Previous stock policies in mail order retail always meant having limited space. In the catalog there were only a certain number of pages available. Even the logistics were limited – warehouse storage limited the possibilities so much that the majority of companies tried to find the “perfect catalog range” with the largest number of bestsellers.
The Digital Assortment Has No Limits
“Compared to other sales channels, the internet gives us significant cost advantages”, says Eddie Perdok. The digital department store consists of servers that can be easily extended at any time.
Adding a new product requires no more than a few additional entries in a database. The challenge is that the product data must be obtained from the suppliers and then distributed before products can be presented in a shop. The range is therefore often limited because the product data cannot be updated efficiently, and sales are lost to other vendors or retailers.
Europe’s largest wholesaler of spare parts for agricultural machines and accessories focuses on managing all product data from a central data source for all sales channels and languages.
Customer and supplier feedback is an important factor
“We want to bring customer opinion and supplier knowledge together”, explains Eddie Perdok. “Online customer evaluation combined with the knowledge of the manufacturer puts us in the position of being able to optimally control our stock”. In e-commerce, vendors, retailers and customers are coming closer and closer together.
Benefits Kramp realized with PIM for their long tail strategy
- Quick ROI due to short implementation phases
- Better customer satisfaction due to optimal data quality
- Higher margins and turnover in e-commerce due to niche items in long tail
- Easy, professional handling of mass data lowers process costs
- Short product introduction times to new markets
You can learn more about using PIM for the long tail business in the full case study, or hear Ronald Renskers, Manager Product Content at Kramp, and others in the latest video.
Massively increasing the assortment is one of the top trends retailers and distributors focus on, according to Forrester Principal Analyst Sucharita Mulpuru. Forrester’s research shows that retailers’ biggest competition is brands that sell directly to consumers. Marketplaces like Amazon result in higher margins, according to Forrester and www.pim-roi.com.
Maybe the word “death” is a bit strong, so let’s say “demise” instead. Recently I read an article in the Harvard Business Review about how Big Data and Data Scientists will rule the world of the 21st-century corporation and how they will have to operate for maximum value. The thing I found rather disturbing was that it takes a PhD – probably a few of them – in a variety of math areas to give executives the insight necessary to make better decisions, ranging from what product to develop next to whom to sell it and where.
Don’t get me wrong – this is mixed news for any enterprise software firm helping businesses locate, acquire, contextually link, understand and distribute high-quality data. The existence of such a high-value role validates product development but it also limits adoption. It is also great news that data has finally gathered the attention it deserves. But I am starting to ask myself why it always takes individuals with a “one-in-a-million” skill set to add value. What happened to the democratization of software? Why is the design starting point for enterprise software not always similar to B2C applications, like an iPhone app, i.e. simpler is better? Why is it always such a gradual “Cold War” evolution instead of a near-instant French Revolution?
Why do development environments for Big Data not accommodate limited or existing skills, but instead target the most complex scenarios? Well, the answer could be that the first customers are very large, very complex organizations with super complex problems they have been unable to solve so far. If analytical apps have become a self-service proposition for business users, data integration should be as well. So why does access to a lot of fast-moving and diverse data require scarce Pig or Cassandra developers to get the data into an analyzable shape and a PhD to query and interpret patterns?
I realize new technologies start with a foundation and as they spread supply will attempt to catch up to create an equilibrium. However, this is about a problem, which has existed for decades in many industries, such as the oil & gas, telecommunication, public and retail sector. Whenever I talk to architects and business leaders in these industries, they chuckle at “Big Data” and tell me “yes, we got that – and by the way, we have been dealing with this reality for a long time”. By now I would have expected that the skill (cost) side of turning data into a meaningful insight would have been driven down more significantly.
Informatica has made a tremendous push in this regard with its “Map Once, Deploy Anywhere” paradigm. I cannot wait to see what’s next – and I just saw something recently that got me very excited. Why, you ask? Because at some point I would like to see at least a business super-user pummel terabytes of transaction and interaction data into an environment (Hadoop cluster, in-memory DB…) and massage it so that a self-created dashboard gets them where they need to go. This should include concepts like: “where is the data I need for this insight?”, “what is missing and how do I get that piece in the best way?”, “how do I want it to look so I can share it?” All that should be required is semi-experienced knowledge of Excel and PowerPoint to get your hands on advanced Big Data analytics. Don’t you think? Or do you believe that this role will disappear as quickly as it has surfaced?
A hundred years from now, people will look back at this period of time and refer to it as the Data Dark Ages. A time when the possibilities were endless, but due to siloed data fiefdoms and polluted data sets, the data science warlords and their minions experienced an insatiable hunger for data, and rampant misinformation drove them to the brink of madness. The minions spent endless hours in dark dungeons preparing data from raw and untreated data sources for their data science overseers. Solutions to the world’s most vexing problems would have been attainable if only the people had abundant access to clean and safe data to drive their analytic engines.
Legend held that a wizard in the land of Informatica possessed the magic of a virtual data machine called Vibe where a legion of data engineers built an intelligent data platform to provide a limitless supply of clean, safe, secure, and reliable data. While many had tried to build their own data platforms only those who acquired the Informatica Intelligent Data Platform powered by Vibe were able to create true value and meaning from all types of data.
As word spread about Informatica Vibe and the Intelligent Data Platform data scientists and analysts sought its magic so they could have greater predictive power over the future. The platform could feed any type of data of any volume into a data lake where Vibe, no matter the underlying technology, prepared and managed the data, and provisioned data to the masses hungry for actionable and reliable information.
An analytics renaissance soon emerged as more organizations adopted the Informatica Intelligent Data Platform where data was freely yet securely shared, integrated and cleansed at will, matched and correlated in real-time. The data prep minions were set free and data scientists were able to spend the majority of their time discovering true value and meaning through big data analytics. The pace of innovation accelerated and humanity enjoyed a new era of peace and prosperity.
As covered by Loraine Lawson, “When it comes to data, the U.S. federal government is a bit of a glutton. Federal agencies manage on average 209 million records, or approximately 8.4 billion records for the entire federal government, according to Steve O’Keeffe, founder of the government IT network site, MeriTalk.”
Check out these stats, in a December 2013 MeriTalk survey of 100 federal records and information management professionals. Among the findings:
- Only 18 percent said their agency had made significant progress toward managing records and email in electronic format, and are ready to report.
- One in five federal records management professionals say they are “completely prepared” to handle the growing volume of government records.
- 92 percent say their agency “has a lot of work to do to meet the direction.”
- 46 percent say they do not believe or are unsure about whether the deadlines are realistic and obtainable.
- Three out of four say the Presidential Directive on Managing Government Records will enable “modern, high-quality records and information management.”
I’ve been working with the US government for years, and I can tell you that these findings are pretty accurate. Indeed, the paper glut is killing productivity. Even the way agencies manage digital data needs a great deal of improvement.
The problem is that the issues are so massive that it’s difficult to get your arms around them. The DOD alone has hundreds of thousands of databases online, and most of them need to exchange data with other systems. Typically this is done using old-fashioned approaches, including “sneaker-net,” Federal Express, FTP, and creaky batch extracts and updates.
The “digital data diet,” as Loraine calls it, really needs to start with a core understanding of most of the data under management. That task alone will take years; at the same time, agencies should form an effective data integration strategy that takes into account the dozens of data integration strategies they likely formed in the past that did not work.
The path to better data management in the government is one where you have to map out a clear path from here to there. Moreover, you need to make sure you define some successes along the way. For example, the simple reduction of manual and paper processes by 5 or 10 percent would be a great start. It’s something that would save taxpayers billions in a short period of time.
Too many times the government gets too ambitious around data integration, and attempts to do too much in too short an amount of time. Repeat this pattern and you’ll find yourself running in quicksand, and really set yourself up for failure.
Data integration is game-changing technology. Indeed, the larger you are, the more game-changing it is. You can’t get much larger than the US government. Time to get to work.
As 2014 is already upon us, here are Marge’s 2014 predictions:
- CMOs will actually be more data-driven than the CIOs: In 2014, CMOs will take over the lead from IT as the organization that most effectively collects, cleanses and leverages data about customers from a wide variety of sources, from databases to CRM systems to digital tools, to gain a full picture of the customer base.
- Convergence of CIOs and CMOs: As marketers’ technology spend increases, CMOs are gaining more power in the digital space and consequently need to work with the CIO more and more. In 2014 we will see the emergence of a new hybrid role where the CIO and CMO role will merge—the chief digital officer.
- Social media’s equal share: As social media sites come of age, i.e. Twitter’s IPO, the CMO’s budget in 2014 will be equally distributed between brand, lead generation and social media. Social media becomes equally important to lead generation and even drives more lead generation through the funnel than traditional marketing tactics.
- Marketing automation: More dollars will be spent on programs versus people as marketers drive to create more automated processes in the coming year. 75 percent of marketing will be automated, while 25 percent will be customer-unique.
- Custom content: As more and more marketers are creating their own content to drive sales, the barriers between paid, earned and owned media will break down to one integrated content strategy. Currently, 43 percent have a documented strategy—next year more than 60 percent will.
- Redefining ROI: As new platforms for marketing content arise, the definition of ROI will shift to ROE—“Return on Engagement” with customers, turning content into leads and sales; metrics will shift from quantitative to qualitative. The CMO will deliver a social ROE report weekly to the CEO.
- Internet of Things: According to Forrester, 90 percent of consumers who have multiple connected devices switch between the devices to complete tasks—that’s a lot of machine data about consumers and their products. CMOs will need to spend one-third of their time analyzing data and using predictive analytics to make marketing decisions.
- The quantified self: In 2014, mobile will drive more than 50% of the traffic to organizations’ homepages. Companies will need to be mobile-first. With data being pulled from numerous devices and platforms, one winner will emerge in marketing BI to help collate this information.
- Micro-content: Content will continue to get shorter—even after the boom of the six-second Vine video. Next year, try creating a brand message via a three-second video or a Snapchat photo that lasts on your device no longer than 24 hours.
- Collaboration continues to rule the world: Next year an entirely new set of collaboration tools will burst onto the scene that can be leveraged across countries and time zones to make collaboration easier and more seamless.
Murphy’s First Law of Bad Data – If You Make A Small Change Without Involving Your Client – You Will Waste Heaps Of Money
I have not used my personal encounter with bad data management for over a year, but a couple of weeks ago I was compelled to revive it. Why, you ask? Well, a complete stranger started to receive one of my friends’ text messages – including mine – and it took days for him to detect it, and a week later nobody at this North American wireless operator had been able to fix it. This coincided with a meeting I had with a European telco’s enterprise architecture team. There was no better way to illustrate to them how a customer reacts, and the risk to their operations, when communication breaks down due to just one tiny thing changing – say, his address (or in the SMS case, some random SIM mapping – another type of address).
In my case, I moved about 250 miles within the United States a couple of years ago and this seemingly common experience triggered a plethora of communication screw ups across every merchant a residential household engages with frequently, e.g. your bank, your insurer, your wireless carrier, your average retail clothing store, etc.
For more than two full years after my move to a new state, the following things continued to pop up on a monthly basis due to my incorrect customer data:
- In case of my old satellite TV provider they got to me (correct person) but with a misspelled last name at my correct, new address.
- My bank put me in a bit of a pickle as they sent “important tax documentation”, which I did not want to open, as my new tenants’ names (in the house I had just vacated) were on the letter but with my new home’s address.
- My mortgage lender sends me a refinancing offer to my new address (right person & right address) but with both my wife’s name and mine completely butchered.
- My wife’s airline, where she enjoys the highest level of frequent flyer status, continually mails her offers duplicating her last name as her first name.
- A high-end furniture retailer sends two 100-page glossy catalogs probably costing $80 each to our address – one for me, one for her.
- A national health insurer sends “sensitive health information” (disclosed on envelope) to my new residence’s address but for the prior owner.
- My legacy operator turns on the wrong premium channels on half my set-top boxes.
- The same operator sends me an SMS the next day thanking me for switching to electronic billing as part of my move, which I did not sign up for, followed by payment notices (as I did not get my invoice in the mail). When I called out this error over the next three months by calling their contact center and indicating how much revenue I generate for them across all services, they countered with “sorry, we don’t have access to the wireless account data”, “you will see it change on the next bill cycle” and “you show as paper billing in our system today”.
Ignoring the potential for data privacy lawsuits, you start wondering how long you have to be a customer and how much money you need to spend with a merchant (and they need to waste) for them to take changes to your data more seriously. And these are not even merchants to whom I am brand new – these guys have known me and taken my money for years!
One thing I nearly forgot… these mailings all happened at least once a month on average, sometimes twice, over two years. If I do some pigeon math here, I estimate the postage and production cost alone to run into the hundreds of dollars.
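That pigeon math can be written out. Every number below is an assumption for illustration (the merchant count, mailing frequency and per-piece cost are not quoted figures from the anecdote):

```python
# Rough estimate of wasted mailing cost from the anecdote above.
# All unit costs and counts are assumed for illustration.

merchants = 8             # bank, insurer, carriers, retailers, etc.
months = 24               # roughly two years of misdirected mail
mailings_per_month = 1.5  # "at least once a month, sometimes twice"
cost_per_piece = 0.75     # assumed postage + production per letter

total = merchants * months * mailings_per_month * cost_per_piece
print(f"~${total:,.0f} wasted")  # lands in the hundreds of dollars
```

Even with conservative inputs the waste per household runs to a few hundred dollars, before counting the glossy $80 catalogs.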
However, the most egregious trespass belonged to my homeowner’s insurance carrier (HOI), which was also my mortgage broker. They had a double whammy in store for me. First, I received a cancellation notice from the HOI for my old residence, indicating they had cancelled my policy as the last payment was not received and that any claims would be denied as a consequence. Then, my new residence’s HOI advised me that they had added my old home’s HOI policy to my account.
After wondering what I could have possibly done to trigger this, I called all four parties (not three as the mortgage firm did not share data with the insurance broker side – surprise, surprise) to find out what had happened.
It turns out that I had to explain and prove to all of them how one party’s data change during my move erroneously exposed me to liability. It felt like the old days, when seedy telco salespeople needed only your name and phone number, associated with some sort of promotion you never took part in (the back of a raffle card to win a new car), to switch your long-distance carrier and present you with a $400 bill the coming month. Yes, that also happened to me… many years ago. Here again, the consumer had to do all the legwork when someone (not an automatic process!) switched some entry without any oversight or review, triggering hours of wasted effort on their side and mine.
We can argue all day long if these screw ups are due to bad processes or bad data, but in all reality, even processes are triggered from some sort of underlying event, which is something as mundane as a database field’s flag being updated when your last purchase puts you in a new marketing segment.
Now imagine you get married and your wife changes her name. With all the company-internal (CRM, billing, ERP), free public (property tax), commercial (credit bureaus, mailing lists) and social media data sources out there, you would think such everyday changes could get picked up more quickly and automatically. If not automatically, then should there not be some sort of trigger to kick off a “governance” process; something along the lines of “email/call the customer if attribute X has changed” or “please log into your account and update your information – we heard you moved”? If American Express was able to detect ten years ago that someone purchased $500 worth of product with your credit card at a gas station or some lingerie website known for fraudulent activity, why not your bank or insurer, who know even more about you? And yes, that happened to me as well.
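The governance trigger imagined above could be as simple as a rule that fires when a watched attribute changes. A hypothetical sketch follows; the field names, the watched set and the stubbed outreach step are all invented for illustration:

```python
# Hypothetical sketch of an attribute-change governance trigger.
# WATCHED lists the fields whose changes should kick off outreach.

WATCHED = {"last_name", "address", "email"}

def detect_changes(old, new):
    """Return the watched attributes whose values differ."""
    return {f for f in WATCHED if old.get(f) != new.get(f)}

def governance_hook(old, new):
    """Fire a (stubbed) customer-confirmation step per changed field."""
    actions = []
    for field in sorted(detect_changes(old, new)):
        # In a real system this would email or call the customer.
        actions.append(f"confirm '{field}' change with customer")
    return actions

old = {"last_name": "Smith", "address": "12 Oak St", "email": "s@x.com"}
new = {"last_name": "Smith", "address": "98 Elm Ave", "email": "s@x.com"}
print(governance_hook(old, new))  # ["confirm 'address' change with customer"]
```

The point is not the code but the principle: a change to a sensitive attribute should trigger a confirmation loop with the customer instead of silently propagating, or failing to propagate, across systems.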
Tell me about one of your “data-driven” horror scenarios.
Ah, the new year is almost upon us, so here are Todd’s 2014 predictions:
1) Real companies, not just internet-advertising businesses and startups, will stop just talking about big data and actually implement real solutions focused on two areas:
- Data warehouse offloading where companies take all of that data that is just sitting in their enterprise data warehouses and move the data they aren’t accessing into Hadoop where they will now preprocess the data in Hadoop and put that output into the EDW where they then report on it. The big driver for this? Cost savings.
- Predictive analytics based on collecting, integrating, cleansing and analyzing large amounts of sensor data. The data has always been available, it is just that analyzing it en masse has always been problematic. The big driver for this? Operational efficiency of the systems that are being monitored as well as feedback analysis to build the next generation of efficient systems.
That said, even more companies will still be talking about big data than are actually doing it. There is still a long learning curve.
2) Organizations will start thinking about how to transition their data management infrastructure from a cost center into a profit center. As more companies identify ways they can take existing data about their customers, products, suppliers, partners, etc., they will start identifying ways to generate new revenue streams by repackaging this data into information products.
3) Data quality will continue to stink. This one is a sure thing. It never ceases to amaze me how people think that their data isn’t bad, or that they can’t do anything about it, or that thanks to big data they don’t have to worry about data quality because of the law of large numbers. Laugh if you want at that last one, but I have heard that story at least three different times this year. Just for clarification, for those of you who think that more data means the dirty data becomes a statistical anomaly: it only means that you have the same percentage of dirty data as before… you just have more of it.
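The law-of-large-numbers point is simple proportional arithmetic, sketched below with an assumed 5% error rate: scaling the dataset tenfold leaves the dirty-data percentage unchanged while the absolute count of bad records grows tenfold.

```python
# More data does not dilute dirty data: the rate stays constant,
# only the absolute count of bad records grows with the dataset.

error_rate = 0.05  # assume 5% of records are dirty

for total in (1_000_000, 10_000_000):
    dirty = int(total * error_rate)
    print(f"{total:>10,} records -> {dirty:>8,} dirty ({dirty / total:.0%})")
# Both lines report 5%; only the raw count of dirty records changes.
```

Sampling error shrinks with volume; systematic data errors do not.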
4) More about big data… There still won’t be enough people who understand Hadoop. There will be lots of vendors (including Informatica) creating cool new tools so average developers and even business users can integrate, cleanse and analyze data on Hadoop without having to know anything about Hadoop. The hype might die down, but this will actually be an even more exciting area.
5) More business self service will come to the data integration space. With the proliferation of data, it is impossible for IT to service all of the integration needs of the business. So the only solution will be the growth of self-service integration capabilities that let business users and shadow IT do integration on their own. While this has existed already, the big change will be that corporate IT will start offering these services to their internal customers so departments can do their own integration but within a framework that is managed and supported by corporate IT. It is the very beginning of this trend, but expect to see IT start to get more control by giving control to their business users.
6) The Pittsburgh Steelers will make it back into the playoffs… and for a 2015 prediction, my beloved Steelers will not make it to the Super Bowl, but my adopted home team, the Santa Clara 49ers, will make it to the Super Bowl and will win.
Happy New Year everyone.