Category Archives: Master Data Management
That tag line got your attention, did it not? Last week I talked about how companies are trying to squeeze more value out of their asset data (e.g. equipment of any kind) and the systems that house it. I also highlighted the fact that IT departments in many companies with physical asset-heavy business models have tried (and often failed) to create a consistent view of asset data in a new ERP or data warehouse application. These environments are neither equipped to deal with all life cycle aspects of asset information, nor do they fix the root of the data problem in the sources, i.e. where the stuff is and what it looks like. It is like a teenager whose parents have spent thousands of dollars buying him the latest garments, but he always wears the same three outfits because he cannot find the others in the pile he hoards under his bed. And now they have bought him a smartphone to fix it. So before you buy him the next black designer shirt, maybe it would be good to find out how many of the same designer shirts he already has, what state they are in and where they are.
Recently, I had the chance to work on a similar problem with a large overseas oil & gas company and a North American utility. Both are by definition asset heavy, very conservative in their business practices, highly regulated, very much dependent on outside market forces such as the oil price, and geographically very dispersed; and thus, by default, a classic system-integration spaghetti dish.
My challenge was to find out where the biggest opportunities were in terms of harnessing data for financial benefit.
The initial sense in oil & gas was that most of the financial opportunity hidden in asset data was in G&G (geophysical & geological) and the least of it on the retail side (lubricants and gas for sale at operated gas stations). On the utility side, the go-to area for opportunity appeared to be maintenance operations. Let’s say that I was about right with these assertions, but there were a lot more skeletons in the closet with diamond rings on their fingers than I anticipated.
After talking extensively with a number of department heads in the oil company, starting with the IT folks running half of the 400 G&G applications, the ERP instances (turns out there were 5, not 1) and the data warehouses (3), I queried the people in charge of lubricant and crude plant operations, hydrocarbon trading, finance (tax, insurance, treasury) as well as supply chain, production management, land management and HSE (health, safety, environmental).
The net-net was that the production management people said there was no issue, as they had already cleaned up the ERP instance around customer and asset (well) information. The supply chain folks also indicated that they had used another vendor’s MDM application to clean up their vendor data, which, funnily enough, was not put back into the procurement system responsible for ordering parts. The data warehouse/BI team was comfortable that it had cleaned up any information for supply chain, production and finance reports before dimension and fact tables were populated for any data marts.
All of this was pretty much a series of denial sessions on your 12-step road to recovery, as the IT folks had very little interaction with the business to get any sense of how relevant, correct, timely and useful these actions were for the end consumer of the information. They also had to run and adjust fixes every month or quarter as source systems changed, new legislation dictated adjustments and new executive guidelines were announced.
While every department tried to run semi-automated, monthly clean-up jobs with scripts and some off-the-shelf software to fix their particular situation, the corporate (holding) company and any downstream consumers had no consistent basis for making sensible decisions on where and how to invest without throwing another legion of bodies (by now over 100 FTEs in total) at the same problem.
So at every stage of the data flow from sources to the ERP to the operational BI and lastly the finance BI environment, people repeated the same tasks: profile, understand, move, aggregate, enrich, format and load.
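Much of that repetition can at least be scripted. As a minimal illustration of the profiling step only, here is a sketch assuming pandas is available; the file and column contents are hypothetical:

```python
import pandas as pd

# Load a hypothetical extract of well master data from one source system.
wells = pd.read_csv("well_master_extract.csv")

# Profile: null rates, distinct values and types per column, so each
# downstream team does not have to rediscover the same quality issues.
profile = pd.DataFrame({
    "null_rate": wells.isna().mean(),
    "distinct_values": wells.nunique(),
    "dtype": wells.dtypes.astype(str),
})
print(profile.sort_values("null_rate", ascending=False))
```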
Despite the departmental clean-up efforts, areas like production operations did not know with certainty how many well heads and bores they had, where they were downhole, and who last changed a characteristic as mundane as the well name and why (governance, location match).
Marketing (Trading) was surprisingly open about their issues. They could not process incoming, anchored crude shipments into inventory, or assess who owned the counterparty they sold to and what payment terms were appropriate given the associated credit or concentration risk (reference data, hierarchy management). As a consequence, operating cash accuracy was low despite ongoing process improvements, incurring opportunity cost.
Operational assets like rig equipment carried excess insurance coverage (location, operational data linkage), and fines paid to local governments for incorrectly filed or unrenewed work visas were not refunded for up to two years, incurring opportunity cost (employee reference data).
A big chunk of savings was locked up in unplanned NPT (non-production time) because inconsistent, incorrect well data triggered incorrect maintenance intervals. Similarly, OEM-specific DCS (drill control system) component software lacked a central reference data store, so no alerts were triggered before components failed. Add on top of that the missing linkage of the data served by thousands of sensors via well logs and PI historians, and their ever-changing roll-ups for operations and finance, and the chaos is complete.
One approach we employed around NPT improvements was to take the revenue-from-production figure from the company’s 10-K and combine it with the industry benchmark for the number of NPT days per 100 days of production (typically about 30% across average-depth onshore and offshore well types). Then you overlay a benchmark (if they don’t know their own number) for how many of these NPT days were due to bad data rather than equipment failure or the like; fix even a portion of that, and you are looking at big numbers.
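To make the arithmetic concrete, here is a minimal sketch of that back-of-the-envelope estimate. Every figure below is a placeholder for illustration, not a benchmark from any specific engagement:

```python
# Hypothetical inputs: replace with the company's 10-K figure and the
# benchmarks you trust for its asset mix.
revenue_from_production = 6_000_000_000   # annual production revenue, USD
npt_rate = 0.30                           # ~30 NPT days per 100 production days
npt_share_due_to_bad_data = 0.15          # assumed fraction of NPT caused by bad data
fixable_portion = 0.15                    # portion of that you realistically fix

# Revenue lost to NPT, then the slice attributable to bad data,
# then the slice of that you can actually recover.
npt_revenue_at_risk = revenue_from_production * npt_rate
annual_savings = npt_revenue_at_risk * npt_share_due_to_bad_data * fixable_portion
print(f"Estimated annual savings: ${annual_savings:,.0f}")
# ~$40 million per year with these placeholders, i.e. roughly $200 million over five years
```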
When I sat back and looked at all the potential, it came to more than $200 million in savings over 5 years, and this before any sensor data from rig equipment, like the myriad of siloed applications running within a drill control system, is integrated and leveraged via a Hadoop cluster to influence operational decisions like drill string configuration or azimuth.
Next time I’ll share some insight into the results of my most recent utility engagement but I would love to hear from you what your experience is in these two or other similar industries.
Recommendations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer depends upon a variety of factors, many of which are not under Informatica’s control. Nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.
I had a disturbing conversation at Dreamforce. Long story short, thousands of highly skilled and highly paid financial advisors (read sales reps) at a large financial services company are spending most of their day pulling together information about their clients in a spreadsheet, leaving only a few hours to engage with clients and generate revenue.
Not all valuable customer information is in Salesforce
Why? They don’t have a 360-degree customer view within Salesforce.
Why not? Not all client information that’s valuable to the financial advisors is in Salesforce. Important client information lives in other applications too (a minimal consolidation sketch follows this list), such as:
- Marketing automation application
- Customer support application
- Account management applications
- Finance applications
- Business intelligence applications
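Conceptually, the 360-degree view is a merge of per-system records on a shared customer key. Here is a minimal sketch under that assumption; all system names, fields and IDs below are hypothetical, and a real MDM platform adds matching and survivorship rules on top:

```python
from collections import defaultdict

# Hypothetical per-system extracts, each keyed by a shared customer ID.
marketing = {"C042": {"last_campaign": "Q3 retirement webinar"}}
support = {"C042": {"open_tickets": 1}}
finance = {"C042": {"assets_under_management": 1_250_000}}

def build_360_view(*sources):
    """Merge per-system customer records into one view per customer ID."""
    view = defaultdict(dict)
    for source in sources:
        for customer_id, attributes in source.items():
            view[customer_id].update(attributes)
    return dict(view)

print(build_360_view(marketing, support, finance)["C042"])
```

In practice the hard part is not the merge but deciding that records from different systems refer to the same customer when no shared key exists; that is what the matching engine in an MDM product does.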
Are you in sales? Do you work for a company that has multiple products or lines of business? Then you can probably relate. In my 15 years of experience working with sales, I’ve found this to be a harsh reality. You have to manually pull together customer information, which is a time-consuming process that doesn’t boost job satisfaction.
Stop building 360-degree customer views in spreadsheets
So what can you do about it? Stop building 360-degree customer views in spreadsheets. There is a better way and your sales operations leader can help.
One of my favorite customer success stories is about one of the world’s leading wealth management companies, with 16,000 financial advisors globally. Like most companies, their goal is to increase revenue by understanding their customers’ needs and making relevant cross-sell and up-sell offers.
But, the financial advisors needed an up-to-date view of the “total customer relationship” with the bank before they talked to their high net-worth clients. They wanted to appear knowledgeable and offer a product the client might actually want.
Can you guess what was holding them back? The bank operated in an account-centric world. Each line of business had its own account management application. To get a 360-degree customer view, the financial advisors spent 70% of their time pulling important client information from different applications into spreadsheets. Sound familiar?
Once the head of sales realized this, he decided to invest in information management technology that provides clean, consistent and connected customer information and delivers a 360-degree customer view within Salesforce.
The result? They’ve had a $50 million impact annually and a 30% increase in productivity. In fact, word spread to other banks and the 360-degree customer view in Salesforce became an incentive to attract top talent in the industry.
Ask sales operations to give you 360-degree customer views within Salesforce
I urge you to take action. In particular, talk to your sales operations leader if he or she is at all interested in improving performance and productivity, acquiring and retaining top sales talent, and cutting costs.
Want to see how you can get 360-degree customer views in Salesforce? Check out this demo: Enrich Customer Data in Your CRM Application with MDM. Then schedule a meeting with your sales operations leader.
Have a similar experience to share? Please share it in the comments below.
Most people in the software business would agree that it is tough enough to calculate, and hence financially justify, the purchase or build of an application (especially middleware) to a business leader or even a CIO. Most business-centric IT initiatives involve improving processes (order, billing, service) and visualization (scorecarding, trending) so end users can be more efficient in engaging accounts. Some of these have actually migrated to targeting improvements at customers rather than their logical placeholders, like accounts. Similar strides have been made in the realm of other party types (vendor, employee) as well as product data. These initiatives also tackle analyzing larger or smaller data sets and providing a visual set of clues on how to interpret historical or predictive trends in orders, bills, usage, clicks, conversions, etc.
If you think this is a tough enough proposition in itself, imagine the challenge of quantifying the financial benefit derived from understanding where your “hardware” is physically located, how it is configured, and who maintained it, when and how. Depending on the business model you may even have to figure out who built it or who owns it. All of this has bottom-line effects on how, when and by whom expenses are paid and revenues get realized and recognized. And then there is the added complication that these dimensions of hardware are often fairly dynamic, as assets can change ownership and/or physical location and hence tax treatment, insurance risk, etc.
Such hardware could be a pump, a valve, a compressor, a substation, a cell tower, a truck or components within these assets. Over time, with new technologies and acquisitions coming about, the systems that plan for, install and maintain these assets become very departmentalized in terms of scope and specialized in terms of function. The same application that designs an asset for department A or region B, is not the same as the one accounting for its value, which is not the same as the one reading its operational status, which is not the one scheduling maintenance, which is not the same as the one billing for any repairs or replacement. The same folks who said the Data Warehouse is the “Golden Copy” now say the “new ERP system” is the new central source for everything. Practitioners know that this is either naiveté or maliciousness. And then there are manual adjustments….
Moreover, to truly squeeze value out of these assets as they are installed and upgraded, the massive amounts of data they generate, in a myriad of formats and intervals, need to be understood, moved, formatted, fixed and interpreted at the right time, and stored for future use in a cost-sensitive, easy-to-access and contextually meaningful way.
I wish I could tell you one application does it all, but the unsurprising reality is that it takes a concoction of several. Few, if any, asset life cycle-supporting legacy applications will be retired, as they often house data in formats commensurate with the age of the assets they were built for. It makes little financial sense to shut down these systems in a big-bang approach; rather, migrate region after region and process after process to the new system. After all, some of the assets have been in service for 50 or more years, and the institutional knowledge tied to them is becoming nearly as old. Also, it is probably easier to perform the often-required manual data fixes (hopefully only outliers) bit by bit, especially to accommodate imminent audits.
So what do you do in the meantime, until all the relevant data is in a single system, to get an enterprise-level way to fix your asset tower of Babel and leverage the data volume rather than treat it like an unwanted stepchild? Most companies that operate asset-heavy, fixed-cost business models do not want to create a disruption but rather a steady tuning effect (squeezing the data orange), something rather unsexy in this internet day and age. This is especially true in “older” industries where data is still considered a necessary evil, not an opportunity ready to be exploited. Fact is, though, that in order to improve the bottom line, we had better get going, even if it is with baby steps.
If you are aware of business models and their difficulties in leveraging data, write to me. If you know of an annoying, peculiar or esoteric data “domain” that does not lend itself to being easily leveraged, share your thoughts. Next time, I will share some examples of how certain industries try to work in this environment, what they envision and how they go about getting there.
The Physician Payments Sunshine Act shines a spotlight on the disorganized state of physician information, which is scattered across systems, often incomplete, inaccurate and inconsistent in most pharmaceutical and medical device manufacturing companies.
According to the recent Wall Street Journal article Doctors Face New Scrutiny over Gifts, “Drug companies collectively pay hundreds of millions of dollars in fees and gifts to doctors every year. In 2012, Pfizer Inc., the biggest drug maker by sales, paid $173.2 million to U.S. health-care professionals.”
The Risks of Creating Reports with Inaccurate Physician Information
There are serious risks of filing inaccurate reports. Just imagine dealing with:
- An angry call from a physician who received a $25 meal that was inaccurately reported as $250, or who reportedly received a gift that actually went to someone with a similar name.
- Hefty fines and increased scrutiny from the Centers for Medicare and Medicaid Services (CMS). Fines range from $1,000 to $10,000 per transaction, with a maximum penalty of $1.15 million.
- Negative media attention. Reports will be available for anyone to access on a publicly accessible website.
How prepared are manufacturers to track and report physician payment information?
One of the major obstacles is getting a complete picture of the total payments made to one physician. Manufacturers need to know if Dr. Sriram Mennon and Dr. Sri Menon are one and the same.
On top of that, they need to understand the complicated connections between Dr. Sriram Menon, sales representatives’ expense report spreadsheets (T&E), marketing and R&D expenses, event data, and accounts payable data.
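Deciding whether Dr. Sriram Mennon and Dr. Sri Menon are the same person is a classic record-matching problem. MDM platforms use far more sophisticated probabilistic matching across many attributes, but a toy sketch with Python’s standard library shows the basic idea; the 0.8 threshold is an arbitrary assumption for illustration:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude similarity score between two physician names, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

score = name_similarity("Dr. Sriram Mennon", "Dr. Sri Menon")
print(f"{score:.2f}")
if score > 0.8:  # arbitrary threshold for this sketch
    print("Candidate match: route to a data steward for confirmation")
```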
3 Steps to Ensure Physician Information is Accurate
In recent years, some pharmaceutical manufacturers and medical device manufacturers were required to respond to “Sunshine Act” type laws in states like California and Massachusetts. To simplify, automate and ensure physician payment reports are filed correctly and on time, they use an Aggregate Spend Repository or Physician Spend Management solution.
They also use these solutions to proactively track and review physician payments on a regular basis to ensure mandated thresholds are met before reports are due. Aggregate Spend Repository and Physician Spend Management solutions rely on a foundation of data integration, data quality, and master data management (MDM) software to better manage physician information.
For those manufacturers who want to avoid the risk of losing valuable physician relationships, paying hefty fines, and receiving scrutiny from CMS and negative media attention, here are three steps to ensure accurate physician information:
- Bring all your scattered physician information, including identifiers, addresses and specialties into a central place to fix incorrect, missing or inconsistent information and uniquely identify each physician.
- Identify connections between physicians and the hospitals and clinics where they work to help aggregate accurate payment information for each physician.
- Standardize transaction information so it’s easy to identify the purpose of payments and related products, and link transaction information to physician information (a minimal roll-up sketch follows this list).
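Once transactions carry a mastered physician identifier (step 1) and standardized purposes (step 3), aggregate spend per physician becomes a simple roll-up. A minimal sketch with invented data:

```python
from collections import defaultdict

# Hypothetical transactions already linked to a mastered physician ID
# (step 1 resolved "Dr. Sriram Mennon" and "Dr. Sri Menon" to PHY-001).
transactions = [
    {"physician_id": "PHY-001", "purpose": "meal", "amount": 25.00},
    {"physician_id": "PHY-001", "purpose": "speaker fee", "amount": 1500.00},
    {"physician_id": "PHY-002", "purpose": "meal", "amount": 42.50},
]

# Roll up total spend per physician so thresholds can be monitored
# proactively, before reports are due, rather than after.
aggregate_spend = defaultdict(float)
for txn in transactions:
    aggregate_spend[txn["physician_id"]] += txn["amount"]

for physician_id, total in aggregate_spend.items():
    print(physician_id, f"${total:,.2f}")
```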
Physicians Will Review Reports for Accuracy in January 2014
In January 2014, after physicians review the federally mandated financial disclosures, they may question the accuracy of reported payments. Within two months manufacturers will need to fix any discrepancies and file their Sunshine Act reports, which will become part of a permanent archive. Time is precious for those companies who haven’t built an Aggregate Spend Repository or Physician Spend Management solution to drive their Sunshine Act compliance reports.
If you work for one of the pharmaceutical or medical device manufacturing companies already using an Aggregate Spend Repository or Physician Spend Management solution, please share your tips and tricks with others who are behind.
Tick tock, tick tock….
If the recent MDM and Data Governance Summit was any indication, Master Data Management is an extremely hot topic these days. The summit was highly successful, drawing over 400 attendees comprised of business users and architects of every stripe.
I want to highlight one presentation that spoke to me directly. Quintiles is a company you may remember if you went to Informatica World 2013. Quintiles provides biopharmaceutical development and commercial outsourcing services via a vast network of over 27,000 employees across the globe. At the summit, John Poonnen, Quintiles’ director of product engineering, told of the company’s journey to multidomain MDM, which was key to enabling a web-based platform for delivering real-time insights into patient, study, site, and program activities. Poonnen presented to an audience of over a hundred technology and business professionals.
Just in time for Halloween, I’m sharing a scary story. Warning: this is a true story. You may wonder:
- Could this happen to me?
- Can this situation be avoided?
- How can I prevent this from happening to me?
Last summer, the worst wildfire in Colorado history burned hundreds of acres and 360 homes, killed two people and forced 38,000 people to evacuate the area.
Unfortunately, it was during the Colorado wildfire that a large integrated healthcare provider with hospitals, doctors, healthcare providers and employees located throughout the United States (who shall remain nameless) realized they had a problem. They couldn’t respond in real time to the disaster by mobilizing their workforce quickly. They struggled to identify, contact and communicate with doctors, healthcare providers and employees located at the disaster area to warn them not to go to the hospital or redirect them to alternative sites where they could help.
This healthcare provider’s inability to respond to this disaster in real time was an “Aha” moment. What was holding them back was a major information problem. Because their employee information was scattered across hundreds of systems, they couldn’t pull a single, comprehensive and accurate list of doctors, healthcare providers and employees in the disaster area. They didn’t know which employees needed to be evacuated or which could be sent to assist people in other locations. So, they had to email everyone in the company.
The good news is that we’re in the process of helping them create and maintain a central location called an “employee master” built on our data integration, data quality, and master data management (MDM) software. This will be their “go-to” place for an up-to-date, complete and accurate list of employees and their contact information, such as work email, phone, pager (doctors still use them), home phone and personal email as well as their location, so they know exactly who is working where and how best to contact them.
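Once the employee master exists, the query the provider could not run during the wildfire becomes trivial. A minimal sketch, with hypothetical fields and records:

```python
# Hypothetical records from the consolidated employee master.
employees = [
    {"name": "A. Rivera", "role": "physician", "site": "Colorado Springs",
     "work_email": "arivera@example.org", "pager": "555-0100"},
    {"name": "B. Chen", "role": "nurse", "site": "Denver",
     "work_email": "bchen@example.org", "pager": None},
]

def contacts_for_site(site: str):
    """Return contact details for everyone working at an affected site."""
    return [e for e in employees if e["site"] == site]

# During a disaster, pull only the people at the affected location
# instead of emailing the whole company.
for person in contacts_for_site("Colorado Springs"):
    print(person["name"], person["work_email"], person["pager"])
```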
This healthcare provider will no longer be held back by an information problem. In three months, they’ll be able to respond to disasters in real time by mobilizing their workforce quickly.
An interesting side note: Immediately before our Informatica team of experts arrived to talk to this healthcare provider about how we can help them, there was a power outage in the building. They struggled to alert the employees who were impacted. So our team personally experienced the pain of this organization’s employee information problem.
When disaster strikes, will you be ready to respond in real time? Or do you have an information problem that could hold you back from mobilizing your own employees?
I want your opinion. Are you interested in more scary stories? Let me know in the comments below. I’m thinking about making this a regular series.
How often do you get emails with “your personal product recommendations”? How personalized and correct are they, really? How relevant is your omnichannel information to your customers? Commerce Relevancy is the next wave, taking omnichannel commerce to another level.
As information gets more democratic, connecting the dots between the master data of customers, suppliers, locations and products rings in the next commerce wave after omnichannel commerce: Commerce Relevancy. In its first phase, it focuses on combining product and customer data to deliver consistency and relevancy in omnichannel retailing. That will have a deep impact on customer experience.
The evolution of the retail environment, driven by advances in technology, has taken us from single channel or siloed channels to omnichannel retailing. Customers expect to find product information and make purchases when it is most convenient to them. As a result, delivering the right product and brand information at every customer touch point has never been more important.
Consistent and accurate product information has a profound impact on the buying decision, but in this hyper-connected world where competitive information is at the customer’s fingertips, retailers and CPG manufacturers also need to ensure that the information provided is highly relevant in order to differentiate and stay ahead.
The market has moved from the era of siloed channels to multichannel commerce (serving all channels, but not always connecting them), then to connecting the channels, called cross-channel commerce. The latest wave established the term omnichannel commerce, which Wikipedia describes as “very similar to the evolution of (multi-channel retailing), but is concentrated more on a seamless approach to the consumer experience through all available shopping channels, i.e. mobile internet devices, computers, bricks-and-mortar, television, radio, direct mail, catalog and so on.”
The new generation of Commerce Relevancy requires the right information and data, which helps turn data into a competitive advantage by helping organizations present product information that is complete, accurate and easy to understand. In today’s retail environment data consistency is key, but it is no longer enough. Every retailer wants to be a successful online retailer, and every manufacturer wants to become a retailer with its own direct sales channels.
The next wave taking omnichannel commerce to the next level will address information relevancy at every channel and all customer interactions – called Commerce Relevancy.
In order to enable Commerce Relevancy, companies are now asking themselves how to connect the dots between supplier, location, customer and product information. In these business use cases, customer profiles or target-group personas get matched with product information in sales and marketing. Think of the same flat-screen TV sold to two completely different personas, using different parts of the description to tailor it to each customer, different channels to promote it, and different images and videos that match the customer profile. Commerce Relevancy means connecting the dots between all master data and product information.
4 Elements of Commerce Relevancy
Commerce Relevancy is taking omnichannel commerce to the next level now. It is shaped by four main characteristics:
- Relevant product recommendations everywhere: what is the next logical buy (see the sketch after this list)? This will be leveraged not only in ecommerce but at every customer touch point.
- Relevant locations: the paths a product can take from a company’s own warehouse, a supplier’s warehouse or a brick-and-mortar store are becoming more flexible. Customers are always served from the nearest location.
- Relevant marketing: personalization will reach a new level by combining product, location, supplier and, first of all, customer information the right way and across all channels. Relationships between data (like customer profiles and product bundles) form the core of the customer experience when delivered in real time.
- Relevant analytics: as customers expect real-time information and services, new technologies such as eye tracking and image recognition are emerging to recognize customers when they return to a store. Measuring heart rate to understand customers’ emotions while shopping opens new possibilities when the data is connected to unleash its information potential.
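As a minimal illustration of the “next logical buy” idea from the first point, here is a toy co-occurrence recommender. Real systems blend in customer profiles, context and merchandising rules; all product IDs below are made up:

```python
from collections import Counter
from itertools import combinations

# Hypothetical past orders; each is a set of product IDs bought together.
orders = [
    {"tv-55", "hdmi-cable", "wall-mount"},
    {"tv-55", "soundbar"},
    {"tv-55", "hdmi-cable"},
]

# Count how often each pair of products co-occurs across orders.
co_occurrence = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_occurrence[(a, b)] += 1

def next_logical_buy(product: str) -> list[str]:
    """Rank products most often bought together with the given one."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [p for p, _ in scores.most_common()]

print(next_logical_buy("tv-55"))  # ['hdmi-cable', 'wall-mount', 'soundbar']
```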
How Do You Target Your Customers at All Channel Touch Points to Individualize Their Shopping Experience? 5 Ways to Personalize Your Product Recommendations
Traditional in-store merchandisers frequently engage in new and refreshing ways to improve the shopping experience for their customers. With thousands of square feet of space, accessorized mannequins, and attractive kiosks and showcase displays, they’re afforded numerous luxuries in creating environments conducive to driving sales.
Since online retailers lack these merchandising luxuries, they leverage entirely different tools to enhance the customer experience, using key components of web-based shopping, such as search, navigation, and product recommendations. In the world of e-commerce, we’ve moved from merely selling to customers to empowering them. As a result, e-commerce directors and online merchandisers need to optimize the process by delivering what their customers want.
And what do customers want? Apparently, they want it all: the price and ease of point-and-click purchasing, the experience of in-store visits, the convenience of home delivery, and the service of boutique shopping. They don’t think about product exposure. They don’t focus on the mechanics of the shopping process. And they don’t care about an e-tailer’s internal complexity. They want simply to buy what they need in a way that’s suited to them.
At the same time, because customers’ needs aren’t static, their online shopping experiences shouldn’t be either. They increasingly expect a personal and highly relevant interaction with the retail websites they visit. Failing to get that, they’ll often go elsewhere. For online retailers and brand manufacturers of consumer packaged goods (CPG), this creates the challenge of customer retention.
Here are five hints to maximize the power of PIM for tailored product recommendations
Meeting the challenge of customer retention demands leveraging product information management (PIM) to target your customers. This means customizing online shopping by boosting the relevance of product recommendations. To make the most of PIM, there are five things you should know:
1. To meet the expectations of your online customers, strive to understand them better by using data that’s qualitative and quantitative, historic and current; in addition, use data that provides a context critical to taking relevant actions.
2. To ensure that the experiences of your online customers aren’t static or boilerplate, make all key elements in the merchandising toolset intuitive and dynamic but, above all, tailored to the individual customer.
3. To leverage relevant content for other sales channels, consider the long-tail strategy and enhance the assortment. If customers dial in to the hotline, inside sales can leverage product search and automatic recommendations for intelligent cross- and up-selling within seconds. The hotline connects customer profiles and product information, as well as stock availability in your own warehouse or in the suppliers’ warehouses.
4. To present the best option to each customer, automate the personalization of promotions and targeting. This means checking that every promotional banner presented has been optimized; banners that convert poorly must be automatically demoted and replaced by others that perform well. It’s all about automatic testing: Which campaign will convert best, and which banner within that campaign: the blue, the red, or the one with a big arrow on it? Adopting an integrated approach ensures that a campaign will not be presented more than once on the same page. By using advanced techniques to understand which promotions appeal to each customer, and more importantly which don’t, the solution adapts in real time to present the most appealing banners in the context of each customer’s journey (a toy sketch of this kind of automatic testing follows this list).
5. To ensure a cohesive customer experience, unify the many different information elements: filters, banners, promotions, product recommendations, and editorial content. Seen from the user’s perspective, these elements should all be parts of the same picture, presented in a coherent context. The user expects content shown in all of these panels to be orchestrated and related to what is relevant to him or her right now.
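The automatic banner testing described in point 4 is often implemented with bandit-style algorithms. Here is a toy epsilon-greedy sketch under that assumption; the banners, click counts and epsilon value are invented for illustration:

```python
import random

# Hypothetical banners with their observed clicks and impressions.
banners = {
    "blue": {"clicks": 12, "impressions": 400},
    "red": {"clicks": 30, "impressions": 500},
    "big-arrow": {"clicks": 5, "impressions": 100},
}

def pick_banner(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: usually show the best converter, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(banners))
    return max(banners,
               key=lambda b: banners[b]["clicks"] / banners[b]["impressions"])

def record_impression(banner: str, clicked: bool) -> None:
    """Feed each result back so poor banners are demoted automatically."""
    banners[banner]["impressions"] += 1
    banners[banner]["clicks"] += int(clicked)

chosen = pick_banner()
record_impression(chosen, clicked=False)
print(chosen)
```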
Information Needs to Be Relevant: Targeting Customers with Relevant Product Recommendations Means Valuing Customers, and Valuing Customers Means Keeping Customers
Using the behavioral data generated by product information and customer data (see the entire whitepaper), companies can:
- Monitor product exposure for different customer segments
- Gather information on how your visitors name, find, and filter products
- Learn which requests, products, and categories best boost the conversion of your channel
- Track customer behavior online after the execution of print campaigns
- Learn which changes in product data have the biggest impact on conversion
With cutting-edge product information management, you can guide and inspire your customers with instant, highly relevant content like real product recommendations. Doing so makes all the difference in boosting the quality of their shopping experience and, ultimately, their loyalty to your site.
One last thing: Have you already thought about tailoring other sales and marketing channels beyond e-commerce and e-mail? What is the next logical buy at a call center or at an on-site store?
Do you know how good your multichannel data is? This blog post covers four business objectives to pursue when accelerating multichannel commerce, the quality of product data needed to deliver on them, and a summary of questions to ask when establishing your strategy. These questions help ecommerce managers, category managers and marketers at retailers, distributors and brand manufacturers ask the right questions about product and customer data when establishing a multichannel strategy.
The Multichannel Challenge: Availability of Relevant Information
At every customer touch point, the ready availability of product information has a profound effect on buying decisions. If your customers can’t find what they’re shopping for, don’t understand how well your product meets their needs, or aren’t confident in their choice, they won’t complete their purchase.
When customers are researching or actively shopping for products online, research says 40 is the magic number:
- 40% of buyers intend to return their purchase at the time they order it.
- 40% order multiple versions of a product.
- 40% of all fashion product returns are the result of poor product information (for consumer electronics it is 15.3%; sources: Trusted Shops, 2012; Internet World Business, 7.1.2013).
All the high-quality product data in the world is useless if an organization cannot leverage that data for quicker time to market, improved e-commerce performance, and greater customer satisfaction.
Four Business Objectives When Accelerating Multichannel Commerce
This white paper comes with four common use cases that illustrate typical business objectives within a multichannel commerce strategy. When looking into your product information, here is a list of questions you might consider.
1. Increasing conversions and lowering return rates by ensuring that customers can access product information in an easy-to-consume form.
- Where is the flawed content coming from?
- What tools and incentives can we provide for suppliers to maintain high-quality content?
- Which data quality processes should be automated first?
- Do we need a bespoke data model to fit our requirements?
- Can we effectively use industry standards for communicating with suppliers (such as GS1 or eClass)?
2. Lowering manual processing costs by merging the best product content from multiple suppliers.
- How many product catalogs do we have and what are the processes that slow us down?
- Who is responsible for the quality of the product information?
- How can we define and enforce objective, measurable policies?
- Which supplier has the best descriptions, specific translations, high-quality images, videos, etc.?
- How do we collaborate with our large and small suppliers to achieve best data quality?
3. Growing margins through “long tail” merchandising of a broader assortment of products.
- Can we automate product classification (see the sketch after this list)?
- Which taxonomy will work best for us?
- Do all stakeholders have visibility of data quality metrics and trends?
- How can we leverage information across all channels and customer touch points, not only ecommerce?
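For the product classification question above, here is a minimal sketch of automated classification, assuming scikit-learn is available; the handful of labeled descriptions is hypothetical, and production systems train on far more data against richer taxonomies such as GS1 or eClass:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: product descriptions and their categories.
descriptions = [
    "55 inch LED TV with HDMI",
    "Cotton crew-neck t-shirt, blue",
    "Stainless steel chef knife 20cm",
    "OLED television 65 inch smart",
    "Linen summer dress, red",
    "Paring knife, ceramic blade",
]
categories = ["electronics", "fashion", "kitchen",
              "electronics", "fashion", "kitchen"]

# Vectorize the text and fit a simple classifier in one pipeline.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(descriptions, categories)

print(classifier.predict(["Smart TV 42 inch HD"]))  # likely ['electronics']
```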
4. Increasing customer satisfaction through more consistent information and corporate identity across sales channels.
- How should we connect customer and product information to provide personalized marketing?
- How can we leverage supplier and location data for regional marketing?
- How do we enable crowd sourcing of comments, reviews and user images?
- What information do internal and external users need to access in real time?
Find more information with the complete white paper on multichannel commerce and data quality.