Category Archives: Manufacturing

How Much is Poorly Managed Supplier Information Costing Your Business?

“Inaccurate, inconsistent and disconnected supplier information prohibits us from doing accurate supplier spend analysis, leveraging discounts, comparing and choosing the best prices, and enforcing corporate standards.”

This is a quotation from a manufacturing company executive. It illustrates the negative impact that poorly managed supplier information can have on a company’s ability to cut costs and achieve revenue targets.

Many supply chain and procurement teams at large companies struggle to see the total relationship they have with suppliers across product lines, business units and regions. Why? Supplier information is scattered across dozens or hundreds of Enterprise Resource Planning (ERP) and Accounts Payable (AP) applications. Too much valuable time is spent manually reconciling inaccurate, inconsistent and disconnected supplier information in an effort to see the big picture. All this manual effort results in back office administrative costs that are higher than they should be.
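
To make that reconciliation burden concrete, here is a minimal sketch of the kind of duplicate check a procurement analyst ends up scripting by hand – the supplier records, field names and 0.85 similarity threshold below are illustrative assumptions, not a recommended implementation:

```python
# Minimal, illustrative duplicate check across supplier records exported
# from two hypothetical ERP systems. Field names and the 0.85 threshold
# are assumptions for the sake of the example.
from difflib import SequenceMatcher

def normalize(name):
    """Lowercase, drop punctuation and common legal suffixes."""
    name = name.lower().replace(".", "").replace(",", "")
    for suffix in (" inc", " llc", " ltd", " corp", " gmbh"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return " ".join(name.split())

def likely_duplicates(records, threshold=0.85):
    """Return pairs of record IDs whose normalized names look like the same supplier."""
    matches = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            a = normalize(records[i]["name"])
            b = normalize(records[j]["name"])
            score = SequenceMatcher(None, a, b).ratio()
            if score >= threshold:
                matches.append((records[i]["id"], records[j]["id"], round(score, 2)))
    return matches

suppliers = [
    {"id": "ERP1-00017", "name": "Acme Industrial Supply, Inc."},
    {"id": "ERP2-93412", "name": "ACME Industrial Supply"},
    {"id": "ERP1-00552", "name": "Baxter Tooling Ltd."},
]
print(likely_duplicates(suppliers))   # [('ERP1-00017', 'ERP2-93412', 1.0)]
```

The catch, of course, is that a pairwise comparison like this does not scale to 500,000 supplier records and says nothing about which record should survive – which is exactly the gap MDM tooling is meant to close.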

Do these quotations from supply chain leaders and their teams sound familiar?

  • “We have 500,000 suppliers. 15-20% of our supplier records are duplicates. 5% are inaccurate.”
  • “I get 100 e-mails a day questioning which supplier to use.”
  • “To consolidate vendor reporting for a single supplier between divisions is really just a guess.”
  • “Every year 1099 tax mailings get returned to us because of invalid addresses, and we pay a lot of Schedule B fines to the IRS.”
  • “Two years ago we spent a significant amount of time and money cleansing supplier data. Now we are back where we started.”

Join us for a Webinar to find out how to supercharge your supply chain applications with clean, consistent and connected supplier information

Please join me and Naveen Sharma, Director of the Master Data Management (MDM) Practice at Cognizant, for a Webinar, Supercharge Your Supply Chain Applications with Better Supplier Information, on Tuesday, July 29th at 10 am PT.

During the Webinar, we’ll explain how better managing supplier information can help you achieve the following goals:

  1. Accelerate supplier onboarding
  2. Mitigate the risk of supply disruption
  3. Better manage supplier performance
  4. Streamline billing and payment processes
  5. Improve supplier relationship management and collaboration
  6. Make it easier to identify non-compliance with Service Level Agreements (SLAs)
  7. Decrease costs by negotiating favorable payment terms and SLAs

I hope you can join us for this upcoming Webinar!

 

 


Margin Killer: How a pair of pants plummeted profits

Recently, I ordered a pair of athletic pants from a high-fashion, online retailer. The pants were a well-known brand and cost $96.00. The package arrived within a few days. However, when I opened the box, I found it did not contain the product I expected. The brand and color were correct, but it was not the style I’d chosen. Disappointed, I wrote the retailer, explaining the issue and requesting the correct product. Then, I returned the incorrect product.

According to recent research, the average vendor’s “cost per return” is $20.00.  That means that my return was a Margin Killer for the retailer.


Product returns kill margins.

Three days later, the replacement delivery arrived. Whoop there it is… Disappointment number two. It was the exact same incorrect product. Yet another Margin Killer, Return Number 2. Another $20.00 in costs for the retailer. What would it take for this retailer’s logistics team to avoid repeating their error? Could they scan the product? Could they use a QR code, a barcode or some sort of picture?
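
One way a fulfillment operation can catch this class of error is a scan-and-verify step at packing time: the system refuses to release the shipment until the barcode scanned off the picked item matches the SKU on the order line. Here is a minimal sketch, with made-up SKU codes and function names:

```python
# Illustrative only: block a shipment when the scanned item does not match
# the ordered SKU. The SKU codes below are invented for the example.
class PickMismatch(Exception):
    pass

def verify_pick(order_sku, scanned_barcode):
    """Raise if the packed item is not the item the customer ordered."""
    if scanned_barcode.strip().upper() != order_sku.strip().upper():
        raise PickMismatch(
            f"Ordered {order_sku!r} but scanned {scanned_barcode!r} – re-pick before printing the label."
        )

verify_pick("PANT-ATHL-NVY-M", "PANT-ATHL-NVY-M")        # correct item, passes silently

try:
    verify_pick("PANT-ATHL-NVY-M", "PANT-TRCK-NVY-M")    # wrong style picked
except PickMismatch as err:
    print(err)
```

Three avoidable returns at $20 each would have paid for a lot of barcode scans.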

I returned the incorrect product for the second time. Eventually, shipment number three reached my home. Can you guess what was in the box? Yes, the same incorrect product, again, for the third time. The Margin Killer: Return Number 3. For this retailer, the math is simple:

Return 1: $20.00
Return 2: $20.00
Return 3: $20.00
Total return cost: $60.00
Revenue = Possibly zero?

Funky side note: When browsing stores downtown on Saturday, I found the correct pants in a SportScheck store, and for ten dollars less! So remember, the modern customer is demanding, always-connected and shopping on an “Informed Purchase Journey”.

So how can I learn more?
If you work in retail technology, you will find rich information about this purchase journey at the Informatica World 2014 conference. The Retail Path track will feature insights from companies like Nike, Avent, Discount Tire, Nordstrom, Geiger, Intricity and Deloitte. Experts will share ways to leverage your data to boost your sales and enhance the customer experience. The conference even has a dedicated MDM Day on Monday, May 12, with workshops and sessions showing how vendors, distributors, retailers and individuals interact in the “always-on” connected world. Make sure you have a spot by signing up HERE.


Death of the Data Scientist: Silver Screen Fiction?

Maybe the word “death” is a bit strong, so let’s say “demise” instead.  Recently I read an article in the Harvard Business Review about how Big Data and Data Scientists will rule the world of the 21st-century corporation and how they will have to operate for maximum value.  The thing I found rather disturbing was that it takes a PhD – probably a few of them – in a variety of math areas to give executives the necessary insight to make better decisions, ranging from what product to develop next to whom to sell it and where.

Who will walk the next long walk…. (source: Wikipedia)

Don’t get me wrong – this is mixed news for any enterprise software firm helping businesses locate, acquire, contextually link, understand and distribute high-quality data.  The existence of such a high-value role validates product development, but it also limits adoption.  It is also great news that data has finally gained the attention it deserves.  But I am starting to ask myself why it always takes individuals with a “one-in-a-million” skill set to add value.  What happened to the democratization of software?  Why is the design starting point for enterprise software not more like B2C applications, such as an iPhone app, i.e. simpler is better?  Why is it always such a gradual “Cold War” evolution instead of a near-instant French Revolution?

Why do development environments for Big Data not accommodate limited or existing skills, but always cater to the most complex scenarios?  Well, the answer could be that the first customers are very large, very complex organizations with super complex problems that they have been unable to solve so far.  If analytical apps have become a self-service proposition for business users, data integration should be as well.  So why does access to a lot of fast-moving and diverse data require scarce Pig or Cassandra developers to get the data into an analyzable shape and a PhD to query and interpret patterns?

I realize new technologies start with a foundation, and as they spread, supply will attempt to catch up and create an equilibrium.  However, this is about a problem that has existed for decades in many industries, such as the oil & gas, telecommunications, public and retail sectors. Whenever I talk to architects and business leaders in these industries, they chuckle at “Big Data” and tell me “yes, we got that – and by the way, we have been dealing with this reality for a long time”.  By now I would have expected that the skill (cost) side of turning data into meaningful insight would have been driven down more significantly.

Informatica has made a tremendous push in this regard with its “Map Once, Deploy Anywhere” paradigm.  I cannot wait to see what’s next – and I just saw something recently that got me very excited.  Why, you ask? Because at some point I would like to see at least a business super-user pummel terabytes of transaction and interaction data into an environment (Hadoop cluster, in-memory DB…) and massage it so that their self-created dashboard gets them where they need to go.  This should include concepts like: “Where is the data I need for this insight?”, “What is missing, and how do I get to that piece in the best way?”, “How do I want it to look so I can share it?”  All that should be required is a semi-experienced knowledge of Excel and PowerPoint to get your hands on advanced Big Data analytics.  Don’t you think?  Do you believe that this role will disappear as quickly as it has surfaced?


Merchandizers, The Compromise Effect Has Gone – User Reviews Overcome Marketing Manipulation

I was recently boarding a flight in New York and started reading the New York Times. One article jumped out: “User reviews make it harder for marketers to manipulate.” A Stanford University research report shows that a wealth of product information and user reviews is causing a fundamental shift in how consumers make decisions.

 Consumers rely more on one another

The latest research from Dr. Simonson and Emanuel Rosen is based on an experiment performed decades ago at Duke University. In the experiment, participants had to choose from a group of either two or three cameras. The research found that consumers chose the cheaper product when offered two options, but when given three choices, most went with the middle one. This became known as the “compromise effect,” which marketers have used to influence buying decisions.

But an updated version of the experiment allowed participants to read product ratings and reviews before choosing one of the three cameras. While a portion of the participants still chose the lowest-priced product, in this new scenario more participants selected the most expensive product over the middle-priced one, based on customer reviews.

“The compromise effect is gone,” says Dr. Simonson in the New York Times article. The book “Absolute Value” offers a more in-depth explanation (http://www.absolutevaluebook.com/).


Imagine if you could own and control both customer opinion and product information. The next wave, which will take omnichannel commerce to the next level, will address information relevancy across every channel and every customer interaction – something we call Commerce Relevancy.


Sensational Find – $200 Million Hidden in a Teenager’s Bedroom!

That tag line got your attention – did it not?  Last week I talked about how companies are trying to squeeze more value out of their asset data (e.g. equipment of any kind) and the systems that house it.  I also highlighted the fact that IT departments in many companies with physical asset-heavy business models have tried (and often failed) to create a consistent view of asset data in a new ERP or data warehouse application.  These environments are neither equipped to deal with all life cycle aspects of asset information, nor are they fixing the root of the data problem in the sources, i.e. where the stuff is and what it looks like. It is like a teenager whose parents have spent thousands of dollars buying him the latest garments, but he always wears the same three outfits because he cannot find the others in the pile he hoards under his bed.  And now they have bought him a smartphone to fix it.  So before you buy him the next black designer shirt, maybe it would be good to find out how many of the same designer shirts he already has, what state they are in and where they are.

Finding the asset in your teenager’s mess

Recently, I had the chance to work on a similar problem with a large overseas oil & gas company and a North American utility.  Both are by definition asset-heavy, very conservative in their business practices, highly regulated, very much dependent on outside market forces such as the oil price and geographically very dispersed – and thus, by default, a classic system-integration spaghetti dish.

My challenge was to find out where the biggest opportunities were in terms of harnessing data for financial benefit.

The initial sense in oil & gas was that most of the financial opportunity hidden in asset data was in G&G (geophysical & geological) and the least on the retail side (lubricants and gas for sale at operated gas stations).  On the utility side, the go-to area for opportunity appeared to be maintenance operations.  Let’s say that I was about right with these assertions, but there were a lot more skeletons in the closet with diamond rings on their fingers than I anticipated.

After talking extensively with a number of department heads in the oil company – starting with the IT folks running half of the 400 G&G applications, the ERP instances (it turns out there were 5, not 1) and the data warehouses (3) – I queried the people in charge of lubricant and crude plant operations, hydrocarbon trading, finance (tax, insurance, treasury) as well as supply chain, production management, land management and HSE (health, safety, environmental).

The net-net was that the production management people said there was no issue, as they had already cleaned up the ERP instance around customer and asset (well) information. The supply chain folks also indicated that they had used another vendor’s MDM application to clean up their vendor data – which, funnily enough, was not fed back into the procurement system responsible for ordering parts.  The data warehouse/BI team was comfortable that they had cleaned up any information for supply chain, production and finance reports before dimension and fact tables were populated for the data marts.

All of this was pretty much a series of denial sessions on the 12-step road to recovery, as the IT folks had very little interaction with the business and therefore little sense of how relevant, correct, timely and useful these actions were for the end consumers of the information.  They also had to run and adjust fixes every month or quarter as source systems changed, new legislation dictated adjustments and new executive guidelines were announced.

While every department tried to run semi-automated, monthly clean-up jobs with scripts and some off-the-shelf software to fix their particular situation, the corporate (holding) company and any downstream consumers had no consistent basis for sensible decisions on where and how to invest without throwing another legion of bodies (by now over 100 FTEs in total) at the same problem.

So at every stage of the data flow from sources to the ERP to the operational BI and lastly the finance BI environment, people repeated the same tasks: profile, understand, move, aggregate, enrich, format and load.
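
Even the first of those tasks, profiling, gets rebuilt over and over. Here is a minimal sketch of the kind of throwaway profiling script each team ends up writing for itself, with invented well records as input:

```python
# Illustrative profiling pass: row count, null count and distinct values
# per column - the kind of check each team ends up rewriting for itself.
from collections import defaultdict

def profile(rows):
    stats = defaultdict(lambda: {"nulls": 0, "distinct": set()})
    for row in rows:
        for column, value in row.items():
            if value in (None, ""):
                stats[column]["nulls"] += 1
            else:
                stats[column]["distinct"].add(value)
    return {
        col: {"rows": len(rows), "nulls": s["nulls"], "distinct": len(s["distinct"])}
        for col, s in stats.items()
    }

wells = [
    {"well_name": "Eagle-7", "operator": "OpCo A"},
    {"well_name": "EAGLE 7", "operator": None},
    {"well_name": "Eagle-7", "operator": "OpCo A"},
]
print(profile(wells))
# {'well_name': {'rows': 3, 'nulls': 0, 'distinct': 2},
#  'operator': {'rows': 3, 'nulls': 1, 'distinct': 1}}
```

Run independently by every department against every source, scripts like this are exactly the duplicated effort behind the 100-plus FTEs mentioned above – and note how “Eagle-7” and “EAGLE 7” quietly count as two different wells.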

Despite the departmental clean-up efforts, areas like production operations did not know with certainty (even after their clean-up) how many well heads and bores they had, where they were downhole, and who had last changed a characteristic as mundane as the well name, and why (governance, location match).

Marketing (Trading) was surprisingly open about their issues.  They could not process incoming, anchored crude shipments into inventory, or assess who owned the counterparty they sold to and what payment terms were appropriate given the associated credit or concentration risk (reference data, hierarchy management).  As a consequence, operating cash accuracy was low despite ongoing process improvements, incurring opportunity cost.

Operational assets like rig equipment had excess insurance coverage (location, operational data linkage), and fines paid to local governments for incorrectly filed or unrenewed work visas were not refunded for up to two years, incurring opportunity cost (employee reference data).

A big chunk of savings was locked up in unplanned NPT (non-production time) because inconsistent, incorrect well data triggered incorrect maintenance intervals. Similarly, OEM-specific DCS (drill control system) component software lacked a central reference data store, so alerts were not triggered before components failed. Add on top of that the missing linkage between data served by thousands of sensors via well logs and PI historians, and their ever-changing roll-ups for operations and finance, and the resulting chaos is complete.

One approach we employed around NPT improvements was to take the revenue-from-production figure from their 10-K and combine it with the industry benchmark for the number of NPT days per 100 days of production (typically about 30% across average depths, onshore and offshore).  Then overlay a benchmark (if they do not know their own figure) for how many of those NPT days were due to bad data rather than equipment failure or the like; fix even a portion of that, and you are looking at big numbers.
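
As a back-of-the-envelope illustration of that overlay (every input below is a placeholder assumption, not a figure from either engagement, and treating the NPT share as a straight haircut on production revenue is a deliberate simplification):

```python
# Back-of-the-envelope NPT savings estimate. Every input here is a
# placeholder assumption, not a client figure.
production_revenue  = 2_000_000_000  # annual revenue from production, from the 10-K
npt_share           = 0.30           # benchmark: ~30% of production days lost to NPT
npt_due_to_bad_data = 0.10           # assumed share of NPT days caused by bad data
fixable_fraction    = 0.25           # assume we can eliminate a quarter of those

annual_savings = production_revenue * npt_share * npt_due_to_bad_data * fixable_fraction
print(f"Estimated annual savings: ${annual_savings:,.0f}")   # $15,000,000
print(f"Over five years: ${5 * annual_savings:,.0f}")        # $75,000,000
```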

When I sat back and looked at all the potential, it came to more than $200 million in savings over five years – and this before any sensor data from rig equipment, like the myriad of siloed applications running within a drill control system, is integrated and leveraged via a Hadoop cluster to influence operational decisions like drill string configuration or azimuth.

Next time, I’ll share some insight into the results of my most recent utility engagement, but I would love to hear from you about your experience in these two or other similar industries.

Disclaimer:
Recommendations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control, and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized; no warranty or representation of success, either express or implied, is made.

Squeezing the Value out of the Old Annoying Orange

Most people in the software business would agree that it is tough enough to calculate, and hence financially justify, the purchase or build of an application – especially middleware – to a business leader or even a CIO.  Most business-centric IT initiatives involve improving processes (order, billing, service) and visualization (scorecarding, trending) so that end users can be more efficient in engaging accounts.  Some of these have actually migrated to targeting improvements toward customers rather than their logical placeholders, like accounts.  Similar strides have been made in the realm of other party types (vendor, employee) as well as product data.  These initiatives also tackle analyzing larger or smaller data sets and providing a visual set of clues on how to interpret historical or predictive trends on orders, bills, usage, clicks, conversions, etc.

Squeeze that Orange

If you think this is a tough enough proposition in itself, imagine the challenge of quantifying the financial benefit derived from understanding where your “hardware” is physically located, how it is configured, and who maintained it, when and how.  Depending on the business model, you may even have to figure out who built it or owns it.  All of this has bottom-line effects on how, by whom and when expenses are paid and revenues get realized and recognized.  And then there is the added complication that these dimensions of hardware are often fairly dynamic: they can change ownership and/or physical location and hence tax treatment, insurance risk, etc.

Such hardware could be a pump, a valve, a compressor, a substation, a cell tower, a truck or components within these assets.  Over time, with new technologies and acquisitions coming about, the systems that plan for, install and maintain these assets become very departmentalized in scope and specialized in function.  The application that designs an asset for department A or region B is not the one accounting for its value, which is not the one reading its operational status, which is not the one scheduling maintenance, which is not the one billing for any repairs or replacement.  The same folks who said the Data Warehouse was the “Golden Copy” now say the “new ERP system” is the new central source for everything.  Practitioners know that this is either naiveté or maliciousness. And then there are manual adjustments….

Moreover, to truly squeeze value out of these assets as they are installed and upgraded, the massive amounts of data they generate in a myriad of formats and intervals need to be understood, moved, formatted, fixed, interpreted at the right time and stored for future use in a cost-sensitive, easy-to-access and contextually meaningful way.

I wish I could tell you one application does it all, but the unsurprising reality is that it takes a concoction of several.  Few, if any, asset life-cycle-supporting legacy applications will be retired, as they often house data in formats commensurate with the age of the assets they were built for.  It makes little financial sense to shut down these systems in a big-bang approach; it is better to migrate region after region and process after process to the new system.  After all, some of the assets have been in service for 50 or more years, and the institutional knowledge tied to them is becoming nearly as old.  Also, it is probably easier to engage in the often-required manual data fixes (hopefully only outliers) bit by bit, especially to accommodate imminent audits.

So what do you do in the meantime, until all the relevant data is in a single system, to get an enterprise-level way to fix your asset tower of Babel and leverage the data volume rather than treat it like an unwanted stepchild?  Most companies that operate asset-heavy, fixed-cost business models do not want to create a disruption but rather a steady tuning effect (squeezing the data orange) – something rather unsexy in this internet day and age.  This is especially true in “older” industries where data is still considered a necessary evil, not an opportunity ready to exploit.  The fact is, though, that in order to improve the bottom line, we had better get going, even if it is with baby steps.

If you are aware of business models that have difficulty leveraging data, write to me.  If you know of an annoying, peculiar or esoteric data “domain” that does not lend itself to being easily leveraged, share your thoughts.  Next time, I will share some examples of how certain industries try to work in this environment, what they envision and how they go about getting there.


Sunshine Act Spotlights Requirement for Accurate Physician Information

The Physician Payments Sunshine Act shines a spotlight on the disorganized state of physician information at most pharmaceutical and medical device manufacturing companies: it is scattered across systems and often incomplete, inaccurate and inconsistent.

According to the recent Wall Street Journal article Doctors Face New Scrutiny over Gifts, “Drug companies collectively pay hundreds of millions of dollars in fees and gifts to doctors every year. In 2012, Pfizer Inc., the biggest drug maker by sales, paid $173.2 million to U.S. health-care professionals.”

The Risks of Creating Reports with Inaccurate Physician Information


Failure to comply with the federal Sunshine Act puts companies at risk of damaging relationships with physicians they’ve spent years cultivating.

There are serious risks of filing inaccurate reports. Just imagine dealing with:

  • An angry call from a physician who received a $25 meal that was inaccurately reported as $250, or who reportedly received a gift that actually went to someone with a similar name.
  • Hefty fines and increased scrutiny from the Centers for Medicare and Medicaid Services (CMS). Fines range from $1,000 to $10,000 per transaction, with a maximum penalty of $1.15 million.
  • Negative media attention. Reports will be available for anyone to access on a publicly accessible website.

How prepared are manufacturers to track and report physician payment information?

One of the major obstacles is getting a complete picture of the total payments made to one physician. Manufacturers need to know if Dr. Sriram Mennon and Dr. Sri Menon are one and the same.

On top of that, they need to understand the complicated connections between Dr. Sriram Menon, sales representatives’ expense report spreadsheets (T&E), marketing and R&D expenses, event data, and accounts payable data.
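
Here is a minimal sketch of that roll-up problem, assuming a hand-built alias table and invented payment records from three source systems; real Aggregate Spend solutions replace the alias table with probabilistic matching and survivorship rules:

```python
# Illustrative only: roll up payments to a single physician identity
# across source systems. The alias map and payment records are made up.
from collections import defaultdict

# Map every name variant seen in source systems to one golden record ID.
alias_to_physician = {
    "Dr. Sriram Menon": "PHYS-0042",
    "Dr. Sri Menon": "PHYS-0042",
    "Sriram Mennon MD": "PHYS-0042",
}

payments = [
    {"source": "T&E",    "payee": "Dr. Sri Menon",    "amount": 25.00},
    {"source": "AP",     "payee": "Sriram Mennon MD", "amount": 1500.00},
    {"source": "Events", "payee": "Dr. Sriram Menon", "amount": 300.00},
]

totals = defaultdict(float)
unmatched = []
for p in payments:
    physician_id = alias_to_physician.get(p["payee"])
    if physician_id is None:
        unmatched.append(p)          # route to data stewards for review
    else:
        totals[physician_id] += p["amount"]

print(dict(totals))     # {'PHYS-0042': 1825.0}
print(unmatched)        # []
```

The hard part is not the summation but keeping the identity resolution complete and current – which is where the data quality and MDM foundation described below earns its keep.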

3 Steps to Ensure Physician Information is Accurate

In recent years, some pharmaceutical manufacturers and medical device manufacturers were required to respond to “Sunshine Act” type laws in states like California and Massachusetts. To simplify, automate and ensure physician payment reports are filed correctly and on time, they use an Aggregate Spend Repository or Physician Spend Management solution.

They also use these solutions to proactively track and review physician payments on a regular basis to ensure mandated thresholds are met before reports are due. Aggregate Spend Repository and Physician Spend Management solutions rely on a foundation of  data integration, data quality, and master data management (MDM) software to better manage physician information.

For those manufacturers who want to avoid the risk of losing valuable physician relationships, paying hefty fines, and receiving scrutiny from CMS and negative media attention, here are three steps to ensure accurate physician information:

  1. Bring all your scattered physician information, including identifiers, addresses and specialties into a central place to fix incorrect, missing or inconsistent information and uniquely identify each physician.
  2. Identify connections between physicians and the hospitals and clinics where they work to help aggregate accurate payment information for each physician.
  3. Standardize transaction information so it’s easy to identify the purpose of payments and related products and link transaction information to physician information.

Physicians Will Review Reports for Accuracy in January 2014

In January 2014, after physicians review the federally mandated financial disclosures, they may question the accuracy of reported payments. Within two months, manufacturers will need to fix any discrepancies and file their Sunshine Act reports, which will become part of a permanent archive. Time is precious for those companies who haven’t built an Aggregate Spend Repository or Physician Spend Management solution to drive their Sunshine Act compliance reports.

If you work for one of the pharmaceutical or medical device manufacturing companies already using an Aggregate Spend Repository or Physician Spend Management solution, please share your tips and tricks with others who are behind.

Tick tock, tick tock….

 


Multichannel Data: Questions To Ask When Establishing Your Strategy

Do you know how good your multichannel data is? This blog covers four business objectives for accelerating multichannel commerce, the quality of product data needed to deliver on them, and a summary of questions to ask when establishing your strategy. These questions help ecommerce managers, category managers and marketers at retailers, distributors and brand manufacturers ask the right questions about product and customer data when establishing a multichannel strategy.

The Multichannel Challenge: Availability of Relevant Information

At every customer touch point, the ready availability of product information has a profound effect on buying decisions. If your customers can’t find what they’re shopping for, don’t understand how well your product meets their needs, or aren’t confident in their choice, they won’t complete their purchase.

When customers are researching or actively shopping online for products, research says 40 is the magic number:

  • 40% of buyers intend to return their purchase at the time they order it.
  • 40% order multiple versions of a product.
  • 40% of all fashion product returns are the result of poor product information (for consumer electronics, the figure is 15.3%; sources: Trusted Shops, 2012; Internet World Business, 7.1.2013).

All the high-quality product data in the world is useless if an organization cannot leverage that data for quicker time to market, improved e-commerce performance, and greater customer satisfaction.


 

Four Business Objectives When Accelerating Multichannel Commerce

The white paper referenced at the end of this post includes four common use cases that illustrate typical business objectives within a multichannel commerce strategy. When looking into your product information, here is a list of questions you might consider.

1. Increasing conversions and lowering return rates by ensuring that customers can access product information in an easy-to-consume form.

  • Where is the flawed content coming from?
  • What tools and incentives can we provide for suppliers to maintain high-quality content?
  • Which data quality processes should be automated first?
  • Do we need a bespoke data model to fit our requirements?
  • Can we effectively use industry standards for communicating with suppliers (such as GS1 or eClass)?

2. Lowering manual processing costs by merging the best product content from multiple suppliers.

  • How many product catalogs do we have and what are the processes that slow us down?
  • Who is responsible for the quality of the product information?
  • How can we define and enforce objective and measurable policies?
  • Which supplier has the best descriptions, translations, images, videos, etc.?
  • How do we collaborate with our large and small suppliers to achieve the best data quality?

3. Growing margins through “long tail” merchandising of a broader assortment of products.

  • Can we automate product classification? (see the sketch after this list)
  • Which taxonomy will work best for us?
  • Do all stakeholders have visibility of data quality metrics and trends?
  • How can we leverage information across all channels and customer touch points, not only ecommerce?
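
On the product classification question, a minimal rule-based sketch shows how far simple keyword rules can take you before supplier-provided codes (such as GS1 or eClass) or a trained model become necessary; the taxonomy and keywords below are invented for the example:

```python
# Illustrative rule-based classifier: map product titles to taxonomy nodes
# by keyword. Taxonomy and keywords are invented for the example; real
# catalogs typically combine rules with supplier-provided classifications
# or trained models.
TAXONOMY_RULES = {
    "Apparel > Pants": ["pants", "trousers", "jeans"],
    "Apparel > Shirts": ["shirt", "tee", "polo"],
    "Electronics > Cameras": ["camera", "dslr"],
}

def classify(title):
    lowered = title.lower()
    for category, keywords in TAXONOMY_RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "Unclassified"   # queue for manual review by a data steward

for t in ["Men's Athletic Pants, Navy", "Compact DSLR Camera Kit", "Garden Hose 25m"]:
    print(t, "->", classify(t))
```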

4. Increasing customer satisfaction through more consistent information and corporate identity across sales channels.

  • How should we connect customer and product information to provide personalized marketing?
  • How can we leverage supplier and location data for regional marketing?
  • How do we enable crowdsourcing of comments, reviews and user images?
  • What information do internal and external users need to access in real time?

Find more information in the complete white paper on multichannel commerce and data quality.


Logitech MDM Case Study: Live Questions, Answers and Attendee Poll Results (Part II of II)

Last week, I posted this blog: Logitech MDM Case Study: Seven Lessons for Mastering Product and Customer Data (Part I of II), which shares highlights from a recent webinar. In the webinar, Logitech’s Severin Stoll, Senior Business Engagement Manager of Global IT Solutions, spoke with David Decloux, MDM technical lead in EMEA, about Logitech’s global MDM implementation, in which they are mastering product, customer and consumer data.

Logitech’s Severin Stoll and Informatica's David Decloux answer questions from webinar attendees about building an MDM business case, implementation time, data governance, and real-time MDM.

In this blog, I’ll share some of the highlights of the Q&A I led and the results from two polls.
