Category Archives: B2B Data Exchange
The Informed Purchase Journey
The way we shop has changed. It’s hard to keep up with customer demands in a single channel, much less across many. Selling products has changed, and it will keep changing. The video below shows how today’s customer takes The Informed Purchase Journey:
“Customers expect a seamless experience that makes it easy for them to engage at every touchpoint on their ‘decision journey.’ Informatica PIM is a key component in the transformation from a product-centric view to consumer-experience-driven marketing with more efficiency.” – Heather Hanson, Global Head of Marketing Technology at Electrolux
Selling products today is:
- Shopper-controlled. It’s never been easier for consumers to compare products and prices. This has eroded old customer loyalty and means you have to earn every sale.
- Global. If you’re selling your products in different regions, you’re facing complex localization and supply chain coordination.
- Fast. Product lifecycles are short. Time-to-market is critical (and gets tougher the more channels you’re selling through).
- SKU-heavy. Endless-aisle assortments are great for margins. That’s a huge opportunity, but product data overload due to the large number of SKUs and their attributes adds up to a huge admin burden.
- Data driven. Product data alone is more than a handful to deal with. But you also need to know as much about your customers as you know about your products. And the explosion of channels and touch points doesn’t make it any easier to connect the dots.
Conversion Power – From Deal Breaker To Deal Maker
For years, a customer’s purchase journey was something of “An Unexpected Journey.” Retailers and brands struggled with a lack of insight into that journey, which is now fraught with more questions about products than ever before, even for fast-moving consumer goods.
Consumer behavior and the role of product information have changed since the advent of high bandwidth and social buying. To see how, let’s examine the way shoppers buy today:
- According to Google’s Zero Moment of Truth (ZMOT) research, shoppers consult 10.4 sources on average.
- Mobile shoppers who view customer content such as reviews show a 133% higher conversion rate.
- Digital devices were expected to influence 50% of in-store purchase behavior by the end of 2014 (Deloitte, “The Digital Divide”).
How Informatica PIM 7.1 turns information from deal breaker to deal maker
PIM 7.1 comes with new data quality dashboards that help users such as category managers, copywriters, managers and e-commerce specialists do the right things. The quality dashboards point users to what they have to do next in order to get the data right, out and ready for sales.
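As a rough illustration of what such a dashboard measures (not the actual PIM implementation; the attribute names below are made up), readiness boils down to attribute completeness per product, with the gaps listed as the user’s next tasks:

```python
REQUIRED_FOR_SALE = ["name", "description", "price", "image_url"]

def readiness(product):
    """Return the share of required attributes that are filled,
    plus the list of attributes still missing."""
    missing = [f for f in REQUIRED_FOR_SALE if not product.get(f)]
    return 1 - len(missing) / len(REQUIRED_FOR_SALE), missing

catalog = [
    {"sku": "A1", "name": "Kettle", "description": "1.7 l",
     "price": 29.99, "image_url": "a1.jpg"},
    {"sku": "B2", "name": "Toaster", "price": 24.99},  # two attributes missing
]

# The "dashboard": SKUs not yet ready for sale, with their remaining gaps.
todo = [(p["sku"], readiness(p)[1]) for p in catalog if readiness(p)[0] < 1]
```

Here `todo` surfaces SKU `B2` with its missing description and image, which is exactly the “what to do next” signal the dashboards provide.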
Eliminate Shelf Lag: The Early Product Closes the Sale
For vendors, time-to-market effectively means the availability of a product plus the time it takes to collect all relevant product information so you can display it to the customer (product introduction time).
The biggest threat is not the competition – it’s your own time-consuming, internal processes. We call this Shelf Lag, and it’s a big inhibitor of retailer profits. Here’s why:
- You can’t sell what you can’t display.
- Be ready to spin up new channels.
- Watch your margins.
How Informatica PIM 7.1 speeds up product introduction and customer experience
“By 2017… customer experience is what buyers are going to use to make purchase decisions.” (Source: Gartner’s Hype Cycle for E-Commerce, 2013) PIM 7.1 comes with new editable channel previews. These help business users in marketing, translation, merchandising and product management envision how the product will look in the customer-facing webshop, catalog or other touchpoint. Getting products live online within seconds is key, because the customer always wants it now. For e-commerce product data, Informatica PIM is certified for IBM WebSphere Commerce, getting products ready for e-commerce within seconds.
The editable channel previews help professionals in product management, merchandising, marketing and e-commerce see their products as customers will see them. This “what you see is what you get” (WYSIWYG) approach to product data management improves the customer shopping experience with accurate, authentic information. With the new e-commerce integration, Informatica speeds up time-to-market in e-business: the new standard (certified by IBM WebSphere Commerce) enables live updates of e-shops with real-time integration.
The growing need for fast and secure collaboration across globally acting enterprises is addressed by Informatica’s Business Process Management tool, which is now available to PIM customers.
Intelligent insights: How relevant is our offering to your customers?
This is the age of annoyance and information overload. Each day, the average person has to handle more than 7,000 pieces of information, and only 25% of Americans say they are brand loyal. That means brands and retailers have to earn every new sale in a transparent world, and in this context information needs to be relevant to the recipient.
- Where does the data come from? How can product information be auto-cleansed and classified into a taxonomy?
- Is the supplier performance hitting our standards?
- How can we mitigate risks like hidden costs and work with trusted suppliers only?
- How can we build customer segmentations for marketing?
- How can we build product personalization and predict the customer’s next logical buy?
It is all about The Right Product. To the Right Person. In the Right Way. Learn more about the vision of the Intelligent Data Platform.
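The “next logical buy” question above can be approximated surprisingly simply. The following is a minimal co-occurrence sketch (not Informatica’s algorithm): products that frequently appear together in past orders suggest the next offer for a customer who has just bought one of them.

```python
from collections import Counter
from itertools import permutations

def next_buy_model(orders):
    """Count, for each product, how often every other product
    appeared in the same order."""
    co = {}
    for order in orders:
        for a, b in permutations(set(order), 2):
            co.setdefault(a, Counter())[b] += 1
    return co

def suggest(co, product):
    """Most frequent co-purchase for a product, or None if unseen."""
    counts = co.get(product)
    return counts.most_common(1)[0][0] if counts else None

orders = [["camera", "sd-card"],
          ["camera", "sd-card", "tripod"],
          ["tripod", "bag"]]
model = next_buy_model(orders)
```

With this toy history, `suggest(model, "camera")` returns `"sd-card"`, the product most often bought alongside a camera. Real personalization adds customer segments, recency and much more, but the co-occurrence core is the same idea.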
Informatica PIM Builds the Basis of Real Time Commerce Information
All these innovations massively speed up new product introduction and collaboration. As buyers today are always online and connected, PIM helps our customers serve the informed purchase journey with the right information, at the right touchpoint, in real time.
- Real-time commerce (certification with IBM WebSphere Commerce), which eliminates shelf lag
- Editable channel previews, which help to envision how customers view the product
- Data quality dashboards for improved conversion power, which means selling more with better information
- Business Process Management for better collaboration throughout the enterprise
- Accelerator for global data synchronization (GDSN, via GS1 standards for food and CPG), which helps to improve data quality and fulfill legal requirements
All this makes merchandizers more productive and increases average spend per customer.
Manufacturers and retailers are constantly being challenged by the market. They continually seek ways to optimize their business processes and improve their margins, but they face a number of challenges, including the following:
- Delays in getting products ordered
- Delays in getting products displayed on the shelf
- Out of stock issues
- Constant pressure to comply with how information is exchanged with local partners
- Pressure to comply with how information is exchanged with international distribution partners
Recently, new regulations have been mandated by governing bodies such as the US Food and Drug Administration (FDA) and European Union (EU) entities. One example is EU Regulation 1169/2011, which focuses on nutrition and content information for food labels.
How much would it mean to a supplier if they could reduce their “time to shelf?” What would it mean if they could improve their order and item administration?
If you’re a supplier, and if these improvements would benefit you, you’ll want to explore solutions. In particular, you’d benefit from a solution which could do the following:
- Make your business available to the widest possible audience, both locally and internationally
- Eliminate the need to build individual “point to point” interfaces
- Provide the ability to communicate both “one on one” with a partner and broadly with others
- Eliminate product data inconsistencies
- Improve data quality
- Improve productivity
One such solution that can accomplish these things is Informatica’s combination of PIM and GDSN.
Manufacturers of CPG or food products have to adhere to strict compliance regulations. The new EU Regulation 1169/2011 on the provision of food information to consumers changes existing legislation on food labeling. The new rules take effect on December 13, 2014, and the obligation to provide nutrition information will apply from December 13, 2016. The US Food & Drug Administration (FDA) enforces record-keeping requirements and Hazard Analysis & Critical Control Points (HACCP).
In addition, information standards are a key factor in serving distributors and retailers, as our customer Vitakraft says:
“For us as a manufacturer of pet food, the retailers and distributors are key distribution channels. With the GS1 Accelerator for Informatica PIM we connect with the Global Data Synchronization Network (GDSN). Leveraging GDSN we serve our retail and distribution partners with product information for all sales channels. Informatica helps us to meet the expectations of our business partners and customers in the e-business.”
Heiko Cichala, Product & Electronic Data Interchange Management
On one side, retailers such as supermarkets expect their vendors and manufacturers to provide all legally required information; on the other, they are looking for strategies to leverage information for better customer service and experience (check out “the supermarket of tomorrow”).
Companies like the German food retailer Edeka offer an app for push marketing, or match customers’ dietary and allergy profiles against QR-code-scanned products on the shopping list within the supermarket app.
The Informatica GS1 Accelerator
The GS1 Accelerator from Informatica offers suppliers and manufacturers the capability to ensure their data is not only of high quality but also conforms to GS1 standards. The Informatica GDSN Accelerator makes it possible to provide this high-quality data directly to a certified data pool for synchronization with trading partners.
The quality of the data can be ensured by the Data Quality rules engine of the PIM system. It leverages the Global Product Classification hierarchy that conforms to GS1 standards for communication with the data pools.
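A concrete example of such a data-quality rule is GS1’s own check-digit algorithm for GTIN-13 codes: a rules engine would flag any item whose GTIN fails this arithmetic before it ever reaches the data pool. A minimal sketch:

```python
def gtin13_check_digit(first12):
    """Compute the GTIN-13 check digit per the GS1 algorithm:
    counting from the left, digits in odd positions weigh 1,
    digits in even positions weigh 3."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def valid_gtin13(gtin):
    """True if a 13-digit GTIN carries the correct check digit."""
    return (len(gtin) == 13 and gtin.isdigit()
            and gtin13_check_digit(gtin[:12]) == int(gtin[12]))
```

For example, `valid_gtin13("4006381333931")` is `True`, while changing the last digit makes the rule fire. Completeness, taxonomy and attribute checks in a real rules engine follow the same pattern: a predicate per rule, evaluated over every item.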
All GDSN-related activity is encapsulated within PIM and can be initiated from there. Product data can easily be transferred to the data pool and released to a specific trading partner, or made public for all recipients of a target market.
In response to the growth, organizations seek new ways to unlock the value of their data. Traditionally, data has been analyzed for a few key reasons. First, data was analyzed in order to identify ways to improve operational efficiency. Secondly, data was analyzed to identify opportunities to increase revenue.
As data expands, companies have found new uses for these growing data sets. Of late, organizations have started providing data to partners, who then sell the ‘intelligence’ they glean from within the data. Consider a coffee shop owner whose store doesn’t open until 8 AM. This owner would be interested in learning how many target customers (perhaps people aged 25 to 45) walk past the closed shop between 6 AM and 8 AM. If this number is high enough, it may make sense to open the store earlier.
As much as organizations prioritize the value of data, customers prioritize the privacy of data. If an organization loses a customer’s data, it incurs several costs, including:
- Damage to the company’s reputation
- A reduction of customer trust
- Financial costs associated with the investigation of the loss
- Possible governmental fines
- Possible restitution costs
To guard against these risks, data that organizations provide to their partners must be obfuscated. This protects customer privacy. However, data that has been obfuscated is often of a lower value to the partner. For example, if the date of birth of those passing the coffee shop has been obfuscated, the store owner may not be able to determine if those passing by are potential customers. When data is obfuscated without consideration of the analysis that needs to be done, analysis results may not be correct.
There is a way to provide data privacy for the customer while simultaneously monetizing enterprise data: organizations must allow trusted partners to define masking generalizations. With sufficient data masking governance, it is indeed possible for data obfuscation and data value to coexist.
Currently, there is a great deal of research around ensuring that obfuscated data is both protected and useful. Techniques and algorithms like ‘k-anonymity’ and ‘l-diversity’ ensure that sensitive data stays safe and secure. However, these techniques have not yet become mainstream; once they do, the value of big data will be unlocked.
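To make the idea concrete, here is a toy sketch of the generalization step behind k-anonymity: exact birth dates are coarsened into age bands, so the coffee shop owner can still count shoppers in a target age range without ever seeing a date of birth. (Illustrative only; real k-anonymity additionally requires that each released band covers at least k records.)

```python
from datetime import date

def age_band(born, today, width=10):
    """Generalize a date of birth into a coarse band like '20-29'."""
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

# Dates of birth of people passing the shop (invented sample data).
passersby = [date(1990, 6, 1), date(1985, 1, 15), date(2001, 3, 3)]
today = date(2014, 5, 12)
bands = [age_band(b, today) for b in passersby]  # what the partner receives
```

The partner sees only `['20-29', '20-29', '10-19']` and can still answer “how many passersby are in my target band?”, which is exactly the masking-generalization trade-off described above: the coarser the band, the stronger the privacy and the weaker the analysis.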
Before I joined Informatica I worked for a health plan in Boston. I managed several programs, including the CMS Five Star Quality Rating System and Risk Adjustment Redesign. We recognized the need for a robust diagnostic profile of our members in support of risk adjustment. However, because the information resides in multiple sources, gathering and connecting the data presented many challenges. I see the opportunity for health plans to transform risk adjustment.
As risk adjustment becomes an integral component in healthcare, I encourage health plans to create a core competency around the development of diagnostic profiles. This should be the case for health plans and ACO’s. This profile is the source of reimbursement for an individual. This profile is also the basis for clinical care management. Augmented with social and demographic data, the profile can create a roadmap for successfully engaging each member.
Why is risk adjustment important?
Risk Adjustment is increasingly entrenched in the healthcare ecosystem. Originating in Medicare Advantage, it is now applicable to other areas. Risk adjustment is mission critical to protect financial viability and identify a clinical baseline for members.
What are a few examples of the increasing importance of risk adjustment?
1) Centers for Medicare and Medicaid (CMS) continues to increase the focus on Risk Adjustment. They are evaluating the value provided to the Federal government and beneficiaries. CMS has questioned the efficacy of home assessments and challenged health plans to provide a value statement beyond the harvesting of diagnosis codes that results solely in revenue enhancement. Illustrating additional value has been a challenge. Integrating data across the health plan will help address this challenge and derive value.
2) Marketplace members will also require risk adjustment calculations. After the first three years, the three “R’s” will dwindle down to one “R”: when Reinsurance and Risk Corridors end, we will be left with Risk Adjustment. To succeed with this new population, health plans need a clear strategy to obtain, analyze and process data. CMS processing delays make risk adjustment even more difficult. A health plan’s ability to manage this information will be critical to success.
3) Dual Eligibles, Medicaid members and ACO’s also rely on risk management for profitability and improved quality.
With an enhanced diagnostic profile — one that is accurate, complete and shared — I believe it is possible to enhance care, deliver appropriate reimbursements and provide coordinated care.
How can payers better enable risk adjustment?
- Facilitate timely analysis of accurate data from a variety of sources, in any format.
- Integrate and reconcile data from initial receipt through adjudication and submission.
- Deliver clean and normalized data to business users.
- Provide an aggregated view of master data about members, providers and the relationships between them to reveal insights and enable a differentiated level of service.
- Apply natural language processing to capture insights otherwise trapped in text based notes.
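As a toy illustration of that last point (keyword matching rather than real clinical NLP, with an invented term-to-code mapping), insights trapped in free-text notes can be surfaced roughly like this:

```python
import re

# Hypothetical mapping of note phrases to diagnosis codes.
TERM_TO_CODE = {
    "type 2 diabetes": "E11",
    "chronic kidney disease": "N18",
    "hypertension": "I10",
}

def extract_diagnoses(note):
    """Return the set of codes whose terms appear in a clinical note."""
    text = note.lower()
    return {code for term, code in TERM_TO_CODE.items()
            if re.search(re.escape(term), text)}

note = ("Patient with long-standing Type 2 Diabetes and hypertension; "
        "renal function stable.")
found = extract_diagnoses(note)
```

Here `found` contains the diabetes and hypertension codes even though neither was coded on a claim, which is the kind of undocumented diagnosis a real NLP pipeline would surface for risk adjustment review.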
With clean, safe and connected data, health plans can profile members and identify undocumented diagnoses. With this data, health plans will also be able to create reports identifying providers who would benefit from additional training and support (about coding accuracy and completeness).
What will clean, safe and connected data allow?
- Allow risk adjustment to become a core competency and source of differentiation. Revenue impacts are expanding to lines of business representing larger and increasingly complex populations.
- Educate, motivate and engage providers with accurate reporting. Obtaining and acting on diagnostic data is best done when the member/patient is meeting with the caregiver. Clear and trusted feedback to physicians will contribute to a strong partnership.
- Improve patient care, reduce medical cost, increase quality ratings and engage members.
A full house, lots of funny names and what does it all mean?
Cloudera, Appfluent and Informatica partnered today at Informatica World in Las Vegas to deliver a one-day training session, Introduction to Hadoop and Big Data. A technology overview, best practices, and how to get started were on the agenda. Of course, we needed to start off with a little history: processing and computing mattered in the old days too, and even then it was hard to do and very expensive.
Today it’s all about scalability. What Cloudera does is “spread the data and spread the processing,” with Hadoop optimized for scanning lots of data. The Hadoop Distributed File System (HDFS) slices up the data, one block after another, and MapReduce is then used to spread the processing. How does spreading the data and the processing help us with scalability?
When we spread the data and processing, we need to index the data. How do we do this? We add the gets and puts: get a row, put a row. Basically, this is what helps us find a row of data easily. Processing millions of rows of data is more and more a reality for many businesses. Once we can find and process a row of data easily, we can focus on our data analysis.
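The “spread the data, spread the processing” idea can be mimicked in a few lines: split the input into slices, map over each slice independently (as workers would over HDFS blocks on different nodes), then reduce the partial results into one answer. A toy word count, purely illustrative:

```python
from collections import Counter
from functools import reduce

def map_slice(lines):
    """The map phase: each slice is counted independently,
    as it would be on the node holding that HDFS block."""
    return Counter(word for line in lines for word in line.split())

def word_count(lines, slices=3):
    # "Slice up the data," as HDFS does with blocks.
    chunks = [lines[i::slices] for i in range(slices)]
    # Map each chunk, then reduce the partial counts into one result.
    return reduce(lambda a, b: a + b, (map_slice(c) for c in chunks), Counter())

data = ["big data big", "data lake", "big lake"]
counts = word_count(data)
```

Because each `map_slice` call needs only its own chunk, adding machines means adding chunks; that independence is what makes the approach scale.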
Data analysis: what’s important to you and your business? Appfluent gives us the map to identify data and workloads to offload and archive to Hadoop. It helps us assess what is not necessary to load into the data warehouse. With the exponential growth in the volume and variety of data, the data warehouse will soon cost too much unless we identify what to load and what to offload.
Informatica has the tools to help you process your data: tools that understand Hadoop and that you already use today. This helps you manage these volumes of data in a cost-effective way. Add to that the ability to reuse what you have already developed, and these new tools and technologies become genuinely exciting.
In this Big Data and Hadoop session, #INFA14, you will learn:
- Common terminologies used in Big Data
- Technologies, tools, and use cases associated with Hadoop
- How to identify and qualify the most appropriate jobs for Hadoop
- Options and best practices for using Hadoop to improve processes and increase efficiency
Live action at Informatica World 2014, May 12 9:00 am – 5:00 pm and updates at:
The term “big data” has been bandied around so much in recent months that arguably, it’s lost a lot of meaning in the IT industry. Typically, IT teams have heard the phrase, and know they need to be doing something, but that something isn’t being done. As IDC pointed out last year, there is a concerning shortage of trained big data technology experts, and failure to recognise the implications that not managing big data can have on the business is dangerous.
In today’s information economy, as increasingly digital consumers, customers, employees and social networkers, we’re handing over more and more personal information for businesses and third parties to collate, manage and analyse. On top of the growth in digital data, emerging trends such as cloud computing are having a huge impact on the amount of information businesses are required to handle and store on behalf of their customers. Furthermore, it’s not just the amount of information that’s spiralling out of control: it’s also the way in which it is structured and used. There has been a dramatic rise in the amount of unstructured data, such as photos, videos and social media, which presents businesses with new challenges as to how to collate, handle and analyse it. As a result, information is growing exponentially. Experts now predict a staggering 4300% increase in annual data generation by 2020.
Unless businesses put policies in place to manage this wealth of information, it will become worthless, and due to the often extortionate costs to store the data, it will instead end up having a huge impact on the business’ bottom line.
Maxed out data centres
Many businesses have limited resources to invest in physical servers and storage, and so are increasingly looking to data centres to store their information. As a result, data centres across Europe are quickly filling up.
Due to European data retention regulations, which dictate that information is generally stored for longer periods than in other regions such as the US, businesses across Europe have to wait a very long time to archive their data. For instance, under EU law, telecommunications service and network providers are obliged to retain certain categories of data for a specific period of time (typically between six months and two years) and to make that information available to law enforcement where needed. With this in mind, it’s no surprise that investment in high-performance storage capacity has become a key priority for many.
Time for a clear out
So how can organisations deal with these storage issues? They can upgrade or replace their servers, parting with lots of capital expenditure to bring in more processing power or more memory. An alternative is to “spring clean” their information. Smart partitioning lets businesses spend just one tenth of what new servers and storage capacity would cost, and refocus how they organise their information. With smart partitioning capabilities, businesses can get the benefits of archiving even for information that is not yet eligible for archiving under EU retention regulations. Furthermore, application retirement frees up floor space, drives modernisation initiatives, allows mainframe systems and older platforms to be replaced, and lets legacy data be migrated to virtual archives.
Before IT professionals go out and buy big data systems, they need to spring clean their information and make room for big data. Poor economic conditions across Europe have stifled innovation for many organisations, which have been forced to focus on staying alive rather than investing in R&D to improve operational efficiency. They are, therefore, looking for ways to squeeze more out of already shrinking budgets.
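In spirit, the “spring clean” amounts to partitioning records by age against a retention rule: rows past their retention period become candidates for archiving, while the rest stay on fast storage. A simplified sketch, where the two-year period is just the EU telecoms example mentioned above:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=730)  # e.g. the two-year EU retention example

def partition_by_age(records, today):
    """Split records into (active, archivable) by retention age."""
    active, archivable = [], []
    for rec in records:
        (archivable if today - rec["created"] > RETENTION else active).append(rec)
    return active, archivable

records = [
    {"id": 1, "created": date(2011, 1, 1)},   # well past retention
    {"id": 2, "created": date(2014, 1, 1)},   # still within retention
]
active, archivable = partition_by_age(records, date(2014, 5, 12))
```

Real smart partitioning works at the storage layer rather than row by row in application code, but the governing rule, comparing record age against a retention policy, is the same.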
The likes of smart partitioning and application retirement offer businesses a real solution to the growing big data conundrum. So maybe it’s time you got your feather duster out, and gave your information a good clean out this spring?
If you haven’t updated your B2B integration capabilities in the past five years, are you at risk of being left behind? This is the age of superior customer experience and rapid time-to-value, so speedy customer on-boarding and support for specialized integration services mean the difference between winning and losing business. A health check starts with asking some simple questions:
Bring The Outside In: Why Integrating External Data Sources Should Be Your Next Data integration Project
We recently had Ted Friedman, VP Distinguished Analyst at featured analyst firm Gartner, speak about what companies can do to extend their internal data integration strategies to include integrating external data sources. If the next generation of your DI projects includes inter-enterprise data sources, you’re in good company: he mentioned that data integration tools are used for inter-enterprise integration 28% of the time, almost the same rate as Master Data Management (MDM) and Data Services use cases. Ted also predicted the inter-enterprise use case will continue to grow as more integration projects include data outside the firewall.
In recent Aberdeen research, 95% of respondents (of 122 responses) relied on some level of manual processing to integrate external data sources. Manual processing to integrate external data is time-consuming, expensive and error-prone, so why do so many do it? Well, they often have little choice. If you look deeper, most of these data exchanges are with small partners, and small-partner enablement is a significant challenge for most organizations.