Tag Archives: data-first

Data Governance, Transparency and Lineage with Informatica and Hortonworks

Informatica users leveraging HDP are now able to see a complete end-to-end visual data lineage map of everything done through the Informatica platform. In this blog post, Scott Hedrick, director of Big Data Partnerships at Informatica, tells us more about end-to-end visual data lineage.

Hadoop adoption continues to accelerate within mainstream enterprise IT and, as always, organizations need the ability to govern their end-to-end data pipelines for compliance and visibility purposes. Working with Hortonworks, Informatica has extended the metadata management capabilities in Informatica Big Data Governance Edition to include data lineage visibility of data movement, transformation and cleansing beyond traditional systems to cover Apache Hadoop.

Informatica users are now able to see a complete end-to-end visual data lineage map of everything done through Informatica: sources outside the Hortonworks Data Platform (HDP) being loaded into HDP; data integration, parsing and data quality transformations running on Hortonworks; and the loading of curated data sets into data warehouses, analytics tools and operational systems outside Hadoop.

Regulated industries such as banking, insurance and healthcare are required to have detailed histories of data management for audit purposes. Without tools to provide data lineage, compliance with regulations and gathering the required information for audits can prove challenging.

With Informatica, the data scientist and analyst can now visualize data lineage and detailed history of data transformations providing unprecedented transparency into their data analysis. They can be more confident in their findings based on this visibility into the origins and quality of the data they are working with to create valuable insights for their organizations. Web-based access to visual data lineage for analysts also facilitates team collaboration on challenging and evolving data analytics and operational system projects.

The Informatica and Hortonworks partnership brings together leading enterprise data governance tools with open source Hadoop leadership to extend governance to this new platform. Deploying Informatica for data integration, parsing, data quality and data lineage on Hortonworks reduces risk to deployment schedules.

A demo of Informatica’s end-to-end metadata management capabilities on Hadoop and beyond is available here:

Learn More

  • A free trial of Informatica Big Data Edition in the Hortonworks Sandbox is available here.

How Great Data in the Cloud Can Make for Greater Business Outcomes

The technology you use in your business can either help or hinder your business objectives.

In the past, slow and manual processes had an inhibiting effect on customer service and sales interactions, thus dragging down the bottom line.

Now, with cloud technology and customers interacting at record speeds, companies expect greater returns from each business outcome. What do I mean when I say business outcome?

Well, according to Bluewolf’s State of Salesforce Report, you can split these into four categories: acquisition, expansion, retention and cost reduction.

With the right technology and planning, a business can speedily acquire more customers, expand to new markets, increase customer retention and ensure it is doing all of this efficiently and cost-effectively. But what happens when the data, or the way you’re interacting with these technologies, grows unchecked or becomes corrupted and unreliable?

With data being the new fuel for decision-making, you need to make sure it’s clean, safe and reliable.

With clean data, Salesforce customers, in the above-referenced Bluewolf survey, reported efficiency and productivity gains (66%), improved customer experience (34%), revenue growth (32%) and cost reduction (21%) in 2014.

It’s been said that it costs a business 10X more to acquire new customers than it does to retain existing ones. But, despite the additional cost, real continued growth requires the acquisition of new customers.

Gaining new customers, however, requires a great sales team who knows what and to whom they’re selling. With Salesforce, you have that information at your fingertips, and the chance to let your sales team be as good as they can possibly be.

And this is where having good data fits in and becomes critically important. Because, well, you can have great technology, but it’s only going to be as good as the data you’re feeding it.

The same “garbage in, garbage out” maxim holds true for practically any data-driven or data-reliant business process or outcome, whether it’s attracting new customers or building a brand. And with the Salesforce Sales Cloud and Marketing Cloud you have the technology to both attract new customers and build great brands, but if you’re feeding your Clouds with inconsistent and fragmented data, you can’t trust that you’ve made the right investments or decisions in the right places.

The combination of good data and technology can help to answer so many of your critical business questions. How do I target my audience without knowledge of previous successes? What does my ideal customer look like? What did they buy? Why did they buy it?

For better or worse, but mainly better, answering those questions with just your intuition and/or experience is pretty much out of the question. Without the tools to look at, for example, past campaigns and sales, and to combine those views to see who your real market is, you’ll never be fully effective.

The same is true for sales. Without the right Leads, and the ability to interact with those Leads effectively (the right contact details, the right company information, and the knowledge that there’s only one version of each record), the discovery process can be long and painful.

But customer acquisition isn’t the only place where data plays a vital role.

When expanding to new markets or upselling and cross selling to existing customers, it’s the data you collect and report on that will help inform where you should focus your efforts.

Knowing what existing relationships you can leverage can make the difference between proactively offering solutions to your customers and losing them to a competitor. With Salesforce’s Analytics Cloud, this visibility that used to take weeks and months to view can now be put together in a matter of minutes. But how do you make strategic decisions on what market to tap into or what relationships to leverage, if you can only see one or two regions? What if you could truly visualize how you interact with your customers?  Or see beyond the hairball of interconnected business hierarchies and interactions to know definitively what subsidiary, household or distributor has what? Seeing the connections you have with your customers can help uncover the white space that you could tap into.

Naturally this entire process means nothing if you’re not actually retaining these customers. Again, this is another area that is fuelled by data. Knowing who your customers are, what issues they’re having and what they could want next could help ensure you are always providing your customer with the ultimate experience.

Last, but by no means least, there is cost reduction. Only by ensuring that all of this data is clean — and continuously cleansed — and that your Cloud technologies are being fully utilized can you ensure the maximum return on your Cloud investment.

Learn more about how Informatica Cloud can help you maximize your business outcomes through ensuring your data is trusted in the Cloud.


Gaining a Data-First Perspective with Salesforce Wave

Salesforce.com made waves (pardon the pun) at last month’s Dreamforce conference when it unveiled the Salesforce Wave Analytics Cloud. You know Big Data has reached prime-time when Salesforce, which has a history of knowing when to enter new markets, decides to release a major analytics service.

Why now? Because companies need help making sense of the data deluge, Salesforce’s CEO Marc Benioff said at Dreamforce: “Did you know 90% of the world’s data was created in the last two years? There’s going to be 10 times more mobile data by 2020, 19 times more unstructured data, and 50 times more product data by 2020.” Average business users want to understand what that data is telling them, he said. Given Salesforce’s marketing expertise, this could be the spark that gets mainstream businesses to adopt the Data-First perspective I’ve been talking about.

As I’ve said before, a Data First POV shines a light on important interactions so that everyone inside a company can see and understand what matters. As a trained process engineer, I can tell you, though, that good decisions depend on great data — and great data doesn’t just happen: at the most basic level, you have to clean it, relate it, connect it and secure it — so that information from, say, SAP can be viewed in the same context as data from Salesforce. Informatica obviously plays a role in this. If you want to find out more, click on this link to download our Salesforce Integration for Dummies brochure.
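For a flavor of what “relate it” means in practice, here is a toy Python sketch of name-based record matching across two systems. The account names, the normalization step and the matching logic are illustrative assumptions only; real matching in a data quality or MDM tool is far more sophisticated.

```python
from difflib import SequenceMatcher

# Hypothetical account names from two systems that need to be viewed in one context.
sap_accounts = ["Acme Corp.", "Globex Industries"]
salesforce_accounts = ["ACME Corporation", "Globex Ind", "Initech"]

def normalize(name: str) -> str:
    # The most basic cleaning step: fold case, drop punctuation, trim whitespace.
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ").strip()

def similarity(a: str, b: str) -> float:
    # Simple string similarity on the cleaned names.
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Relate each SAP account to its closest Salesforce counterpart.
for sap_name in sap_accounts:
    best_match = max(salesforce_accounts, key=lambda s: similarity(sap_name, s))
    print(f"{sap_name!r} -> {best_match!r} (score {similarity(sap_name, best_match):.2f})")
```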

But that’s the basics for getting started. The bigger issue — and the one so many people seem to have trouble with — is deciding which metrics to explore. Say, for example, that the sales team keeps complaining about your marketing leads. Chances are, it’s a familiar complaint. How do you discover what’s really the problem?

One obvious place to start is to look at the conversion rates for every sales rep and group. Next, explore the marketing leads they do accept, by dimensions such as deal size, product type or customer category. Now take it deeper. Examine which sales reps like to hunt for new customers and which prefer to mine their current base. That will tell you if you’re sending opportunities to the right profiles.
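As a concrete illustration of that first drill-down, here is a minimal pandas sketch; the column names and figures are hypothetical, but they show how conversion rates per rep and per group, plus a profile of the deals each rep does close, fall out of a few lines of grouping.

```python
import pandas as pd

# Hypothetical CRM export; column names and values are illustrative only.
leads = pd.DataFrame({
    "rep":       ["ana", "ana", "ben", "ben", "cara", "cara"],
    "group":     ["EMEA", "EMEA", "EMEA", "EMEA", "AMER", "AMER"],
    "deal_size": [12000, 54000, 8000, 95000, 15000, 30000],
    "converted": [True, False, True, True, False, True],
})

# Conversion rate for every sales rep and for every group.
by_rep = leads.groupby("rep")["converted"].mean().rename("conversion_rate")
by_group = leads.groupby("group")["converted"].mean().rename("conversion_rate")

# Profile the leads each rep does convert: how many, and how big?
won = leads[leads["converted"]]
deal_profile = won.groupby("rep")["deal_size"].agg(["count", "median"])

print(by_rep, by_group, deal_profile, sep="\n\n")
```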

The key is to never look at the sales organization only as a whole. If it’s EMEA, for instance, have a look at how France is doing selling to emerging markets vs. the team in Germany. These metrics are digital trails of human behavior. Data First allows you to explore that behavior and either optimize it or change it.

But for this exploration to pay off, you actually have to do some of the work. You can’t just job it out to an analyst. This exercise doesn’t become meaningful until you are mentally engaged in the process. And that’s how it should be: If you are a Data First company, you have to be a Data First leader.


Optimizing Supply Chain with Data Governance

Last week I met with a customer who recently completed a few data-fueled supply chain projects. Informatica data integration and master data management are part of the solution architecture. I’m not going to name the client at this early stage, but I want to share highlights from our Q&A.

Q: What was the driver for this project?

A: The project grew out of a procure-to-pay (P2P) initiative. We engaged a consulting firm to help centralize Accounts Payable operations. One required deliverable was an executive P2P dashboard. This dashboard would provide enterprise insights by relying on the enterprise data warehousing and business intelligence platform.

Q: What did the dashboard illustrate?

A: The dashboard integrated data from many sources to provide a single view of information about all of our suppliers. By visualizing this information in one place, we were able to rapidly gain operational insights. There are approximately 30,000 suppliers in the supplier master, who between them manufacture and/or distribute more than 150,000 unique products.

Q: From which sources is Informatica consuming data to power the P2P dashboard?

A: There are 8 sources of data:

3 ERP Systems:

  1. Lawson
  2. HBOC STAR
  3. Meditech

5 Enrichment Sources:

  1. Dun & Bradstreet – for associating suppliers together from disparate sources.
  2. GDSN – global data pool that helps cleanse healthcare product data.
  3. McKesson Pharmacy Spend – spend file from a third-party pharmaceutical distributor. Helps capture detailed spend on the pharmaceuticals we procure from this third party.
  4. Office Depot Spend – spend file from a third-party office supply distributor. Helps capture detailed office supply spend.
  5. MedAssets – third-party group purchasing organization (GPO) that provides detailed contract pricing.

Q: Did you tackle clinical scenarios first?

A: No. While we certainly have many clinical scenarios we want to explore, like cost per procedure per patient, we knew that we should establish a few quick operational wins to gain traction and credibility.

Q: Great idea – capturing quick wins is certainly the way we are seeing customers have the most success in these transformative projects. Where did you start?

A: We started with supply chain cost containment; increasing pressure on healthcare organizations to reduce cost made this low-hanging fruit the right place to start. There may be as much as 20% waste to be eliminated through strategic and actionable analytics.

Q: What did you discover?

A: Through the P2P dashboard, insights were gained into days to pay on invoices, as well as early payment discounts and late payment penalties. With the visualization we quickly saw that we were paying a large amount in late fees. With this awareness, we dug into why the late fees were so high. We discovered that, with one large supplier, the original payment terms were net 30, but in later negotiations the terms were changed to 20 days. Late fees were accruing after 20 days. Through this complete view we were able to rapidly home in on the issue and change operations — avoiding costly late fees.
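The late-fee discovery is easy to picture as a dashboard rule. Below is a small pandas sketch of the underlying check, computing days to pay per invoice and flagging payments that exceed each supplier’s negotiated terms; the field names and figures are invented for illustration and are not taken from the customer’s system.

```python
import pandas as pd

# Invented invoice data; in the real project this view came from the
# integrated enterprise data warehouse.
invoices = pd.DataFrame({
    "supplier":     ["Acme", "Acme", "Globex"],
    "invoice_date": pd.to_datetime(["2014-01-02", "2014-02-10", "2014-01-15"]),
    "paid_date":    pd.to_datetime(["2014-01-28", "2014-03-12", "2014-02-10"]),
})

# Negotiated terms in days. Note the renegotiation: Acme moved from net 30
# to net 20, so payments that used to be on time now accrue late fees.
terms_days = {"Acme": 20, "Globex": 30}

invoices["days_to_pay"] = (invoices["paid_date"] - invoices["invoice_date"]).dt.days
invoices["terms"] = invoices["supplier"].map(terms_days)
invoices["late"] = invoices["days_to_pay"] > invoices["terms"]

# Late payments clustering around a single supplier point straight at the
# terms change rather than at a process-wide problem.
print(invoices[invoices["late"]])
```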

Q: That’s a great example of straightforward analytics powered by an integrated view of data, thank you. What’s a more complex use case you plan to tackle?

A: Now that we have the systems in place along with data stewardship, we will start to focus on clinical supply chain scenarios like cost per procedure per patient. We have all of the data in one data warehouse to answer questions like – which procedures are costing the most, do procedure costs vary by clinician? By location? By supply? – and what is the outcome of each of these procedures? We always want to take the right and best action for the patient.

We were also able to identify where negotiated payment discounts were not being taken advantage of or where there were opportunities to negotiate discounts.

These insights were revealed through the dashboard and immediate value was realized the first day.

Fueling knowledge with data is helping procurement negotiate the right discounts, i.e., they can seek discounts on the most-used supplies vs. discounts on supplies rarely used. Think of it this way… you don’t want to get a discount on OJ if you are buying milk.

Q: Excellent example and metaphor. Let’s talk more about stewardship. You have a data governance organization within IT that is governing supply chain?

A: No, we have a data governance team within supply chain… supply chain staff who used to be called “content managers” are now “data stewards.” They were doing the stewardship work of defining data, its use, its source and its quality before, but it wasn’t a formally recognized part of their jobs… now it is. Armed with Informatica Data Director, they are managing the quality of supply chain data across four domains: suppliers/vendors, locations, contracts and items. Data from each of these domains resides in our EMR, our ERP applications and our ambulatory EMR/Practice Management application, creating redundancy and manual reconciliation effort.

By adding Master Data Management (MDM) to the architecture, we were able to centralize management of master data about suppliers/vendors, items, contracts and locations, augment this data with enrichment data like that from D&B, reduce redundancy and reduce manual effort.

MDM shares this complete and accurate information with the enterprise data warehouse, where we can run analytics against it. Having a confident, complete view of master data allows us to trust the analytical insights revealed through the P2P dashboard.

Q: What lessons learned would you offer?

A: Having recognized operational value, I’d encourage health systems to focus on a data-driven supply chain, because there are savings opportunities through easier identification of unmanaged spend.

I really enjoyed learning more about this project, with its valuable, tangible and nearly immediate results. I will keep you posted as the customer moves on to the next phase. If you have comments or questions, leave them here.


PIM is not Product MDM, Part 2

Part 1 of this blog touched on the differences between PIM and Product MDM.  Since both play a role in ensuring the availability of high quality product data, it is easy to see the temptation to extend the scope of either product to play a more complete part.  However, there are risks involved in customising software.  PIM and MDM are not exceptions, and any customisations will carry some risk.

In the specific case of looking to extend the role of PIM, the problems start if you just look at the data and think:  “oh, this is just a few more product attributes to add”.  This will not give you a clear picture of the effort or risk associated with customisations.  A complete picture requires looking beyond the attributes as data fields, and considering them in context:  which processes and people (roles) are supported by these attributes?

Recently we were asked to assess the risk of PIM customisation for a customer.  The situation was that data to be included in PIM was currently housed in three separate, home-grown and aging legacy systems.  One school of thought was to move all the data, and the associated management tasks, into PIM and retire the three systems.  That is, extending the role of PIM beyond a marketing application and into a Product MDM role.  In this case, we found three main risks of customising PIM for this purpose.  Here they are in more detail:

1. Decrease speed of PIM deployment

  • Inclusion of the functionality (not just the data) will require customisations in PIM, not just additional attributes in the data model.
    • Logic customisations are required for data validity checks, and some value calculations.
    • Additional screens, workflows, integrations and UI customisations will be required for non-marketing roles
    • PIM will become the source for some data, which is used in critical operational systems (e.g. SAP).  Reference checks & data validation cannot be taken lightly due to risks of poor data elsewhere.
  • Bottom line:  A non-standard deployment will drive up implementation cost, time and risk.

2.  Reduce marketing agility

  • In the case concerned, whilst the additional data was important to marketing, it is primarily supported by non-marketing users and processes, including Product Development, Sales and Manufacturing
  • The legacy systems are key to their workflow in terms of creating and distributing technical details of new products to other systems, e.g. SAP for production
  • If the systems are retired and replaced with PIM, these non-marketing users will need to be equal partners in PIM:
    • Require access and customised roles
    • Influence over configuration
    • Equal vote in feature/function prioritisation
  • Bottom Line:  Marketing will no longer completely own the PIM system, and may have to sacrifice new functionality to prioritise supporting other roles.

3.  Risk of marketing abandoning the hybrid tool in the mid-term

  • An investment in PIM is usually an investment by Marketing to help them rapidly adapt to a dynamic external market.
    • System agility (point 2) is key to rapid adaption, as is the ability to take advantage of new features within any packaged application.

  • As more customisations are made, the cost of upgrades can become prohibitive, driven by the cost to upgrade customisations.
    • Cost often driven by consulting fees to change what could be poorly documented code.
    • Risk of falling behind on upgrades, and hence sacrificing access to the newest PIM functionality
  • If upgrades are more expensive than new tools, Marketing will abandon PIM and invest in a new tool.
  • Bottom line:  In a worst case scenario, a customised PIM solution could be left supporting non-marketing functionality with Marketing investing in a new tool.

The first response to the last bullet point is normally “no they wouldn’t”.  Unfortunately, this is a pattern that both I and some of my colleagues have seen in the area of marketing & eCommerce applications.  The problem is that these areas move so fast that nobody can afford to fall behind in terms of new functionality.  If upgrades are large projects which need lengthy approval and implementation cycles, marketing is unlikely to wait.  It is far easier to start again with a smaller budget under their direct control.  (Which is where PIM should be in the first place.)

In summary:

  • Making PIM look and behave like Product MDM could have some undesirable consequences – both in the short term (current deployment) and in the longer term (application abandonment).
  • The choice between customising PIM and enhancing your landscape with Product MDM should not be made on data attributes alone.
  • Your business and data processes should guide you in terms of risk assessment for customisation of your PIM solution.

Bottom Line:  If the risks seem too large, then consider enhancing your IT landscape with Product MDM.  Trading PIM cost & risk for measurable business value delivered by MDM will make a very attractive business case.

 


Data First: Five Tips to Reduce the Risk of a Breach

This article was originally published on www.federaltimes.com

November – that time of the year. This year, November 1 was the start of Election Day weekend and the associated endless barrage of political ads. It also marked the end of Daylight Saving Time. But, perhaps more prominently, it marked the beginning of the holiday shopping season. Winter holiday decorations erupted in stores even before Halloween decorations were taken down. There were commercials and ads, free shipping on this, sales on that, singing, and even the first appearance of Santa Claus.

However, it’s not all joy and jingle bells. The kickoff to this holiday shopping season may also remind many of the countless credit card breaches at retailers that plagued last year’s shopping season and beyond. The breaches at Target, where almost 100 million credit cards were compromised, Neiman Marcus, Home Depot and Michael’s exemplify the urgent need for retailers to aggressively protect customer information.

In addition to the holiday shopping season, November also marks the next round of open enrollment for the ACA healthcare exchanges. Therefore, to avoid falling victim to the next data breach, government organizations, as much as retailers, need to have data security top of mind.

According to the New York Times (Sept. 4, 2014), “for months, cyber security professionals have been warning that the healthcare site was a ripe target for hackers eager to gain access to personal data that could be sold on the black market. A week before federal officials discovered the breach at HealthCare.gov, a hospital operator in Tennessee said that Chinese hackers had stolen personal data for 4.5 million patients.”

Acknowledging the inevitability of further attacks, companies and organizations are taking action. For example, the National Retail Federation created the NRF IT Council, which is made up of 130 technology-security experts focused on safeguarding personal and company data.

Is government doing enough to protect personal, financial and health data in light of these increasing and persistent threats? The quick answer: no. The federal government as a whole is not meeting the data privacy and security challenge. Reports of cyber attacks and breaches are becoming commonplace, and warnings of new privacy concerns in many federal agencies and programs are being discussed in Congress, Inspector General reports and the media. According to a recent Government Accountability Office report, 18 out of 24 major federal agencies in the United States reported inadequate information security controls. Further, FISMA and HIPAA are falling short, and antiquated security protocols, such as encryption, are not keeping up with the sophistication of attacks. Government must follow the lead of industry and look to new and advanced data protection technologies, such as dynamic data masking and continuous data monitoring, to prevent and thwart potential attacks.
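To make “dynamic data masking” less abstract, here is a simplified, vendor-neutral Python sketch of the idea: sensitive values are rewritten on their way out of the data store based on the requester’s role, so unprivileged users never see the real values. The roles, field names and formats below are assumptions for illustration, not any product’s actual behavior.

```python
def mask_ssn(value: str) -> str:
    """Show only the last four digits of a Social Security number."""
    return "***-**-" + value[-4:]

def fetch_record(record: dict, role: str) -> dict:
    # Privileged roles see real values; everyone else gets masked copies,
    # applied dynamically at read time rather than stored in the database.
    if role in ("auditor", "claims_admin"):
        return record
    masked = dict(record)
    masked["ssn"] = mask_ssn(record["ssn"])
    masked["credit_card"] = "****-****-****-" + record["credit_card"][-4:]
    return masked

record = {"name": "J. Smith", "ssn": "123-45-6789", "credit_card": "4111111111111111"}
print(fetch_record(record, role="analyst"))
# {'name': 'J. Smith', 'ssn': '***-**-6789', 'credit_card': '****-****-****-1111'}
```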

These five principles can be implemented by any agency to curb the likelihood of a breach:

1. Expand the appointment and authority of CSOs and CISOs at the agency level.

2. Centralize the agency’s data privacy policy definition and implement on an enterprise level.

3. Protect all environments from development to production, including backups and archives.

4. Prioritize data and application security at the same level as network and perimeter security.

5. Ensure data security follows data through downstream systems and reporting.

So, as the season of voting, rollbacks, online shopping events, free shipping, Black Friday, Cyber Monday and healthcare enrollment begins, so does the time for protecting personally identifiable information, financial information, credit cards and health information. Individuals, retailers, industry and government need to think about data first and stay vigilant and focused.

This article was originally published on www.federaltimes.com. Please view the original listing here


The Apple Watch – the Newest Data-First Device

I have to admit it: I’m intrigued by the new Apple Watch. I’m not going to go into all the bells and whistles, which Apple CEO Tim Cook describes as a “mile long.” Suffice it to say that Apple has once again pushed the boundaries of what an existing category can do.

The way I see it, the biggest impact of the Apple Watch will come from how it will finally make data fashionable. For starters, the three Apple Watch models and interchangeable bands will actually make it hip to wear a watch again. But I think the ramifications of this genuinely good-looking watch go well beyond skin deep. The Cupertino company has engineered its watch and its mobile software to recognize related data and seamlessly share it across relevant apps. And those capabilities allow it to, for instance, monitor our fitness and health, show us where we parked the car, open the door to our hotel room and control our entertainment centers.

Think what this could mean for any company with a Data-First point of view. I like to say that a data-first POV changes everything. With it, companies can unleash the killer app, killer marketing campaign and killer sales organization.

The Apple Watch finally gives people a reason to have that killer app with them at all times, wherever they are and whatever they’re doing. Looked at a different way, it could unleash a new culture of Data-Only consumers: people who rely on being told what they need to know, in the right context.

But while Apple may be the first to push this Data-First POV in unexpected ways, history has shown they won’t be the last. It’s time for every company to tap into the newest fashion accessory, and make data their first priority.


Marketing in a Data-Driven World… From Mad Men to Mad Scientist

Are Marketers More Mad Men or Mad Scientists?

I have been in marketing for over two decades. As I meet people in social situations, on airplanes, and on the sidelines at children’s soccer games, and they ask what it is I do, I get responses that constantly amuse me and lead me to the conclusion that the general public has absolutely no idea what a marketer does. I am often asked things like “have you created any commercials that I might have seen?” and peppered with questions that evoke visions of Mad Men-esque 1960’s style agency work and late night creative martini-filled pitch sessions.

I admit I do love to catch the occasional Mad Men episode, and a few weeks ago, I stumbled upon one that had me chuckling. You may remember the one in which Don Draper is pitching a lipstick advertisement and, after persuading the executive to see things his way, says something along the lines of, “We’ll never know, will we? It’s not a science.”

How the times have changed. I would argue that in today’s data-driven world, marketing is no longer an art and is now squarely a science.

Sure, great marketers still understand their buyers at a gut level, but their hunches are no longer the impetus of a marketing campaign. Their hunches are now the impetus for a data-driven, fact-finding mission, and only after the analysis has been completed and confirms or contradicts this hunch, is the campaign designed and launched.

This is only possible because today, marketers have access to enormous amounts of data – not just the basic demographics of years past. Most marketers realize that there is great promise in all of that data, but it’s just too complicated, time-consuming, and costly to truly harness it. How can you really ever make sense of the hundreds of data sources and tens of thousands of variables within these sources? Social media, web analytics, geo-targeting, internal customer and financial systems, in house marketing automation systems, third party data augmentation in the cloud… the list goes on and on!

How can marketers harness the right data, in the right way, right away? The answer starts with making the commitment that your marketing team – and hopefully your organization as a whole – will think “data first”. In the coming weeks, I will focus on what exactly thinking data first means, and how it will pay dividends to marketers.

In the meantime, I will make the personal commitment to be more patient about answering the silly questions and comments about marketers.

Now, it’s your turn to comment… 

What are some of the most amusing misconceptions about marketers that you’ve encountered?

– and –

Do you agree? Is marketing an art? A science? Or somewhere in between?
