Category Archives: Data Governance

Talk Amongst Yourselves: Why Twitter Needs #DataChat

Within government organizations, technologists are up against a wall of sound. In one ear, they hear consumers cry for faster, better service.

In the other, they hear administrative talk of smaller budgets and scarcer resources.

As stringent requirements for both transparency and accountability grow, this paradox of pressure increases.

Sometimes, the best way to cope is to TALK to somebody.

What if you could ask other data technologists candid questions like:

  • Do you think government regulation helps or hurts the sharing of data?
  • Do you think government regulators balance the privacy needs of the public with commercial needs?
  • What are the implications of big data government regulation, especially for users?
  • How can businesses expedite the government adoption of the cloud?
  • How can businesses aid in the government overcoming the security risks associated with the cloud?
  • How should the policy frameworks for handling big data differ between the government and the private sector?

What if you could tell someone who understood? What if they had sweet suggestions, terrific tips, stellar strategies for success? We think you can. We think they will.

That’s why Twitter needs a #DataChat.

Twitter Needs #DataChat

Third Thursdays, 3:00 PM EST

What on earth is a #DataChat?
Good question. It’s a Twitter Chat – a public dialog, at a set time, on a set topic. It’s something like a crowd-sourced discussion. Any Twitter user can participate simply by including the applicable hashtag in each tweet. Our hashtag is #DataChat. We’ll connect on Twitter on the third Thursday of each month to share struggles, victories and advice about data governance. We’re going to begin this week, Thursday, April 17, at 3:00 PM Eastern Time. For our first chat, we are going to discuss topics that relate to data technologies in government organizations.

What don’t you join us? Tell us about it. Mark your calendar. Bring a friend.

Because, sometimes, you just need someone to talk to.


Should Legal (Or Anyone Else Outside of IT) Be in Charge of Big Data?

A few years back, there was a movement in some businesses to establish “data stewards” – individuals who would sit at the heart of the enterprise and make it their job to assure that the data consumed by the organization is of the highest possible quality, is secure, is contextually relevant, and is capable of interoperating across any applications that need to consume it. While the data steward concept came along when everything was relational and structured, these individuals are now earning their pay when it comes to managing the big data boom.

The rise of big data is creating more than simple headaches for data stewards; it is creating turf wars across enterprises. As pointed out in a recent article in The Wall Street Journal, there isn’t yet much clarity as to who owns and cares for such data. Is it IT? Is it the lines of business? Is it legal? Arguments can be made for all jurisdictions.

In many organizations these days, for example, marketing executives are generating, storing and analyzing large volumes of their own data within content management systems and social media analysis solutions. Many marketing departments even have their own IT budgets. And beyond marketing, of course, everyone else within the enterprise is pursuing data analytics to better run their operations and foresee trends.

Typically, data has been the domain of the CIO, the person who oversees the collection, management and storage of information. In the Wall Street Journal article, however, it’s suggested that legal departments may be the best caretakers of big data, since big data poses a “liability exposure,” and legal departments are “better positioned to understand how to use big data without violating vendor contracts and joint-venture agreements, as well as keeping trade secrets.”

However, legal being legal, it’s likely that insightful data may end up getting locked away, never to see the light of day. Others may argue that the IT department needs to retain control, but there again, IT isn’t trained to recognize information that may set the business on a new course.

Focusing on big data ownership isn’t just an academic exercise. The future of the business may depend on the ability to get on top of big data. Gartner, for one, predicts that within the next three years, at least a third of Fortune 100 organizations will experience an information crisis, “due to their inability to effectively value, govern and trust their enterprise information.”

This ability to “value, govern and trust” goes way beyond the traditional maintenance of data assets that IT has specialized in over the past few decades. As Gartner’s Andrew White put it: “Business leaders need to manage information, rather than just maintain it. When we say ‘manage,’ we mean ‘manage information for business advantage,’ as opposed to just maintaining data and its physical or virtual storage needs. In a digital economy, information is becoming the competitive asset to drive business advantage, and it is the critical connection that links the value chain of organizations.”

For starters, then, it is important that the business have full say over what data needs to be brought in, what data is important for further analysis, and what should be done with data once it gains in maturity. IT, however, needs to take a leadership role in assuring the data meets the organization’s quality standards, and that it is well-vetted so that business decision-makers can be confident in the data they are using.

The bottom line is that big data is a team effort, involving the whole enterprise. IT has a role to play, as does legal, as do the lines of business.


Data: The Unsung Hero (or Villain) of Every Communications Service Provider

The faceless hero of CSPs: Data

Analyzing current business trends helps illustrate how difficult and complex the Communications Service Provider (CSP) business environment has become. CSPs face many challenges. Clients expect high-quality, affordable content that can move between devices with minimal advertising or privacy concerns. To illustrate this phenomenon, here are a few recent examples:

  • Apple is working with Comcast/NBC Universal on a new converged offering
  • Vodafone purchased the Spanish cable operator Ono and now has to quickly separate the wireless customers from the cable ones and cross-sell existing products
  • Net neutrality has been scuttled in the US and upheld in the EU, so a US CSP can now give preferential bandwidth to content providers, generating higher margins
  • Microsoft’s Xbox community collects terabytes of data every day, making effective use, storage and disposal under local data retention regulations a challenge
  • Expensive 4G LTE infrastructure investment by operators such as Reliance is bringing streaming content to tens of millions of new consumers

To quickly capitalize on “new” (often old, but unknown) data sources, there has to be a common understanding of:

  • Where the data is
  • What state it is in
  • What it means
  • What volume and attributes are required to accommodate a one-off project vs. a recurring one

When a multitude of departments request data for analytical projects with their one-off, IT-unsanctioned on-premise or cloud applications, how will you go about it? The average European operator has between 400 and 1,500 (known) applications. Imagine what the unknown count is.
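To make the four bullet points above concrete, here is a minimal sketch of what a single catalog entry might record for one data source. Every field name and example value is illustrative, not a reference to any particular catalog product.

```python
from dataclasses import dataclass, field

@dataclass
class DataSourceRecord:
    """Minimal catalog entry answering: where is the data, what state is it in,
    what does it mean, and what volume/attributes does a project need?"""
    name: str                     # the system or application holding the data
    location: str                 # physical or cloud endpoint
    state: str                    # e.g. "raw", "cleansed", "certified"
    business_meaning: str         # what the data represents to the business
    monthly_volume_gb: float      # rough sizing for capacity planning
    key_attributes: list = field(default_factory=list)
    recurring_use: bool = False   # one-off analysis vs. recurring feed

# Example entry for a hypothetical billing system
billing = DataSourceRecord(
    name="BillingDB",
    location="on-premise Oracle, Frankfurt data center",
    state="raw",
    business_meaning="Invoices and payment status per subscriber",
    monthly_volume_gb=250.0,
    key_attributes=["subscriber_id", "invoice_amount", "contact_address"],
    recurring_use=True,
)
print(billing.name, billing.state, billing.monthly_volume_gb)
```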

A European operator with 20-30 million subscribers incurs an average of $3 million per month in unpaid invoices. This often results from incorrect or incomplete contact information. Imagine how much you would have to add for lost productivity, including gathering, re-formatting, enriching, checking and sending invoices. And this does not even account for late invoice payments or extended incorrect credit terms.

Think about all the flawed long-term conclusions being drawn from this bad data. This single data problem creates indirect costs in excess of three times the initial, direct impact of unpaid invoices.
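To put that claim in perspective, here is the back-of-the-envelope arithmetic, simply restating the figures quoted above (a direct impact of roughly $3 million per month and an indirect multiple in excess of three):

```python
# Figures quoted above: ~$3M per month in unpaid invoices for a 20-30M
# subscriber operator, with indirect costs exceeding three times that amount.
direct_per_month = 3_000_000                 # unpaid invoices (direct impact)
indirect_per_month = 3 * direct_per_month    # rework, lost productivity, bad decisions (at least)

annual_exposure = 12 * (direct_per_month + indirect_per_month)
print(f"Annual exposure: ${annual_exposure:,}")  # Annual exposure: $144,000,000 or more
```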

Want to fix your data and overcome the accelerating cost of change? Involve your marketing, CEM, strategy, finance and sales leaders to help them understand data’s impact on the bottom line.

Disclaimer: Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations and benchmarks. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warranty or representation of success, either express or implied, is made.


Informatica World 2014: Data at the center of everything

Data is transforming our world. We all know this as data experts. But to realize the full transformative potential of information, we each have to push ourselves beyond our comfort zone. We have to think about what this transformation means for us three months from now, three years from now, even three decades from now. This is why I’m so excited about the Informatica World 2014 conference. In particular, the keynotes will be amazing. Some will inspire you. A couple may shock you. Most will arm you. All will enlighten you.

As always, the lineup includes Informatica executives Sohaib Abbasi (CEO), Ivan Chong (Chief Strategy Officer), Marge Breya (Chief Marketing Officer) and Anil Chakravarthy (Chief Product Officer). They will lay out Informatica’s vision for this new data-centric world, and explain the coming innovations that will take the concept of a data platform to an entirely new level.

And building on the resoundingly positive response to Rick Smolan’s keynote last year on “The Human Face of Big Data,” the Informatica World organizers have put together a stellar array of thinkers who will push the boundaries of how you think about the convergence of data, technology and humanity.

  1. Will humans and machines merge? Inventor and thinker Ray Kurzweil will lay out his provocative thesis that, by the 2030s, nanobots will travel through the bloodstream and enter our brains noninvasively, enabling us to put our neocortexes on the cloud, where we will access nonbiological extensions to our thinking. We will thereby become a hybrid of biological and nonbiological thinking.
  2. Is data a science or an art? Jer Thorp is a data artist (move aside, data scientists), whose work focuses on adding narrative meaning to huge amounts of data. He will show how cutting edge visualization techniques can be used to tell stories, and make data more human.
  3. How do we use data for good? Drew Conway is an expert in applying computational methods to social and behavioral problems and a co-founder of DataKind. He will push us all to think about how we can use and analyze data not merely to increase efficiency and profits, but to serve society and “do good.”

I can’t wait to hear these speakers, and I hope you will join us in Las Vegas May 12-15 to learn a bunch, have fun, and potentially transform how you think about data and humanity.


Would YOU Buy a Ford Pinto Just To Get the Fuzzy Dice?

Today, I am going to take a stab at rationalizing why one could even consider solving a problem with a solution that is well known to be sub-par. Consider the Ford Pinto: Would you choose this car for your personal, land-based transportation simply because of the new plush dice in the window? For my European readers, replace the Pinto with the infamous Trabant and you get my meaning. The fact is, both of these vehicles made the list of the “worst cars ever built” due to their mediocre design, environmental hazards or plain poor safety records.

What is a Pinto-like buying decision in information technology procurement? (source: msn autos)

Rational people would never choose a vehicle this way. So I always ask myself, “How can IT organizations rationalize buying product X just because product Y is thrown in for free?” Consider the case in which an organization chooses its CRM or BPM system simply because the vendor throws in an MDM or Data Quality solution for free: Can this be done with a straight face? You often hear vendors claim that “everything in our house is pre-integrated,” “plug & play” or “we have accelerators for this.” I would hope that IT procurement officers have come to understand that these phrases don’t close a deal in a cloud-based environment, and even less so in an on-premise construct, which can never achieve this Nirvana unless it is customized to client requirements.

Anyone can see the logic in getting “2 for the price of 1.” However, as IT procurement organizations seek to save a percentage on every deal, they can’t lose sight of this key fact:

Standing up software (configuring, customizing, maintaining) and operating it over several years requires CLOSE inspection and scrutiny.

Like a Ford Pinto, software cannot just be driven off the lot without a care, leaving you only to worry about changing the oil and filters at recommended intervals. Customization, operational risk and maintenance are a significant cost, as all my seasoned padawans will know. If Pinto buyers had understood the Total Cost of Ownership before they made their purchase, they would have opted for Toyotas instead. Here is the bottom line:

If less than 10% of the overall requirements are solved by the free component
AND (and this is a big AND)
If less than 12% of the overall financial value is provided by the free component
Then it makes ZERO sense to select a solution based on freebie add-ons.
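For readers who prefer decision logic to prose, here is a minimal sketch of that rule-of-thumb. The 10% and 12% thresholds are the ones stated above; the example percentages fed into it are hypothetical.

```python
def freebie_should_drive_decision(requirements_covered_pct: float,
                                  financial_value_pct: float) -> bool:
    """Apply the 10% requirements / 12% value rule-of-thumb.

    Returns False when the free component covers less than 10% of overall
    requirements AND delivers less than 12% of the overall financial value,
    i.e. the freebie should carry zero weight in the buying decision.
    """
    return not (requirements_covered_pct < 10 and financial_value_pct < 12)

# A free MDM add-on covering 6% of requirements and 8% of the deal's value:
print(freebie_should_drive_decision(6, 8))    # False -> ignore the freebie
print(freebie_should_drive_decision(25, 30))  # True  -> worth weighing in the evaluation
```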

When an add-on component is of significantly lower quality than industry-leading solutions, it becomes even more illogical to rely on it simply because it’s “free.” If analysts have affirmed that the leading solutions have stronger capabilities, flexibility and scalability, what does an IT department truly “save” by choosing an inferior “free” add-on?

So just why DO procurement officers gravitate toward “free” add-ons, rather than high-quality solutions? As a former procurement manager, I remember the motivations perfectly. Procurement teams are often measured by, and rewarded for, the savings they achieve. Because their motivation is near-term savings, long-term quality issues are not the primary decision driver. And, if IT fails to successfully communicate the risks, cost drivers and potential failure rates to Procurement, the motivation to save up-front money will win every time.

Both sellers and buyers need to avoid these dances of self-deception, the “Pre-Integration Tango” and the “Freebie Cha-Cha”.  No matter how much you loved driving that Pinto or Trabant off the dealer lot, your opinion changed after you drove it for 50,000 miles.

I’ve been in procurement. I’ve built, sold and implemented “accelerators” and “blueprints.” In my opinion, 2-for-1 is usually a bad idea in software procurement. The best software is designed to make 1+1=3. I would love to hear from you if you agree with my above “10% requirements/12% value” rule-of-thumb.  If not, let me know what your decision logic would be.


Lessons From Kindergarten: The ABC’s of Data

People are obsessed with data. Data captured from our smartphones. Internet data showing how we shop and search — and what marketers do with that data. Big Data, which I loosely define as people throwing every conceivable data point into a giant Hadoop cluster with the hope of figuring out what it all means.

Too bad all that attention stems from fear, uncertainty and doubt about the data that defines us. I blame the technology industry, which, in the immortal words of Cool Hand Luke, has had a “failure to communicate.” For decades we’ve talked the language of IT and left it up to our direct customers to explain the proper care and feeding of data to their business users. Small wonder it’s way too hard for regular people to understand what we, as an industry, are doing. After all, how can we expect others to explain the do’s and don’ts of data management when we haven’t clearly explained it ourselves?

I say we need to start talking about the ABC’s of handling data in a way that’s easy for anyone to understand. I’m convinced we can because — if you think about it — everything you learned about data you learned in kindergarten: It has to be clean, safe and connected. Here’s what I mean:

Clean

Data cleanliness has always been important, but it assumes real urgency with the move toward Big Data. I blame Hadoop, the underlying technology that makes Big Data possible. On the plus side, Hadoop gives companies a cost-effective way to store, process and analyze petabytes of nearly every imaginable data type. That is also the problem: companies then face the enormous time suck of cataloging and organizing vast stores of data. Put bluntly, big data can be a swamp.

The question is how to make it potable. This isn’t always easy, but it’s always, always necessary. It begins, naturally, by ensuring the data is accurate, de-duped and complete.
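As a rough illustration of those three checks (de-duplication, completeness and accuracy), here is a minimal sketch using pandas; the column names and validity rules are invented for the example, not a prescription for your data.

```python
import pandas as pd

records = pd.DataFrame({
    "customer_id": [101, 101, 102, 103],
    "email": ["a@example.com", "a@example.com", "not-an-email", None],
    "amount": [250.0, 250.0, 99.0, 40.0],
})

# 1. De-dupe: drop exact duplicate rows.
records = records.drop_duplicates()

# 2. Complete: require the fields downstream analysis depends on.
records = records.dropna(subset=["email"])

# 3. Accurate: apply whatever validity rules fit your data (a crude email check here).
records = records[records["email"].str.contains("@", na=False)]

print(records)
```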

Connected

Now comes the truly difficult part: Knowing where that data originated, where it’s been, how it’s related to other data and its lineage. That data provenance is absolutely vital in our hyper-connected world where one company’s data interacts with data from suppliers, partners, and customers. Someone else’s dirty data, regardless of origin, can ruin reputations and drive down sales faster than you can say “Target breach.” In fact, we now know that hackers entered Target’s point-of-sales terminals through a supplier’s project management and electronic billing system. We won’t know for a while the full extent of the damage. We do know the hack affected one-third of the entire U.S. population. Which brings us to:

Safe

Obviously, being safe means keeping data out of the hands of criminals. But it doesn’t stop there. That’s because today’s technologies make it oh so easy to misuse the data we have at our disposal. If we’re really determined to keep data safe, we have to think long and hard about responsibility and governance. We have to constantly question the data we use, and how we use it. Questions like:

  • How much of our data should be accessible, and by whom?
  • Do we really need to include personal information, like social security numbers or medical data, in our Hadoop clusters?
  • When do we go the extra step of making that data anonymous?
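On that last question, one common approach is to pseudonymize direct identifiers before the data ever lands in a cluster. The sketch below uses a salted hash purely as an illustration; a real deployment would manage the salt as a secret outside the code and might prefer tokenization or format-preserving encryption.

```python
import hashlib

SALT = b"replace-with-a-secret-managed-outside-the-code"  # illustrative only

def pseudonymize(value: str) -> str:
    """Replace a direct identifier (e.g. an SSN) with a stable, salted hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()

record = {"ssn": "123-45-6789", "diagnosis_code": "E11.9"}
safe_record = {**record, "ssn": pseudonymize(record["ssn"])}
print(safe_record)  # same record, but the SSN can no longer be read directly
```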

And as I think about it, I realize that everything we learned in kindergarten boils down to the ethics of data: How, for example, do we know if we’re using data for good or for evil?

That question is especially relevant for marketers, who have a tendency to use data to scare people, for crass commercialism, or to violate our privacy just because technology makes it possible. Use data ethically, and we can help change how data is used.

In fact, I believe that the ethics of data is such an important topic that I’ve decided to make it the title of my new blog.

Stay tuned for more musings on The Ethics of Data.


Are you Ready for the Massive Wave of Data?

Leo Eweani makes the case that the data tsunami is coming.  “Businesses are scrambling to respond and spending accordingly. Demand for data analysts is up by 92%; 25% of IT budgets are spent on the data integration projects required to access the value locked up in this data “ore” – it certainly seems that enterprise is doing The Right Thing – but is it?”

Data is exploding within most enterprises.  However, most enterprises have no clue how to manage this data effectively.  While you would think that an investment in data integration would be an area of focus, many enterprises don’t have a great track record in making data integration work.  “Scratch the surface, and it emerges that 83% of IT staff expect there to be no ROI at all on data integration projects and that they are notorious for being late, over-budget and incredibly risky.”


The core message from me is that enterprises need to ‘up their game’ when it comes to data integration.  This recommendation is based upon the amount of data growth we’ve already experienced, and will experience in the near future.  Indeed, a “data tsunami” is on the horizon, and most enterprises are ill prepared for it.

So, how do you get prepared? While many would say it’s all about buying anything and everything when it comes to big data technology, the best approach is to splurge on planning. This means defining exactly what data assets are in place now and will be in place in the future, and how they should or will be leveraged.

To face the forthcoming wave of data, certain planning aspects and questions about data integration rise to the top:

Performance, including data latency. Or, how quickly does the data need to flow from point (or points) A to point (or points) B? As the volume of data rises, data integration engines have to keep up.

Data security and governance. Or, how will the data be protected both at rest and in flight, and how will the data be managed in terms of controls on use and change?

Abstraction, and removing data complexity. Or, how will the enterprise remap and re-purpose key enterprise data that may not currently exist in a well-defined, functional structure?

Integration with cloud-based data. Or, how will the enterprise link existing enterprise data assets with those that exist on remote cloud platforms?
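One way to make those four questions actionable is to answer them per data flow, before any tooling is purchased. The record below is a hypothetical planning spec for a single flow; every name and value is a placeholder, not a product configuration.

```python
# Hypothetical planning record for one data flow; every value is a placeholder.
integration_flow = {
    "name": "orders_to_analytics",
    "performance": {
        "latency_target": "under 5 minutes",   # how quickly data must move from A to B
        "expected_volume_per_day": "200 GB",
    },
    "security_and_governance": {
        "encryption_at_rest": True,
        "encryption_in_flight": True,
        "change_control": "data steward sign-off required",
    },
    "abstraction": {
        "canonical_model": "Order v2",          # remap poorly structured source data
        "source_formats": ["CSV extract", "legacy fixed-width"],
    },
    "cloud_integration": {
        "source": "on-premise ERP",
        "target": "cloud data warehouse",
    },
}

for aspect, detail in integration_flow.items():
    print(aspect, "->", detail)
```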

While this may seem like a complex and risky process, think through the problems, leverage the right technology, and you can remove the risk and complexity.  The enterprises that seem to fail at data integration do not follow that advice.

I suspect the explosion of data to be the biggest challenge enterprise IT will face in many years.  While a few will take advantage of their data, most will struggle, at least initially.  Which route will you take?


Data is the Key to Value-based Healthcare

The transition to value-based care is well underway. From healthcare delivery organizations to clinicians, payers, and patients, everyone feels the impact.  Each has a role to play. Moving to a value-driven model demands agility from people, processes, and technology. Organizations that succeed in this transformation will be those in which:

  • Collaboration is commonplace
  • Clinicians and business leaders wear new hats
  • Data is recognized as an enterprise asset

The ability to leverage data will differentiate the leaders from the followers. Successful healthcare organizations will:

  1. Establish analytics as a core competency
  2. Rely on data to deliver best practice care
  3. Engage patients and collaborate across the ecosystem to foster strong, actionable relationships

Trustworthy data is required to power the analytics that reveal the right answers, to define best practice guidelines and to identify and understand relationships across the ecosystem. In order to advance, data integration must also be agile. The right answers do not live in a single application. Instead, the right answers are revealed by integrating data from across the entire ecosystem. For example, in order to deliver personalized medicine, you must analyze an integrated view of data from numerous sources. These sources could include multiple EMRs, genomic data, data marts, reference data and billing data.
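As a toy illustration of that integrated view, the sketch below joins three hypothetical extracts on a shared patient identifier; real EMR, genomic and billing feeds would require identity resolution and far more data quality work than a simple merge.

```python
import pandas as pd

# Hypothetical extracts from three systems, keyed by a shared patient identifier.
emr = pd.DataFrame({"patient_id": [1, 2], "diagnosis": ["T2 diabetes", "hypertension"]})
genomics = pd.DataFrame({"patient_id": [1, 2], "risk_variant": [True, False]})
billing = pd.DataFrame({"patient_id": [1, 2], "annual_cost": [12400, 3100]})

# Build one integrated view; in practice identity resolution/MDM handles the matching.
integrated = emr.merge(genomics, on="patient_id").merge(billing, on="patient_id")
print(integrated)
```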

A recent PwC survey showed that 62% of executives believe data integration will become a competitive advantage. However, a July 2013 InformationWeek survey reported that 40% of healthcare executives gave their organization only a grade of D or F on preparedness to manage the data deluge.


What grade would you give your organization?

You can improve your organization’s grade, but it will require collaboration between business and IT.  If you are in IT, you’ll need to collaborate with business users who understand the data. You must empower them with self-service tools for improving data quality and connecting data.  If you are a business leader, you need to understand and take an active role with the data.

To take the next step, download our new eBook, “Potential Unlocked: Transforming healthcare by putting information to work.”  In it, you’ll learn:

  1. How to put your information to work
  2. New ways to govern your data
  3. What other healthcare organizations are doing
  4. How to overcome common barriers

So go ahead, download it now and let me know what you think. I look forward to hearing your questions and comments… oh, and your grade!


If You Want Business to “Own” the Data, You Need to Build an Architecture for the Business

If you build an IT Architecture, it will be a constant uphill battle to get business users and executives to engage and take ownership of data governance and data quality. In short, you will struggle to maximize the information potential in your enterprise. But if you develop an Enterprise Architecture that starts with a business and operational view, the dynamics change dramatically. To make this point, let’s take a look at a case study from Cisco. (more…)
