Tag Archives: Data Governance

Utility Executives: Don’t Waste Millions in Operating Costs Due to Bad Asset Data

“Trying to improve the quality of asset data when you don’t have a solid data management infrastructure in place is like trying to save a sinking boat with a bailing bucket,” explained Dean Balog, a senior principal consultant at Noah Consulting, in the webinar Attention Utility Executives: Don’t Waste Millions in Operating Costs Due to Bad Asset Data.

Dean has 15 years of experience in information management in the utilities industry. In this interview, Dean and I discuss the top issues facing utility executives and how to improve the quality of mission-critical asset data for asset management / equipment maintenance and regulatory reporting, such as rate case submissions.

Q: Dean, what are the top issues facing utility executives?
A: The first issue is asset management / equipment maintenance. Knowing where to invest precious dollars is critical. Utility executives are engaged in a constant tug of war between two competing priorities: replacing aging infrastructure and regular maintenance.

Q. How are utility executives determining that balance?
A. You need to start with facts – the real costs and reliability information for each asset in your infrastructure. Without it, you are guessing. Basically, it is a data problem. Utility executives should ask themselves these questions:

  • Do we have the ability to capture and combine cost and reliability information from multiple sources?  Is it granular enough to be useful?
  • Do we know the maintenance costs of eight-year-old breakers versus three-year-old breakers?
  • Do our meters start failing around the average lifespan? For this example, let us say that is five years. Rather than falling uniformly into that average, do 30% of our meters fail in the first year and the rest last eight years? Those three extra years of life can certainly help out the bottom line.

Knowing your data makes all the difference. The right capital investment strategy requires combining performance, reliability, and cost data.
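To make the meter example above concrete, here is a minimal sketch in Python of the kind of calculation that becomes possible once reliability data is captured at a granular level. The failure numbers are purely hypothetical, loosely following the 30% / eight-year split described above.

```python
# Hypothetical meter failure data: (years in service at failure, number of meters).
# Illustrative numbers only -- not drawn from any real utility.
failure_data = [
    (1, 300),   # 30% of a 1,000-meter fleet fails in the first year
    (8, 700),   # the remaining 70% last eight years
]

total_meters = sum(count for _, count in failure_data)

# The headline "average lifespan" hides the split between early failures
# and long-lived units.
average_lifespan = sum(years * count for years, count in failure_data) / total_meters

# Share of the fleet failing in year one -- the population a preventative
# maintenance (or warranty claim) program would want to target first.
early_failure_share = sum(count for years, count in failure_data if years <= 1) / total_meters

print(f"Average lifespan: {average_lifespan:.1f} years")    # ~5.9 years here
print(f"Early-failure share: {early_failure_share:.0%}")    # 30%
```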

Quotations about data challenges faced by utility companies

Q. Why is it difficult for utility executives to understand the real costs and reliability of assets?
A. I know this does not come as a shock, but most companies do not trust their data. Asset data is often inaccurate, inconsistent, and disconnected. Even the most basic data may not be available. For example, manufacture dates on breakers should be filled in, but they are not. If less than 50% of your breakers have manufacture dates, how can you build a preventative maintenance program? You do not even know what to address first!
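As a trivial illustration of the completeness problem Dean describes, here is a minimal Python sketch (hypothetical records, not any particular profiling tool) that measures how many breakers actually carry a manufacture date:

```python
# Hypothetical breaker records pulled from an asset registry.
breakers = [
    {"id": "BRK-001", "manufacture_date": "2006-03-01"},
    {"id": "BRK-002", "manufacture_date": None},
    {"id": "BRK-003", "manufacture_date": ""},
    {"id": "BRK-004", "manufacture_date": "2011-07-15"},
]

# Count records where the field is populated (empty strings count as missing).
with_date = sum(1 for b in breakers if b["manufacture_date"])
completeness = with_date / len(breakers)

print(f"Manufacture-date completeness: {completeness:.0%}")  # 50% in this toy example
```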

A traditional approach to solving this data problem is to do a big data cleanup. You clean the data, and then before you know it, errors creep back in, and the trust in the data you have worked so hard to establish is lost.

I like to illustrate the pain of this issue by using the sinking boat analogy. Data cleanup is like bailing out the water collecting in the bottom of the boat. You think you are solving the problem but more water still seeps into the boat. You cannot stop bailing or you will sink. What you need to do is fix the leaks, and then bail out the boat. But, if you do not lift up your head from bailing long enough to see the leaks and make the right investments, you are fighting a losing battle.

Q. What can utility executives do to improve the quality of asset data?
A. First of all, you need to develop a data governance framework. Going back to the analogy, a data governance framework gives you the structure to find the leaks, fix the leaks, and monitor how much of the water has been bailed out. If the water level is still rising, you have not fixed all the leaks. But having a data governance framework is not the be-all and end-all.

You also need to appoint data stewards to be accountable for establishing and maintaining high quality asset data. The job of a data steward would be easy if there was only one system where all asset data resided. But the fact of the matter is that asset data is fragmented – scattered across multiple systems. Data stewards have a huge responsibility and they need to be supported by a solid data management infrastructure to ease the burden of managing business-critical asset information.

Webinar: Attention Utility Executives: Don’t Waste Millions in Operating Costs Due to Bad Asset Data

If you are responsible for asset management / equipment maintenance or regulatory reporting, particularly rate case submissions, check out this webinar.

Master Data Management (MDM) ensures business-critical asset data is consistent everywhere by pulling together data that is scattered across multiple applications. It masters and manages that data in a central location on a continuous basis and shares it with any application that needs it. MDM provides a user interface and workflow for data stewards to manage the tangled web of names and IDs these assets are known by across systems. It also gives utilities a disciplined approach to managing important relationships within the asset data, such as the link between an asset’s performance and reliability and its cost.
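As a rough illustration of what “managing the tangled web of names and IDs” can look like in practice, here is a hypothetical sketch of an asset cross-reference: the same breaker is known by different local IDs in the work-management, GIS, and finance systems, and a single master record ties them together. This is a generic illustration in Python, not Informatica MDM’s actual data model or API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MasterAsset:
    """A single 'golden record' for one physical asset."""
    master_id: str
    description: str
    manufacture_date: Optional[str]                  # value chosen by a survivorship rule
    source_ids: dict = field(default_factory=dict)   # source system -> local ID

# Hypothetical local records for the same breaker, scattered across systems.
work_mgmt = {"id": "WM-4471",    "desc": "Breaker, Feeder 12", "mfg_date": None}
gis       = {"id": "GIS-98-123", "desc": "BRKR FDR12",         "mfg_date": "2006-03-01"}
finance   = {"id": "FA-003321",  "desc": "Circuit breaker",    "mfg_date": "2006-01-01"}

# A trivial survivorship rule: take the earliest non-null manufacture date.
dates = [r["mfg_date"] for r in (work_mgmt, gis, finance) if r["mfg_date"]]
golden_date = min(dates) if dates else None

breaker = MasterAsset(
    master_id="AST-000123",
    description="Breaker, Feeder 12",
    manufacture_date=golden_date,
    source_ids={"work_mgmt": work_mgmt["id"], "gis": gis["id"], "finance": finance["id"]},
)

print(breaker)
```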

Q. Any other pressing issues facing utilities?
A. Yes. Another big issue is tightening regulations that consume investment dollars and become key inputs into rate case submissions and defenses. One of the complicating factors is the number of regulations is not only increasing, but the regulators are also requiring faster implementation times than ever before. So, utilities cannot just do what they have done in the past: throw more people at the problem in the short-term and resolve to fix it later by automating it “when things slow down.” That day never comes.

Q. How can utilities deal with these regulatory pressures?
A. Utilities need a new approach to deal with regulations. Start with the assumption that all data is fair game for regulators. All data must be accessible. You need to be able to report on it, not only to comply with regulations, but for competitive advantage. This requires the high quality asset information we talked about earlier, and an analytical application to:

  • Perform what-if analyses for your asset investment program (see the sketch after this list);
  • Develop regulatory compliance or environmental reports quickly, because the hard work (integrating the data within your MDM program) has already been done; and
  • Get access to granular, observed reliability and cost information using your own utility’s data – not benchmark data that is already a couple of years old and highly summarized.
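As a toy example of the first bullet – with entirely made-up costs and failure probabilities – a what-if comparison between replacing an asset now and running it another year could look like this:

```python
# Hypothetical inputs -- illustrative placeholders, not utility benchmarks.
replacement_cost = 12_000.0      # cost to replace the breaker today
annual_maintenance = 900.0       # expected maintenance spend if we keep it one more year
failure_probability = 0.15       # chance it fails within the next year
failure_cost = 40_000.0          # outage plus emergency replacement if it does fail

# Expected cost of each scenario over the next year.
keep_one_more_year = annual_maintenance + failure_probability * failure_cost
replace_now = replacement_cost

print(f"Expected cost of keeping the asset: ${keep_one_more_year:,.0f}")   # $6,900
print(f"Cost of replacing it now:           ${replace_now:,.0f}")          # $12,000
# With these made-up numbers, deferring replacement wins; better data changes the answer.
```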

Q. What is your advice for utility company executives?
A. If you are the one responsible for signing off on regulatory reports and you do not look good in an orange jumpsuit, you need to invest in a plan that includes people, process, and technology to support regulatory reporting and asset management / equipment maintenance.

  • People – Data stewards have clear accountability for the quality of asset data.
  • Process – Data governance is your game plan.
  • Technology – A solid data management infrastructure consisting of data integration, data quality, and master data management is your means.

If you are responsible for asset management / equipment maintenance or regulatory reporting, particularly rate case submissions, check out this webinar, Attention Utility Executives: Don’t Waste Millions in Operating Costs Due to Bad Asset Data.

Our panel of utility data experts will:

  • Reveal the five toughest business challenges facing utility industry executives;
  • Explain how bad asset data could be costing you millions of dollars in operating costs;
  • Share three best practices for optimizing asset management / equipment maintenance and regulatory reporting with accurate, consistent, and connected asset information; and
  • Show you how to implement these best practices with a demonstration.

 


Talk Amongst Yourselves: Why Twitter Needs #DataChat

Within government organizations, technologists are up against a wall of sound. In one ear, they hear consumers cry for faster, better service.

In the other, they hear administrative talk of smaller budgets and scarcer resources.

As stringent requirements for both transparency and accountability grow, this paradox of pressure increases.

Sometimes, the best way to cope is to TALK to somebody.

What if you could ask other data technologists candid questions like:

  • Do you think government regulation helps or hurts the sharing of data?
  • Do you think government regulators balance the privacy needs of the public with commercial needs?
  • What are the implications of big data government regulation, especially for users?
  • How can businesses expedite the government adoption of the cloud?
  • How can businesses aid in the government overcoming the security risks associated with the cloud?
  • How should the policy frameworks for handling big data differ between the government and the private sector?

What if you could tell someone who understood? What if they had sweet suggestions, terrific tips, stellar strategies for success? We think you can. We think they will.

That’s why Twitter needs a #DataChat.

Twitter Needs #DataChat

Third Thursdays, 3:00 PM ET

What on earth is a #DataChat?
Good question. It’s a Twitter chat – a public dialog, at a set time, on a set topic. It’s something like a crowd-sourced discussion. Any Twitter user can participate simply by including the applicable hashtag in each tweet. Our hashtag is #DataChat. We’ll connect on Twitter on the third Thursday of each month to share struggles, victories and advice about data governance. We’re going to begin this week, Thursday, April 17, at 3:00 PM Eastern Time. For our first chat, we are going to discuss topics that relate to data technologies in government organizations.

Why don’t you join us? Tell us about it. Mark your calendar. Bring a friend.

Because, sometimes, you just need someone to talk to.


Should Legal (Or Anyone Else Outside of IT) Be in Charge of Big Data?

A few years back, there was a movement in some businesses to establish “data stewards” – individuals who would sit at the heart of the enterprise and make it their job to assure that data being consumed by the organization is of the highest possible quality, is secure, is contextually relevant, and is capable of interoperating across any applications that need to consume it. While the data steward concept came along when everything was relational and structured, these individuals are now earning their pay when it comes to managing the big data boom.

The rise of big data is creating more than simple headaches for data stewards; it is creating turf wars across enterprises. As pointed out in a recent article in The Wall Street Journal, there isn’t yet a lot of clarity as to who owns and cares for such data. Is it IT? Is it the lines of business? Is it legal? There are arguments that can be made for all jurisdictions.

In organizations these days, for example, marketing executives are generating, storing and analyzing large volumes of their own data within content management systems and social media analysis solutions. Many marketing departments even have their own IT budgets. Along with marketing, of course, everyone else within enterprises is seeking to pursue data analytics to better run their operations as well as foresee trends.

Typically, data has been the domain of the CIO, the person who oversaw the collection, management and storage of information. In the Wall Street Journal article, however, it’s suggested that legal departments may be the best caretakers of big data, since big data poses a “liability exposure,” and legal departments are “better positioned to understand how to use big data without violating vendor contracts and joint-venture agreements, as well as keeping trade secrets.”

However, legal being legal, it’s likely that insightful data may end up getting locked away, never to see the light of day. Others may argue that the IT department needs to retain control, but there again, IT isn’t trained to recognize information that may set the business on a new course.

Focusing on big data ownership isn’t just an academic exercise. The future of the business may depend on the ability to get on top of big data. Gartner, for one, predicts that within the next three years, at least a third of Fortune 100 organizations will experience an information crisis, “due to their inability to effectively value, govern and trust their enterprise information.”

This ability to “value, govern and trust” goes way beyond the traditional maintenance of data assets that IT has specialized in over the past few decades. As Gartner’s Andrew White put it: “Business leaders need to manage information, rather than just maintain it. When we say ‘manage,’ we mean ‘manage information for business advantage,’ as opposed to just maintaining data and its physical or virtual storage needs. In a digital economy, information is becoming the competitive asset to drive business advantage, and it is the critical connection that links the value chain of organizations.”

For starters, then, it is important that the business have full say over what data needs to be brought in, what data is important for further analysis, and what should be done with data once it gains in maturity. IT, however, needs to take a leadership role in assuring the data meets the organization’s quality standards, and that it is well-vetted so that business decision-makers can be confident in the data they are using.

The bottom line is that big data is a team effort, involving the whole enterprise. IT has a role to play, as does legal, as do the lines of business.


Why Healthcare Data Professionals belong at Informatica World 2014

This year, more than a dozen healthcare leaders will share their knowledge on data-driven insights at Informatica World 2014, across six tracks and over 100 breakout sessions. We are only five weeks away, and I am excited that the healthcare path has grown 220% from 2013!


Join us for these healthcare sessions:

  • Moving From Vision to Reality at UPMC: Structuring a Data Integration and Analytics Program: University of Pittsburgh Medical Center (UPMC) partnered with Informatica IPS to establish enterprise analytics as a core organizational competency through an Integration Competency Center engagement. Join IPS and UPMC to learn more.
  • HIPAA Validation for Eligibility and Claims Status in Real Time: Healthcare reform requires healthcare payers to exchange and process HIPAA messages in less time with greater accuracy. Learn how HealthNet tackled this challenge.
  • Application Retirement for Healthcare ROI: Dallas Children’s Hospital needed to retire outdated operating systems, hardware, and applications while retaining access to their legacy data for compliance purposes. Learn why application retirement is critical to the healthcare industry, how Dallas Children’s selected which applications to retire, and the healthcare-specific functionality that Informatica is delivering.
  • UPMC’s story of implementing a Multi-Domain MDM healthcare solution in support of Data Governance: This presentation will unfold the UPMC story of implementing a multi-domain MDM healthcare solution as part of an overall enterprise analytics / data warehousing effort. MDM is a vital part of the overall architecture needed to support UPMC’s efforts to improve the quality of patient care and help create methods for personalized medicine. The lead MDM solution developer will discuss how the team put together the roadmap, worked with domain-specific workgroups, and created the trust matrix, and he will share his lessons learned. He will also share what they have planned for their consolidated and trusted Patient, Provider and Facility master data in this changing healthcare industry. This session will also explain how the MDM program fits into the ICC (Integration Competency Center) currently implemented at UPMC.
  • Enterprise Codeset Repositories for Healthcare: Controlling the Chaos: Learn the benefit of a centralized storage point to govern and manage codes (ICD-9/10, CPT, HCPCS, DRG, SNOMED, Revenue, TOS, POS, Service Category, etc.), mappings and artifacts that reference codes.
  • Christus Health Roadmap to Data Driven Healthcare: To organize information and effectively deliver services in a hypercompetitive market, healthcare organizations must deliver data in an accurate, timely, efficient way while ensuring its clarity. Learn how CHRISTUS Health is developing and pursuing its vision for data management, including lessons adopted from other industries and the business case used to fund data management as a strategic initiative.
  • Business Value of Data Quality: This customer panel will address why data quality is a business imperative which significantly affects business success.
  • MD Anderson – Foster Business and IT Collaboration to Reveal Data Insights with Informatica: Is your integration team intimidated by the new Informatica 9.6 tools? Do your analysts and business users require faster access to data and answers about where data comes from? If so, this session is a must-attend.
  • The Many Faces of the Healthcare Customer: In the healthcare industry, the customer paying for services (individuals, insurers, employers, the government) is not necessarily the decision-influencer (physicians) or even the patient – and the provider comes in just as many varieties. Learn how Quest, the world’s leading provider of diagnostic information, leverages master data management to resolve the chaos of serving 130M+ patients, 1,200+ payers, and almost half of all US physicians and hospitals.
  • Lessons in Healthcare Enterprise Information Management from St. Joseph Health and Sutter Health: St. Joseph Health created a business case for enterprise information management, then built a future-proofed strategy and architecture to unlock, share, and use data. Sutter Health engaged the business, established a governance structure, and freed data from silos for better organizational performance and efficiency. Come hear these leading health systems share their best practices and lessons learned in making data-driven care a reality.
  • Navinet, Inc and Informatica – Delivering Network Intelligence, The Value to the Payer, Provider and Patient: Today, healthcare payers and providers must share information in unprecedented ways to reduce redundancy, cut costs, coordinate care, and drive positive outcomes. Learn how NaviNet’s vision of a “smart” communications network combines Big Data and network intelligence to share proactive real-time information between insurers and providers.
  • Providence Health Services takes a progressive approach to automating ETL development and documentation: A newly organized team of BI generalists, most of whom had no ETL experience and even fewer of whom had Informatica skills, was tasked with Informatica development when Providence migrated from Microsoft SSIS to Informatica. Learn how the team relied on Informatica to alleviate the burden of low-value tasks.
  • Using IDE for Data On-boarding Framework at HMS: HMS’s core business is to onboard large amounts of external data that arrive in different formats. HMS developed a framework using IDE to standardize the on-boarding process. This tool can be used by non-IT analysts and provides standard profiling reports and reusable mapping “templates,” which have improved the hand-off to IT and significantly reduced misinterpretations and errors.

Additionally, this year’s attendees are invited to take advantage of:

  1. Over 100 breakout sessions: Customers from other industries, including financial services, insurance, retail, manufacturing, oil and gas will share their data driven stories.
  2. Healthcare networking reception on Wednesday, May 14th: Join your healthcare peers and Informatica’s healthcare team on Wednesday from 6-7:30pm in the Vesper bar of the Cosmopolitan Resort for a private Healthcare networking reception. Come and hear firsthand how others are achieving a competitive advantage by maximizing return on data while enjoying hors d’oeuvres and cocktails.
  3. Data Driven Healthcare Roundtable Breakfast on Wednesday, May 14th. Customer led roundtable discussion.
  4. Personal meetings: Since most of the Informatica team will be in attendance, this is a great opportunity to meet face to face with Informatica’s product, services and solution teams.
  5. Informatica Pavilion and Partner Expo: Interact with the latest offerings from Informatica and our partners.
  6. An expanded “Hands-on-Lab”: Learn from real-life case studies and talk to experts about your unique environment.

The healthcare industry is facing extraordinary changes and uncertainty – both from a business and a technology perspective. Join us to learn about key drivers for change and innovative uses of data technology solutions to discover sources for operational and process improvement. There is still time – register now!


Is your social media investment hampered by your “data poverty”?

Why is nobody measuring the “bag of money” portion? Source: emediate.com

Recently, I talked with a company that had allocated millions of dollars for paid social media promotion. Their hope was that a massive investment in Twitter and Facebook campaigns would lead to “more eyeballs” for their online gambling sites. Although they had internal social media expertise, they lacked a comprehensive partnership with IT. In addition, they lacked a properly funded policy vision. As a result, when asked how much of their socially-driven traffic resulted in actual sales, their answer was a resounding “No Idea.” I attribute this to “data poverty.”

There is a key reason that they were unable to quantify the ROI of their promotion: Their business model is, by design, “data poor.” Although a great deal of customer data was available to them, they didn’t elect to use it. They could have used available data to identify “known players” as well as individuals with “playing potential.” There was no law prohibiting them from acquiring this data. However, they were uncomfortable obtaining a higher degree of attribution beyond name, address, e-mail and age. They feared that customers would view them as a commercial counterpart to the NSA. As a result, key data elements like net worth, lifetime value, credit risk, location, marital status, employment status, number of friends/followers and property value were not considered when targeting potential users on social media. So, though the Social Media team considered this granular targeting to be a “dream-come-true,” others within the organization considered it to be too “1984.”

In addition to a hesitation to leverage available data, they were also limited by their dependence on a 3rd party IT provider. This lack of self-sufficiency created data quality issues, which limited their productivity. Ultimately, this dependency prevented them from capitalizing on new market opportunities in a timely way.

It should have been possible for them to craft a multi-channel approach. They ought to have been able to serve up promoted Tweets, banner ads and mobile application ads. They should have been able to track the click-through, IP and timestamp information from each one. They should have been able to make a BCR for redeeming a promotional offer at a retail location.

Strategic channel allocation would certainly have triggered additional sales. In fact, when we applied click-through, CAC and conversion benchmarks to their available transactional information, we modeled over $8 million in additional sales and $3 million in customer acquisition cost savings. In addition to the financial benefits, strategic channel allocation would have generated more data (and resulting insights) about their prospects and customers than they had when they began.
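The $8 million figure above came from a model built on the company’s own transactional data. As a heavily simplified sketch of the shape of that kind of calculation – every number below is a placeholder assumption, not a figure from the engagement – it reduces to impressions, click-through, conversion and customer value per channel:

```python
# Placeholder assumptions for illustration only.
impressions = {"promoted_tweets": 5_000_000, "banner_ads": 8_000_000, "mobile_ads": 3_000_000}
click_through_rate = {"promoted_tweets": 0.015, "banner_ads": 0.004, "mobile_ads": 0.010}
conversion_rate = 0.02        # share of clicks that become paying customers
avg_customer_value = 500.0    # assumed first-year revenue per acquired customer

modeled_sales = sum(
    views * click_through_rate[channel] * conversion_rate * avg_customer_value
    for channel, views in impressions.items()
)

print(f"Modeled incremental sales: ${modeled_sales:,.0f}")
```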

But, because they were hesitant to use all the data available to them, they failed to capitalize on their opportunities. Don’t let this happen to you. Make a strategic policy change to become a data-driven company.

Beyond the revenue gains of targeted social marketing, there are other reasons to become a data-driven company. Clean data can help you correctly identify ideal channel partners. This company failed to use sufficient data to properly select and retain their partners. Hundreds of channel partners were removed without proper, data-driven confirmation. Reasons for this removal included things like “death of owner,” “fire,” and “unknown.” To ensure more thorough vetting, the company could have used data points like the owner’s age, past business endeavors and legal proceedings. They could also have studied location-fenced attributes like footfall, click-throughs and sales cannibalization risk. In fact, when we modeled the potential overall annual savings, across all business scenarios, of becoming a data-driven company, the potential savings approached $40 million.

Would a $40 million savings inspire you to invest in your data? Would that amount be enough to motivate you to acquire, standardize, deduplicate, link, hierarchically structure and enrich YOUR data? It’s a no-brainer. But it requires a policy shift to make your data work for you. Without this, it’s all just “potential.”
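A minimal sketch of two of those steps – standardizing and de-duplicating customer records – just to show the shape of the work. This is plain Python with naive matching on e-mail; real data quality and MDM tooling would use much richer match and survivorship rules.

```python
import re

def standardize(record):
    """Normalize the fields used for matching."""
    return {
        "name": re.sub(r"\s+", " ", record["name"]).strip().lower(),
        "email": record["email"].strip().lower(),
    }

def dedupe(records):
    """Collapse records that share a standardized e-mail address."""
    unique = {}
    for rec in records:
        std = standardize(rec)
        # First record wins here; a real survivorship rule would be richer.
        unique.setdefault(std["email"], std)
    return list(unique.values())

customers = [
    {"name": "Jane  Doe ", "email": "Jane.Doe@example.com"},
    {"name": "jane doe",   "email": "jane.doe@example.com "},
    {"name": "John Smith", "email": "jsmith@example.com"},
]

print(dedupe(customers))  # two unique customers remain
```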

Do you have stories about companies that recently switched from traditional operations to smarter, data-driven operations? If so, I’d love to hear from you.

Disclaimer: Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer  and on our observations and benchmarks.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warranty or representation of success, either express or implied, is made.


Informatica World 2014: Data at the center of everything

Data is transforming our world. We all know this as data experts. But to realize the full transformative potential of information, we each have to push ourselves beyond our comfort zone. We have to think about what this transformation means for us three months from now, three years from now, even three decades from now. This is why I’m so excited about the Informatica World 2014 conference. In particular, the keynotes will be amazing. Some will inspire you. A couple may shock you. Most will arm you. All will enlighten you.

As always, the lineup includes Informatica executives Sohaib Abbasi (CEO), Ivan Chong (Chief Strategy Officer), Marge Breya (Chief Marketing Officer) and Anil Chakravarthy (Chief Product Officer). They will lay out Informatica’s vision for this new data-centric world, and explain the coming innovations that will take the concept of a data platform to an entirely new level.

And building on the resoundingly positive response to Rick Smolan’s keynote last year on “The Human Face of Big Data”, the Informatica World organizers have put together a stellar array of thinkers designed to push the boundaries of how you think about the convergence of data and technology with humanity.

  1. Will humans and machines merge? Inventor and thinker Ray Kurzweil will lay out his provocative thesis that, by the 2030s, nanobots will travel through the bloodstream and enter our brains noninvasively, enabling us to put our neocortexes on the cloud, where we will access nonbiological extensions to our thinking. We will thereby become a hybrid of biological and nonbiological thinking.
  2. Is data a science or an art? Jer Thorp is a data artist (move aside, data scientists), whose work focuses on adding narrative meaning to huge amounts of data. He will show how cutting edge visualization techniques can be used to tell stories, and make data more human.
  3. How do we use data for good? Drew Conway is an expert in applying computational methods to social and behavioral problems and a co-founder of DataKind. He will push us all to think about how we can use and analyze data not merely to increase efficiency and profits, but to serve society and “do good.”

I can’t wait to hear these speakers, and I hope you will join us in Las Vegas May 12-15 to learn a bunch, have fun, and potentially transform how you think about data and humanity.


Would YOU Buy a Ford Pinto Just To Get the Fuzzy Dice?

Today, I am going to take a stab at rationalizing why one could even consider solving a problem with a solution that is well known to be sub-par. Consider the Ford Pinto: Would you choose this car for your personal, land-based transportation simply because of the new fuzzy dice in the window? For my European readers, replace the Pinto with the infamous Trabant and you get my meaning. The fact is, both of these vehicles made the list of the “worst cars ever built” due to their mediocre design, environmental hazards or plain poor safety records.

What is a Pinto-like buying decision in information technology procurement? (source: msn autos)

Rational people would never choose a vehicle this way. So I always ask myself, “How can IT organizations rationalize buying product X just because product Y is thrown in for free?” Consider the case in which an organization chooses their CRM or BPM system simply because the vendor throws in an MDM or Data Quality solution for free: Can this be done with a straight face? You often hear vendors claim that “everything in our house is pre-integrated,” “plug & play” or “we have accelerators for this.” I would hope that IT procurement officers have come to understand that these phrases don’t close a deal in a cloud-based environment – and even less so in an on-premise one, which can never achieve this Nirvana unless it is customized based on client requirements.

Anyone can see the logic in getting “2 for the price of 1.” However, as IT procurement organizations seek to save a percentage on every deal, they can’t lose sight of this key fact:

Standing up software (configuring, customizing, maintaining) and operating it over several years requires CLOSE inspection and scrutiny.

Like a Ford Pinto, software cannot just be driven off the lot without a care, leaving you only to worry about changing the oil and filters at recommended intervals. Customization, operational risk and maintenance are a significant cost, as all my seasoned padawans will know. If Pinto buyers had understood the Total Cost of Ownership before they made their purchase, they would have opted for Toyotas instead. Here is the bottom line:

If less than 10% of the overall requirements are solved by the free component
AND (and this is a big AND)
if less than 12% of the overall financial value is provided by the free component,
then it makes ZERO sense to select a solution based on freebie add-ons.
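Expressed as a tiny decision helper – just a sketch of my rule of thumb, with the two thresholds as parameters you are welcome to argue with:

```python
def freebie_should_drive_decision(requirements_covered_pct, financial_value_pct,
                                  req_threshold=10.0, value_threshold=12.0):
    """Encode the '10% requirements / 12% value' rule of thumb.

    If the free component solves less than req_threshold percent of the overall
    requirements AND provides less than value_threshold percent of the overall
    financial value, it should not drive the purchasing decision.
    """
    too_small = (requirements_covered_pct < req_threshold
                 and financial_value_pct < value_threshold)
    return not too_small

# Example: a freebie covering 8% of requirements and 5% of the value.
print(freebie_should_drive_decision(8, 5))   # False -> ignore the freebie
```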

When an add-on component is of significantly lower-quality than industry leading solutions, it becomes even more illogical to rely on it simply because it’s “free.” If analysts have affirmed that the leading solutions have stronger capabilities, flexibility and scalability, what does an IT department truly “save” by choosing an inferior “free” add-on?

So just why DO procurement officers gravitate toward “free” add-ons, rather than high quality solutions? As a former procurement manager, I remember the motivations perfectly. Procurement teams are often measured by, and rewarded for, the savings they achieve. Because their motivation is near-term savings, long term quality issues are not the primary decision driver. And, if IT fails to successfully communicate the risks, cost drivers and potential failure rates to Procurement, the motivation to save up-front money will win every time.

Both sellers and buyers need to avoid these dances of self-deception, the “Pre-Integration Tango” and the “Freebie Cha-Cha”.  No matter how much you loved driving that Pinto or Trabant off the dealer lot, your opinion changed after you drove it for 50,000 miles.

I’ve been in procurement. I’ve built, sold and implemented “accelerators” and “blueprints.” In my opinion, 2-for-1 is usually a bad idea in software procurement. The best software is designed to make 1+1=3. I would love to hear from you if you agree with my above “10% requirements/12% value” rule-of-thumb.  If not, let me know what your decision logic would be.


Are you Ready for the Massive Wave of Data?

Leo Eweani makes the case that the data tsunami is coming.  “Businesses are scrambling to respond and spending accordingly. Demand for data analysts is up by 92%; 25% of IT budgets are spent on the data integration projects required to access the value locked up in this data “ore” – it certainly seems that enterprise is doing The Right Thing – but is it?”

Data is exploding within most enterprises.  However, most enterprises have no clue how to manage this data effectively.  While you would think that an investment in data integration would be an area of focus, many enterprises don’t have a great track record in making data integration work.  “Scratch the surface, and it emerges that 83% of IT staff expect there to be no ROI at all on data integration projects and that they are notorious for being late, over-budget and incredibly risky.”


The core message from me is that enterprises need to ‘up their game’ when it comes to data integration.  This recommendation is based upon the amount of data growth we’ve already experienced, and will experience in the near future.  Indeed, a “data tsunami” is on the horizon, and most enterprises are ill prepared for it.

So, how do you get prepared? While many would say it’s all about buying anything and everything when it comes to big data technology, the best approach is to splurge on planning. This means defining exactly what data assets are in place now and will be in place in the future, and how they should or will be leveraged.

To face the forthcoming wave of data, certain planning aspects and questions about data integration rise to the top:

  • Performance, including data latency: How quickly does the data need to flow from point or points A to point or points B? As the volume of data quickly rises, the data integration engines have got to keep up.
  • Data security and governance: How will the data be protected both at rest and in flight, and how will the data be managed in terms of controls on use and change?
  • Abstraction, and removing data complexity: How will the enterprise remap and re-purpose key enterprise data that may not currently exist in a well-defined and functional structure?
  • Integration with cloud-based data: How will the enterprise link existing enterprise data assets with those that exist on remote cloud platforms?

While this may seem like a complex and risky process, think through the problems, leverage the right technology, and you can remove the risk and complexity.  The enterprises that seem to fail at data integration do not follow that advice.

I suspect the explosion of data will be the biggest challenge enterprise IT faces in many years. While a few will take advantage of their data, most will struggle, at least initially. Which route will you take?


INFAgraphic: Transforming Healthcare by Putting Information to Work

[Infographic: Transforming Healthcare by Putting Information to Work]
