As Bay Area commutes go, I consider mine to be on the long side. At roughly 60 miles each way, I can expect to be in my car for a while, depending on what time of day I make the journey to or from the office. As is often the case, I use that time to think about work and the various items that consume my inbox on any given day.
During a drive home late last week, I was pondering how to articulate the ROI of good data quality. As commutes sometimes go, this day was particularly frustrating, so I had extra time on my hands to think things through. As I sat at a standstill for what seemed like an eternity, it dawned on me that the carpool lane was almost completely empty. After briefly weighing the steep fine for jumping into that empty lane, I realized that the flow of traffic at that moment was a good example of how data often flows through an organization.
Thinking about the concentration of vehicles on the road that day, I would estimate that only about 30% of them were using the most efficient route: the carpool lane. That means the other 70% of us suffered through treacherous road conditions on the way to our destinations.
Consider for a minute how this might impact the financial performance of any given organization. Exploring my analogy a little further, let's assume the 70% of vehicles commuting inefficiently are traveling at an average speed of 25 miles per hour, while those in the carpool lane are zipping along at a much more favorable average of 60 miles per hour. Using my commute as the distance, it would take someone like me (in the "slow lane", recall) about 2.4 hours, or roughly 2 hours and 24 minutes, to get from point A to point B. The "fast lane" people, on the other hand, would take only an hour to cover the same distance, a significant improvement over us slow movers. Sadly, though, only a small fraction of commuters are enjoying that upside.
Now let's think of that same scenario in the context of data quality. Using the same numbers, we'll assume that 30% of the data in an organization is trusted and authoritative. This data requires, for all intents and purposes, little to no intervention to serve it up to the business, and it flows smoothly through key business processes. The remaining 70%, on the other hand, cannot be trusted; it stalls and gets hung up in the organization, resulting in lost opportunity for the business. If the data is specific to customers, say, then your customer outreach efforts will be on target only 30% of the time, while the remaining 70% will be a complete miss. If you're spending $100 on an up-sell or cross-sell campaign, then only $30 of it is yielding any meaningful result. It's easy to see that, for a significantly large customer base, there's a lot to be gained by making sure the quality of your data meets your expectations.
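To make that math concrete, here's a minimal back-of-the-envelope sketch in Python. The budget figure, the 30% trusted-data rate, and the campaign_yield helper are all hypothetical, simply restating the numbers from the analogy above:

```python
# Illustrative calculation: how much of a campaign budget actually lands
# when only a fraction of the underlying customer data is trusted.
# All figures are the hypothetical ones from the analogy above.

def campaign_yield(budget: float, trusted_data_rate: float) -> tuple[float, float]:
    """Split a campaign budget into effective and wasted spend, assuming
    outreach only lands when the underlying customer record is accurate."""
    effective = budget * trusted_data_rate
    wasted = budget - effective
    return effective, wasted

budget = 100.00       # total up-sell/cross-sell campaign spend
trusted_rate = 0.30   # share of customer records that are trusted

effective, wasted = campaign_yield(budget, trusted_rate)
print(f"Effective spend: ${effective:.2f}")  # -> Effective spend: $30.00
print(f"Wasted spend:    ${wasted:.2f}")     # -> Wasted spend:    $70.00
```

Scale those hypothetical numbers up to a real campaign budget and the lost opportunity becomes hard to ignore.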
Strangely enough, more often than not this same phenomenon shows up in the state of data quality at the companies I talk to. Organizations of all shapes and sizes have data quality issues on a similar scale. In many cases they have no idea that these issues could be hindering their ability to drive efficiency or stay competitive. In other cases there's a general acceptance of poor data quality, and the organization forges ahead with a "this is how we do things" mentality.
To help break down this barrier and change conventional thinking, it's often helpful to explore further the financial impact poor-quality data can have on an organization. To this end, we've recently partnered with Knowledge Integrity, Inc. on a document aimed at helping you understand and build a business case for solving data quality issues that carry financial impact. Armed with this information, we trust you'll be able to accelerate your data quality efforts and live life in the fast lane.