I’m sure it’s no surprise to anyone that there is much talk in the industry today regarding “data” and the management or control of it. To that end, commonly used terms such as Master Data Management (MDM) and Data Governance are sometimes used interchangeably and other times carry wildly different definitions and applications. Whether the industry should standardize on common terms and definitions is another subject altogether – and one that won’t be resolved any time soon. But regardless of what it’s called, the enterprise’s desire to better manage and control data is a hot topic, and deservedly so. So where does that leave Data Quality?
While not typically discussed in detail, it is often implied, and almost always inferred, that the successful deployment of MDM or Data Governance will achieve desired levels of data quality or data integrity. That’s just not the case. The arguments or analogies offered usually revolve around the concept of prevention. As that concept relates to the creation and maintenance of master data in the post-MDM environment, it is fundamentally sound. However, the new tools, processes, roles and responsibilities developed and implemented to better manage and control data apply primarily to new data. What about the significant volume of master data that existed in the legacy environment prior to the deployment of MDM? At the time of MDM go-live, this legacy-created master data comprises the vast majority, if not all, of the relevant master data – it drives the enterprise. To fully realize the MDM value proposition, there also needs to be a comprehensive undertaking to bring legacy-created master data up to acceptable quality levels. In my experience, deploying data management without adequately addressing legacy data readiness is where the full promise of data management falls short.