Given a list of data domains that are critical to the successful operation of a set of business processes, we start to get a picture of the interdependence of many applications on the same conceptual data. In our last discussion, we came to the conclusion that a top-down consideration of the value of quality data to specific activities would result in a list of dependent data domains for each activity.
We can collate these lists and map them in the opposite direction: provide a list of business activities that rely on each data domain. In turn, this will start to highlight which data domains (aside from the standard “customer” and “product” domains that we all know and love) have what you might call “high visibility” in the organization. These highly visible data domains are candidate master data sets, although additional characterization might provide greater insight into the utility, value, and feasibility of creating those master data sets, such as the number of applications that create, read, or modify the data domain, the frequency of creates and modifications, or the frequency of reads. But one particularly interesting attribute is the one our evaluation process already yields: the detailed valuation of the data with respect to the original business activities.
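As a minimal sketch of that inversion, assuming the top-down assessment has produced a mapping from each activity to its dependent data domains (the activity names and domain lists below are illustrative, not from any real assessment):

```python
from collections import defaultdict

# Hypothetical output of the top-down assessment: each business
# activity mapped to the data domains it depends on.
activity_domains = {
    "paint offices": ["facility", "office", "employee", "materials"],
    "schedule maintenance": ["facility", "employee", "vendor"],
    "order supplies": ["materials", "vendor", "product"],
}

# Invert the mapping: for each data domain, list the activities
# that rely on it.
domain_activities = defaultdict(list)
for activity, domains in activity_domains.items():
    for domain in domains:
        domain_activities[domain].append(activity)

# Sort domains by how many activities depend on them; the
# "high visibility" candidates surface at the top.
ranked = sorted(domain_activities.items(),
                key=lambda kv: len(kv[1]), reverse=True)
for domain, activities in ranked:
    print(f"{domain}: {len(activities)} dependent activities")
```

The same inverted index can then be enriched with the create/read/modify counts and frequencies mentioned above.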
To recall the example we have used regarding the single activity of painting offices and its dependence on facility, office, employee, and materials data sets (among others), we can look at the value gap associated with variance, duplication, or inconsistency of data. Tallying both the objective numbers (such as the number of dependent applications) along with the subjective numbers (cumulative value at risk totaled across the dependent applications) will highlight those candidate data sets whose “mastering” would have the greatest positive impact.
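One way to combine those two kinds of numbers is a simple weighted score per domain; the figures, field names, and weights below are entirely hypothetical, just to show the tallying mechanics:

```python
# Hypothetical per-domain figures: an objective count (number of
# dependent applications) and a subjective estimate (cumulative
# value at risk across those applications, in dollars).
candidates = {
    "facility":  {"dependent_apps": 12, "value_at_risk": 250_000},
    "materials": {"dependent_apps": 7,  "value_at_risk": 400_000},
    "office":    {"dependent_apps": 3,  "value_at_risk": 40_000},
}

def mastering_score(metrics, weight_apps=1.0, weight_risk=1.0):
    """Combine normalized objective and subjective numbers into one
    ranking score; the equal weights are illustrative, not prescribed."""
    max_apps = max(m["dependent_apps"] for m in candidates.values())
    max_risk = max(m["value_at_risk"] for m in candidates.values())
    return (weight_apps * metrics["dependent_apps"] / max_apps
            + weight_risk * metrics["value_at_risk"] / max_risk)

# Domains whose "mastering" promises the greatest impact come first.
ranking = sorted(candidates,
                 key=lambda d: mastering_score(candidates[d]),
                 reverse=True)
print(ranking)
```

Normalizing each metric against its maximum keeps a large dollar figure from drowning out the application counts; in practice the weights would reflect how much the organization trusts the subjective estimates.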
I am looking to refine this approach over the next few months – any comments or suggestions would be welcome!