Scoping Failure Analysis

In adapting the six-sigma technique of failure mode and effects analysis (FMEA) for data quality management, we hope to proactively identify the potential errors that lead to the most severe business impacts and then strengthen the processes and applications to prevent those errors from being introduced in the first place. In my last post, though, I noted that this analysis starts with the errors and then figures out the impacts. I think we should go the other way: working backward from the impacts optimizes the effort and reduces the analysis time by focusing on the most important potential failures.

First, I suggest going back to the articles I worked on with Informatica at the beginning of the year as a way of grounding and level-setting.

These papers look at the potential business impacts associated with data quality issues and help in evaluating potential severity. Next, I would revisit the failure mode analysis (which we have already adjusted to accommodate data errors) and this time change the sequence of the process (a short sketch of the reversed sequence appears after the list):

  • Identify the most severe impacts that could be attributable to potential data errors,
  • Categorize and quantify the impacts of each specific potential data error,
  • Speculate on the potential frequency of error occurrence,
  • Consider the potential causes of the data errors,
  • Look at each stage of the process and determine if any of the potential errors can be introduced at that point,
  • Propose ways to prevent the errors from being introduced in the first place at each processing stage,
  • Specify methods to determine if and when the error has occurred, and
  • Document who is responsible for addressing the error (or reducing the impacts) if it is ever introduced.
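To make the reversed sequence concrete, here is a minimal sketch in Python. The record structure, the example errors, and the baseline value of 7 are my own illustration, not anything from the papers; it assumes the conventional FMEA 1–10 scales for severity, occurrence, and detection, and ranks risk by the standard risk priority number (severity × occurrence × detection).

```python
# A minimal sketch of the impact-first scoping described above; the names,
# example records, and the severity baseline of 7 are illustrative only.
# Severity, occurrence, and detection use the conventional FMEA 1-10 scales,
# and risk is ranked by the standard risk priority number (RPN).
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str   # the potential data error
    severity: int      # business impact: 1 (negligible) to 10 (critical)
    occurrence: int    # how often the error is likely to occur: 1 to 10
    detection: int     # 1 (almost always caught) to 10 (rarely caught)

def scope_analysis(modes, severity_baseline):
    """Filter out failure modes below the severity baseline, then rank
    what remains by RPN = severity x occurrence x detection."""
    in_scope = [m for m in modes if m.severity >= severity_baseline]
    return sorted(
        in_scope,
        key=lambda m: m.severity * m.occurrence * m.detection,
        reverse=True,
    )

candidates = [
    FailureMode("invalid customer shipping address", 8, 6, 4),
    FailureMode("misspelled free-text product note", 2, 9, 2),
    FailureMode("duplicate billing record", 9, 3, 5),
]

# Only material impacts (severity >= 7) survive the scoping pass, so the
# low-impact misspelling is never analyzed further.
for mode in scope_analysis(candidates, severity_baseline=7):
    print(mode.description, mode.severity * mode.occurrence * mode.detection)
```

The point of the reversed ordering is visible in the code: the severity baseline is applied before any ranking or deeper analysis, so no effort is ever spent on the errors with minimal business impact.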

When I say “most severe,” I really mean setting a baseline level of severity to be used as the benchmark for evaluation. And by changing the sequence, I am suggesting that we can limit the scope of the analysis to the potential errors that can lead to material impacts, without spending too much time trying to prevent errors that have minimal business impact.

And this should be particularly appealing to my prognosticator friends: by working back from the end-user community and assessing their perception of data error failure modes, you are still being proactive in error prevention, yet you are scoping the world of errors down to those that people actually care about.
