Last time we looked at the failure mode and effects analysis technique from the Six Sigma community and adjusted it slightly to be data-centric, so that it can be used to anticipate the different types of data errors that could occur and to adjust application design to prevent those errors in the first place. This approach is truly proactive: you consider the many different types of errors that could be introduced and then shore up the process in anticipation of their occurrence.
I think this is a great approach, except for one potential barrier to success. It presumes that one can examine all the stages in a process, speculate about the many different ways that data errors could be introduced, and then think through the impacts and severity. In other words, it assumes you have unlimited time and resources to sit around considering all the different failure opportunities. That is the myth I refer to in the title of this post. We don’t have unlimited time to consider every way a process can fail; we are bound to have limits on this speculation, and if it were up to me, I would want to identify the most critical failure or error situations and ensure that we address them proactively.
I would adjust the failure mode analysis approach to introduce a method for prioritizing potential data errors. My recommendation: instead of enumerating all the potential error types first and then calculating their impacts, first identify the types of errors that would lead to severe impacts, and then scan the process for the stages at which those errors can be introduced.
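The severity-first prioritization described above can be sketched in a few lines of code. This is a minimal illustration, not the post's actual method: the error types, severity scores, process stages, and the threshold value are all hypothetical, invented here for the example.

```python
# Candidate data error types with an assessed severity (1 = minor, 10 = critical).
# All names and scores are illustrative assumptions.
error_severities = {
    "missing customer identifier": 9,
    "transposed digits in account number": 8,
    "inconsistent date format": 4,
    "trailing whitespace in names": 2,
}

# Which process stages could introduce each error type (also illustrative).
introduction_points = {
    "missing customer identifier": ["manual entry", "system migration"],
    "transposed digits in account number": ["manual entry"],
    "inconsistent date format": ["file import", "system migration"],
    "trailing whitespace in names": ["manual entry", "file import"],
}

SEVERITY_THRESHOLD = 7  # focus only on errors at or above this severity

def prioritize(severities, points, threshold):
    """Return (error, severity, stages) tuples for critical errors, worst first."""
    critical = [
        (error, sev, points.get(error, []))
        for error, sev in severities.items()
        if sev >= threshold
    ]
    return sorted(critical, key=lambda item: item[1], reverse=True)

for error, sev, stages in prioritize(error_severities,
                                     introduction_points,
                                     SEVERITY_THRESHOLD):
    print(f"severity {sev}: {error} -> check stages: {', '.join(stages)}")
```

The point of the sketch is the ordering of work: only the two high-severity errors survive the threshold, so the scan of process stages is limited to the places where those specific errors can enter.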