An Ounce of Prevention (Or a Pound) is Worthless, and Why It’s Good to Brush : The Practitioner's Guide to Data Quality Improvement


September 1, 2011 by
Filed under: Data Governance, Data Quality 

It was nice to see that Jim Harris referred to my earlier post questioning the experts' pronouncements about the costs of poor data quality, and it triggered yet another thought about how the value of data quality improvement is perceived when it is framed as a set of prevention processes. The main idea is this: from the perspective of the individual paying for prevention, those processes are seen *only* as a cost, not a value.

Let’s look at it this way:

If a problem occurs and has an impact, then I can see the cost, calculated in terms of that impact.

If a problem does not occur, it has no impact, and therefore there is no cost.

Get it? Because the problem did not occur, there were no impacts, and therefore the only cost was the cost of prevention. If you are aware of the specific problems that have been caught through your prevention processes, you will see the value. But if you are completely unaware, as most people are, then as far as you are concerned you are paying for a service that you are not sure you really need.

And this actually exposes yet another fallacy in the traditional data quality argument: the claim of significant value from resources *not spent* on fixing a problem. As one expert put it (I am paraphrasing), "each error costs the company $50, so every error not made saves the company $50, and this quickly adds up to millions."

That is actually not true. An error not made does not incur the additional cost, but it does not save money either, because the money only needs to be spent when the error occurs, when someone notices it, and when someone cares about it. Those expenditures are not expected, nor are they budgeted, and in many cases the average cost per error or per flawed record is hard to quantify anyway (especially when data is repurposed multiple times). Believing that preventing errors will add millions of dollars to the corporate bottom line is a bit disingenuous, and suggesting it is probably a questionable practice.
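To make the fallacy concrete, here is a minimal back-of-the-envelope sketch. All the numbers are hypothetical (the error count, the probabilities of an error being noticed and cared about); only the $50-per-error figure echoes the paraphrased expert claim. The point is that "savings" can only come from remediation spend that would actually have happened:

```python
# Hypothetical inputs for illustration -- none of these figures come
# from real data; only the $50/error number echoes the quoted claim.
ERRORS_PREVENTED = 100_000   # errors the prevention process would catch
COST_PER_ERROR = 50.0        # claimed remediation cost per error ($)
P_NOTICED = 0.20             # assumed fraction of errors anyone would notice
P_CARED_ABOUT = 0.50         # assumed fraction of noticed errors acted upon

# Naive claim: every prevented error counted as a full $50 saving.
naive_savings = ERRORS_PREVENTED * COST_PER_ERROR

# Money actually *not spent*: only errors that would have occurred,
# been noticed, AND been cared about would have triggered remediation.
realized_savings = (ERRORS_PREVENTED * P_NOTICED * P_CARED_ABOUT
                    * COST_PER_ERROR)

print(f"Naive claimed savings: ${naive_savings:,.0f}")     # $5,000,000
print(f"Realized at most:      ${realized_savings:,.0f}")  # $500,000
```

Under these (invented) assumptions the "millions saved" claim overstates the avoided spend by an order of magnitude, and the gap only grows once you account for errors whose cost was never budgeted or quantifiable in the first place.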

I would prefer to use a communications model for prevention like the one shown here. Knowing that some preventive action would mitigate this long-term effect is quite motivational. Could the same approach be used for data quality?

