Jim Harris and Phil Simon have put together a nice radio series on data management and data quality, definitely worth a listen. Check it out!
I made up that word, and it stands for “funding conundrum.” I actually started thinking about this after my other recent post on data quality and dieting. It is the problem of establishing continuity in good data quality practices long after the original issues have been resolved.
Let’s say that our company is directly impacted by a number of data quality issues where errors lead to unexpected or undesirable results. After quantifying the negative impact and doing a cost/benefit analysis for an investment in data quality consultation, processes, and tools, we are able to determine that there is value in finding and fixing the failed processes that introduce errors into the environment. We are good to go…
A few weeks back I taught a course on data quality at the Data Warehousing Institute World Conference in Las Vegas, and we were reviewing aspects of the life cycle of a data quality program. I commented that there was a similarity to dieting. When a person wants to lose weight, they are initially motivated and will invest a great deal of enthusiasm in activities designed to shed the weight. These efforts provide reasonable benefits at the front end, when simple activities can have dramatic impacts (such as losing a lot of weight during the first month). Data quality can often be the same…
Last week I attended the Data Warehousing Institute’s World Conference in Las Vegas, teaching a class on Practical Data Quality Management. As part of the discussion on critical data elements, I suggested some alternatives for qualifying a data element as “critical,” including “presence on a published report.” That led to an interesting notion that often does not occur to data analysts but does require some attention: monitoring the quality of non-persistent data.
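As a sketch of what monitoring non-persistent data might look like, consider a validity rule applied to a derived report value that is computed in-flight and never stored. The function names, the reconciliation rule, and the figures below are hypothetical illustrations, not anything from the class material.

```python
# Hypothetical sketch: validate a derived, non-persistent report value
# (regional sales totals) before the report is published. The totals
# exist only in memory during report generation, yet their quality
# can still be monitored with a validity rule.

def regional_totals(orders):
    """Aggregate order amounts by region (the non-persistent values)."""
    totals = {}
    for region, amount in orders:
        totals[region] = totals.get(region, 0.0) + amount
    return totals

def check_totals(totals, grand_total, tolerance=0.01):
    """Validity rule: regional totals must reconcile with the grand total."""
    return abs(sum(totals.values()) - grand_total) <= tolerance

# Illustrative data only.
orders = [("East", 120.0), ("West", 80.0), ("East", 50.0)]
totals = regional_totals(orders)
print(check_totals(totals, 250.0))  # True: the report reconciles
```

The point of the sketch is that the check runs at the moment the value exists, since there is no stored copy to audit later.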
I have just finished a paper sponsored by Informatica titled “Understanding the Financial Value of Data Quality Improvement,” which looks at bridging the communications gap between technologists and business people regarding the value of data quality improvements. Here is a summary:
Often the biggest challenge in beginning a data quality program is not the technical work of data validation and cleansing but effectively communicating the business value of data quality improvement. Using a well-defined process for considering the different types of costs and risks of low-quality data not only provides a framework for putting data quality expectations into a business context, it also enables the definition of clear metrics linking data quality to business performance. For example, it is easy to speculate that data errors impede up-selling and cross-selling, but to truly justify a data quality improvement effort, a more comprehensive quantification of the number of sales impacted, or of the total dollar amount of the missed opportunity, is much more effective at showing the value gap.
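That quantification step can be sketched as a simple calculation. All of the names and figures here (offer counts, error rate, conversion rate, average order value) are hypothetical illustrations, not numbers from the paper.

```python
# Hypothetical sketch of quantifying the "value gap": revenue lost when
# data errors block up-sell/cross-sell offers from being made or delivered.

def missed_opportunity_value(offers_made, error_rate,
                             conversion_rate, avg_order_value):
    """Estimate revenue lost to offers blocked by bad data.

    offers_made     : total up-sell/cross-sell offers attempted
    error_rate      : fraction of offers that fail due to data errors
    conversion_rate : fraction of clean offers that would have converted
    avg_order_value : average revenue per converted offer
    """
    blocked_offers = offers_made * error_rate
    return blocked_offers * conversion_rate * avg_order_value

# Illustrative figures: 100,000 offers, 4% blocked by bad addresses or
# duplicate records, 10% conversion, $250 average order.
loss = missed_opportunity_value(100_000, 0.04, 0.10, 250.0)
print(f"Estimated missed revenue: ${loss:,.0f}")  # $100,000
```

A single dollar figure like this is exactly the kind of metric that links data quality to business performance in terms a business sponsor can act on.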
The paper looks at different classifications of financial impacts and corresponding performance measures that enable a process for evaluating the relationship between acceptable performance and quality information. It is targeted at analysts looking to connect high-quality information with optimal business performance in order to make a quantifiable case for data quality improvement.