The True Cost of Bad Data
In the world of document capture and analytics, our typical value proposition centers on efficiency, reduced headcount requirements, and faster turnaround time. There is real value and cost savings in those points for any organization processing a significant volume of documents. Lately, though, we have been having some great conversations, both internally and externally, about the true cost of errors in data entry, and I wanted to dig into my past and present a key topic for discussion.
Back in my Navy days, I found myself at the center of a focus on quality, and we had morphed Deming’s Total Quality Management (TQM) into a flavor that served us well. In a nutshell, it was an effort to increase quality through systematic analysis of process and a continuous improvement cycle. It focused on reducing “defects” in a process, with the ultimate goal of eliminating them altogether. Defects impose a high cost on an organization and can lead to failures across the board. Today, all of these concepts can be applied to the processing of documents and their associated data. What is the true value of preventing defects in data?
In my education in this topic, I remember a core concept on quality, and defects: the 1-10-100 rule.
The rule gives us a simple picture of the escalating cost of errors (or failures): prevention costs $1, correction $10, and failure $100. So, in terms of data:
- Prevention Cost – Preventing an error in data at the point of extraction will cost you $1.
- Correction Cost – Having someone correct an error post extraction will cost you $10.
- Failure Cost – Letting bad data run through a process to its end resting place will cost you $100.
So, an ounce of prevention is worth a pound of cure. In this case, lacking the technology to prevent data errors at the source can cost the business 100x the cost of acquiring an automated technology that catches those errors up front.
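The arithmetic behind the 1-10-100 rule is easy to make concrete. The sketch below uses hypothetical figures (100,000 documents and a 2% error rate are illustrative assumptions, not numbers from any real deployment) and applies the rule's per-error cost tiers:

```python
# Back-of-the-envelope model of the 1-10-100 rule.
# All figures are hypothetical: 100,000 documents, a 2% error rate,
# and the rule's $1 / $10 / $100 per-error cost tiers.

PREVENTION_COST = 1    # error caught at the point of extraction
CORRECTION_COST = 10   # error fixed downstream, post extraction
FAILURE_COST = 100     # bad data reaches its final resting place

def stage_cost(error_count: int, cost_per_error: int) -> int:
    """Total cost of handling error_count errors at a given stage."""
    return error_count * cost_per_error

documents = 100_000
errors = int(documents * 0.02)  # 2% error rate -> 2,000 errors

print(stage_cost(errors, PREVENTION_COST))  # 2000   ($2K if prevented)
print(stage_cost(errors, CORRECTION_COST))  # 20000  ($20K if corrected)
print(stage_cost(errors, FAILURE_COST))     # 200000 ($200K if they fail)
```

Even at a modest error rate, the gap between catching errors at extraction time and letting them flow downstream is two orders of magnitude.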
In document capture today, we focus on the first tier of the rule: prevention. Below are the core benefits of an intelligent capture platform:
- Errors in data can be prevented through the automated extraction of data, setting of business rules, and the elimination of hand-keying data.
- Existing data sources in the organization can be used to enhance prevention and ensure data validation through the use of Fuzzy DB technology.
- Adding review and validation capabilities prevents bad data from slipping through the process and contaminating your data store. This is invaluable, and it stops the ripple effect that bad information can have on a process and on the organization as a whole.
- With machine learning technology, if correction is required, the system learns, and can prevent future corrections, reducing costs.
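To illustrate the validation idea, here is a minimal sketch of fuzzy database lookup, using Python's standard-library `difflib` rather than any vendor's actual implementation: an extracted field is compared against a master list, a confident match is accepted automatically (prevention, the $1 tier), and a weak match is routed to human review (correction, the $10 tier) instead of flowing downstream. The vendor names and the 0.85 threshold are arbitrary assumptions for the example.

```python
# Hypothetical fuzzy validation of an extracted field against a
# master list, e.g. vendor names from an ERP system. Not any specific
# product's implementation; threshold and data are illustrative.
import difflib

MASTER_VENDORS = ["Acme Corporation", "Globex Inc", "Initech LLC"]

def validate_vendor(extracted: str, threshold: float = 0.85):
    """Return (validated_value, needs_review) for an extracted field."""
    best = difflib.get_close_matches(extracted, MASTER_VENDORS,
                                     n=1, cutoff=0.0)[0]
    score = difflib.SequenceMatcher(None, extracted, best).ratio()
    if score >= threshold:
        return best, False   # confident match: error prevented at $1
    return extracted, True   # weak match: route to review at $10

# An OCR slip is auto-corrected against the master list:
print(validate_vendor("Acme Corporatian"))  # ('Acme Corporation', False)
# An unrecognized value is flagged rather than passed through:
print(validate_vendor("Unknown Vendor"))    # needs_review is True
```

The design point is that the lookup never silently accepts low-confidence data: anything below the threshold is escalated to a person, which is exactly the review-and-validation safety net described above.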
See more features for ensuring high-quality document processing and data extraction here: Ephesoft Document Capture and Data Extraction.
Just some thoughts…more to come on this topic.