The Many Dimensions of Your Documents
I had dinner the other night with our CTO, and our conversation focused on why our technology is different and what our Document Analytics patent brings to the table. Having been in the document capture industry for a long while, I was familiar with most of the technological “advances”: automated extraction, pattern matching and classification through a variety of methods. I also knew that, with the acquisition craze, much of the technology was for all intents and purposes stagnant, with no true innovation in quite a while. Enter Document Analytics (DA).
First off, document analytics is part of Ephesoft’s core learning engine technology. So no matter your use case, the engine’s learning, multi-dimensional analysis and gleaned information are all available for use, whether through our end-user document capture application, platform APIs or our document analytics platform. So let’s see what DA truly means.
Traditional document capture applications take digital documents and, in the case of scanned images, convert the image to text through Optical Character Recognition (OCR). This raw text can then be examined, and information is usually extracted based on a simple location or a pattern match. This technique is great for simple data extraction, but with unstructured documents it can lead to erroneous data without additional information, and it is quite limiting if you want a deeper understanding of your documents and data.
With document analytics, the goal is to gather multiple dimensions on not only your documents, but also what lies within them. As you feed the learning system more documents, it learns continuously, and begins to understand and predict where key information is located. So let’s look at all the dimensions a true document analytics engine can gather. Hold on, we are going full geek here.
- Root-stemming – most technology in the market looks at individual words on a page, but true meaning comes from groups of words and the analysis of their roots. Take, for example, an analysis of mortgage documents. The term borrower becomes extremely important, but unfortunately, when extracting data or searching for core data, you may encounter multiple forms: borrower, name of borrower, borrowed by, borrowing party, etc. Being able to identify a core root, borrow, and being tolerant of variations becomes extremely important, as does the ability to assign a confidence level to identified groups.
- Relative Positions – gathering relative position information about words within a document can provide great insight. Continuing our borrower example, knowing that in the phrase “Borrowing Name” the word name follows borrowing gives us insight and helps in our quest for valuable data. Once again, this adds to our confidence that our data is being collected correctly.
- Typographical Characteristics – understanding the font, font size and other characteristics of words can help us understand document structure. For example, a fillable PDF form we download from our medical insurance company will have one font for all the anchors: Patient Name, SSN, Address, etc. When we fill the form out, we enter our information in another font, perhaps in all capitals. This minor difference can provide meaning and a better understanding of the structure of a document.
- Value Identity – when analyzing documents, knowing conventions in data can aid in dimensional analysis. Take, for example, the social security number standard: NNN-NN-NNNN. Knowing this pattern, and using other dimensions like position, can help us “learn” about documents. How? When we find this pattern on the page, we can look before it and above it to understand how it is identified. It might be prefaced by SSN, SSN:, Social:, SS Number, etc. Once we understand how SSNs are anchored, we can understand how other data may be anchored as well.
- Imprecision and Fuzziness – people are not perfect and neither is technology, so DA requires adaptation to “imperfect” information. Take a zip code that is entered or read as 9H010. We know this data was prefaced by “Zip Code”, and we know a zip code should be five numeric digits. We also know that an OCR engine can sometimes confuse a 4 and an H, depending on the font. Getting the drift? Taking all our dimensions into account, we can say: this value appeared after “Zip Code:”, it is five characters long, and 4 and H are sometimes interchangeable for this font type. Therefore we can say with 90% confidence that this is in fact a zip code that was misread or mis-entered.
- Value Quantization – in gathering data, we know that certain kinds of tokens are most likely to be interesting. Numbers (whole, 69010, or character-delimited, 123-45-6789), dates (01/12/2001) and so forth are likely to be data values that we need to extract or that will be required in our analysis. Taking this into account can improve our confidence and accuracy.
- Page Zones – in examining a document, certain areas, or zones, of a page usually contain important information. For instance, a contract will almost always list key participants in the top quarter of the first page, and an invoice will have its total amount in the bottom half. Using this area analysis can help us identify key information and add to our confidence in data extraction.
- Page Numbers – as a human, I know that much of the important information in the documents I access day to day sits on specific pages. Maybe a certain type of application has its key information on page 3. Understanding and identifying core pages with critical data provides added insight and aids in analytics.
- Fixed Value Location – one key to learning documents is to examine and define text blocks of interest. Once these page areas are defined, the system can better understand the layout and design of a document, and help predict where key data may be located.
This is just an overview of how we can make sense of unstructured information through advanced learning and analytics. If you want to go deeper, you can read the patent here: