Many approaches exist for extracting alphanumeric data from images; however, few address engraved text, and none, to the author's knowledge, address cemetery headstones. Extracting the textual information found on headstones is important because the historical information they carry is, in most cases, authoritative and unique to that stone. Multiple groups continue to expend great effort indexing such data through manual transcription. Although the engraved characters are often set in a common font (allowing accurate recognition by an OCR engine), cemetery headstones present recognition challenges that typical scene-text images (street signs, billboards, etc.) do not: engraved or embossed characters (which cast inner-character shadows), low contrast with the background, and significant noise due to weathering. We therefore propose a system in which the noisy background is removed using gradient orientation histograms, an ensemble of neural networks, and an automated graph-cut implementation, resulting in a clean image of the headstone's textual information.
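The gradient-orientation-histogram step mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper name, bin count, and normalization are assumptions, and a real pipeline would compute such histograms per image region before feeding them to the neural-network ensemble.

```python
import numpy as np

def gradient_orientation_histogram(img, bins=9):
    """Hypothetical helper: histogram of unsigned gradient orientations,
    weighted by gradient magnitude, for a 2-D grayscale array."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Fold orientations into [0, pi) since engraving edges are unsigned.
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    # Normalize so regions can be compared independently of contrast.
    total = hist.sum()
    return hist / total if total > 0 else hist

# Toy usage: a horizontal intensity ramp has gradients at orientation 0,
# so all the histogram mass lands in the first bin.
img = np.tile(np.arange(16.0), (16, 1))
h = gradient_orientation_histogram(img)
```

A histogram dominated by a few orientation bins suggests structured content such as lettering, whereas weathered stone texture spreads mass across all bins; that contrast is what makes the feature useful for separating text from background.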