
    A Convex Model for Edge-Histogram Specification with Applications to Edge-preserving Smoothing

    The goal of edge-histogram specification is to find an image whose edge image has a histogram matching a given edge-histogram as closely as possible. Mignotte proposed a non-convex model for the problem [M. Mignotte. An energy-based model for the image edge-histogram specification problem. IEEE Transactions on Image Processing, 21(1):379--386, 2012]. In his work, the edge magnitudes of an input image are first modified by histogram specification to match the given edge-histogram. A non-convex model is then minimized to find an output image whose edge-histogram matches the modified edge-histogram. The non-convexity of the model hinders computation and the inclusion of useful constraints such as the dynamic range constraint. In this paper, instead of considering edge magnitudes, we consider the image gradients directly and propose a convex model based on them. Furthermore, we include additional constraints in our model for different applications. The convexity of our model allows us to compute the output image efficiently using either the Alternating Direction Method of Multipliers or the Fast Iterative Shrinkage-Thresholding Algorithm. We consider several applications in edge-preserving smoothing, including image abstraction, edge extraction, detail exaggeration, and document scan-through removal. Numerical results illustrate that our method efficiently produces good results.
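    The preprocessing step described above, remapping gradient (edge) magnitudes by histogram specification so they match a target edge-histogram, can be sketched as follows. This is a minimal NumPy illustration of generic CDF-matching histogram specification, not the authors' convex model or Mignotte's energy-based model; the target histogram, bin grid, and all names below are illustrative.

```python
import numpy as np

def specify_histogram(values, target_hist, bin_edges):
    """Classic histogram specification: remap `values` so their
    distribution approximately matches `target_hist` over `bin_edges`."""
    src_sorted = np.sort(values.ravel())
    src_cdf = np.arange(1, src_sorted.size + 1) / src_sorted.size
    tgt_cdf = np.cumsum(target_hist) / np.sum(target_hist)
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    # each value -> its source quantile -> target value at that quantile
    quantiles = np.interp(values.ravel(), src_sorted, src_cdf)
    matched = np.interp(quantiles, tgt_cdf, centers)
    return matched.reshape(values.shape)

# Toy example: push exponentially distributed gradient magnitudes
# toward a narrow target histogram peaked at magnitude 2.0.
rng = np.random.default_rng(0)
grad_mag = rng.exponential(scale=1.0, size=(64, 64))
edges = np.linspace(0.0, 5.0, 33)
centers = 0.5 * (edges[:-1] + edges[1:])
target = np.exp(-0.5 * ((centers - 2.0) / 0.3) ** 2)
out = specify_histogram(grad_mag, target, edges)
```

    After specification, the output magnitudes concentrate near the target's peak; the convex reconstruction of an image whose gradients match these specified magnitudes is the part this sketch deliberately omits.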

    Cultural Heritage Destruction: Experiments with Parchment and Multispectral Imaging

    This chapter describes a highly collaborative project in digital humanities that drew on tools and expertise from a diverse range of disciplines: medical physics, image science, and conservation. We describe this collaboration through three examples: the use of phantoms taken from medical physics, a historically accurate model of parchment degradation, and a detailed description of the steps taken to run experiments and collect data within a manageable budget. Each example highlights how procedures from one discipline were adapted for the project through collaboration. Whilst conservation focuses on developing methods to best preserve cultural heritage documents, we describe an unusual collaboration between conservation and image science to document, through multispectral imaging, the deliberate damage of a manuscript. Multispectral imaging has been used to examine cultural heritage documents by providing information about their physical properties. However, current digitisation efforts concentrate on recording documents in their current state. In this project, we aimed to record the process of macroscopic document degradation using multispectral imaging, and the digital recovery of the writing using standard image processing methodologies. The project's success lay in combining knowledge of the processes of parchment deterioration with knowledge of the specific processes that occur when a document is imaged: this permitted us to construct a more successful and informed experiment. The knowledge acquired during the project allows us to address issues related to the recovery of information from damaged parchment documents, and to determine which research questions can be addressed, and through which imaging methodology.

    Image registration: Features and applications

    Ph.D. (Doctor of Philosophy) thesis.

    The value of critical destruction: Evaluating multispectral image processing methods for the analysis of primary historical texts

    Multispectral imaging – a method for acquiring image data over a series of wavelengths across the light spectrum – is becoming a valuable tool within the cultural heritage sector for the recovery and enhancement of information contained within primary historical texts. However, most applications of this technique to date have been bespoke, analysing particular documents of historic importance. Little prior work has evaluated the technique in a structured fashion or provided recommendations on how best to capture and process images of damaged and abraded textual material. This paper introduces a new approach for evaluating the efficacy of image processing algorithms in recovering information from multispectral images of deteriorated primary historical texts. We present a series of experiments that deliberately degrade samples cut from a real historical document, yielding a set of images acquired before and after damage. These images allow us to compare, objectively and quantitatively, the effectiveness of multispectral imaging and image processing for recovering information from damaged text. We develop a methodological framework for the continuing study of the techniques involved in the analysis and processing of multispectral images of primary historical texts, and a dataset that will be of use to others interested in advanced digitisation techniques within the cultural heritage sector.
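    The before/after design above is what makes objective scoring possible: a processed post-damage image can be compared numerically against the pre-damage reference. Below is a minimal sketch of one standard full-reference metric, PSNR; the paper's actual evaluation metrics are not specified in this abstract, and the images and names here are synthetic placeholders.

```python
import numpy as np

def psnr(reference, recovered, peak=255.0):
    """Peak signal-to-noise ratio (dB): higher means the recovered
    image is closer to the pre-damage reference."""
    diff = np.asarray(reference, float) - np.asarray(recovered, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy stand-ins for a pre-damage capture and a noisy "recovery".
rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(32, 32)).astype(float)
recovered = np.clip(reference + rng.normal(0.0, 10.0, size=(32, 32)), 0, 255)
score = psnr(reference, recovered)
```

    Scoring each processing algorithm's output against the same pre-damage reference gives the kind of quantitative comparison the paper's framework calls for.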

    A Multiple-Expert Binarization Framework for Multispectral Images

    In this work, a multiple-expert binarization framework for multispectral images is proposed. The framework is based on a constrained subspace selection limited to the spectral bands, combined with state-of-the-art gray-level binarization methods. The framework uses a binarization wrapper to enhance the performance of the gray-level binarization, and nonlinear preprocessing of the individual spectral bands to enhance the textual information. An evolutionary optimizer is used to obtain the optimal and some suboptimal 3-band subspaces, from which an ensemble of experts is then formed. The framework is applied to a ground-truth multispectral dataset with promising results. In addition, a generalization of the cross-validation approach is developed that not only evaluates the generalizability of the framework but also provides a practical instance of the selected experts that can then be applied to unseen inputs, despite the small size of the given ground-truth dataset.
    Comment: 12 pages, 8 figures, 6 tables. Presented at ICDAR'1
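    The pipeline this abstract describes, choosing a small subset of spectral bands, reducing it to a gray image, binarizing, and combining experts, can be illustrated in a highly simplified form. The sketch below is not the paper's framework: Otsu's method stands in for the state-of-the-art binarizers, plain enumeration replaces the evolutionary subset selection, the wrapper and nonlinear preprocessing are omitted, and every name and the toy data cube are assumptions.

```python
import numpy as np
from itertools import combinations

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    omega = np.cumsum(w)              # cumulative class weight
    mu = np.cumsum(w * centers)       # cumulative class mean mass
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return centers[np.nanargmax(sigma_b)]

def binarize_band_subset(cube, bands):
    """One 'expert': average the chosen spectral bands, then Otsu-binarize."""
    gray = cube[..., list(bands)].mean(axis=-1)
    return gray > otsu_threshold(gray)

def ensemble_binarize(cube, k=3, n_experts=5):
    """Majority vote over experts, one per 3-band subset (enumerated here,
    selected by an evolutionary optimizer in the paper)."""
    subsets = list(combinations(range(cube.shape[-1]), k))[:n_experts]
    votes = np.stack([binarize_band_subset(cube, s) for s in subsets])
    return votes.mean(axis=0) > 0.5

# Toy 8-band "multispectral" cube: bright background, a dark text square.
rng = np.random.default_rng(2)
cube = rng.normal(200.0, 10.0, size=(16, 16, 8))
cube[4:12, 4:12, :] -= 120.0
mask = ensemble_binarize(cube)        # True = background, False = "text"
```

    The majority vote makes the ensemble robust to any single band subset on which the thresholding fails, which is the intuition behind forming experts from several near-optimal subspaces.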