
    A Mask-Based Enhancement Method for Historical Documents

    This paper proposes a novel method for document enhancement based on the combination of two state-of-the-art filters through the construction of a mask. The mask is applied to a TV (Total Variation)-regularized image in which background noise has been reduced. The masked image is then filtered by NL-means (Non-Local Means), which reduces the noise in the text areas located by the mask. The document images to be enhanced are real historical documents from several periods that exhibit various background defects resulting from scanning, paper aging, and bleed-through. The improvement achieved by this enhancement method is assessed through OCR accuracy.
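    As a rough illustration of the pipeline described above (TV regularization, a mask locating text areas, then denoising restricted to the mask), the sketch below substitutes a crude gradient-descent TV step and a box filter for the paper's actual TV solver and NL-means; all function names, parameter values, and the simple thresholding rule are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def tv_smooth(img, fidelity=1.0, eps=0.1, tau=0.02, n_iter=100):
        """Explicit gradient descent on a regularized TV energy
        (a crude stand-in for a proper Chambolle-type TV solver)."""
        f = img.astype(float)
        u = f.copy()
        for _ in range(n_iter):
            gx = np.roll(u, -1, axis=1) - u          # forward differences
            gy = np.roll(u, -1, axis=0) - u
            mag = np.sqrt(gx**2 + gy**2 + eps**2)    # regularized gradient norm
            px, py = gx / mag, gy / mag
            # divergence of the normalized gradient (curvature-like term)
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u += tau * (div - fidelity * (u - f))
        return u

    def text_mask(img, k=1.0):
        """Binary mask locating dark text: pixels more than k std below the mean."""
        return img < img.mean() - k * img.std()

    def local_means(img, radius=1):
        """Box-filter smoothing (a cheap stand-in for NL-means)."""
        out = np.zeros_like(img, dtype=float)
        n = 0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                n += 1
        return out / n

    def enhance(img):
        bg_clean = tv_smooth(img)          # step 1: TV-regularized image
        mask = text_mask(bg_clean)         # step 2: mask locating text areas
        smoothed = local_means(bg_clean)   # step 3: denoise inside the mask
        return np.where(mask, smoothed, bg_clean)
    ```

    The point of the switching structure is that the two filters have complementary strengths: the TV step flattens the noisy background, while the second filter only touches the text regions selected by the mask.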

    Combining Haar Wavelet and Karhunen Loeve Transforms for Medical Images Watermarking

    This paper presents a novel watermarking method for the medical imaging domain, used to embed the patient's data into the corresponding image or set of images used for diagnosis. The main objective of the proposed technique is to watermark medical images in such a way that the three main attributes of the hidden information (imperceptibility, robustness, and integration rate) are jointly improved as much as possible; these attributes determine the effectiveness of the watermark and its resistance to external attacks. To improve robustness, a combination of the characteristics of the Discrete Wavelet and Karhunen-Loeve Transforms is proposed. The Karhunen-Loeve Transform is applied to 8x8 sub-blocks of the different wavelet coefficients (in the HL2, LH2, and HH2 subbands). In this manner, the watermark is adapted according to the energy values of each of the Karhunen-Loeve components, with the aim of ensuring better watermark extraction under various types of attack. For correct identification of the inserted data, an Error Correcting Code (ECC) mechanism is used to check and, where possible, correct errors introduced into the inserted data. Concerning the imperceptibility factor, the main goal is to determine the optimal value of the visibility factor, which depends on several parameters of the DWT and KLT transforms. As a first step, a Fuzzy Inference System (FIS) is set up and applied to determine an initial visibility factor. Several features extracted from the co-occurrence matrix are used as inputs to the FIS to determine an initial visibility factor for each block; these values are subsequently re-weighted as a function of the eigenvalues extracted from each sub-block. Regarding the integration rate, previous works insert one bit per coefficient; in our proposal, three bits are hidden per coefficient, increasing the integration rate by a factor of three.
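    The core transform stage (a wavelet decomposition followed by a KLT on 8x8 sub-blocks, with a visibility factor re-weighted by the blocks' eigenvalue energies) can be sketched as follows. This is a minimal illustration under stated assumptions: a Haar wavelet stands in for whatever wavelet the authors use, the eigenvalue re-weighting rule is hypothetical, and the FIS/co-occurrence and ECC stages are not reproduced.

    ```python
    import numpy as np

    def haar_dwt2(x):
        """One level of a 2-D Haar wavelet transform -> (LL, LH, HL, HH);
        subband naming follows one common convention."""
        a = (x[0::2] + x[1::2]) / 2.0   # row pairs: average
        d = (x[0::2] - x[1::2]) / 2.0   # row pairs: detail
        LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
        LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
        HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
        HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return LL, LH, HL, HH

    def klt_of_block(block):
        """KLT of an 8x8 block: eigen-decomposition of its column covariance,
        eigenvalues ordered by decreasing energy."""
        c = block - block.mean(axis=0)
        cov = c.T @ c / (block.shape[0] - 1)
        vals, vecs = np.linalg.eigh(cov)
        order = np.argsort(vals)[::-1]
        return vals[order], vecs[:, order]

    def visibility_factors(subband, base=0.05):
        """Re-weight a base visibility factor per 8x8 block by the block's
        dominant KLT eigenvalue, normalized over the subband (a hypothetical
        re-weighting rule standing in for the paper's FIS-driven scheme)."""
        h, w = subband.shape
        energies = []
        for i in range(0, h - h % 8, 8):
            for j in range(0, w - w % 8, 8):
                vals, _ = klt_of_block(subband[i:i+8, j:j+8])
                energies.append(vals[0])
        energies = np.array(energies)
        return base * energies / (energies.max() + 1e-12)
    ```

    Applying `haar_dwt2` twice reaches the level-2 subbands (HL2, LH2, HH2) named in the abstract; high-energy blocks then receive a proportionally larger visibility factor, so the embedding strength adapts to local content.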

    Histopathological image analysis: a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging in complementing the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Evaluation of a Change Detection Methodology by Means of Binary Thresholding Algorithms and Informational Fusion Processes

    Land cover is subject to continuous changes on a wide variety of temporal and spatial scales, and those changes produce significant effects on human and natural activities. Maintaining an updated spatial database of the changes that have occurred allows better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery and aerial photographs, have proven to be suitable and reliable data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are processed; then, different change/no-change thresholding algorithms are applied to these indices in order to better estimate the statistical parameters of the two categories; finally, the indices are integrated into a multisource fusion process, which allows a single CD result to be generated from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The obtained results are evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proved efficient for identifying the change detection index with the highest contribution.
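    The three-stage scheme described above (compute several change indices, threshold each one, fuse the binary maps) can be sketched minimally as follows. The specific indices (absolute difference and log-ratio), the use of Otsu's method as the thresholding algorithm, and the majority-vote fusion are illustrative assumptions, not the paper's exact choices.

    ```python
    import numpy as np

    def otsu_threshold(x, nbins=64):
        """Otsu's threshold on a flat array of index values: pick the bin
        centre maximizing the between-class variance."""
        hist, edges = np.histogram(x, bins=nbins)
        p = hist.astype(float) / hist.sum()
        centers = (edges[:-1] + edges[1:]) / 2.0
        w0 = np.cumsum(p)                     # class-0 weight up to each bin
        w1 = 1.0 - w0
        cum_mean = np.cumsum(p * centers)
        mu_t = cum_mean[-1]
        mu0 = cum_mean / np.maximum(w0, 1e-12)
        mu1 = (mu_t - cum_mean) / np.maximum(w1, 1e-12)
        between = w0 * w1 * (mu0 - mu1) ** 2
        return centers[np.argmax(between[:-1])]

    def change_indices(t1, t2):
        """Two simple change indices: absolute difference and log-ratio."""
        diff = np.abs(t2 - t1)
        ratio = np.abs(np.log((t2 + 1.0) / (t1 + 1.0)))
        return [diff, ratio]

    def fused_change_map(t1, t2):
        """Threshold each index with Otsu, then fuse by majority vote."""
        maps = []
        for idx in change_indices(t1, t2):
            thr = otsu_threshold(idx.ravel())
            maps.append(idx > thr)
        votes = np.sum(maps, axis=0)
        return votes >= (len(maps) + 1) // 2   # majority of indices agree
    ```

    Fusing several indices in this way trades the weaknesses of any single index against the others, which is the motivation for the multisource fusion stage.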

    A model based on local graphs for colour images and its application for Gaussian noise smoothing

    In this paper, a new model for processing colour images is presented. A graph is built for each image pixel, taking into account some constraints on links. Each pixel is characterized by the features of its related graph, which allows it to be processed appropriately. As an example, we provide a characterization of each pixel based on the link cardinality of its connected component. This feature enables us to properly distinguish flat image regions from edge and detail regions. Accordingly, we have designed a hybrid filter for colour image smoothing that combines a filter able to properly process flat image regions with another that is more appropriate for details and texture. Experimental results show that our model performs appropriately and that the proposed filter is competitive with state-of-the-art methods: it is closer to the corresponding optimal switching filter than other analogous hybrid methods.
    Samuel Morillas acknowledges the support of grant MTM2015-64373-P (MINECO/FEDER, UE). Cristina Jordan acknowledges the support of grant TEC2016-79884-C2-2-R.
    Pérez-Benito, C.; Morillas, S.; Jordan-Lluch, C.; Conejero, J. A. (2018). A model based on local graphs for colour images and its application for Gaussian noise smoothing. Journal of Computational and Applied Mathematics, 330, 955-964. https://doi.org/10.1016/j.cam.2017.05.013
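    The idea of the pixel-level graph feature and the resulting switching filter can be illustrated with a small sketch. Here the size of the local connected component (linking neighbours whose colour distance is below a tolerance) stands in for the paper's link-cardinality feature, and a mean/median switch stands in for the authors' pair of filters; thresholds and parameters are hypothetical.

    ```python
    import numpy as np
    from collections import deque

    def local_component_size(img, i, j, radius=2, tol=10.0):
        """Size of the connected component containing (i, j) within a local
        window, linking 4-neighbours whose colour distance is below tol.
        Large component -> flat region; small component -> edge/detail."""
        h, w = img.shape[:2]
        seen = {(i, j)}
        queue = deque([(i, j)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (abs(ny - i) <= radius and abs(nx - j) <= radius
                        and 0 <= ny < h and 0 <= nx < w
                        and (ny, nx) not in seen
                        and np.linalg.norm(img[ny, nx] - img[y, x]) < tol):
                    seen.add((ny, nx))
                    queue.append((ny, nx))
        return len(seen)

    def hybrid_filter_pixel(img, i, j, radius=2, tol=10.0, flat_frac=0.8):
        """Switching rule: mean filter in flat regions (large component),
        median near edges and detail (small component)."""
        size = local_component_size(img, i, j, radius, tol)
        window = img[max(0, i - radius):i + radius + 1,
                     max(0, j - radius):j + radius + 1]
        flat_pixels = window.reshape(-1, img.shape[2])
        if size >= flat_frac * (2 * radius + 1) ** 2:
            return flat_pixels.mean(axis=0)      # smooth flat region
        return np.median(flat_pixels, axis=0)    # preserve edge/detail
    ```

    The switch is the essential point: averaging is near-optimal for Gaussian noise in flat regions, while the median better preserves edges, so classifying each pixel by its graph feature lets the filter get the best of both.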

    Imaging time series for the classification of EMI discharge sources

    In this work, we aim to classify a wider range of Electromagnetic Interference (EMI) discharge sources collected from new power plant sites across multiple assets, which engenders a more complex and challenging classification task. The study involves the investigation and development of new and improved feature extraction and data dimension reduction algorithms based on image processing techniques. The approach is to exploit the Gramian Angular Field technique to map the measured EMI time signals to an image, from which the significant information is extracted while redundancy is removed; the image of each discharge type contains a unique fingerprint. Two feature reduction methods, the Local Binary Pattern (LBP) and Local Phase Quantisation (LPQ), are then applied to the mapped images. This provides feature vectors that can be fed into a Random Forest (RF) classifier. The performance of a previous method and the two newly proposed methods on the new database is compared in terms of classification accuracy, precision, recall, and F-measure. Results show that the new methods outperform the previous one, with LBP features achieving the best outcome.
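    The two imaging steps named above (the Gramian Angular Field mapping and LBP feature extraction) can be sketched as follows. This is a minimal illustration: the Gramian Angular Summation Field variant and the basic 8-neighbour LBP shown here are common textbook forms and are assumed, not taken from the paper; the LPQ and Random Forest stages are not reproduced.

    ```python
    import numpy as np

    def gramian_angular_field(series):
        """Gramian Angular Summation Field: rescale the series to [-1, 1],
        encode each value as a polar angle phi = arccos(x), and form the
        matrix cos(phi_i + phi_j) = x_i*x_j - sqrt(1-x_i^2)*sqrt(1-x_j^2)."""
        x = np.asarray(series, dtype=float)
        x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
        x = np.clip(x, -1.0, 1.0)
        s = np.sqrt(1.0 - x ** 2)
        return np.outer(x, x) - np.outer(s, s)

    def lbp_histogram(image, nbins=256):
        """Basic 8-neighbour Local Binary Pattern codes and their normalized
        histogram (production code would typically use uniform or
        rotation-invariant codes)."""
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        h, w = image.shape
        center = image[1:-1, 1:-1]
        code = np.zeros_like(center, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            neigh = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            code |= (neigh >= center).astype(np.uint8) << bit
        hist, _ = np.histogram(code, bins=nbins, range=(0, nbins))
        return hist / hist.sum()
    ```

    Because the GAF preserves temporal correlations as spatial texture, texture descriptors such as LBP become natural feature extractors for what was originally a 1-D signal.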

    Comprehensive Review on Detection and Classification of Power Quality Disturbances in Utility Grid With Renewable Energy Penetration

    Global concern with power quality is increasing due to the penetration of renewable energy (RE) sources deployed to meet energy demands and de-carbonization targets. Power quality (PQ) disturbances are found to be more predominant with RE penetration due to variable outputs and interfacing converters, so there is a need to recognize and mitigate PQ disturbances in order to supply clean power to the consumer. This article presents a critical review of techniques used for the detection and classification of PQ disturbances in the utility grid with renewable energy penetration. The broad perspective of this review is to present the various concepts utilized for extracting the features needed to detect and classify PQ disturbances, even in noisy environments. More than 220 research publications have been critically reviewed, classified, and listed for the quick reference of engineers, scientists, and academicians working in the power quality area.

    Image processing mini manual

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described, and information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.