
    Automated Algorithm for the Identification of Artifacts in Mottled and Noisy Images

    We describe a method for automatically classifying image-quality defects on printed documents. The proposed approach accepts a scanned image in which the defect has been localized a priori and performs several image processing steps to reveal the region of interest. A mask is then created from the exposed region to identify bright outliers. Morphological reconstruction techniques are then applied to emphasize relevant local attributes. The classification of the defects is accomplished via a customized tree classifier that uses size or shape attributes at the corresponding nodes to yield binary decisions. Applications of this process include timely automated or assisted diagnosis and repair of printers and copiers in the field. The proposed technique was tested on a database of 276 images of synthetic and real-life defects with 94.95% accuracy.
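
    As a rough illustration of the mask-building step, the sketch below isolates bright outliers with h-dome morphological reconstruction via scikit-image; the function name and the height threshold `h` are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of isolating bright outliers with h-dome
# morphological reconstruction (scikit-image); parameters are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from skimage.morphology import reconstruction

def bright_outlier_mask(gray, h=0.2):
    """Binary mask of bright peaks in a [0, 1] grayscale image."""
    # Reconstructing from a seed lowered by h suppresses peaks that
    # rise more than h above their surroundings.
    seed = np.clip(gray - h, 0.0, 1.0)
    background = reconstruction(seed, gray, method='dilation')
    h_dome = gray - background        # residual bright structures
    return h_dome > (h / 2.0)         # heuristic binarization

# Toy example: a dark field with one bright square "defect".
img = np.zeros((64, 64))
img[30:34, 30:34] = 0.9
print(bright_outlier_mask(img).sum(), "outlier pixels flagged")
```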

    Automatic image registration and defect identification of a class of structural artifacts in printed documents

    The work in this thesis proposes a defect analysis system that automatically aligns a digitized copy of a printed output to a reference electronic original and highlights image defects. We focus on a class of image defects, or artifacts, caused by shortfalls in the mechanical or electro-photographic processes, including spots, deletions, and debris-missing deletions. The algorithm begins with image registration performed using a log-polar transformation and mutual information techniques. A confidence map is then calculated by comparing the contrast and entropy in the neighborhood of each pixel in both the printed document and the corresponding electronic original. This results in a qualitative difference map of the two images highlighting the detected defects. The algorithm was demonstrated successfully on a collection of 99 printed images based on 11 original electronic images and test patterns printed on 9 different faulty printers, provided by Xerox Corporation. The proposed algorithm is effective in aligning digitized printed output irrespective of translation, rotation, and scale variations, and in identifying defects in color-inconsistent hardcopies.
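
    The sketch below shows how a log-polar transform turns rotation and scale into translations that a correlation step can recover. The thesis pairs log-polar with mutual information; phase correlation is swapped in here for brevity, so this is an illustrative variant rather than the thesis pipeline.

```python
# A minimal sketch of recovering rotation and scale with a log-polar
# transform (scikit-image). Phase correlation replaces the thesis's
# mutual-information step; treat this as an illustrative variant.
import numpy as np
from skimage.data import camera
from skimage.registration import phase_cross_correlation
from skimage.transform import rotate, warp_polar

def estimate_rotation_scale(reference, scanned, radius=200):
    """Estimate (angle_deg, scale) aligning `scanned` to `reference`."""
    # In log-polar space, rotation becomes a shift along the angle
    # axis and scaling a shift along the log-radius axis.
    ref_lp = warp_polar(reference, radius=radius, scaling='log')
    scan_lp = warp_polar(scanned, radius=radius, scaling='log')
    shifts, _, _ = phase_cross_correlation(ref_lp, scan_lp)
    angle = shifts[0] * 360.0 / ref_lp.shape[0]   # rows span 360 deg
    klog = radius / np.log(radius)                # log-radius sampling
    scale = np.exp(shifts[1] / klog)
    return angle, scale

# Usage sketch: a copy rotated by 10 degrees should report an angle
# of roughly +/-10 and a scale near 1.
ref = camera().astype(float) / 255.0
print(estimate_rotation_scale(ref, rotate(ref, 10)))
```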

    Automatic multi-resolution spatio-frequency mottle metric (SFMM) for evaluation of macrouniformity

    Evaluation of mottle is an area of ongoing research in print quality assessment. We propose an unsupervised evaluation technique and a metric that measures mottle in a hard-copy laser print. The proposed algorithm uses a scanned image to quantify the low-frequency variation, or mottle, in what is supposed to be a uniform field. 'Banding' and 'Streaking' effects are explicitly ignored, and the proposed algorithm scales the test targets from flat print (good) to noisy print (bad) based on mottle alone. The evaluation procedure is modeled as feature computation in different combinations of the spatial, frequency, and wavelet domains. The model is largely independent of the nature of the input test target, i.e. whether it is chromatic or achromatic; the algorithm adapts accordingly and provides a mottle metric for any test target. The evaluation proceeds in three major modules: (1) a pre-processing stage, which includes acquisition of the test target and preparation for processing; (2) spatio-frequency parameter estimation, in which different features characterizing mottle are calculated in the spatial and frequency domains; and (3) an invalid-feature-removal stage, in which features that are invalid or insignificant in the context of mottle are eliminated and the dataset is ranked relatively. The algorithm was demonstrated successfully on a collection of 60 K-only printed images spread over 2 datasets, printed on 3 different faulty printers and 4 different media. It was also tested on 5 color targets for the color version of the algorithm, printed using 2 different printers and 5 different media, provided by Hewlett-Packard Company.
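
    As a toy version of quantifying low-frequency variation in a nominally flat field, the sketch below band-passes a patch and reports the RMS of what remains. The two scales are illustrative assumptions, not the SFMM feature set described above.

```python
# A minimal sketch of a low-frequency "mottle" score for a nominally
# uniform patch: band-pass the image to keep only coarse variation,
# then report its RMS. The scales are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def mottle_score(patch, fine_sigma=4.0, coarse_sigma=32.0):
    """RMS of band-passed luminance; higher means more mottle."""
    low = gaussian_filter(patch, fine_sigma)      # drop fine noise
    trend = gaussian_filter(patch, coarse_sigma)  # drop page-level trend
    band = low - trend                            # mid-scale variation
    return float(np.sqrt(np.mean(band ** 2)))

# A blotchy patch should score higher than a near-flat one.
rng = np.random.default_rng(0)
flat = rng.normal(0.5, 0.01, (256, 256))
blotchy = flat + gaussian_filter(rng.normal(0, 0.2, (256, 256)), 16)
print(mottle_score(flat) < mottle_score(blotchy))  # expect True
```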

    Automated quantification and classification of human kidney microstructures obtained by optical coherence tomography

    Optical coherence tomography (OCT) is a rapidly emerging imaging modality that can non-invasively provide cross-sectional, high-resolution images of tissue morphology, such as the kidney, in situ and in real time. Because the viability of a donor kidney is closely correlated with its tubular morphology, and large image datasets are expected when using OCT to scan the entire kidney, it is necessary to develop automated image analysis methods to quantify spatially resolved morphometric parameters such as tubular diameter and to classify the various microstructures. In this study, we imaged the human kidney in vitro, quantified the diameters of hollow structures such as blood vessels and uriniferous tubules, and classified those structures automatically. The quantification accuracy was validated. This work can enable studies to determine the clinical utility of OCT for kidney imaging, as well as studies to evaluate kidney morphology as a biomarker for assessing a kidney's viability prior to transplantation.
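
    The sketch below illustrates the diameter-quantification idea in its simplest form: segment dark lumens in a cross-sectional image, label them, and derive an equivalent diameter from each region's area. The threshold and size filter are illustrative assumptions, not the validated pipeline of the study.

```python
# A minimal sketch of quantifying hollow-structure diameters in a
# cross-sectional image; threshold and size filter are assumptions.
import numpy as np
from skimage.measure import label, regionprops

def lumen_diameters(bscan, lumen_thresh=0.3, min_area=20):
    """Equivalent diameters (in pixels) of dark hollow regions."""
    labels = label(bscan < lumen_thresh)   # lumens are darker than tissue
    return [2.0 * np.sqrt(r.area / np.pi)  # diameter of equal-area disk
            for r in regionprops(labels) if r.area >= min_area]

# Toy B-scan: bright tissue containing two dark circular lumens of
# radius 8 and 14 pixels; expect diameters near 16 and 28.
yy, xx = np.mgrid[0:128, 0:128]
img = np.full((128, 128), 0.8)
for cy, cx, rad in [(40, 40, 8), (90, 80, 14)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < rad ** 2] = 0.1
print(sorted(round(d, 1) for d in lumen_diameters(img)))
```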

    Mineralogical mapping using airborne imaging spectrometry data

    With the development of airborne, high spectral resolution imaging spectrometers, we now have a tool that allows us to examine surface materials with enough spectral detail to identify them. Identification is based on the analysis of the position and shape of absorption features in the material spectra in the visible and infrared (0.4µm to 2.5µm). These absorption features are caused by the interaction of electromagnetic radiation (EMR) with the atoms and molecules of the surface material. Airborne data were collected to evaluate these new high spectral resolution systems. The data quality was assessed prior to processing and analysis, and several problems were noted for each data set (striping, geometric distortion, etc.); these problems required some preparation of the data. After data preparation, data processing methods were evaluated, concentrating primarily on the log residuals and hull quotients methods. These processing steps convert the data to a form suitable for analysis. The data were analysed using the Spectral Analysis Manager (SPAM) package developed by JPL. Two imaging spectrometers were evaluated. The AIS-1 instrument was flown over an area in Queensland, Australia. Ground data and laboratory work confirmed the presence of the anomalous areas detected by the instrument. The data quality was poor, and only basic classification of the data was possible: anomalies were classed as "GREEN VEGETATION", "DRY VEGETATION", "CLAY" or "CARBONATE" based on the position of the major absorptions observed. The second instrument, the GER-II, was flown over an area of Nevada, USA. Ground data and laboratory work confirmed the presence of the anomalies detected by the instrument, and the data quality was somewhat better: identification of sericite, dolomite and illite was possible. However, most of the area could still only be classed in the broad groupings listed above. To conclude, the effectiveness of identification is limited to a large degree by the poor data quality. If the data quality can be improved, techniques can be applied to automatically locate and identify material spectra from the airborne data alone.
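
    The hull quotients method mentioned above is continuum removal: dividing a reflectance spectrum by its upper convex hull so absorption features become dips below 1.0. The sketch below shows this on a synthetic spectrum; the data and feature position are illustrative.

```python
# A minimal sketch of the hull-quotients (continuum-removal) idea,
# on an illustrative synthetic spectrum.
import numpy as np

def upper_hull(x, y):
    """Indices of the upper convex hull of points sorted by x."""
    hull = []
    for i in range(len(x)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            cross = ((x[i2] - x[i1]) * (y[i] - y[i1])
                     - (y[i2] - y[i1]) * (x[i] - x[i1]))
            if cross >= 0:       # not a right turn: pop to stay convex
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

def hull_quotient(wavelengths, reflectance):
    """Continuum-removed spectrum: reflectance / upper hull."""
    idx = upper_hull(wavelengths, reflectance)
    continuum = np.interp(wavelengths, wavelengths[idx], reflectance[idx])
    return reflectance / continuum

# Toy spectrum: a sloped continuum with a Gaussian absorption at
# 2.2 um (a typical clay feature); the dip should be located there.
wl = np.linspace(0.4, 2.5, 200)
refl = 0.6 + 0.1 * wl - 0.2 * np.exp(-((wl - 2.2) / 0.05) ** 2)
hq = hull_quotient(wl, refl)
print(round(float(wl[np.argmin(hq)]), 2))  # ~2.2
```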

    The Application of Machine Learning to At-Risk Cultural Heritage Image Data

    This project investigates the application of Convolutional Neural Network (CNN) methods and technologies to problems related to at-risk cultural heritage object recognition. The primary aim of this work is to develop software combining the disciplines of computer vision and artefact studies, building applications in the field of heritage protection specifically related to the illegal antiquities market. To accomplish this, digital image data provided by the Durham University Oriental Museum was used in conjunction with several different implementations of pre-trained CNN software models for the purposes of artefact classification and identification. Testing focused on data capture using a variety of digital recording devices, guided by the developmental needs of a heritage programme seeking to create software solutions to heritage threats in the Middle East and North Africa (MENA) region. Quantitative results using information-retrieval metrics are reported for all models and test sets and have been used to evaluate the models' predictive performance.
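
    The sketch below shows the usual shape of such a transfer-learning setup: a pre-trained backbone with a new classification head. ResNet-18, the class count, and the frozen-backbone choice are assumptions; the thesis tested several pre-trained models.

```python
# A minimal sketch of adapting a pre-trained CNN to artefact
# classification (PyTorch/torchvision); model choice, class count,
# and frozen backbone are assumptions, not the thesis setup.
import torch
import torch.nn as nn
from torchvision import models

NUM_ARTEFACT_CLASSES = 12  # hypothetical number of object categories

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False            # freeze pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_ARTEFACT_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for
# museum photographs resized to 224x224.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_ARTEFACT_CLASSES, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```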

    Advanced Image Acquisition, Processing Techniques and Applications

    "Advanced Image Acquisition, Processing Techniques and Applications" is the first book of a series that provides image processing principles and practical software implementation on a broad range of applications. The book integrates material from leading researchers on Applied Digital Image Acquisition and Processing. An important feature of the book is its emphasis on software tools and scientific computing in order to enhance results and arrive at problem solution

    The Role of Medical Image Modalities and AI in the Early Detection, Diagnosis and Grading of Retinal Diseases: A Survey.

    Traditional dilated ophthalmoscopy can reveal diseases such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO). Among these diseases, AMD and DR are the major causes of progressive vision loss, and DR is recognized as a worldwide epidemic. Advances in retinal imaging have improved the diagnosis and management of DR and AMD. In this review article, we focus on the various imaging modalities for accurate diagnosis, early detection, and staging of both AMD and DR. In addition, the role of artificial intelligence (AI) in providing automated detection, diagnosis, and staging of these diseases is surveyed. Furthermore, current works are summarized and discussed, and projected future trends are outlined. The work covered by this survey indicates the effective role of AI in the early detection, diagnosis, and staging of DR and/or AMD. In the future, more AI solutions will be presented that hold promise for clinical applications.

    Modified belief propagation for reconstruction of office environments

    Belief Propagation (BP) is an algorithm that has found broad application in many areas of computer science, including error-correcting codes, Kalman filters, particle filters, and, most relevantly, stereo computer vision. Many of the currently best algorithms on stereo vision benchmarks, e.g. the Middlebury dataset, use Belief Propagation. This dissertation describes improvements to the core algorithm that increase its applicability and usefulness for computer vision applications. A Belief Propagation solution to a computer vision problem is commonly based on the specification of a Markov Random Field that it optimizes. Both Markov Random Fields and Belief Propagation have at their core some definition of nodes and of a 'neighborhood' for each node: each node has a subset of the other nodes defined to be its neighborhood. In common usage for stereo computer vision, the neighborhood is defined as a pixel's four immediate spatial neighbors. For any given node, this neighborhood definition may or may not be correct for the specific scene. In a setting with video cameras, I expand the neighborhood definition to include corresponding nodes in temporal neighborhoods in addition to spatial neighborhoods. This amplifies the problem of erroneous neighborhood assignments, and part of this dissertation addresses that problem. Often, no single algorithm is always the best. The Markov Random Field formulation appears amenable to the integration of other algorithms; I explore that potential here by integrating priors from independent algorithms. This dissertation makes core improvements to BP such that it is more robust to erroneous neighborhood assignments, is more robust in regions with near-uniform inputs, and can be biased in a sensitive manner towards higher-level priors. These core improvements are demonstrated by the presented results: applications to office environments, real-world datasets, and benchmark datasets.
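
    For reference, the sketch below is one synchronous min-sum BP sweep on a 4-connected grid MRF with a Potts smoothness term, the standard stereo formulation the dissertation builds on. The temporal neighborhoods and robustness modifications described above are not included, and the wraparound at borders is a simplification a real implementation would mask out.

```python
# A minimal sketch of one min-sum belief propagation sweep on a
# 4-connected grid MRF with a Potts smoothness term; np.roll wraps
# at the borders, which a real implementation would mask out.
import numpy as np

def bp_sweep(unary, messages, smooth_cost=1.0):
    """unary: (H, W, L) data costs; messages: (4, H, W, L) incoming
    messages indexed as 0=from left, 1=from right, 2=from above,
    3=from below. Returns the updated messages."""
    belief = unary + messages.sum(axis=0)
    new = np.empty_like(messages)
    # (excl, recv, shift): subtract the receiver's previous reply
    # (excl), store in the receiver's incoming slot (recv), shifted
    # onto the receiver's grid position.
    sends = [(1, 0, (0, 1)),    # send right -> received "from left"
             (0, 1, (0, -1)),   # send left  -> received "from right"
             (3, 2, (1, 0)),    # send down  -> received "from above"
             (2, 3, (-1, 0))]   # send up    -> received "from below"
    for excl, recv, shift in sends:
        msg = belief - messages[excl]
        # Potts term: keep the same label or pay a constant penalty.
        msg = np.minimum(msg, msg.min(axis=-1, keepdims=True) + smooth_cost)
        msg -= msg.min(axis=-1, keepdims=True)   # normalize for stability
        new[recv] = np.roll(msg, shift, axis=(0, 1))
    return new

# Usage sketch: random data costs, a few sweeps, winner-take-all.
rng = np.random.default_rng(1)
unary = rng.random((32, 32, 8))
msgs = np.zeros((4, 32, 32, 8))
for _ in range(10):
    msgs = bp_sweep(unary, msgs)
labels = (unary + msgs.sum(axis=0)).argmin(axis=-1)
print(labels.shape)
```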

    Advanced Signal Processing for Thermal Flaw Detection
