
    Some novel digital image filters for suppression of impulsive noise

    In digital imaging, image quality degrades due to contamination by various types of noise during acquisition, transmission, and storage. Impulse noise in particular appears during image acquisition and transmission, severely degrading image quality and causing a great loss of information detail. Various filtering techniques for the removal of impulse noise are found in the literature. Nonlinear filters such as the standard median, weighted median, center-weighted median, and switching-based median filters outperform linear filters. This thesis investigates the performance of different nonlinear filtering schemes. The performance of these filters can be improved by incorporating a noise-detection mechanism and then applying a switching-based adaptive filtering approach. Three novel filtering approaches that incorporate these principles are proposed. All three approaches are found to give a noticeable performance improvement over many filters reported in the literature.
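
    The abstract does not give the three proposed filters in detail; the following is a minimal sketch of the switching principle they build on (detect impulse pixels first, then replace only those), with an assumed window size and detection threshold:

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_median(img, win=3, thresh=40):
    """Simple switching median filter for impulse (salt-and-pepper) noise.

    A pixel is flagged as an impulse when it deviates strongly from the
    median of its neighbourhood; only flagged pixels are replaced, so
    noise-free detail is left untouched (unlike a plain median filter).
    Window size and threshold here are illustrative assumptions.
    """
    med = median_filter(img.astype(np.float64), size=win)
    noisy = np.abs(img.astype(np.float64) - med) > thresh  # noise detector
    out = img.copy()
    out[noisy] = med[noisy].astype(img.dtype)              # selective replace
    return out
```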

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound (US) acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improving both the physician's visual evaluation and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. Whereas previous denoising work trades off computational cost against effectiveness, the proposed framework matches the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image. We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce a kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
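
    As an illustration of the low-rank building block, here is a minimal sketch of denoising by truncating the Singular Value Decomposition; where the framework learns the optimal threshold, the rank k below is a fixed, assumed stand-in:

```python
import numpy as np

def svd_denoise(img, k):
    """Low-rank denoising of a 2D image by SVD truncation.

    The framework described above *learns* the optimal singular-value
    threshold; here the rank k is a fixed, illustrative substitute.
    """
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    s[k:] = 0.0  # discard the smallest singular values (mostly noise)
    return U @ np.diag(s) @ Vt
```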

    Character Recognition

    Character recognition is one of the pattern recognition technologies most widely used in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction, and classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.

    Image analysis of circulating fluidized bed hydrodynamics

    The goal of this thesis is to design methods to estimate the local concentration and velocity of particles observed in digital videos of the inner wall of a circulating fluidized bed (CFB) riser. Understanding the dynamics of these rapidly moving particles will allow researchers to design cleaner and more efficient CFB facilities. However, the seemingly random three-dimensional motion exhibited by the particles, coupled with the varying image quality, makes it difficult to extract the required information from the images. Given a video sequence, a method for detecting particles and tracking their spatial location is developed. By exploiting the presence of specular reflections, individual particles are first identified along the focal plane by an image filter specifically designed for this purpose. Once the particle locations are known, a local optical-flow model is used to approximate the motion field across two images in order to track particles from one frame of the sequence to the next. An evaluation of the proposed method indicates its potential to estimate particle count, location, concentration, and velocity in an efficient and reliable manner. The method is fully automated and is expected to be an important analysis tool for researchers at the National Energy Technology Laboratory, part of the national laboratory system of the Department of Energy.
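
    The detection filter and flow model are not specified in the abstract; a hedged sketch of the tracking step, using OpenCV's pyramidal Lucas-Kanade method as one common local optical-flow model, might look as follows (the `particles` array stands in for the output of the specular-reflection detector):

```python
import cv2
import numpy as np

def track_particles(frame0, frame1, particles):
    """Track particle centres from frame0 to frame1 with Lucas-Kanade flow.

    `particles` is an (N, 2) array of (x, y) positions detected in frame0,
    e.g. by the specular-reflection filter described above. Returns the
    matched positions in frame1 and the per-particle displacement vectors.
    """
    p0 = particles.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, p0, None,
                                             winSize=(15, 15), maxLevel=2)
    ok = status.ravel() == 1  # keep only successfully tracked points
    return p1[ok].reshape(-1, 2), (p1[ok] - p0[ok]).reshape(-1, 2)
```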

    Text-image Restoration And Text Alignment For Multi-engine Optical Character Recognition Systems

    Previous research showed that combining the results of three different optical character recognition (OCR) engines (ExperVision® OCR, Scansoft OCR, and Abbyy® OCR) using voting algorithms yields a higher accuracy rate than any of the engines individually. While a voting algorithm has been realized, several aspects of automating and improving the accuracy rate needed further research. This thesis focuses on morphological image preprocessing and morphological restoration of the text that is passed to the OCR engines. The method is similar to one used in the restoration of partial fingerprints. A series of morphological dilation and erosion filters with masks of various shapes and sizes was applied to text of different font sizes and types, with various kinds of noise added. These images were then processed by the OCR engines and, based on the results, successful combinations of text, noise, and filters were chosen. The thesis also deals with the problem of text alignment. Each OCR engine has its own way of dealing with noise and corrupted characters; as a result, the output texts of the OCR engines have different lengths and numbers of words. This, in turn, makes it impossible to use spaces as delimiters to separate the words for processing by the voting part of the system. Text alignment determines, using various techniques, which word is extra, which word should be two or more words instead of one, which words are missing in one document compared to another, and so on. The alignment algorithm consists of a series of shifts in the two texts to determine which parts are similar and which are not. Since errors made by OCR engines are due to visual misrecognition, in addition to simple character comparison (equal or not), a technique was developed that allows comparison of characters based on how they look.
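
    Below is a minimal sketch of the kind of morphological restoration described, using OpenCV dilation followed by erosion (a closing); the mask shape and size are illustrative assumptions, since the thesis searches over many combinations per font and noise type:

```python
import cv2

def restore_text(binary_img, shape=cv2.MORPH_ELLIPSE, size=3):
    """Morphological closing (dilate, then erode) to reconnect broken glyphs.

    The thesis evaluates many mask shapes and sizes per font and noise
    type; the elliptical 3x3 mask here is an illustrative default.
    """
    kernel = cv2.getStructuringElement(shape, (size, size))
    dilated = cv2.dilate(binary_img, kernel)  # fill gaps in strokes
    return cv2.erode(dilated, kernel)         # restore stroke thickness
```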

    Quantification of the plant endoplasmic reticulum

    One of the challenges of quantitative approaches to the biological sciences is the lack of understanding of the interplay between form and function. Each cell is full of complex-shaped objects, which moreover change their form over time. To address this issue, we exploit recent advances in confocal microscopy: using data collected from a series of optical sections taken at short regular intervals along the optical axis, we reconstruct the endoplasmic reticulum (ER) in 3D, obtain its skeleton, and then associate with each of its edges key geometric and dynamic characteristics obtained from the original filled-in ER specimen. These properties include the total length, surface area, and volume of the ER specimen, as well as the length, surface area, and volume of each of its branches. With a view to benefiting from well-established graph-theory algorithms, we abstract the obtained skeleton as a mathematical entity, a graph. We achieve this by replacing the inner points of each edge in the skeleton with the line segment connecting its end points. We then attach the geometric properties of the ER to this graph as weights, thereby allowing a more precise quantitative characterisation by thinning the filled-in ER to its essential features. The graph plays a major role in this study and is the final and most abstract quantification of the ER. One of its advantages is that it serves as a geometric invariant, in both static and dynamic samples. Moreover, graph-theoretic features, such as the number of vertices and their degrees, and the number of edges and their lengths, are robust against various kinds of small perturbations. We propose a methodology to associate parameters such as surface areas and volumes with individual edges and to monitor their variation with time. One of the main contributions of this thesis is the use of the skeleton of the ER to analyse the trajectories of moving junctions in confocal digital videos. We report that the ER can be modelled as a network of connected cylinders (0.87 ± 0.36 μm in diameter) with a majority of 3-way junctions. The average length, surface area, and volume of an ER branch are found to be 2.78 ± 2.04 μm, 7.53 ± 5.59 μm², and 1.81 ± 1.86 μm³, respectively. Using analysis of variance, we found no significant differences among four different locations across the cell at the 0.05 significance level. The apparent movement of the junctions in the plant ER consists of different types, namely: (a) the extension and shrinkage of tubules, and (b) the closing and opening of loops. The average velocity of a junction is found to be 0.25 ± 0.23 μm/sec, lying in the range 0 to 1.7 μm/sec, which matches the reported range for actin filaments.
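
    A minimal sketch of the graph abstraction step, assuming NetworkX and a hypothetical data layout for the junctions and the measured branches:

```python
import networkx as nx

def skeleton_to_graph(junctions, branches):
    """Abstract an ER skeleton as a weighted graph.

    `junctions` maps node ids to 3D coordinates; `branches` is a list of
    (node_a, node_b, length, area, volume) tuples measured on the
    filled-in ER and attached to each edge as weights. Both layouts are
    illustrative assumptions, not the thesis's actual data structures.
    """
    g = nx.Graph()
    for node, xyz in junctions.items():
        g.add_node(node, pos=xyz)
    for a, b, length, area, volume in branches:
        g.add_edge(a, b, length=length, area=area, volume=volume)
    return g

# Graph-theoretic features then become direct queries, e.g. the degree
# distribution (mostly 3 for 3-way junctions) via g.degree(), or the
# total branch length via g.size(weight="length").
```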

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary to simulate image-based recovery of a target's position using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms for each component, and the lag introduced by excessive processing time all affect the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bézier curves that approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and to increase resolution. The only inputs available to a fully implemented system for calculating the target position are the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
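
    The dissertation's path generator is not given in the abstract; sampling a single cubic Bézier segment, the curve family named above, reduces to evaluating the Bernstein form, as in this sketch:

```python
import numpy as np

def bezier_path(p0, p1, p2, p3, n=100):
    """Sample a cubic Bezier curve defining a simulated target path.

    p0..p3 are 3D control points (numpy arrays); returns n points along
    B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3.
    """
    t = np.linspace(0.0, 1.0, n)[:, None]  # column vector for broadcasting
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
```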