
    Study Of Gaussian & Impulsive Noise Suppression Schemes In Images

    Noise is usually introduced into images while they are acquired and transferred. The main type of noise added during image acquisition is Gaussian noise, while impulsive noise is generally introduced while transmitting image data over an insecure communication channel, although it can also arise during acquisition. Gaussian noise consists of values drawn from a zero-mean Gaussian distribution and added to each pixel value. Impulsive noise replaces a fraction of the pixel values with random ones. Various techniques are employed for the removal of these types of noise, based on the properties of their respective noise models. Impulse noise removal algorithms popularly use order-statistics-based filters. The first one is an adaptive filter using the center-weighted median: the difference between the center-weighted median of a neighborhood and the central pixel under consideration is compared with a set of thresholds. Another method, which takes into account the presence of noise-free pixels, has also been implemented. It convolves the median of each neighborhood with a set of convolution kernels oriented according to all possible configurations of edges that contain the central pixel, in case it lies on an edge. A third method detects noisy pixels on the binary slices of an image; it is based on threshold Boolean filtering, where the filter inverts the value of the central pixel if the number of pixels with the opposite value exceeds the threshold. The fourth method uses an efficient double-derivative detector, which makes a decision based on the value of the double derivative; the substitution is done with the average gray-scale value of the neighborhood. Gaussian noise removal algorithms should ideally smooth the distinct parts of the image without blurring the edges. A universal noise removal scheme is implemented which weighs each pixel with respect to its neighborhood and treats Gaussian and impulse noise pixels differently, based on parameter values for the spatial, radiometric and impulsive weight of the central pixel. The aforementioned techniques are implemented and their results are compared subjectively as well as objectively.
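
    Of the impulse filters described above, the center-weighted median (CWM) detector is the simplest to illustrate. The abstract does not give the exact thresholding rule, so the following is a minimal single-threshold sketch in Python; the window size, center weight and threshold value are illustrative assumptions.

        import numpy as np

        def cwm_impulse_filter(img, weight=3, threshold=20):
            """Sketch of a center-weighted median impulse detector.

            The center pixel of each 3x3 neighborhood is repeated
            `weight` times before taking the median; a pixel is
            declared impulsive (and replaced by that median) when it
            differs from the center-weighted median by more than
            `threshold`.
            """
            padded = np.pad(img.astype(np.int32), 1, mode='edge')
            out = img.astype(np.int32).copy()
            h, w = img.shape
            for i in range(h):
                for j in range(w):
                    window = padded[i:i+3, j:j+3].ravel().tolist()
                    center = int(padded[i+1, j+1])
                    # Extra weight = repeating the center pixel value.
                    window += [center] * (weight - 1)
                    cwm = int(np.median(window))
                    if abs(center - cwm) > threshold:
                        out[i, j] = cwm
            return out.astype(img.dtype)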

    High Quality 3D Shape Reconstruction via Digital Refocusing and Pupil Apodization in Multi-wavelength Holographic Interferometry.

    Multi-wavelength holographic interferometry (MWHI) has good potential to evolve into a high-quality 3D shape reconstruction technique. Several challenges remain, including 1) the depth-of-field limitation, which leads to axial inaccuracy for out-of-focus objects; and 2) smearing from shiny smooth objects onto their dark dull neighbors, which generates fake measurements within the dark areas. This research is motivated by the goal of developing an advanced optical metrology system that provides accurate 3D profiles for target objects whose axial dimension is larger than the depth of field, and for objects with dramatically different surface conditions. Digital refocusing in MWHI is proposed as a solution to the depth-of-field limitation. On the one hand, the traditional single-wavelength refocusing formula is revised to reduce its sensitivity to wavelength error; investigation of a real example demonstrates promising accuracy and repeatability of the reconstructed 3D profiles. On the other hand, a phase-contrast-based focus detection criterion is developed especially for MWHI, which overcomes the problem of phase unwrapping. The combination of these two innovations yields a systematic strategy for acquiring high-quality 3D profiles: after the initial phase-contrast-based focus detection step, interferometric distance measurement by MWHI is implemented as a second step to perform relative focus detection with high accuracy. This strategy yields ±100 mm 3D profiles with micron-level axial accuracy, which is not available in traditional extended focus image (EFI) solutions. Pupil apodization has been implemented to address the second challenge, smearing. The process of reflective rough-surface inspection has been mathematically modeled, explaining the origin of the stray light and the necessity of replacing the hard-edged pupil with one of gradually attenuating transmission (apodization). Metrics to optimize the pupil type and parameters have been chosen especially for MWHI. A Gaussian apodized pupil has been installed and tested, and a reduction of smearing in the measurement results has been experimentally demonstrated.
    Ph.D. Mechanical Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91461/1/xulium_1.pd
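
    The abstract does not reproduce the relations it relies on; for orientation, the standard quantity behind any two-wavelength MWHI measurement is the synthetic (beat) wavelength, which sets the unambiguous range of the phase-to-height conversion. In double-pass reflection geometry (an assumption here, but the usual configuration):

        \[
          \Lambda = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert},
          \qquad
          z = \frac{\Lambda}{4\pi}\,\Delta\varphi
        \]

    Closely spaced wavelengths thus give a large synthetic wavelength and a large unambiguous depth range, at the cost of amplifying any wavelength error, which is why the abstract emphasizes reducing the refocusing formula's sensitivity to wavelength error.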

    Filtering Enhanced Traffic Management System (ETMS) Altitude Data

    The Enhanced Traffic Management System (ETMS) stores all the information gathered by the Federal Aviation Administration (FAA) from aircraft flying in US airspace. The data stored for each flight includes the 4D trajectory (latitude, longitude, altitude and timestamp), radar data and flight-plan information. Unfortunately, there is a data quality problem in the vertical channel: the altitude component of the trajectories contains some isolated samples in which a wrong value was stored. Overall the data is generally accurate, and it was found that only 0.3% of the altitude values were incorrect; however, the impact of these erroneous data on some analyses could be important, motivating the development of a filtering procedure. The approach developed for filtering ETMS altitude data includes specific algorithms for problems found in this particular dataset, and a novel filter to correct isolated bad samples (named the Despeckle filter). As a result, all altitude errors were eliminated in 99.7% of the flights affected by noise, while preserving the original values of the samples without bad data. The algorithm presented in this paper attains better results than standard filters such as the median filter, and it could be applied to any signal affected by noise in the form of spikes.
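
    The paper's Despeckle filter is not specified in the abstract, but its stated behavior (correct isolated bad samples, preserve everything else) can be sketched as a simple neighbor-agreement test; the threshold below is an illustrative assumption, not the paper's value.

        import numpy as np

        def despeckle_1d(altitude, threshold=2000.0):
            """Sketch of an isolated-spike filter for an altitude track.

            A sample is flagged as a spike when it jumps away from both
            neighbors by more than `threshold` while the neighbors agree
            with each other; flagged samples are replaced by the mean of
            their neighbors, and all other samples are left untouched.
            """
            x = np.asarray(altitude, dtype=float).copy()
            for i in range(1, len(x) - 1):
                left, right = x[i - 1], x[i + 1]
                if (abs(x[i] - left) > threshold
                        and abs(x[i] - right) > threshold
                        and abs(left - right) <= threshold):
                    x[i] = 0.5 * (left + right)
            return x

    Unlike a plain median filter, which rewrites every sample, a detect-then-replace filter of this kind only touches samples it has flagged, matching the paper's goal of preserving the original values of clean data.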

    A parallel windowing approach to the Hough transform for line segment detection

    Among the wide range of image processing and computer vision problems, line segment detection has always been one of the most critical topics. Detection of primitives such as linear features and straight edges has diverse applications in many image understanding and perception tasks. The research presented in this dissertation is a contribution to the detection of straight-line segments by identifying the location of their endpoints within a two-dimensional digital image. The proposed method is based on a unique domain-crossing approach that takes both image-domain and parameter-domain information into consideration. First, the straight-line parameters, i.e. location and orientation, are identified using an advanced Fourier-based Hough transform. As well as producing more accurate and robust detection of straight lines, this method has been shown to be more efficient in computational time than the standard Hough transform. Second, for each straight line a window-of-interest is designed in the image domain and the disturbance caused by other neighbouring segments is removed in order to capture the Hough-transform butterfly of the target segment. In this way, a separate butterfly is constructed for each straight line. The boundaries of the butterfly wings are further smoothed and approximated by a curve-fitting approach. Finally, segment endpoints are identified using the butterfly boundary points and the Hough-transform peak. Experimental results on synthetic and real images show that the proposed method enjoys superior performance compared with existing representative works.
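
    The dissertation's Fourier-based variant is not reproduced in the abstract; for reference, the standard Hough transform it is benchmarked against accumulates votes as sketched below, and the butterfly discussed above is the characteristic spread of votes around each accumulator peak.

        import numpy as np

        def hough_accumulator(edge_map, n_theta=180):
            """Sketch of the standard Hough transform for lines.

            Every edge pixel votes for all lines
            x*cos(theta) + y*sin(theta) = rho passing through it;
            peaks in the (rho, theta) accumulator correspond to
            straight lines, and the votes spreading around each peak
            form the 'butterfly'.
            """
            h, w = edge_map.shape
            diag = int(np.ceil(np.hypot(h, w)))
            thetas = np.deg2rad(np.arange(n_theta))
            cos_t, sin_t = np.cos(thetas), np.sin(thetas)
            acc = np.zeros((2 * diag, n_theta), dtype=np.int64)
            ys, xs = np.nonzero(edge_map)
            for x, y in zip(xs, ys):
                # One vote per theta; shift rho to a non-negative index.
                rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
                acc[rhos, np.arange(n_theta)] += 1
            return acc, thetas, np.arange(-diag, diag)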

    Endoscopic image analysis of aberrant crypt foci

    Integrated Master's Thesis (Tese de Mestrado Integrado). Bioengineering. Faculdade de Engenharia, Universidade do Porto. 201

    Bilateral filter in image processing

    The bilateral filter is a nonlinear filter that performs spatial averaging without smoothing edges. It has been shown to be an effective image denoising technique, and it can also be applied to blocking-artifact reduction. An important issue with the application of the bilateral filter is the selection of its parameters, which affect the results significantly. Another research interest is acceleration of its computation. This thesis makes three main contributions. The first is an empirical study of optimal bilateral filter parameter selection in image denoising. I propose an extension of the bilateral filter, the multiresolution bilateral filter, in which bilateral filtering is applied to the low-frequency sub-bands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. The second contribution is a spatially adaptive method to reduce compression artifacts. To avoid over-smoothing texture regions and to effectively eliminate blocking and ringing artifacts, texture regions and block-boundary discontinuities are first detected; these are then used to control/adapt the spatial and intensity parameters of the bilateral filter. The test results show that the adaptive method improves the quality of restored images significantly more than the standard bilateral filter. The third contribution is an improvement of the fast bilateral filter, in which I use a combination of multiple windows to approximate the Gaussian filter more precisely.
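
    For reference, the standard bilateral filter that all three contributions build on weights each neighbor by a spatial Gaussian and an intensity-range Gaussian, so averaging happens within smooth regions but not across edges. A minimal (unaccelerated) sketch; the parameter values are illustrative:

        import numpy as np

        def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
            """Sketch of the standard (brute-force) bilateral filter."""
            img = img.astype(float)
            padded = np.pad(img, radius, mode='edge')
            h, w = img.shape
            out = np.zeros_like(img)
            # The spatial (domain) weights are fixed; precompute once.
            ax = np.arange(-radius, radius + 1)
            xx, yy = np.meshgrid(ax, ax)
            spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
            for i in range(h):
                for j in range(w):
                    patch = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
                    # Range weights depend on the center pixel's value,
                    # so they must be recomputed at every position.
                    rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
                    weights = spatial * rng
                    out[i, j] = np.sum(weights * patch) / np.sum(weights)
            return out

    The per-pixel recomputation of the range weights is exactly what makes the brute-force filter slow and motivates the acceleration work in the third contribution.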

    Recognition of Nonideal Iris Images Using Shape Guided Approach and Game Theory

    Most state-of-the-art iris recognition algorithms claim to perform with very high recognition accuracy in a strictly controlled environment. However, their recognition accuracies decrease significantly when the acquired images are affected by different noise factors, including motion blur, camera diffusion, head movement, gaze direction, camera angle, reflections, contrast, luminosity, eyelid and eyelash occlusions, and problems due to contraction and dilation. The main objective of this thesis is to develop a nonideal iris recognition system using active contour methods, Genetic Algorithms (GAs), a shape-guided model, Adaptive Asymmetrical Support Vector Machines (AASVMs) and Game Theory (GT). In this thesis, the proposed iris recognition method is divided into two phases: (1) cooperative iris recognition, and (2) noncooperative iris recognition. While most state-of-the-art iris recognition algorithms have focused on the preprocessing of iris images, important new directions have recently been identified in iris biometrics research, including optimal feature selection and iris pattern classification. In the first phase, we propose an iris recognition scheme based on GAs and asymmetrical SVMs. Instead of using the whole iris region, we elicit the iris information between the collarette and the pupil boundary to suppress the effects of eyelid and eyelash occlusions and to minimize the matching error. In the second phase, we process nonideal iris images that are captured in unconstrained situations and affected by several nonideal factors. The proposed noncooperative iris recognition method is further divided into three approaches. In the first approach of the second phase, we apply active contour-based curve evolution approaches to segment the inner/outer boundaries accurately from the nonideal iris images. The proposed active contour-based approaches show reasonable performance when the iris/sclera boundary is blurred. In the second approach, we describe a new iris segmentation scheme using GT to elicit the iris/pupil boundary from a nonideal iris image. We apply a parallel game-theoretic decision-making procedure by modifying Chakraborty and Duncan's algorithm to form a unified approach, which is robust to noise and poor localization and less affected by a weak iris/sclera boundary. Finally, to further improve the segmentation performance, we propose a variational model to localize the iris region belonging to the given shape space using an active contour method, a geometric shape prior and the Mumford-Shah functional. The verification and identification performance of the proposed scheme is validated using four challenging nonideal iris datasets, namely the ICE 2005, the UBIRIS Version 1, the CASIA Version 3 Interval, and the WVU Nonideal, plus a non-homogeneous combined dataset. We have conducted several sets of experiments; the proposed approach achieves a Genuine Accept Rate (GAR) of 97.34% on the combined dataset at a fixed False Accept Rate (FAR) of 0.001%, with an Equal Error Rate (EER) of 0.81%. The highest Correct Recognition Rate (CRR) obtained by the proposed iris recognition system is 97.39%.
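
    To read the closing figures: GAR at a fixed FAR is the fraction of genuine comparisons accepted when the threshold is set so that impostor acceptances stay at 0.001%, and the EER is the threshold-free summary at which false accepts and false rejects are equally likely. A hedged sketch of extracting the EER from raw match scores; the similarity convention (higher = better match) is an assumption:

        import numpy as np

        def equal_error_rate(genuine, impostor):
            """Sketch: sweep the decision threshold over all observed
            similarity scores and return the operating point where the
            false accept rate (impostors at/above threshold) equals the
            false reject rate (genuines below threshold)."""
            genuine = np.asarray(genuine, dtype=float)
            impostor = np.asarray(impostor, dtype=float)
            best_gap, eer = 2.0, None
            for t in np.sort(np.concatenate([genuine, impostor])):
                far = np.mean(impostor >= t)
                frr = np.mean(genuine < t)
                if abs(far - frr) < best_gap:
                    best_gap, eer = abs(far - frr), (far + frr) / 2
            return eer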

    Image Pre-processing Algorithms for Detection of Small/Point Airborne Targets

    The problem of detecting small/point targets in infra-red imagery is an important research area for defence applications. The challenge is to achieve high sensitivity in detecting dim, point-like small targets with low false alarms and high detection probability. To detect the target in such a scenario, pre-processing algorithms are used to predict the complex background and then subtract the predicted background from the original image. The difference image is passed to the detection algorithm to further distinguish between target and background and/or noise. The aim of the study is to fit the background in the original image as closely as possible without diminishing the target signal. A number of pre-processing algorithms (spatial, temporal and spatio-temporal) have been reported in the literature. In this paper a survey of different pre-processing algorithms is presented, and an improved hybrid morphological filter, which provides high gain in signal-to-clutter-plus-noise ratio (SCNR), is proposed for the detection of small/point targets.
    Defence Science Journal, 2009, 59(2), pp. 166-174. DOI: http://dx.doi.org/10.14429/dsj.59.150
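
    The hybrid morphological filter itself is not detailed in the abstract; the idea it improves on is the classic white top-hat: estimate the background with a grayscale opening, which erases bright structures smaller than the structuring element (i.e. point targets), then subtract it. A minimal sketch; the structuring-element size and threshold factor are illustrative assumptions:

        import numpy as np
        from scipy.ndimage import grey_opening

        def tophat_detect(frame, se_size=5, k=4.0):
            """Sketch: white top-hat pre-processing for small targets.

            Grayscale opening removes bright structures smaller than the
            structuring element, giving a background estimate; subtracting
            it leaves point-like targets, which are then thresholded at
            k standard deviations above the residual mean.
            """
            frame = frame.astype(float)
            background = grey_opening(frame, size=(se_size, se_size))
            residual = frame - background        # white top-hat
            thresh = residual.mean() + k * residual.std()
            return residual > thresh             # boolean detection mask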

    Semantic Analysis of Facial Gestures from Video Using a Bayesian Framework

    The continuous growth of video technology has resulted in increased research into the semantic analysis of video. The multimodal property of the video has made this task very complex. The objective of this thesis was to research, implement and examine the underlying methods and concepts of semantic analysis of videos and improve upon the state of the art in automated emotion recognition by using semantic knowledge in the form of Bayesian inference. The main domain of analysis is facial emotion recognition from video, including both visual and vocal aspects of facial gestures. The goal is to determine if an expression on a person's face in a sequence of video frames is happy, sad, angry, fearful or disgusted. A Bayesian network classification algorithm was designed and used to identify and understand facial expressions in video. The Bayesian network is an attractive choice because it provides a probabilistic environment and gives information about uncertainty from knowledge about the domain. This research contributes to current knowledge in two ways: by providing a novel algorithm that uses edge differences to extract keyframes in video and facial features from the keyframe, and by testing the hypothesis that combining two modalities (vision with speech) yields a better classification result (low false positive rate and high true positive rate) than either modality used alone.
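
    The keyframe algorithm is only named in the abstract; a hedged sketch of the general edge-difference idea it describes: reduce each frame to an edge map and declare a new keyframe whenever the edge content diverges from the last keyframe by more than a threshold. The OpenCV calls and the threshold value are assumptions, not the thesis's implementation.

        import cv2
        import numpy as np

        def keyframes_by_edge_difference(video_path, threshold=0.15):
            """Sketch: pick keyframes where the edge map changes sharply.

            Each frame is reduced to a Canny edge map; a frame becomes a
            keyframe when the fraction of differing edge pixels relative
            to the last keyframe exceeds `threshold`.
            """
            cap = cv2.VideoCapture(video_path)
            keyframes, last_edges, idx = [], None, 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                edges = cv2.Canny(gray, 100, 200) > 0
                if last_edges is None or np.mean(edges ^ last_edges) > threshold:
                    keyframes.append(idx)
                    last_edges = edges
                idx += 1
            cap.release()
            return keyframes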