
    Towards better performance: phase congruency based face recognition

    Phase congruency is an edge detector and a measure of significant features in an image, and it is robust against contrast and illumination variation. In this paper, two novel techniques are introduced for developing a low-cost human identification system based on face recognition. Firstly, the valuable phase congruency features, the gradient edges and their associated angles, are used separately to classify 130 subjects taken from three face databases, with the motivation of eliminating the feature extraction phase. By doing this, the complexity can be significantly reduced. Secondly, the training process is modified by a new technique, called averaging-vectors, developed to accelerate training and minimize the matching time. For broader comparison and more accurate evaluation, three competitive classifiers are considered in this work: Euclidean distance (ED), cosine distance (CD), and Manhattan distance (MD). The system performance is competitive, with experimental results showing promising recognition rates and reasonable matching times
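
    As an illustration of the three matching rules, the Python sketch below (not from the paper) compares a probe phase-congruency feature vector against per-subject average vectors, the idea behind the averaging-vectors technique; the function names and the class_means structure are assumptions made for illustration.

        import numpy as np

        def euclidean_distance(a, b):
            # ED: L2 norm of the difference between two feature vectors
            return np.linalg.norm(a - b)

        def cosine_distance(a, b):
            # CD: one minus the cosine similarity of the two vectors
            return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

        def manhattan_distance(a, b):
            # MD: L1 norm of the difference
            return np.sum(np.abs(a - b))

        def classify(probe, class_means, metric=euclidean_distance):
            # Assign the probe vector to the subject whose average training vector
            # (the "averaging-vectors" idea) is nearest under the chosen metric.
            distances = {label: metric(probe, mean) for label, mean in class_means.items()}
            return min(distances, key=distances.get)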

    Curvilinear Structure Enhancement in Biomedical Images

    Curvilinear structures appear in many different areas and at a variety of scales: axons and dendrites in the brain, blood vessels in the fundus, streets, rivers, fractures in buildings, and others. Studying curvilinear structures is therefore essential in fields such as neuroscience, biology, and cartography, and image processing plays an important role in biomedical imaging, especially in aiding diagnosis. Image enhancement is an early step of image analysis. In this thesis, I focus on the research, development, implementation, and validation of newly established 2D and 3D curvilinear structure enhancement methods. The proposed methods are based on phase congruency, mathematical morphology, and tensor representation concepts. First, I introduce a 3D contrast-independent phase congruency-based enhancement approach. The obtained results demonstrate that the proposed approach is robust against contrast variations in 3D biomedical images. Second, I propose a new mathematical morphology-based approach called the bowler-hat transform, which combines mathematical morphology with a local tensor representation of curvilinear structures in images. The bowler-hat transform is shown to give better results than comparison methods on challenging data such as retinal/fundus images, and it is particularly successful at enhancing curvilinear structures at junctions. Finally, I extend the bowler-hat approach to 3D to demonstrate its applicability and reliability in three dimensions
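
    The exact bowler-hat formulation is not reproduced here; the Python sketch below only illustrates the general idea of multi-orientation morphological filtering of bright curvilinear structures with line-shaped structuring elements. The footprint construction, parameter values, and the background-subtraction step are simplifying assumptions, not the thesis's method.

        import numpy as np
        from scipy.ndimage import grey_opening

        def line_footprint(length, angle_deg):
            # Binary footprint approximating a line of the given length and orientation
            theta = np.deg2rad(angle_deg)
            half = length // 2
            size = 2 * half + 1
            fp = np.zeros((size, size), dtype=bool)
            for t in range(-half, half + 1):
                r = int(round(half + t * np.sin(theta)))
                c = int(round(half + t * np.cos(theta)))
                fp[r, c] = True
            return fp

        def directional_opening_enhancement(image, length=15, n_angles=12):
            # Keep, at each pixel, the largest grey-level opening over all orientations:
            # elongated bright structures survive the opening along their own direction,
            # while an opening with a large square footprint estimates the background.
            image = np.asarray(image, dtype=float)
            openings = [grey_opening(image, footprint=line_footprint(length, a))
                        for a in np.linspace(0, 180, n_angles, endpoint=False)]
            directional = np.max(openings, axis=0)
            background = grey_opening(image, size=(length, length))
            return np.clip(directional - background, 0, None)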

    ENHANCEMENT ANALYSIS OF IMMUNE FLUORESCENT CELL IMAGES

    There are different patterns of immune fluorescence cells, which serve in determining different autoimmune diseases. Hence, clearly identifying the features in the image will assist in automating the classification of these patterns. This project aims to enhance the quality of HEp-2 cell images obtained from the Indirect Immune Fluorescence (IIF) test. The enhancement focuses on improving the contrast, reducing the noise, and sharpening the edges of the images. This enhancement directly affects the subsequent stages, namely pattern recognition and automatic classification. Creating an automatic pattern classification system will improve the diagnostic process for autoimmune diseases compared with handling it manually; consequently, many disadvantages of manual interpretation can be overcome, such as dependence on the level of expertise, time consumption, and proneness to mistakes. This research analyzed the performance of three enhancement approaches, namely the wavelet transform filter, the diffusion filter, and the wavelet transform filter combined with the diffusion filter. The combination of the wavelet transform filter with the diffusion filter produced better results; however, the diffusion filter produced the best results among the three enhancement approaches for the indirect immune fluorescence images. The recommendation for future work is to explore automatic determination of the noise variance in the image when the wavelet transform filter is applied
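
    As a rough illustration of chaining the two filter families, the Python sketch below applies wavelet denoising followed by a generic Perona-Malik anisotropic diffusion; it is not the project's exact filter pair, and the parameter values (kappa, gamma, iteration count) are assumptions.

        import numpy as np
        from skimage.restoration import denoise_wavelet
        from skimage.util import img_as_float

        def perona_malik(image, n_iter=20, kappa=0.1, gamma=0.2):
            # Generic Perona-Malik anisotropic diffusion: smooths noise while damping
            # diffusion across strong gradients so that edges are preserved.
            # kappa assumes intensities in [0, 1]; gamma <= 0.25 keeps the scheme stable.
            img = image.astype(float)
            for _ in range(n_iter):
                # Finite differences towards the four neighbours
                dn = np.roll(img, -1, axis=0) - img
                ds = np.roll(img, 1, axis=0) - img
                de = np.roll(img, -1, axis=1) - img
                dw = np.roll(img, 1, axis=1) - img
                # Edge-stopping conductance per direction
                cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
                ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
                img += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
            return img

        def enhance_iif_image(image):
            # Illustrative chain: wavelet denoising followed by anisotropic diffusion
            denoised = denoise_wavelet(img_as_float(image))
            return perona_malik(denoised)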

    Contour Based 3D Biological Image Reconstruction and Partial Retrieval

    Image segmentation is one of the most difficult tasks in image processing. Segmentation algorithms are generally based on searching for a region whose pixels share similar gray-level intensity and satisfy a set of defined criteria. However, the segmented region cannot be used directly for partial image retrieval. In this dissertation, a Contour Based Image Structure (CBIS) model is introduced. In this model, images are divided into several objects defined by their bounding contours. The bounding contour structure allows individual object extraction, and partial object matching and retrieval from a standard CBIS image structure. The CBIS model allows the representation of 3D objects by their bounding contours, which is suitable for parallel implementation, particularly because extracting contour features and matching them for 3D images requires heavy computation. This computational burden becomes worse for images with high resolution and large contour density. To this end, we designed two parallel algorithms: the Contour Parallelization Algorithm (CPA) and the Partial Retrieval Parallelization Algorithm (PRPA). Both algorithms considerably improve the performance of CBIS for contour shape matching as well as partial image retrieval. To improve the effectiveness of CBIS in segmenting images with inhomogeneous backgrounds, we use the phase congruency invariant features of Fourier transform components to highlight object boundaries prior to extracting their contours. The contour matching process is also improved by constructing a fuzzy contour matching system that allows unbiased matching decisions. Further improvements are achieved through a contour-tailored Fourier descriptor that provides translation and rotation invariance, and it proves suitable for general contour shape matching where translation, rotation, and scaling invariance are required. For images that are hard to classify by object contours, such as bacterial images, we define a multi-level cosine transform to extract texture features for image classification. The low-frequency Discrete Cosine Transform coefficients and Zernike moments derived from the images are fed to a Support Vector Machine (SVM) to generate multiple classifiers
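
    A minimal Python sketch of a translation-, rotation-, and scale-invariant Fourier descriptor for a closed contour follows; it is the textbook construction, not the contour-tailored descriptor of the dissertation, and the coefficient count is an arbitrary choice.

        import numpy as np

        def fourier_descriptor(contour, n_coeffs=16):
            # Invariant Fourier descriptor of a closed contour given as an (N, 2)
            # array of (x, y) points: dropping the DC term removes translation,
            # taking magnitudes removes rotation and the starting point, and
            # normalising by the first harmonic removes scale.
            z = contour[:, 0] + 1j * contour[:, 1]
            coeffs = np.fft.fft(z)
            coeffs[0] = 0.0
            mags = np.abs(coeffs)
            mags /= mags[1]
            return mags[1:n_coeffs + 1]

        def contour_distance(c1, c2, n_coeffs=16):
            # Smaller distance between descriptors means more similar shapes
            return np.linalg.norm(fourier_descriptor(c1, n_coeffs) -
                                  fourier_descriptor(c2, n_coeffs))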

    Advances in Multi-Sensor Data Fusion: Algorithms and Applications

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both the advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of “algorithm fusion” methods; (3) establishment of an automatic quality assessment scheme

    Super resolution and dynamic range enhancement of image sequences

    Camera producers try to increase the spatial resolution of a camera by reducing the size of the sites on the sensor array. However, shot noise causes the signal-to-noise ratio to drop as sensor sites get smaller. This fact motivates performing resolution enhancement in software. Super resolution (SR) image reconstruction aims to combine degraded images of a scene in order to form an image with higher resolution than all of the observations. There is a demand for high-resolution images in biomedical imaging, surveillance, aerial/satellite imaging and high-definition TV (HDTV) technology. Although extensive research has been conducted in SR, little attention has been given to increasing the resolution of images under illumination changes. In this study, a unique framework is proposed to increase the spatial resolution and dynamic range of a video sequence using Bayesian and Projection onto Convex Sets (POCS) methods. Incorporating camera response function estimation into image reconstruction allows dynamic range enhancement along with spatial resolution improvement. Photometrically varying input images complicate the process of projecting observations onto a common grid by violating brightness constancy. A contrast invariant feature transform is proposed in this thesis to register input images with high illumination variation. The proposed algorithm increases the repeatability rate of detected features among the frames of a video. The repeatability rate is increased by computing the autocorrelation matrix using the gradients of contrast-stretched input images. The presented contrast invariant feature detection improves the repeatability rate of the Harris corner detector by around 25% on average. Joint multi-frame demosaicking and resolution enhancement is also investigated in this thesis. A color constancy constraint set is devised and incorporated into the POCS framework for increasing the resolution of color-filter-array sampled images. The proposed method produces fewer demosaicking artifacts than the existing POCS method and a higher visual quality in the final image
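
    A minimal Python sketch of the described registration idea, computing the Harris autocorrelation (structure) matrix from gradients of a contrast-stretched image, is given below; the percentile-based stretching and the parameter values are assumptions, not the thesis's exact transform.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def contrast_stretch(image, low_pct=1, high_pct=99):
            # Percentile-based contrast stretching (an illustrative choice)
            lo, hi = np.percentile(image, (low_pct, high_pct))
            return np.clip((image - lo) / (hi - lo + 1e-12), 0.0, 1.0)

        def harris_response(image, sigma=1.5, k=0.04):
            # Harris corner response computed from gradients of the contrast-stretched
            # image, so the local autocorrelation matrix is less sensitive to
            # illumination differences between frames.
            img = contrast_stretch(np.asarray(image, dtype=float))
            ix, iy = sobel(img, axis=1), sobel(img, axis=0)
            ixx = gaussian_filter(ix * ix, sigma)
            iyy = gaussian_filter(iy * iy, sigma)
            ixy = gaussian_filter(ix * iy, sigma)
            det = ixx * iyy - ixy ** 2
            trace = ixx + iyy
            return det - k * trace ** 2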

    From filters to features: Scale-space analysis of edge and blur coding in human vision

    To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO
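
    A 1-D Python sketch of the two-stage scheme described above follows: a Gaussian first-derivative filter, half-wave rectification, a Gaussian second-derivative filter, rectification again, and a peak search over the scale-space response map. The scale sampling, sign convention, and scale normalisation are assumptions rather than the paper's exact model.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def halfwave(x):
            # Half-wave rectification: keep positive responses, suppress the rest
            return np.maximum(x, 0.0)

        def edge_scale_space(profile, scales=np.geomspace(1.0, 16.0, 16)):
            # Two-stage response map R(scale, position) for a 1-D luminance profile:
            # stage 1 is an odd-symmetric Gaussian first-derivative filter, stage 2 a
            # Gaussian second-derivative filter, each followed by rectification.
            profile = np.asarray(profile, dtype=float)
            responses = []
            for s in scales:
                stage1 = halfwave(gaussian_filter1d(profile, s, order=1))
                stage2 = halfwave(-gaussian_filter1d(stage1, s, order=2))
                responses.append(s * stage2)  # crude scale normalisation (assumption)
            return np.array(responses)

        def find_edge(profile):
            # Location (sample index) and scale index of the strongest response peak,
            # identifying the edge position and blur
            R = edge_scale_space(profile)
            scale_idx, pos = np.unravel_index(np.argmax(R), R.shape)
            return pos, scale_idx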