
    Divide-and-conquer framework for image restoration and enhancement

    We develop a novel divide-and-conquer framework for image restoration and enhancement based on task-driven requirements. It exploits differences in the visual importance of image contents (noise versus image, edge-based structures versus smooth areas, high-frequency versus low-frequency components) and differences in their sparse priors to improve performance. The framework follows an efficient decomposition-processing-integration pipeline. An observed image is first decomposed into subspaces according to their visual importance and prior differences. Separate models are then established to restore and enhance each subspace, and existing restoration and enhancement methods are applied to handle them effectively. Finally, a simple but effective weighted fusion scheme integrates the post-processed subspaces into the reconstructed image. Experimental results demonstrate that the proposed framework outperforms several restoration and enhancement algorithms in both subjective comparisons and objective assessments. The divide-and-conquer strategy yields performance gains in mixed Gaussian and salt-and-pepper noise removal, non-blind deconvolution, and image enhancement. Moreover, the framework extends readily to other restoration and enhancement algorithms and offers a new way to improve their performance.
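    The decomposition-processing-integration pipeline described above can be sketched minimally as follows. The box-blur low/high-frequency split and the fusion weights here are illustrative assumptions, not the subspace models or weights used in the paper:

    ```python
    import numpy as np

    def decompose(img, ksize=5):
        # Low-frequency subspace via a simple box blur; the high-frequency
        # subspace is the residual, so low + high reconstructs the input.
        pad = ksize // 2
        padded = np.pad(img, pad, mode="edge")
        low = np.zeros(img.shape, dtype=float)
        for dy in range(ksize):
            for dx in range(ksize):
                low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        low /= ksize * ksize
        return low, img - low

    def integrate(low, high, w_low=1.0, w_high=0.8):
        # Weighted fusion of the (separately post-processed) subspaces.
        return w_low * low + w_high * high
    ```

    In the full framework each subspace would be denoised or enhanced by a method suited to its content (e.g., a median filter for impulse noise in one subspace) before fusion; with unit weights and no processing, integration reproduces the input exactly.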

    Local Geometric Transformations in Image Analysis

    The characterization of images by geometric features facilitates the precise analysis of the structures found in biological micrographs such as cells, proteins, or tissues. In this thesis, we study image representations that are adapted to local geometric transformations such as rotation, translation, and scaling, with a special emphasis on wavelet representations. In the first part of the thesis, our main interest is in the analysis of directional patterns and the estimation of their location and orientation. We explore steerable representations that correspond to the notion of rotation. Unlike classical pattern-matching techniques, they require neither an a priori discretization of the angle nor matching the filter to the image at each discretized direction. Instead, it is sufficient to apply the filtering only once; the response of the filter rotated by any arbitrary angle can then be determined by a systematic, linear transformation of the initial responses. We derive the Cramér-Rao bounds for steerable filters. They allow us to select the best harmonics for the design of steerable detectors and to identify their optimal radial profile. We propose several ways to construct optimal representations and to build powerful and effective detection schemes; in particular, for junctions of coinciding branches with local orientations. The basic idea of local transformability and the general principles that we use to design steerable wavelets can be applied to other geometric transformations. Accordingly, in the second part, we extend our framework to other transformation groups, with a particular interest in scaling. To construct representations in tune with a notion of local scale, we identify the possible solutions for scalable functions and give specific criteria for their applicability to wavelet schemes. Finally, we propose discrete wavelet frames that approximate a continuous wavelet transform. Based on these results, we present a novel wavelet-based image-analysis software package that provides fast and automatic detection of circular patterns, combined with a precise estimation of their size.
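    The steering property mentioned above — filter once with a small basis, then obtain the response at any rotation angle as a linear combination — can be illustrated with first-order Gaussian derivative filters. This basis is a standard textbook example, an assumption for illustration, not the optimized harmonics and radial profiles designed in the thesis:

    ```python
    import numpy as np

    def gaussian_derivative_filters(size=9, sigma=1.5):
        # x- and y-derivatives of an isotropic Gaussian: a steerable
        # basis of order 1 (two basis filters suffice for all angles).
        r = np.arange(size) - size // 2
        x, y = np.meshgrid(r, r)
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return -x * g, -y * g  # Gx, Gy

    def steer(gx_resp, gy_resp, theta):
        # Response of the filter rotated by theta, synthesized from the
        # two basis responses -- no per-angle re-filtering is needed.
        return np.cos(theta) * gx_resp + np.sin(theta) * gy_resp
    ```

    Because filtering is linear, steering the two basis responses of an image is equivalent to filtering the image with the rotated filter; in particular, steering the x-derivative by 90 degrees recovers the y-derivative.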

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly degrade recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
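    The robustness claim can be illustrated with a minimal sketch: an EM-style location estimate under a Student's t model with fixed degrees of freedom. The function name, the fixed nu, and the toy data are illustrative assumptions, not the paper's t-mixture HMM; the point is only that the latent t weights down-weight outliers, while the sample mean (the Gaussian maximum-likelihood location) does not:

    ```python
    import numpy as np

    def t_location(x, nu=3.0, iters=50):
        # EM-style location estimate under a Student's t model (fixed nu).
        # Each point gets a latent weight w_i = (nu+1)/(nu + d_i^2/sigma^2),
        # which shrinks toward 0 for outliers far from the bulk of the data.
        mu = np.median(x)
        mad = np.median(np.abs(x - mu)) + 1e-12
        sigma2 = (1.4826 * mad) ** 2      # robust initial scale
        for _ in range(iters):
            w = (nu + 1) / (nu + (x - mu) ** 2 / sigma2)
            mu = np.sum(w * x) / np.sum(w)
            sigma2 = np.sum(w * (x - mu) ** 2) / len(x) + 1e-12
        return mu
    ```

    On a small sample with one gross outlier, the sample mean is dragged far from the inliers, while the t-based estimate stays close to them — the same effect that makes t-distributed observation models attractive inside an HMM.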