29 research outputs found

    Wavelet-Based Enhancement Technique for Visibility Improvement of Digital Images

    Image enhancement techniques for visibility improvement of color digital images based on the wavelet transform domain are investigated in this dissertation research. A novel, fast and robust wavelet-based dynamic range compression and local contrast enhancement (WDRC) algorithm has been developed to improve the visibility of digital images captured under non-uniform lighting conditions. A wavelet transform is used mainly for dimensionality reduction, so that the dynamic range compression with local contrast enhancement is applied only to the approximation coefficients, which are obtained by low-pass filtering and down-sampling the original intensity image. The normalized approximation coefficients are transformed using a hyperbolic sine curve, and contrast enhancement is realized by tuning the magnitude of each coefficient with respect to the surrounding coefficients. The transformed coefficients are then de-normalized to their original range. The detail coefficients are also modified to prevent edge deformation. The inverse wavelet transform yields an intensity image with lower dynamic range and enhanced contrast. A color restoration process based on the relationship between the spectral bands and the luminance of the original image is applied to convert the enhanced intensity image back to a color image.

    Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some pathological scenes that have very strong spectral characteristics in a single band; the linear color restoration process is the main reason for this drawback. Hence, a different approach is required for tackling the color constancy problem. The illuminant is modeled as imposing a linear shift on the image histogram, and the histogram is adjusted to discount the illuminant. The WDRC algorithm is then applied with a slight modification: instead of a linear color restoration, a non-linear color restoration process employing the spectral context relationships of the original image is used. The proposed technique solves the color constancy issue, and the overall enhancement algorithm provides attractive results, improving visibility even for scenes with near-zero visibility conditions.

    This research also presents a new wavelet-based image interpolation technique that can be used for improving the visibility of tiny features in an image. In wavelet-domain interpolation techniques, the input image is usually treated as the low-pass filtered subbands of an unknown wavelet-transformed high-resolution (HR) image, and the unknown HR image is produced by estimating the wavelet coefficients of the high-pass filtered subbands. The same approach is used here to obtain an initial estimate of the HR image by zero-filling the high-pass filtered subbands. Detail coefficients are then estimated by feeding this initial estimate to an undecimated wavelet transform (UWT). Taking an inverse transform after replacing the approximation coefficients of the UWT with the initially estimated HR image results in the final interpolated image. Experimental results demonstrated the superiority of the proposed algorithms over state-of-the-art enhancement and interpolation techniques.
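
    As a concrete illustration of the WDRC pipeline, the sketch below applies dynamic range compression and center/surround contrast tuning to the approximation subband of a single-level 2-D DWT using PyWavelets. The concave arcsinh tone curve (standing in for the dissertation's hyperbolic-sine mapping), the uniform-window surround, and the detail-gain heuristic are illustrative assumptions, not the exact parameterization of the dissertation.

```python
# Minimal WDRC-style sketch; curve and window choices are assumptions.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def wdrc(intensity, wavelet="db2", strength=2.0, window=15):
    """Dynamic range compression of a grayscale image in the wavelet domain."""
    cA, (cH, cV, cD) = pywt.dwt2(intensity.astype(float), wavelet)

    # Normalize the approximation coefficients to [0, 1].
    lo, hi = cA.min(), cA.max()
    norm = (cA - lo) / (hi - lo + 1e-12)

    # Concave tone curve (arcsinh here) lifts dark regions, compressing range.
    compressed = np.arcsinh(strength * norm) / np.arcsinh(strength)

    # Local contrast: boost each coefficient relative to its surround mean.
    surround = uniform_filter(compressed, size=window)
    enhanced = compressed * np.sqrt(compressed / (surround + 1e-12))

    # De-normalize back to the original coefficient range.
    cA_new = enhanced * (hi - lo) + lo

    # Scale details by the same per-pixel gain to limit edge deformation
    # (assumes a non-negative input intensity image).
    gain = cA_new / np.maximum(np.abs(cA), 1e-6)
    return pywt.idwt2((cA_new, (cH * gain, cV * gain, cD * gain)), wavelet)
```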

    DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online. (Comment: Accepted by TPAMI.)
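
    The sketch below shows, in PyTorch, how atrous convolution enlarges the field of view through the dilation parameter at constant parameter count, and how an ASPP-style block fuses parallel branches at several sampling rates. The channel sizes and the rate set (6, 12, 18, 24) follow a common DeepLab-v2 configuration, but this is a simplified illustration rather than the paper's full architecture.

```python
# ASPP-style block sketch; channel sizes and rates are illustrative.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Probe a feature map with parallel 3x3 convs at several atrous rates."""
    def __init__(self, in_ch=256, out_ch=256, rates=(6, 12, 18, 24)):
        super().__init__()
        # padding=rate keeps the spatial size fixed while enlarging the
        # field of view, without adding parameters or computation per tap.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )

    def forward(self, x):
        # Sum the multi-rate responses (DeepLab-v2-style fusion).
        return torch.stack([b(x) for b in self.branches]).sum(dim=0)

features = torch.randn(1, 256, 33, 33)   # dummy DCNN feature map
print(ASPP()(features).shape)            # torch.Size([1, 256, 33, 33])
```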

    Efficient Computer-Aided Techniques to Detect Glaucoma

    A survey by the World Health Organization has revealed that the retinal eye disease glaucoma is the second leading cause of blindness worldwide. It is a disease that steals the vision of the patient without any warning or symptoms. About half of the world's glaucoma patients are estimated to be in Asia. Hence, for social and economic reasons, glaucoma detection is necessary for preventing blindness and reducing the cost of surgical treatment of the disease. The objective of the chapter is to predict and detect glaucoma efficiently using image processing techniques. We have developed an efficient computer-aided glaucoma detection system that classifies a fundus image as either normal or glaucomatous based on structural features of the fundus image such as the cup-to-disc ratio (CDR), rim-to-disc ratio (RDR), superior and inferior neuroretinal rim thicknesses, vessel structure-based features, and the distribution of texture features. An automated clinical support system is developed to assist ophthalmologists in identifying persons who are at risk in the early stages of the disease, monitoring the progression of the disease, and minimizing the examination time.
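
    Of the structural features listed above, the cup-to-disc ratio is the most widely used. The sketch below shows one way a vertical CDR could be computed once binary optic-disc and optic-cup segmentation masks are available; the segmentation step itself and the 0.6 suspect cutoff are assumptions for illustration, not the chapter's exact method.

```python
# CDR from binary masks; masks and the 0.6 cutoff are assumed here.
import numpy as np

def vertical_extent(mask):
    """Height in pixels of the region marked True in a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else rows[-1] - rows[0] + 1

def cup_to_disc_ratio(cup_mask, disc_mask):
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else 0.0

# Toy example: an 8-pixel-tall disc containing a 5-pixel-tall cup.
disc = np.zeros((100, 100), bool); disc[20:28, 40:60] = True
cup = np.zeros((100, 100), bool); cup[21:26, 45:55] = True
cdr = cup_to_disc_ratio(cup, disc)
print(f"CDR = {cdr:.2f}, suspect: {cdr > 0.6}")  # illustrative cutoff
```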

    Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators

    Nonlinear decomposition schemes constitute an alternative to classical approaches for addressing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of the fusion of a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, WorldView-2, Ikonos and GeoEye-1 satellites are employed for the performance assessment, testifying to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
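
    The sketch below illustrates the basic ingredients: morphological half-gradients extracted from the panchromatic band with grayscale dilation and erosion, and injection of that detail into upsampled multispectral bands. The single global injection gain and the assumption of an exact integer resolution ratio are simplifications for illustration, not the paper's scheme.

```python
# Half-gradient pansharpening sketch; gain and layout are assumptions.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, zoom

def half_gradients(img, size=3):
    """External and internal morphological half-gradients."""
    ext = grey_dilation(img, size=(size, size)) - img     # bright-side edges
    inner = img - grey_erosion(img, size=(size, size))    # dark-side edges
    return ext, inner

def pansharpen(ms_lowres, pan, gain=1.0):
    """Fuse a low-res multispectral cube (bands, h, w) with a hi-res PAN."""
    scale = pan.shape[0] / ms_lowres.shape[1]  # assumes exact ratio
    ext, inner = half_gradients(pan.astype(float))
    detail = ext - inner  # signed edge detail extracted from PAN
    # Upsample each MS band and inject the panchromatic detail.
    return np.stack([
        zoom(band.astype(float), scale, order=3) + gain * detail
        for band in ms_lowres
    ])
```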

    The SURE-LET approach to image denoising

    Denoising is an essential step prior to any higher-level image-processing task such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent; in the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measures.

    We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical modeling of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach. While SURE allows the quantitative monitoring of the denoising quality, the flexibility and the low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations.

    We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE), which requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain.

    We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. This in vivo imaging modality often operates under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiations are well described by a Poisson process degraded (or not) by AWGN. We validate this statistical measurement model experimentally, and we assess the performance of the PURE-LET algorithms in comparison with some state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
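
    To make the driving principle concrete, the sketch below computes SURE for soft-thresholding of transform coefficients under AWGN, and solves for the optimal LET weights in closed form via a small linear system. The pair of elementary soft-thresholds is an illustrative choice; the thesis develops considerably richer interscale and multichannel thresholding functions.

```python
# SURE for soft-thresholding and a two-term LET; thresholds are illustrative.
import numpy as np

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft(y, t, sigma):
    """SURE estimate of the MSE for soft-thresholding coefficients y (1-D)."""
    n = y.size
    # Divergence of the soft-threshold map = number of kept coefficients.
    kept = np.count_nonzero(np.abs(y) > t)
    return np.sum((soft(y, t) - y) ** 2) + 2 * sigma**2 * kept - n * sigma**2

def sure_let(y, sigma, thresholds=(1.0, 2.0)):
    """LET denoiser sum_k a_k * soft(y, t_k*sigma), with a_k minimizing SURE."""
    F = np.stack([soft(y, t * sigma) for t in thresholds])      # (K, n)
    div = np.array([np.count_nonzero(np.abs(y) > t * sigma)
                    for t in thresholds], float)
    M = F @ F.T                      # Gram matrix of the elementary outputs
    c = F @ y - sigma**2 * div       # from setting the SURE gradient to zero
    a = np.linalg.solve(M, c)
    return a @ F
```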

    Detecting microcalcification clusters in digital mammograms: Study for inclusion into computer aided diagnostic prompting system

    Among the signs of breast cancer encountered in digital mammograms, radiologists point to microcalcification clusters (MCCs). Their detection is a challenging problem from both the medical and the image-processing points of view. This work presents two concurrent methods for MCC detection and studies their possible inclusion in a computer-aided diagnostic prompting system. The first considers a Wavelet Domain Hidden Markov Tree (WHMT) for modeling microcalcification edges. The model is used to differentiate between MC and non-MC edges based on weighted maximum likelihood (WML) values, and the classification of objects is carried out using spatial filters. The second method employs the SUSAN edge detector in the spatial domain for mammogram segmentation; classification of objects as calcifications is carried out using another set of spatial filters and a feedforward neural network (NN). The same distance filter is employed in both methods to find true clusters. The analysis of the two methods is performed on 54 image regions from mammograms selected randomly from the DDSM database, including benign and cancerous cases as well as cases that can be classified as hard from both the radiologists' and the computer's perspectives. WHMT/WML is able to detect 98.15% true positive (TP) MCCs at 1.85% false positives (FP), whereas the SUSAN/NN method achieves 94.44% TP at the same 1.85% FP rate. The comparison of these two methods suggests WHMT/WML for computer-aided diagnostic prompting. It also certifies the low false positive rates of both methods, meaning fewer biopsy tests per patient.
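
    The distance filter shared by both methods amounts to grouping candidate detections and keeping only groups dense enough to count as a cluster. The sketch below illustrates the idea with single-linkage clustering; the 10 mm chaining radius and three-point minimum are common rules of thumb in the MCC literature, assumed here rather than taken from the paper.

```python
# Distance-filter sketch; radius and minimum cluster size are assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def find_mcc_clusters(coords_mm, radius_mm=10.0, min_points=3):
    """Return a list of index arrays, one per accepted cluster."""
    coords_mm = np.asarray(coords_mm, float)
    if len(coords_mm) < min_points:
        return []
    # Single-linkage clustering: candidates closer than radius_mm chain up.
    labels = fcluster(linkage(coords_mm, method="single"),
                      t=radius_mm, criterion="distance")
    return [np.where(labels == k)[0]
            for k in np.unique(labels)
            if np.count_nonzero(labels == k) >= min_points]

# Toy example: five nearby candidates form one cluster; the outlier is dropped.
pts = [(10, 10), (12, 11), (11, 14), (13, 12), (9, 13), (80, 80)]
print(find_mcc_clusters(pts))  # -> [array([0, 1, 2, 3, 4])]
```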

    A Hybrid Particle Swarm Optimization with Affine Transformation Approach for Cloud Free Multi-Temporal Image Registration

    Image registration is a major part of image categorization and cluster formation in multi-temporal image processing. The images are affected by different factors such as cloud shadow, water level, and building shadows. In this paper, an enhanced registration process and a cloud removal technique are proposed for image enhancement. The Demons, Combined Registration and Segmentation (CRS), Markov Random Field (MRF) and Mutual Information (MI) based approaches suffer from high computational complexity and low edge preservation measure (QAB/F) and MI values in image registration. In order to maximize the edge preservation measure and MI with minimum computational time, this paper proposes a Particle Swarm Optimization (PSO) based affine transformation technique. The proposed technique's computation time against the number of pixels of an image is measured and compared with the existing CRS and MRF methods over a number of images. A comparative analysis of QAB/F and MI with the traditional methods of Control Point Least Squares (CP-LS), Multi-Focus Image Fusion (MFIF) and the Discrete Wavelet Transform (DWT) is presented to confirm the effective performance. The simulation results of the proposed transformation confirm effective image registration in multi-temporal image processing.
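
    The sketch below illustrates the core idea: a particle swarm searches the six affine parameters, scoring each candidate warp by the mutual information between the reference and the warped moving image. The swarm size, inertia and acceleration constants, and the histogram-based MI estimator are generic textbook choices, not the paper's tuned settings.

```python
# PSO-over-affine-parameters sketch; all hyperparameters are assumptions.
import numpy as np
from scipy.ndimage import affine_transform

def mutual_information(a, b, bins=32):
    """Histogram-based MI between two equally shaped images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

def warp(moving, params):
    """Apply a 6-parameter affine (2x2 matrix perturbation + translation)."""
    a, b, c, d, tx, ty = params
    return affine_transform(moving, [[1 + a, b], [c, 1 + d]], offset=[tx, ty])

def pso_register(ref, moving, n_particles=20, iters=40, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0, 0.05, (n_particles, 6))     # particle positions
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_f = x[0].copy(), -np.inf
    for _ in range(iters):
        for i in range(n_particles):
            f = mutual_information(ref, warp(moving, x[i]))
            if f > pbest_f[i]:
                pbest_f[i], pbest[i] = f, x[i]
            if f > gbest_f:
                gbest_f, gbest = f, x[i].copy()
        r1, r2 = rng.random((2, n_particles, 6))
        # Standard PSO update: inertia + cognitive + social terms.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
    return gbest, gbest_f
```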

    Dynamic neural network architecture inspired by the immune algorithm to predict preterm deliveries in pregnant women

    There has been some improvement in the treatment of preterm infants, which has helped to increase their chance of survival. However, the rate of premature births is still increasing globally. As a result, this group of infants is most at risk of developing severe medical conditions that can affect the respiratory, gastrointestinal, immune, central nervous, auditory and visual systems. There is a strong body of evidence emerging which suggests that the analysis of uterine electrical signals from the abdominal surface (electrohysterography, EHG) could provide a viable way of diagnosing true labour and even predicting preterm deliveries. This paper explores this idea further and presents a new dynamic self-organized network immune algorithm that classifies term and preterm records, using an open dataset containing 300 records (38 preterm and 262 term). Using this dataset, oversampling and cross-validation techniques are evaluated and benchmarked against other similar studies. The proposed approach shows an improvement on existing studies, with 89% sensitivity, 91% specificity, 90% positive predictive value, 90% negative predictive value, and an overall accuracy of 90%.
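
    For reference, the sketch below computes the metrics reported above (sensitivity, specificity, PPV, NPV, accuracy) from a binary confusion matrix under stratified cross-validation, on toy data matching the quoted 38/262 class balance. The logistic-regression stand-in and synthetic features are assumptions; the paper's dynamic neural network immune algorithm is not reproduced here.

```python
# Metric computation sketch; classifier and data are stand-in assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold

def summarize(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),   # fraction of preterm caught
        "specificity": tn / (tn + fp),   # fraction of term caught
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Toy features standing in for EHG-derived ones; 38 preterm vs 262 term.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = np.r_[np.ones(38, int), np.zeros(262, int)]
X[y == 1] += 0.8   # make the toy classes separable

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train, test in cv.split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    print(summarize(y[test], model.predict(X[test])))
```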

    Development and implementation of image fusion algorithms based on wavelets

    Image fusion is a process of blending the complementary as well as the common features of a set of images to generate a resultant image with superior information content from both subjective and objective analysis points of view. The objective of this research work is to develop novel image fusion algorithms and their applications in various fields such as crack detection, multi-spectral sensor image fusion, medical image fusion and edge detection of multi-focus images. The first part of this research work deals with a novel crack detection technique, based on Non-Destructive Testing (NDT), for cracks in walls that suppresses the diversity and complexity of wall images. It follows different edge tracking algorithms such as Hyperbolic Tangent (HBT) filtering and the Canny edge detection algorithm. The second part of this research work deals with a novel edge detection approach for multi-focused images by means of complex-wavelet-based image fusion. An illumination-invariant hyperbolic tangent (HBT) filter is applied, followed by adaptive thresholding, to extract the real edges. The shift invariance and directionally selective diagonal filtering of the Dual-Tree Complex Wavelet Transform (DT-CWT), as well as its ease of implementation, ensure robust subband fusion. It helps in avoiding the ringing artefacts that are more pronounced in the Discrete Wavelet Transform (DWT). Fusion using the DT-CWT also solves the problems of low contrast and blocking effects. In the third part, an improved DT-CWT based image fusion technique has been developed to compose a resultant image with better perceptual as well as quantitative image quality indices. A bilateral sharpness-based weighting scheme has been implemented for the high-frequency coefficients, taking both the gradient and its phase coherence into account.
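
    The sketch below shows wavelet-domain multi-focus fusion in the spirit described above, using a plain DWT from PyWavelets; the thesis works with the shift-invariant DT-CWT, which reduces the ringing artefacts a decimated DWT can introduce. The max-absolute detail rule is a common baseline, assumed here for illustration rather than the thesis's bilateral sharpness weighting.

```python
# Wavelet fusion sketch; DWT and max-abs rule stand in for DT-CWT fusion.
import numpy as np
import pywt

def fuse_pair(img_a, img_b, wavelet="db2", levels=3):
    """Fuse two registered grayscale images of the same scene."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)

    # Average the coarse approximations; pick the stronger (max-abs)
    # coefficient in every detail subband, where focus shows up as energy.
    fused = [(ca[0] + cb[0]) / 2]
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in [(ha, hb), (va, vb), (da, db)]
        ))
    return pywt.waverec2(fused, wavelet)
```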