
    An Adaptive Semi-Parametric and Context-Based Approach to Unsupervised Change Detection in Multitemporal Remote-Sensing Images

    In this paper, a novel automatic approach to the unsupervised identification of changes in multitemporal remote-sensing images is proposed. Unlike classical approaches, it formulates the unsupervised change-detection problem in terms of Bayesian decision theory. In this context, an adaptive semi-parametric technique for the unsupervised estimation of the statistical terms associated with the gray levels of changed and unchanged pixels in a difference image is presented. This technique exploits the effectiveness of two theoretically well-founded estimation procedures: the reduced Parzen estimate (RPE) procedure and the expectation-maximization (EM) algorithm. A change-detection map is then generated from the resulting estimates together with a Markov random field (MRF) model of the spatial-contextual information contained in the multitemporal images. The adaptive semi-parametric nature of the proposed technique allows its application to different kinds of remote-sensing images. Experimental results, obtained on two sets of multitemporal remote-sensing images acquired by two different sensors, confirm the validity of the proposed approach.
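
    To make the estimation step concrete, here is a minimal sketch that fits a two-component mixture to the difference-image gray levels with plain EM. It assumes Gaussian class-conditional densities for simplicity, whereas the paper's semi-parametric scheme combines the reduced Parzen estimate with EM; all names are illustrative.

    ```python
    import numpy as np

    def em_two_class(diff, iters=50):
        """Fit a two-component 1-D Gaussian mixture to difference-image gray levels."""
        x = diff.ravel().astype(float)
        mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])  # rough init
        var = np.array([x.var(), x.var()])
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: posterior probability of "unchanged"/"changed" per pixel.
            lik = np.stack([pi[k] / np.sqrt(2 * np.pi * var[k])
                            * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                            for k in range(2)])
            resp = lik / lik.sum(axis=0, keepdims=True)
            # M-step: re-estimate means, variances, and class priors.
            nk = resp.sum(axis=1)
            mu = (resp * x).sum(axis=1) / nk
            var = (resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk
            pi = nk / x.size
        return mu, var, pi
    ```

    A per-pixel Bayesian decision (assign each pixel to the class with the larger posterior) yields an initial change map, which the MRF model then regularizes spatially.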

    A survey of outlier detection methodologies

    Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
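
    As a concrete instance of the classic statistical techniques such surveys cover (not an example taken from the paper itself), the sketch below flags multivariate observations whose squared Mahalanobis distance from the sample mean exceeds a chi-square cutoff; the function name and significance level are our own choices.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def mahalanobis_outliers(X, alpha=0.01):
        """Return a boolean mask of rows of X flagged as outliers."""
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        # Squared Mahalanobis distance of every observation from the mean.
        d2 = np.einsum('ij,jk,ik->i', X - mu, cov_inv, X - mu)
        # Under normality, d2 is approximately chi-square with p degrees of freedom.
        return d2 > chi2.ppf(1 - alpha, df=X.shape[1])
    ```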

    Edges Detection Based On Renyi Entropy with Split/Merge

    Most classical methods for edge detection are based on the first- and second-order derivatives of the gray levels of the pixels of the original image. These processes incur exponentially increasing computational time, especially for large images, and therefore require more time for processing. This paper presents a new algorithm that combines the Rényi entropy and the Shannon entropy for edge detection using a split-and-merge technique. The objective is to find the best edge representation and decrease the computation time. A set of experiments in the domain of edge detection is presented. The system yields edge-detection performance comparable to classic methods such as Canny, LOG, and Sobel. The experimental results show that this method outperforms the LOG and Sobel methods, and that it outperforms all three methods in CPU time. Another benefit comes from the easy implementation of this method. Keywords: Rényi Entropy, Information content, Edge detection, Thresholding
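
    The entropic-thresholding ingredient can be sketched as follows: choose the gray-level threshold that maximizes the sum of the Rényi entropies of the two histogram halves. The order alpha and the 8-bit histogram are illustrative assumptions; the paper's split/merge machinery is not reproduced here.

    ```python
    import numpy as np

    def renyi_entropy(p, alpha):
        """Rényi entropy H_alpha = log(sum p_i^alpha) / (1 - alpha), alpha != 1."""
        p = p[p > 0]
        return np.log((p ** alpha).sum()) / (1.0 - alpha)

    def renyi_threshold(image, alpha=0.5):
        """Pick the threshold maximizing H(background) + H(foreground); 8-bit input."""
        hist = np.bincount(image.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        best_t, best_h = 0, -np.inf
        for t in range(1, 255):
            pa, pb = p[:t].sum(), p[t:].sum()
            if pa == 0 or pb == 0:
                continue
            h = renyi_entropy(p[:t] / pa, alpha) + renyi_entropy(p[t:] / pb, alpha)
            if h > best_h:
                best_t, best_h = t, h
        return best_t  # pixels crossing this threshold mark candidate edges
    ```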

    Statistical Model of Shape Moments with Active Contour Evolution for Shape Detection and Segmentation

    This paper describes a novel method for shape representation and robust image segmentation. The proposed method combines two well-known methodologies, namely statistical shape models and active contours implemented in a level-set framework. Shape detection is achieved by maximizing a posterior function that consists of a prior shape probability model and an image likelihood function conditioned on shapes. The statistical shape model is built through a learning process based on nonparametric probability estimation in a PCA-reduced feature space formed by the Legendre moments of training silhouette images. A greedy strategy optimizes the proposed cost function by iteratively evolving an implicit active contour in the image space and subsequently performing constrained optimization of the evolved shape in the reduced shape feature space. Experimental results presented in the paper demonstrate that the proposed method, contrary to many other active contour segmentation methods, is highly resilient to the severe random and structural noise that can be present in the data.
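
    The shape-space construction can be sketched as below, assuming each training silhouette has already been encoded as a vector of Legendre moments (the moment computation itself is omitted, and all names are illustrative). PCA yields the reduced feature space, and projection onto it is the constraint applied to the evolved shape.

    ```python
    import numpy as np

    def build_shape_space(M, n_components=10):
        """M: (n_shapes, n_moments) matrix of Legendre-moment vectors."""
        mean = M.mean(axis=0)
        U, s, Vt = np.linalg.svd(M - mean, full_matrices=False)
        basis = Vt[:n_components]            # principal modes of shape variation
        return mean, basis

    def project(moments, mean, basis):
        """Constrain an evolved shape by projecting onto the learned subspace."""
        coeffs = basis @ (moments - mean)
        return mean + basis.T @ coeffs
    ```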

    Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    Characterization of tissues such as brain tissue using magnetic resonance (MR) images, and colorization of the gray-scale image, have been reported in the literature, along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates voxel classification by matching the luminance of voxels of the source MR image and the provided color image, measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new automatic centroid-selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information through the colorization approach described.
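
    A minimal sketch of the hard-segmentation idea, assuming a plain k-means-style clustering of T2 intensities into three tissue classes; the paper's automatic centroid selection and anatomical priors are not reproduced here.

    ```python
    import numpy as np

    def kmeans_3class(image, iters=20):
        """Cluster voxel intensities into three classes by 1-D k-means."""
        x = image.ravel().astype(float)
        c = np.percentile(x, [25, 50, 75])   # spread initial centroids
        for _ in range(iters):
            labels = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
            c = np.array([x[labels == k].mean() if np.any(labels == k) else c[k]
                          for k in range(3)])
        # On T2, CSF is typically brightest and WM darkest; map the sorted
        # centroids to WM/GM/CSF accordingly.
        return labels.reshape(image.shape), c
    ```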

    Image denoising with unsupervised, information-theoretic, adaptive filtering

    The problem of denoising images is one of the most important and widely studied problems in image processing and computer vision. Various image filtering strategies based on linear systems, statistics, information theory, and variational calculus have been effective, but they invariably make strong assumptions about the properties of the signal and/or noise. They therefore lack the generality to be easily applied to new applications or diverse image collections. This paper describes a novel unsupervised, information-theoretic, adaptive filter (UINTA) that improves the predictability of pixel intensities from their neighborhoods by decreasing their joint entropy. In this way, UINTA automatically discovers the statistical properties of the signal and can thereby reduce noise in a wide spectrum of images and applications. The paper describes the formulation required to minimize the joint entropy measure, presents several important practical considerations in estimating image-region statistics, and then presents a series of results and comparisons on both real and synthetic data.
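
    The quantity UINTA drives down can be sketched as a leave-one-out Parzen-window estimate of the entropy of image-neighborhood samples. Patch size, kernel bandwidth, and the random subsample below are illustrative choices, not the paper's.

    ```python
    import numpy as np
    from scipy.special import logsumexp

    def patch_samples(image, r=1):
        """Collect all (2r+1)x(2r+1) neighborhoods as row vectors."""
        H, W = image.shape
        k = 2 * r + 1
        rows = [image[i:i + k, j:j + k].ravel()
                for i in range(H - k + 1) for j in range(W - k + 1)]
        return np.array(rows, dtype=float)

    def parzen_entropy(X, sigma=10.0, m=500, rng=np.random.default_rng(0)):
        """H ~ -mean_i log p_hat(x_i), with p_hat a leave-one-out Gaussian KDE."""
        S = X[rng.choice(len(X), size=min(m, len(X)), replace=False)]
        d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)  # pairwise distances
        np.fill_diagonal(d2, np.inf)                         # leave-one-out
        dim = S.shape[1]
        log_k = -d2 / (2 * sigma**2) - 0.5 * dim * np.log(2 * np.pi * sigma**2)
        return -(logsumexp(log_k, axis=1) - np.log(len(S) - 1)).mean()
    ```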

    Nonparametric neighborhood statistics for MRI denoising

    This paper presents a novel method for denoising MR images that relies on optimal estimation, combining a likelihood model with an adaptive image prior. The method models images as random fields and exploits the properties of independent Rician noise to learn the higher-order statistics of image neighborhoods from corrupted input data. It uses these statistics as priors within a Bayesian denoising framework. The paper presents an information-theoretic method for characterizing neighborhood structure using nonparametric density estimation. The formulation generalizes easily to simultaneous denoising of multimodal MRI, exploiting the relationships between modalities to further enhance performance. The method relies on the information content of the input data for noise estimation and for setting important parameters, and therefore does not require significant parameter tuning. Qualitative and quantitative results on real, simulated, and multimodal data, including comparisons with other approaches, demonstrate the effectiveness of the method.
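
    The likelihood model rests on the standard density of magnitude MR data corrupted by Rician noise; the sketch below evaluates it in a numerically stable form (the prior learning and Bayesian machinery are omitted).

    ```python
    import numpy as np
    from scipy.special import i0e

    def rician_pdf(m, a, sigma):
        """p(m | a, sigma) = (m/sigma^2) exp(-(m^2+a^2)/(2 sigma^2)) I0(m a / sigma^2)."""
        s2 = sigma ** 2
        x = m * a / s2
        # i0e(x) = exp(-x) * I0(x), which avoids overflow in the Bessel factor.
        return (m / s2) * np.exp(-(m - a) ** 2 / (2 * s2)) * i0e(x)
    ```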

    Shape-driven segmentation of the arterial wall in intravascular ultrasound images

    Segmentation of arterial wall boundaries from intravascular ultrasound images is an important problem for many applications: the study of plaque characteristics, mechanical properties of the arterial wall, its 3D reconstruction, and measurements such as lumen size, lumen radius, and wall radius. We present a shape-driven approach to segmentation of the arterial wall from intravascular ultrasound images in the rectangular domain. In a shape space properly built from training data, we constrain the lumen and media-adventitia contours to a smooth, closed geometry, which increases the segmentation quality without any tradeoff against a regularizer term. In addition to the shape prior, we utilize an intensity prior through a non-parametric probability-density-based image energy, using global image measurements rather than the pointwise measurements of previous methods. Furthermore, a detection step addresses the challenges that side branches and calcifications introduce to the segmentation process. Together these features greatly enhance our segmentation method, and tests of our algorithm on a large dataset demonstrate the effectiveness of our approach.
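
    Working "in the rectangular domain" can be sketched as a polar resampling of the circular IVUS frame, so that the lumen and media-adventitia borders become roughly horizontal curves; the catheter center and sampling grid below are assumptions, not the paper's exact preprocessing.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def to_rect_domain(frame, n_theta=360, n_rho=256):
        """Resample a circular IVUS frame onto a (radius x angle) grid."""
        cy, cx = np.array(frame.shape) / 2.0          # assume a centered catheter
        theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        rho = np.linspace(0, min(cy, cx) - 1, n_rho)
        T, R = np.meshgrid(theta, rho)
        ys, xs = cy + R * np.sin(T), cx + R * np.cos(T)
        # Bilinear interpolation; rows index radius, columns index angle.
        return map_coordinates(frame, [ys, xs], order=1)
    ```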

    Coronary Artery Calcium Quantification in Contrast-enhanced Computed Tomography Angiography

    Coronary arteries are the blood vessels supplying oxygen-rich blood to the heart muscles. Coronary artery calcium (CAC), the total amount of calcium deposited in these arteries, indicates the presence or the future risk of coronary artery disease. CAC is quantified using a computed tomography (CT) scan, which uses the attenuation of x-rays by different tissues in the body to generate three-dimensional images. Calcium can be spotted easily in CT images because of its higher opacity to x-rays compared with the surrounding tissue. However, the arteries themselves cannot be identified easily in CT images. Therefore, a second scan is done after injecting the patient with an x-ray-opaque dye known as contrast material, which makes the chambers of the heart and the coronary arteries visible in the CT scan. This procedure, known as computed tomography angiography (CTA), is performed to assess the morphology of the arteries in order to rule out any blockage. The CT scan done without contrast material (non-contrast-enhanced CT) could be eliminated if calcium could be quantified accurately from the CTA images. However, identifying calcium in CTA images is difficult because of the proximity of the calcium to the contrast material and their overlapping intensity ranges. In this dissertation, we first compare calcium quantification using a state-of-the-art non-contrast-enhanced CT scan method with conventional methods, suggesting optimal quantification parameters. We then develop methods to accurately quantify calcium from the CTA images. These methods include novel algorithms for extracting the centerline of an artery, calculating the calcium threshold adaptively based on the intensity of contrast along the artery, calculating the amount of calcium in the mixed intensity range, and segmenting the artery and the outer wall. The accuracy of calcium quantification from CTA using our methods is higher than that of non-contrast-enhanced CT, potentially eliminating the need for the non-contrast-enhanced CT scan. The implications are that the total time required for the CT procedure and the patient's exposure to x-ray radiation are reduced.
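
    One of these ideas, the contrast-adaptive calcium threshold, can be sketched as follows; the specific rule (local lumen mean plus k standard deviations in a sliding window along the centerline) is our assumption for illustration, not the dissertation's exact formula.

    ```python
    import numpy as np

    def adaptive_calcium_threshold(lumen_hu, k=3.0, window=11):
        """lumen_hu: contrast intensity (HU) sampled at each centerline point."""
        hu = np.asarray(lumen_hu, dtype=float)
        half = window // 2
        thr = np.empty_like(hu)
        for i in range(len(hu)):
            w = hu[max(0, i - half): i + half + 1]   # local lumen statistics
            thr[i] = w.mean() + k * w.std()          # voxels above count as calcium
        return thr
    ```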