
    Identification of body fat tissues in MRI data

    Get PDF
    In recent years non-invasive medical diagnostic techniques have been used widely in medical investigations. Among the various imaging modalities available, Magnetic Resonance Imaging is very attractive as it produces multi-slice images in which the contrast between various types of body tissue, such as muscle, ligaments and fat, is well defined. The aim of this paper is to describe the implementation of an unsupervised image analysis algorithm able to identify body fat tissues from a sequence of MR images encoded in DICOM format. The developed algorithm consists of three main steps. The first step pre-processes the MR images in order to reduce the level of noise. The second step extracts the image areas representing fat tissues by using an unsupervised clustering algorithm. Finally, image refinements are applied to reclassify the pixels adjacent to the initial fat estimate and to eliminate outliers. The experimental data indicate that the proposed implementation returns accurate results and, furthermore, is robust to noise and to greyscale inhomogeneity.
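
    The abstract only names the three steps, so here is a minimal per-slice sketch of such a pipeline in Python. The choice of a median filter for denoising, k-means for the clustering, the assumption that fat is the brightest cluster, and the morphological clean-up are all illustrative assumptions, not the paper's method.

        import numpy as np
        from scipy import ndimage
        from sklearn.cluster import KMeans

        def segment_fat(slice_img, n_clusters=3):
            """Toy fat-tissue segmentation for one MR slice (2-D array)."""
            # Step 1: noise reduction (median filter as a stand-in denoiser).
            denoised = ndimage.median_filter(slice_img.astype(float), size=3)

            # Step 2: unsupervised clustering of pixel intensities; fat is
            # assumed to be the brightest cluster (an assumption of this sketch).
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
                denoised.reshape(-1, 1)).reshape(denoised.shape)
            means = [denoised[labels == k].mean() for k in range(n_clusters)]
            fat = labels == int(np.argmax(means))

            # Step 3: refinement -- remove small outlier components and smooth
            # the boundary of the initial fat estimate.
            fat = ndimage.binary_opening(fat, iterations=1)
            fat = ndimage.binary_closing(fat, iterations=1)
            return fat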

    Continuity of the maximum-entropy inference: Convex geometry and numerical ranges approach

    Full text link
    We study the continuity of an abstract generalization of the maximum-entropy inference - a maximizer. It is defined as a right-inverse of a linear map restricted to a convex body which uniquely maximizes on each fiber of the linear map a continuous function on the convex body. Using convex geometry we prove, amongst others, the existence of discontinuities of the maximizer at limits of extremal points not being extremal points themselves and apply the result to quantum correlations. Further, we use numerical range methods in the case of quantum inference which refers to two observables. One result is a complete characterization of points of discontinuity for $3\times 3$ matrices. Comment: 27 pages.
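
    In the abstract's terms (with symbols chosen here for illustration: $K$ the convex body, $\pi$ the linear map, $H$ the continuous function), the maximizer is the fiber-wise argmax

        f(b) \;=\; \operatorname*{argmax}\,\{\, H(x) : x \in K,\ \pi(x) = b \,\}, \qquad b \in \pi(K),

    which is a right-inverse of $\pi|_K$ whenever the maximum on each fiber is attained at a unique point; in the quantum-inference setting, $H$ plays the role of the von Neumann entropy and $\pi$ maps a state to the expectation values of the fixed observables.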

    Gravitational Lensing by Cold Dark Matter Catastrophes

    Full text link
    Intrinsically cold particle dark matter inevitably creates halos with sharp discontinuities in projected surface density caused by the projection of fold catastrophes onto the sky. In principle, these imperfections can be detected and measured with gravitational lensing through discontinuities in image magnification and image structure. Lens solutions are discussed for the most common universal classes of discontinuities. Edges caused by cold particles such as condensed axions and thermal WIMPs are very sharp, respectively about $10^{-12}$ and $10^{-7}$ of the halo scale. Their structure can be resolved by stellar and quasi-stellar sources which show sudden changes in brightness or even sudden disappearances (sometimes within hours) as edges are crossed. Images of extended objects such as edge-on galaxies or jets can show sudden bends at an edge, or stretched, parity-inverted reflection symmetry about a sharp line. Observational strategies and prospects are briefly discussed. Comment: 9 pages, AASTeX. Final version, with explanatory figure added, to be published in the Astrophysical Journal.

    Does median filtering truly preserve edges better than linear filtering?

    Full text link
    Image processing researchers commonly assert that "median filtering is better than linear filtering for removing noise in the presence of edges." Using a straightforward large-$n$ decision-theory framework, this folk-theorem is seen to be false in general. We show that median filtering and linear filtering have similar asymptotic worst-case mean-squared error (MSE) when the signal-to-noise ratio (SNR) is of order 1, which corresponds to the case of constant per-pixel noise level in a digital signal. To see dramatic benefits of median smoothing in an asymptotic setting, the per-pixel noise level should tend to zero (i.e., SNR should grow very large). We show that a two-stage median filtering using two very different window widths can dramatically outperform traditional linear and median filtering in settings where the underlying object has edges. In this two-stage procedure, the first pass, at a fine scale, aims at increasing the SNR. The second pass, at a coarser scale, correctly exploits the nonlinearity of the median. Image processing methods based on nonlinear partial differential equations (PDEs) are often said to improve on linear filtering in the presence of edges. Such methods seem difficult to analyze rigorously in a decision-theoretic framework. A popular example is mean curvature motion (MCM), which is formally a kind of iterated median filtering. Our results on iterated median filtering suggest that some PDE-based methods are candidates to rigorously outperform linear filtering in an asymptotic framework. Comment: Published at http://dx.doi.org/10.1214/08-AOS604 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
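
    The two-stage procedure is described only at a high level, so here is a minimal 1-D sketch in Python; the concrete window widths, test signal, and noise level are illustrative choices, not values from the paper.

        import numpy as np
        from scipy.ndimage import median_filter

        def two_stage_median(y, fine_width=3, coarse_width=21):
            """Two-pass median smoothing of a noisy signal with edges.

            First pass (fine scale): a narrow window mainly suppresses noise,
            i.e. it raises the SNR while hardly moving edges. Second pass
            (coarse scale): a wide window exploits the nonlinearity of the
            median to smooth flat regions without blurring the edges."""
            return median_filter(median_filter(y, size=fine_width),
                                 size=coarse_width)

        # Example: a noisy step edge.
        x = np.linspace(0.0, 1.0, 500)
        truth = (x > 0.5).astype(float)
        noisy = truth + 0.2 * np.random.randn(x.size)
        smoothed = two_stage_median(noisy)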

    Robust Phase Unwrapping by Convex Optimization

    Full text link
    The 2-D phase unwrapping problem aims at retrieving a "phase" image from its modulo $2\pi$ observations. Many applications, such as interferometry or synthetic aperture radar imaging, are concerned by this problem since they proceed by recording complex or modulated data from which a "wrapped" phase is extracted. Although 1-D phase unwrapping is trivial, a challenge remains in higher dimensions to overcome two common problems: noise and discontinuities in the true phase image. In contrast to state-of-the-art techniques, this work aims to simultaneously unwrap and denoise the phase image. We propose a robust convex optimization approach that enforces data fidelity constraints expressed in the corrupted phase derivative domain while promoting a sparse phase prior. The resulting optimization problem is solved by the Chambolle-Pock primal-dual scheme. We show that under different observation noise levels, our approach compares favorably to those that perform the unwrapping and denoising in two separate steps. Comment: 6 pages, 4 figures, submitted in ICIP1
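
    For orientation, here is the trivial 1-D case mentioned in the abstract, written in Python: in the noiseless setting the unwrapped phase is recovered by integrating the wrapped finite differences, which is exactly the phase-derivative domain in which the paper's 2-D data-fidelity constraints are expressed. This is only the 1-D ingredient, not the paper's convex program.

        import numpy as np

        def wrap(phi):
            """Map phase values into (-pi, pi]."""
            return np.angle(np.exp(1j * phi))

        def unwrap_1d(psi):
            """Trivial 1-D unwrapping: cumulatively sum the wrapped
            differences of the wrapped phase psi (valid when the true
            phase changes by less than pi between samples)."""
            d = wrap(np.diff(psi))        # wrapped derivative
            return psi[0] + np.concatenate(([0.0], np.cumsum(d)))

    The 2-D method of the paper instead enforces fidelity to such wrapped differences along both image axes in the presence of noise, while a sparsity-promoting prior regularizes the solution, and the resulting convex problem is solved with the Chambolle-Pock primal-dual scheme.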

    Local Stereo Matching Using Adaptive Local Segmentation

    Get PDF
    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The preprocessing step smoothes low-textured areas and sharpens texture edges, whereas the postprocessing step detects and recovers occluded and unreliable disparities. The algorithm achieves high stereo reconstruction quality in regions with uniform intensities as well as in textured regions. The algorithm is robust against local radiometric differences, and successfully recovers disparities around object edges, disparities of thin objects, and disparities in occluded regions. Moreover, our algorithm intrinsically prevents errors caused by occlusion from propagating into non-occluded regions. It has only a small number of parameters. The performance of our algorithm is evaluated on the Middlebury test bed stereo images. It ranks highly on the evaluation list, outperforming many local and global stereo algorithms that use color images. Among the local algorithms relying on the fronto-parallel assumption, our algorithm is the best-ranked algorithm. We also demonstrate that our algorithm works well on practical examples, such as disparity estimation for a tomato seedling and 3D reconstruction of a face.
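
    The abstract does not give the exact local-variation measure or threshold rule, so the following Python sketch only illustrates the general idea: score each pixel by its intensity variation over the 4-neighborhood and segment with a data-dependent (dynamic) threshold. Both the measure and the threshold rule below are assumptions for illustration, not the paper's definitions.

        import numpy as np

        def local_variation_4n(img):
            """Mean absolute intensity difference between each pixel and its
            4-neighbourhood (a stand-in for the paper's local variation)."""
            img = img.astype(float)
            pad = np.pad(img, 1, mode='edge')
            diffs = [np.abs(img - pad[1:-1, :-2]),   # left neighbour
                     np.abs(img - pad[1:-1, 2:]),    # right neighbour
                     np.abs(img - pad[:-2, 1:-1]),   # upper neighbour
                     np.abs(img - pad[2:, 1:-1])]    # lower neighbour
            return np.mean(diffs, axis=0)

        def textured_mask(img, k=1.0):
            """Dynamic threshold: flag a pixel as textured when its local
            variation exceeds k times the image-wide median variation
            (k is an illustrative choice, not the paper's parameter)."""
            v = local_variation_4n(img)
            return v > k * np.median(v)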