
    Gray Image extraction using Fuzzy Logic

    Fuzzy systems provide a fundamental methodology for representing and processing uncertainty and imprecision in linguistic information. Fuzzy systems that use fuzzy rules to represent the domain knowledge of a problem are known as Fuzzy Rule Base Systems (FRBS). Image segmentation and the subsequent extraction of objects from a noise-affected background using soft computing methods are, on the other hand, relatively new and quite popular for a variety of reasons. These methods include Artificial Neural Network (ANN) models (primarily supervised in nature), Genetic Algorithm (GA) based techniques, intensity histogram based methods, etc. Providing an extraction solution that works in unsupervised mode is an even more interesting problem, and the literature suggests that effort in this direction remains quite rudimentary. In the present article, we propose a novel fuzzy rule guided technique that functions without any external intervention during execution. Experimental results suggest that this approach is efficient in comparison with other techniques extensively addressed in the literature. To justify the superior performance of the proposed technique with respect to its competitors, we rely on standard metrics such as Mean Squared Error (MSE), Mean Absolute Error (MAE), and Peak Signal to Noise Ratio (PSNR).
    Comment: 8 pages, 5 figures. Keywords: Fuzzy Rule Base, Image Extraction, Fuzzy Inference System (FIS), Membership Functions, Membership Values, Image Coding and Processing, Soft Computing, Computer Vision. Accepted and published in IEEE. arXiv admin note: text overlap with arXiv:1206.363
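
    As a rough illustration of the evaluation metrics named in this abstract, the following sketch computes MSE, MAE, and PSNR between a reference image and an extracted one; it is not the authors' code, and the `data_range` of 255 assumes 8-bit grayscale images.

```python
import numpy as np

def extraction_metrics(reference, extracted, data_range=255.0):
    """Compute MSE, MAE, and PSNR between a reference image and an extracted one.

    Generic sketch of the metrics named in the abstract, not the authors'
    implementation; `data_range` assumes 8-bit grayscale images.
    """
    ref = reference.astype(np.float64)
    ext = extracted.astype(np.float64)
    mse = np.mean((ref - ext) ** 2)       # Mean Squared Error
    mae = np.mean(np.abs(ref - ext))      # Mean Absolute Error
    psnr = np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
    return mse, mae, psnr
```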

    Optimising the signal-to-noise ratio in measurement of photon pairs with detector arrays

    To evidence multimode spatial entanglement of spontaneous down-conversion, detector arrays allow a full-field measurement without any a priori selection of the paired photons. By comparing results from the recent literature, we show that electron-multiplying CCD (EMCCD) cameras allow, in the present state of technology, the detection of quantum correlations with the best signal-to-noise ratio (SNR), while intensified CCD (ICCD) cameras allow, at best, the identification of pairs. The SNR appears to be proportional to the square root of the number of coherence cells in each image, i.e. the Schmidt number. Corrected estimates are then derived for extended coherence cells and for photon fluxes that are neither very low nor spatially stationary. Finally, experimental measurements of the SNR confirm our model.
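
    The scaling claim in this abstract, SNR proportional to the square root of the number of coherence cells (the Schmidt number), can be illustrated numerically; the prefactor and cell counts below are placeholders, not values from the paper.

```python
import numpy as np

def expected_snr(schmidt_number, k=1.0):
    """Illustrative scaling only: SNR grows as the square root of the
    number of coherence cells per image; the prefactor `k` is a placeholder."""
    return k * np.sqrt(schmidt_number)

for cells in (100, 400, 1600):
    # Quadrupling the number of coherence cells doubles the expected SNR.
    print(cells, expected_snr(cells))
```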

    A Multiscale Approach for Statistical Characterization of Functional Images

    Increasingly, scientific studies yield functional image data, in which the observed data consist of sets of curves recorded at the pixels of an image. Examples include temporal brain response intensities measured by fMRI and NMR frequency spectra measured at each pixel. This article presents a new methodology for improving the characterization of pixels in functional imaging, formulated as a spatial curve clustering problem. Our method operates on curves as the unit of analysis. It is nonparametric and involves multiple stages: (i) wavelet thresholding, aggregation, and Neyman truncation to effectively reduce dimensionality; (ii) clustering based on an extended EM algorithm; and (iii) multiscale penalized dyadic partitioning to create a spatial segmentation. We motivate the different stages with theoretical considerations and arguments, and illustrate the overall procedure on simulated and real datasets. Our method appears to offer substantial improvements over monoscale pixel-wise methods. An appendix giving some theoretical justification of the methodology, together with computer code, documentation, and a dataset, is available in the online supplements.
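
    Stage (i) of the method, wavelet thresholding of each pixel's curve, can be sketched as follows. This is a simplified illustration using PyWavelets with an assumed wavelet, decomposition level, and universal threshold rather than the paper's exact procedure; the aggregation and Neyman truncation steps are omitted.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_reduce(curve, wavelet="db4", level=3):
    """Soft-threshold the wavelet coefficients of one pixel's curve.

    Simplified stand-in for stage (i) of the abstract; the wavelet, level,
    and universal threshold are illustrative choices, not the paper's
    exact settings.
    """
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    # Median-based noise estimate from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(curve.size))
    # Keep the approximation coefficients, soft-threshold every detail level.
    return [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
```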

    Quantitative magnetic resonance image analysis via the EM algorithm with stochastic variation

    Quantitative Magnetic Resonance Imaging (qMRI) provides researchers with insight into pathological and physiological alterations of living tissue, with the help of which they hope to predict (local) therapeutic efficacy early and determine an optimal treatment schedule. However, the analysis of qMRI has been limited to ad hoc heuristic methods. Our research provides a powerful statistical framework for image analysis and sheds light on future localized adaptive treatment regimes tailored to the individual's response. We assume that, in an imperfect world, we only observe a blurred and noisy version of the underlying pathological/physiological changes via qMRI, due to measurement errors or unpredictable influences. We use a hidden Markov random field to model the spatial dependence in the data and develop a maximum likelihood approach via the Expectation-Maximization algorithm with stochastic variation. An important improvement over previous work is the assessment of variability in parameter estimation, which provides a valid basis for statistical inference. More importantly, we focus on the expected changes rather than on image segmentation. Our research shows that the approach is powerful both in simulation studies and on a real dataset, while remaining quite robust in the presence of some model assumption violations.
    Comment: Published in at http://dx.doi.org/10.1214/07-AOAS157 the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
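
    For context on the modelling terms in this abstract, the sketch below shows a heavily simplified EM-type update for a two-class Gaussian hidden Markov random field, using a mean-field spatial coupling and omitting the paper's stochastic variation and variability assessment; the class count, initialization, and `beta` weight are illustrative assumptions, not the paper's model.

```python
import numpy as np

def em_hmrf(image, n_iter=20, beta=1.0):
    """Toy EM-style segmentation of a 2-class Gaussian hidden Markov random field.

    Greatly simplified stand-in for the approach in the abstract: a mean-field
    E-step with a Potts-like smoothness weight `beta`, followed by M-step
    updates of the class means and variances.
    """
    mu = np.percentile(image, [25, 75]).astype(float)   # initial class means
    var = np.full(2, image.var() / 2 + 1e-6)             # initial class variances
    prob = np.full(image.shape + (2,), 0.5)               # soft class labels

    for _ in range(n_iter):
        # Neighbour average of the current soft labels (4-neighbourhood).
        nb = np.zeros_like(prob)
        nb[1:, :, :] += prob[:-1, :, :]
        nb[:-1, :, :] += prob[1:, :, :]
        nb[:, 1:, :] += prob[:, :-1, :]
        nb[:, :-1, :] += prob[:, 1:, :]
        # E-step: Gaussian likelihood combined with a mean-field spatial prior.
        log_p = (-0.5 * (image[..., None] - mu) ** 2 / var
                 - 0.5 * np.log(var) + beta * nb)
        log_p -= log_p.max(axis=-1, keepdims=True)
        prob = np.exp(log_p)
        prob /= prob.sum(axis=-1, keepdims=True)
        # M-step: update class means and variances from the soft labels.
        w = prob.sum(axis=(0, 1))
        mu = (prob * image[..., None]).sum(axis=(0, 1)) / w
        var = (prob * (image[..., None] - mu) ** 2).sum(axis=(0, 1)) / w + 1e-6
    return prob.argmax(axis=-1), mu, var
```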

    The Impact of Different Image Thresholding based Mammogram Image Segmentation- A Review

    Images are sampled and discretized numerical functions. The goal of digital image processing is to improve the quality of pictorial data and to facilitate automatic machine interpretation. A digital imaging system should have basic components for image acquisition, special hardware for supporting image applications, a large amount of memory for storage, and input/output devices. Image segmentation is a widely researched field, particularly in many medical applications, and still poses various challenges for researchers. Segmentation is a critical task for identifying regions suspicious of tumour in digital mammograms. Every image has different types of edges and different levels of boundaries. In image processing, the most commonly used technique for extracting objects from an image is thresholding. Thresholding is a popular tool for image segmentation because of its simplicity, especially in fields where real-time processing is required.
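
    As a concrete example of the thresholding techniques such reviews cover, here is a minimal sketch of Otsu's global threshold; it is a generic illustration, and the bin count and the greater-than convention for the foreground mask are assumptions rather than anything prescribed by the review.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Global Otsu threshold: pick the cut that maximises between-class variance.

    Generic sketch of one classical thresholding technique, not tied to any
    specific mammogram pipeline.
    """
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0

    w0 = np.cumsum(hist)                 # class-0 weight for each candidate cut
    w1 = hist.sum() - w0                 # class-1 weight
    m0 = np.cumsum(hist * centers)       # class-0 cumulative intensity mass
    mT = m0[-1]
    valid = (w0 > 0) & (w1 > 0)
    mu0 = np.where(valid, m0 / np.maximum(w0, 1), 0)
    mu1 = np.where(valid, (mT - m0) / np.maximum(w1, 1), 0)
    between = np.where(valid, w0 * w1 * (mu0 - mu1) ** 2, 0)
    return centers[np.argmax(between)]

# Usage sketch: foreground mask = image > otsu_threshold(image)
```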

    Pathwise coordinate optimization

    We consider "one-at-a-time" coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the L1-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garotte and elastic net. It turns out that coordinate-wise descent does not work in the "fused lasso," however, so we derive a generalized algorithm that yields the solution in much less time than a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems.
    Comment: Published in at http://dx.doi.org/10.1214/07-AOAS131 the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
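
    For the lasso, the one-at-a-time coordinate update the abstract refers to reduces to a univariate soft-thresholding step. The sketch below is a minimal textbook version, not the authors' implementation; the objective scaling and the fixed iteration count are assumptions.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator, the building block of the coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """One-at-a-time coordinate descent for the lasso (textbook form).

    Minimises (1/(2n))||y - X beta||^2 + lam * ||beta||_1 by cycling through
    the coordinates; a generic sketch, not the authors' code.
    """
    n, p = X.shape
    beta = np.zeros(p)
    resid = y - X @ beta
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of column j with the partial residual (coordinate j excluded).
            rho = X[:, j] @ resid + col_sq[j] * beta[j]
            new_bj = soft_threshold(rho, n * lam) / col_sq[j]
            # Update the running residual to reflect the new coefficient.
            resid += X[:, j] * (beta[j] - new_bj)
            beta[j] = new_bj
        # (A convergence check on the change in beta could stop the loop early.)
    return beta
```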