400 research outputs found

    Color and Texture Feature Extraction Using Gabor Filter - Local Binary Patterns for Image Segmentation with Fuzzy C-Means

    Image segmentation is fundamental to image analysis and recognition: it divides an image into regions of homogeneous pixels, classifying them by features such as color and texture. Color carries far more information than grayscale or binary (black-and-white) images, since human vision distinguishes thousands of color combinations and intensities. A clustering method that is easy to apply to segmentation is the Fuzzy C-Means (FCM) algorithm. The features extracted from the image are color and texture, using the L*a*b* color space for color and Gabor filters for texture. However, Gabor filters perform poorly when the segmented image contains many micro-textures, which reduces segmentation accuracy. To improve the accuracy of the extracted micro-textures, the Local Binary Patterns (LBP) method is used as support. In experiments, using color features rather than grayscale raised the accuracy rate by 16.54% for Gabor texture filters and 14.57% for LBP filters. The LBP texture features also improved segmentation accuracy, although only slightly: by 2% on grayscale and 0.05% in the L*a*b* color space.
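    As an illustration of the two ingredients above, the sketch below pairs a basic 8-neighbour LBP code with a minimal Fuzzy C-Means loop in NumPy (a hedged sketch only; the neighbour ordering, fuzzifier m = 2, iteration count and array shapes are assumptions, not the authors' implementation):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour Local Binary Patterns for a 2-D grayscale array.
    Border pixels are skipped; each interior pixel gets an 8-bit code."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # clockwise neighbours starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def fcm(X, k, m=2.0, iters=50, seed=0):
    """Minimal Fuzzy C-Means on feature vectors X of shape (n, d).
    Returns the membership matrix U (n, k) and cluster centres C (k, d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(0)[:, None]          # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))           # standard FCM membership update
        U /= U.sum(1, keepdims=True)
    return U, C
```

In a full pipeline, the per-pixel LBP codes (and Gabor responses) would be stacked with the L*a*b* color values into feature vectors before being passed to `fcm`.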

    Fully 3D Implementation of the End-to-end Deep Image Prior-based PET Image Reconstruction Using Block Iterative Algorithm

    Deep image prior (DIP) has recently attracted attention for unsupervised positron emission tomography (PET) image reconstruction, which requires no prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into its loss function. To make fully 3D PET reconstruction practical, which previously could not be performed owing to graphics processing unit memory limitations, we modify the DIP optimization into a block-iterative scheme that sequentially learns an ordered sequence of block sinograms. Furthermore, a relative difference penalty (RDP) term is added to the loss function to improve quantitative PET image accuracy. We evaluated the proposed method using a Monte Carlo simulation with [18F]FDG PET data of a human brain and a preclinical study on monkey-brain [18F]FDG PET data, comparing it with maximum-likelihood expectation maximization (MLEM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that the proposed method improved PET image quality by reducing statistical noise while preserving the contrast of brain structures and an inserted tumor better than the other algorithms. In the preclinical experiment, the proposed method recovered finer structures with better contrast. These results indicate that the method can produce high-quality images without any prior training dataset, making it a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
    Comment: 9 pages, 10 figures
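    For reference, the classical MLEM baseline that the proposed method is compared against can be sketched in a few lines of NumPy (a toy, matrix-based illustration only; real PET systems apply projector operators rather than an explicit system matrix, and the shapes and iteration count here are assumptions):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM reconstruction for y ~ Poisson(A @ x).
    A: (n_bins, n_voxels) system matrix; y: measured sinogram counts.
    Each iteration applies the multiplicative MLEM update
        x <- x * A.T(y / A x) / A.T(1)."""
    x = np.ones(A.shape[1])                 # uniform nonnegative start image
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, 1e-12) # measured / estimated sinogram
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The end-to-end DIP approach in the abstract replaces the explicit image update with a network whose output is forward-projected inside the loss; the block-iterative modification then cycles such updates over subsets of sinogram blocks to fit within GPU memory.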

    Context-based segmentation of image sequences


    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing and medical imaging has been significantly augmented in recent years by accelerated advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, its primary goal being to expedite and improve the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for meaningfully segregating 2-D/3-D image data across multiple modalities (color, remote-sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, the framework exploits information obtained from edges inherent in the data: using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition an initial portion of the input image content. Pixels with higher gradient densities are then included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the gradient, texture and intensity information, together with the initial partition map, drive a multivariate refinement procedure that fuses groups with similar characteristics into the final output segmentation. Experimental results, compared against published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose a multi-resolution extension of the methodology, demonstrated on color images.
    Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
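    The first stage described above, grouping edge-free pixels via a vector gradient, can be illustrated with a simple multi-channel gradient magnitude in NumPy (a stand-in sketch only; the actual vector gradient detector, threshold value and array layout used in the work are assumptions here):

```python
import numpy as np

def vector_gradient(img):
    """Combined multi-channel gradient magnitude for an (H, W, C) image:
    the per-channel squared gradients are summed and square-rooted, a simple
    stand-in for a vector (e.g. Di Zenzo-style) gradient detector."""
    img = np.asarray(img, dtype=float)
    g2 = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[..., c])   # finite-difference gradients
        g2 += gx**2 + gy**2
    return np.sqrt(g2)

def seed_regions(img, thresh):
    """Mark low-gradient ('edge-free') pixels as initial seeds (True);
    high-gradient pixels are left for later dynamic segment generation."""
    return vector_gradient(img) < thresh
```

The seed mask plays the role of the initial partition; in the full framework, the remaining high-gradient pixels are absorbed by dynamically generated segments, and texture and intensity cues then drive the multivariate region-merging refinement.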

    New History Matching Methodology for Two Phase Reservoir Using Expectation-Maximization (EM) Algorithm

    The Expectation-Maximization (EM) algorithm is a well-known method for maximum-likelihood estimation that can be used to infer missing values in a dataset. It has been used extensively in electrical and electronics engineering, as well as in the biometrics industry for image processing, but it has seen very little use in the oil and gas industry, especially for history matching. History matching is the non-unique matching of the oil rate, water rate, gas rate and bottom-hole pressure data of a producing well (the producer), as well as the bottom-hole pressure and liquid injection of an injecting well (the injector), by adjusting reservoir parameters such as permeability, porosity, Corey exponents, the compressibility factor, and other pertinent reservoir parameters. The EM algorithm is a statistical method that guarantees convergence and is particularly useful when the likelihood function belongs to the exponential family. On the other hand, it can be slow to converge, and it may converge to a local optimum of the observed-data log-likelihood, depending on the starting values. In this research, our objective is to develop an algorithm that successfully matches historical production data given sparse field data. Our approach updates the permeability multiplier, and thereby the permeability, of each unobserved grid cell that contributes to production at one or more producing wells; the EM algorithm is used to optimize the permeability multiplier of each contributing unobserved grid cell.
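    The E-step/M-step structure of the EM algorithm referred to above can be illustrated on a generic two-component 1-D Gaussian mixture (an illustrative sketch only, not the reservoir-specific formulation; the initialization and iteration count are assumptions):

```python
import numpy as np

def em_gmm1d(x, iters=100):
    """Generic EM for a two-component 1-D Gaussian mixture.
    Returns mixing weights pi, means mu, and variances var."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])            # spread-out initial means
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        d = (x[:, None] - mu) ** 2
        r = pi * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        n = r.sum(0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(0) / n + 1e-9
    return pi, mu, var
```

In the history-matching setting described in the abstract, the latent quantities would instead be the unobserved per-cell permeability multipliers, with the M-step re-estimating them to raise the likelihood of the observed production data.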

    Survey of contemporary trends in color image segmentation


    Computational methods to predict and enhance decision-making with biomedical data.

    The proposed research applies machine learning techniques to healthcare applications. The core idea is to use intelligent techniques to devise automatic methods for analyzing healthcare data. Different classification and feature extraction techniques are applied to various clinical datasets, including brain MR images, breathing curves recorded over time from vessels around tumor cells, breathing curves extracted from patients with successful or rejected lung transplants, and records of lung cancer patients diagnosed in the US from 2004 to 2009 extracted from the SEER database. The novel idea for brain MR image segmentation is a multi-scale technique for separating blood-vessel tissue from similar tissues in the brain. By analyzing the vascularization of the cancer tissue over time and the behavior of the vessels (arteries and veins), a new feature extraction technique was developed and classification techniques were used to rank the vascularization of each tumor type. Lung transplantation is a critical surgery for which predicting acceptance or rejection of the transplant would be very important. A review of classification techniques on the SEER database was conducted to analyze the survival rates of lung cancer patients, and the feature vectors best suited to identifying the most similar patients were analyzed.

    Image Processing and Machine Learning for Hyperspectral Unmixing: An Overview and the HySUPP Python Package

    Spectral pixels are often a mixture of the pure spectra of the materials in a scene, called endmembers, owing to the low spatial resolution of hyperspectral sensors, double scattering, and intimate mixtures of materials. Unmixing estimates the fractional abundances of the endmembers within each pixel. Depending on the prior knowledge of the endmembers, linear unmixing can be divided into three main groups: supervised, semi-supervised, and unsupervised (blind). Advances in image processing and machine learning have substantially affected unmixing. This paper provides an overview of both advanced and conventional unmixing approaches and draws a critical comparison between techniques from the three categories. We compare the performance of the unmixing techniques on three simulated and two real datasets; the experimental results reveal the advantages of the different unmixing categories in different scenarios. Moreover, we provide an open-source Python-based package, available at https://github.com/BehnoodRasti/HySUPP, to reproduce the results.
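    A conventional supervised linear-unmixing baseline of the kind surveyed above can be sketched with nonnegative multiplicative updates in NumPy (a hedged sketch; this is not the HySUPP API, and the solver choice, array shapes and iteration count are assumptions):

```python
import numpy as np

def unmix(Y, E, iters=2000):
    """Supervised linear unmixing under the model Y ≈ E @ A with A >= 0.
    Y: (n_bands, n_pixels) observed spectra; E: (n_bands, n_endmembers)
    known endmember signatures. Nonnegative abundances A are found with
    Lee–Seung multiplicative updates, which keep A >= 0 at every step."""
    A = np.full((E.shape[1], Y.shape[1]), 0.5)   # positive start
    EtY = E.T @ Y
    EtE = E.T @ E
    for _ in range(iters):
        A *= EtY / np.maximum(EtE @ A, 1e-12)    # multiplicative NNLS step
    return A
```

A sum-to-one constraint on the abundances, common in fully constrained unmixing, could be added afterwards by normalizing each column of `A`; it is omitted here to keep the sketch minimal.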