26 research outputs found

    A 3D Coarse-to-Fine Framework for Volumetric Medical Image Segmentation

    In this paper, we adopt 3D Convolutional Neural Networks to segment volumetric medical images. Although deep neural networks have proven very effective on many 2D vision tasks, applying them to 3D tasks remains challenging due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework to tackle these challenges effectively and efficiently. The proposed 3D-based framework outperforms its 2D counterpart by a large margin, since it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets, which include healthy and pathological pancreases respectively, and achieve the current state of the art in terms of the Dice-Sørensen Coefficient (DSC). On the NIH pancreas segmentation dataset, we outperform the previous best by an average of over 2%, and the worst case is improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications. Comment: 9 pages, 4 figures, Accepted to 3D
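The evaluation metric reported above, the Dice-Sørensen Coefficient, can be sketched for binary segmentation masks as follows (a minimal illustration, not the authors' code):

```python
def dice_coefficient(pred, truth):
    """Dice-Sørensen Coefficient between two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```

For example, a prediction that overlaps the ground truth on one of three labeled voxels scores 2/3.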

    Susceptibility of texture measures to noise: an application to lung tumor CT images

    Five different texture methods are used to investigate their susceptibility to the subtle noise occurring in lung tumor Computed Tomography (CT) images as a result of acquisition and reconstruction deficiencies. Noise of Gaussian and Rayleigh distributions with varying mean and variance was encountered in the analyzed CT images. Fisher and Bhattacharyya distance measures were used to differentiate an original extracted lung tumor region of interest (ROI) from its filtered and noisy reconstructed versions. By examining the texture characteristics of the lung tumor areas with five different texture measures, it was determined that the autocovariance measure was the least affected by noise and the gray-level co-occurrence matrix the most affected. It was also concluded that, depending on the selected ROI size, increasing the number of features extracted from each texture measure increases its susceptibility to noise.
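The two class-separability measures named above can be sketched for one-dimensional texture-feature samples, assuming the common univariate-Gaussian forms of the Fisher discriminant ratio and the Bhattacharyya distance (an illustration only, not the authors' implementation):

```python
import math

def _stats(xs):
    """Sample mean and (population) variance of a 1-D feature sample."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / n
    return m, v

def fisher_distance(a, b):
    """Fisher discriminant ratio between two 1-D feature samples."""
    m1, v1 = _stats(a)
    m2, v2 = _stats(b)
    return (m1 - m2) ** 2 / (v1 + v2)

def bhattacharyya_distance(a, b):
    """Bhattacharyya distance between two samples, assuming each is
    drawn from a univariate Gaussian."""
    m1, v1 = _stats(a)
    m2, v2 = _stats(b)
    return (0.25 * math.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))
```

Both distances are zero for identical distributions and grow as the original and noisy feature distributions separate.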

    An Analytically Based Approach for Evaluating the Impact of the Noise on the Microwave Imaging Detection

    In a realistic scenario, noise arising from the system's hardware inevitably appears in the images, producing inaccurate results. This paper investigates the impact of adding noise to the simulation of an Ultra-Wideband (UWB) Microwave Imaging (MWI) procedure based on the Huygens principle (HP). A comparison between uniform and Gaussian noise at different amplitudes is provided, with the aim of investigating the detection process for applications such as bone fracture detection. This is done using analytical simulations. To construct the electric field at the perimeter of the external cylinder, simulations were run mimicking UWB signals transmitted onto a simulated cylindrical bone-mimicking phantom containing an inclusion with different dielectric properties. This field was simulated using MATLAB, yielding electric-field values at frequencies between 3 and 5 GHz. To investigate the impact of noise on the detection capability, the two common types of noise were applied to the signal at different amplitudes. The resulting images were compared visually, and the imaging performance was quantified using an image quantification metric, the signal-to-clutter ratio (SCR).
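The SCR metric mentioned above can be sketched as follows; the peak-over-peak definition used here is one common variant and is an assumption, since the abstract does not give the exact formula:

```python
import math

def signal_to_clutter_ratio(image, target_mask):
    """SCR in dB: peak intensity inside the target (inclusion) region over
    the peak clutter intensity outside it. One common definition; other
    variants use the mean clutter intensity instead of the peak."""
    target = [v for v, m in zip(image, target_mask) if m]
    clutter = [v for v, m in zip(image, target_mask) if not m]
    return 20.0 * math.log10(max(target) / max(clutter))
```

A higher SCR means the reconstructed inclusion stands out more clearly against background artifacts, so added noise typically drives the SCR down.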

    Liver CT enhancement using Fractional Differentiation and Integration

    In this paper, a digital image filter is proposed to enhance liver CT images and thereby improve the classification of tumor areas in an infected liver. The enhancement process improves the main features of the image by applying fractional differentiation and integration to the wavelet sub-bands of the image. After enhancement, different features were extracted, such as GLCM, GLRLM, and LBP, among others. The areas/cells are then classified as tumor or non-tumor, using different classifier models to compare the proposed model with the original image and various established filters. Each image is divided into 15x15 non-overlapping blocks from which the desired features are extracted. SVM, Random Forest, J48, and Simple CART classifiers were trained on a supplied dataset, different from the test dataset. Finally, each block cell is labeled as tumor or non-tumor. The approach is validated on a group of patients' CT liver tumor datasets. The experimental results demonstrate the efficiency of the enhancement in the proposed technique.
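Among the features listed, the gray-level co-occurrence matrix is the most standard; a minimal sketch of computing a normalized GLCM for a single pixel offset, together with its contrast feature, follows (an illustration only, not the paper's pipeline):

```python
def glcm(block, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix of a 2-D block of
    integer gray levels in [0, levels), for one pixel offset (dx, dy)."""
    h, w = len(block), len(block[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[block[y][x]][block[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def glcm_contrast(p):
    """GLCM contrast feature: sum of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))
```

In a pipeline like the one described, one such scalar feature would be computed per 15x15 block and fed to the classifiers alongside the other texture features.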

    Stopping Rules for Algebraic Iterative Reconstruction Methods in Computed Tomography

    Algebraic models for the reconstruction problem in X-ray computed tomography (CT) provide a flexible framework that applies to many measurement geometries. For large-scale problems we need to use iterative solvers, and we need stopping rules for these methods that terminate the iterations once we have computed a satisfactory reconstruction, one that balances the reconstruction error against the influence of noise from the measurements. Many such stopping rules have been developed in the inverse problems community, but they have not attracted much attention in the CT world. The goal of this paper is to describe and illustrate four stopping rules that are relevant for CT reconstructions. Comment: 11 pages, 10 figures
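One widely used stopping rule of the kind discussed here is Morozov's discrepancy principle; a minimal sketch for the Landweber iteration follows (the choice of Landweber, the step size, and the parameter values are assumptions for illustration, not necessarily among the paper's four rules):

```python
def landweber_with_discrepancy(A, b, delta, tau=1.02, omega=None, max_iter=1000):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k) on the
    system A x = b, stopped by the discrepancy principle: quit as soon as
    ||A x_k - b|| <= tau * delta, where delta bounds the noise level in b.
    A is a list of rows; b is a list. Returns (x, iterations_used)."""
    m, n = len(A), len(A[0])
    if omega is None:
        # crude convergent step size: 1 / ||A||_F^2 <= 1 / ||A||_2^2
        omega = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for k in range(max_iter):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        if sum(ri ** 2 for ri in r) ** 0.5 <= tau * delta:
            return x, k
        for j in range(n):
            x[j] += omega * sum(A[i][j] * r[i] for i in range(m))
    return x, max_iter
```

The safety factor tau slightly above 1 prevents iterating past the noise level, which is exactly the over-fitting the abstract warns against.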

    Automatic PET-CT Image Registration Method Based on Mutual Information and Genetic Algorithms

    Hybrid PET/CT scanners can simultaneously visualize coronary artery disease, as revealed by computed tomography (CT), and myocardial perfusion, as measured by positron emission tomography (PET). Manual registration is usually required in clinical practice to compensate for the spatial mismatch between datasets. In this paper, we present a registration algorithm that is able to automatically align PET/CT cardiac images. The algorithm is based on mutual information (MI) as the registration metric and on a genetic algorithm as the optimization method. A multiresolution approach was used to optimize the processing time. The algorithm was tested on computerized models of volumetric PET/CT cardiac data and on real PET/CT datasets. The proposed automatic registration algorithm smooths the pattern of the MI and allows it to reach the global maximum of the similarity function. The implemented method also allows the definition of the correct spatial transformation that matches both synthetic and real PET and CT volumetric datasets.
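The MI registration metric can be sketched from a joint intensity histogram (a simplified illustration; the bin count and the assumption that intensities are normalized to [0, 1) are choices made here, not taken from the paper):

```python
import math

def mutual_information(img_a, img_b, bins=8):
    """Mutual information (in nats) between two equally sized images,
    given as flat lists of intensities in [0, 1), estimated from a
    joint histogram with `bins` bins per image."""
    n = len(img_a)
    joint = {}
    for a, b in zip(img_a, img_b):
        key = (min(int(a * bins), bins - 1), min(int(b * bins), bins - 1))
        joint[key] = joint.get(key, 0) + 1
    pa, pb = {}, {}
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij / ((pa[i] / n) * (pb[j] / n)))
    return mi
```

An optimizer such as the paper's genetic algorithm would search over spatial transformations of one image to maximize this quantity against the other.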

    Indirect estimation of signal-dependent noise with non-adaptive heterogeneous samples

    We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixtures of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images. Index Terms: noise estimation, signal-dependent noise, Poisson noise
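The conventional mean-variance scatterplot baseline that this paper contrasts itself against can be sketched as follows, assuming the usual affine signal-dependent model var = a*mean + b (for Poisson-Gaussian noise, a is the Poisson gain and b the additive Gaussian variance); this illustrates the baseline, not the paper's method:

```python
def fit_noise_model(patches):
    """Fit var = a*mean + b by ordinary least squares over local
    (mean, variance) pairs, one pair per homogeneous patch.
    Each patch is a flat list of pixel intensities."""
    pts = []
    for p in patches:
        n = len(p)
        m = sum(p) / n
        v = sum((x - m) ** 2 for x in p) / n
        pts.append((m, v))
    k = len(pts)
    sx = sum(m for m, _ in pts)
    sy = sum(v for _, v in pts)
    sxx = sum(m * m for m, _ in pts)
    sxy = sum(m * v for m, v in pts)
    a = (k * sxy - sx * sy) / (k * sxx - sx * sx)
    b = (sy - a * sx) / k
    return a, b
```

The paper's point is precisely that this scheme needs homogeneous patches, whereas its proposed estimator tolerates large heterogeneous ones.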