
    A comparative evaluation for liver segmentation from SPIR images and a novel level set method using signed pressure force function

    Thesis (Doctoral) -- Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2013. Includes bibliographical references (leaves 118-135). Text in English; abstract in Turkish and English. xv, 145 leaves. Developing a robust method for liver segmentation from magnetic resonance images is a challenging task due to similar intensity values between adjacent organs, the geometrically complex structure of the liver, and the injection of contrast media, which causes all tissues to have different gray-level values. Pulsation and motion artifacts and partial volume effects further complicate automatic liver segmentation from magnetic resonance images. In this thesis, we present an overview of liver segmentation methods in magnetic resonance images and show comparative results of seven different liver segmentation approaches chosen from deterministic (K-means based), probabilistic (Gaussian model based), supervised neural network (multilayer perceptron based) and deformable model based (level set) segmentation methods. The results of qualitative and quantitative analysis using sensitivity, specificity and accuracy metrics show that the multilayer perceptron based approach and a level set based approach that uses a distance regularization term and a signed pressure force function are reasonable methods for liver segmentation from spectral pre-saturation inversion recovery images. However, the multilayer perceptron based segmentation method requires a higher computational cost, and the distance regularization based automatic level set method is very sensitive to the chosen variance of the Gaussian function. Our proposed level set based method, which uses a novel signed pressure force function that can control the direction and velocity of the evolving active contour, is faster and, without using any regularization term, avoids several problems of the other applied methods, such as sensitivity to the initial contour or to the variance parameter of the Gaussian kernel in edge stopping functions.
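
    For context, signed pressure force (SPF) models replace the edge-stopping function of a conventional level set with a region-based speed term. The following sketch shows a generic SPF-driven contour evolution in the spirit of selective binary and Gaussian-filtering regularized models; the parameter names, the particular SPF formula and the Gaussian smoothing step are illustrative assumptions, not the thesis's novel formulation.

```python
# Generic signed-pressure-force (SPF) level set sketch (illustrative only).
# spf(I) = (I - (c1 + c2)/2) / max|I - (c1 + c2)/2|, with c1/c2 the mean
# intensities inside/outside the current zero level set.
import numpy as np
from scipy.ndimage import gaussian_filter

def spf_level_set(image, phi, iterations=200, alpha=20.0, sigma=1.5):
    """Evolve an initial level set `phi` over `image` with an SPF speed term."""
    for _ in range(iterations):
        inside = phi > 0
        c1 = image[inside].mean() if inside.any() else 0.0
        c2 = image[~inside].mean() if (~inside).any() else 0.0
        spf = image - (c1 + c2) / 2.0
        spf /= np.abs(spf).max() + 1e-8              # normalize to [-1, 1]
        gy, gx = np.gradient(phi)
        phi = phi + alpha * spf * np.sqrt(gx**2 + gy**2)
        # Gaussian smoothing keeps phi regular without an explicit distance
        # regularization term (an assumption of this sketch).
        phi = gaussian_filter(phi, sigma)
    return phi > 0                                    # final binary mask
```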

    Feasibility of automated 3-dimensional magnetic resonance imaging pancreas segmentation.

    Purpose: With the advent of MR-guided radiotherapy, internal organ motion can be imaged simultaneously during treatment. In this study, we evaluate the feasibility of pancreas MRI segmentation using state-of-the-art segmentation methods. Methods and materials: T2-weighted HASTE and T1-weighted VIBE images were acquired on 3 patients and 2 healthy volunteers for a total of 12 imaging volumes. A novel dictionary learning (DL) method was used to segment the pancreas and compared to mean-shift merging (MSM), distance regularized level set (DRLS) and graph cuts (GC); the segmentation results were compared to manual contours using Dice's index (DI), Hausdorff distance and shift of the center of the organ (SHIFT). Results: All VIBE images were successfully segmented by at least one of the auto-segmentation methods, with DI > 0.83 and SHIFT ≤ 2 mm for the best automated segmentation method. The automated segmentation error on HASTE images was significantly greater. DL was statistically superior to the other methods in Dice's overlap index. For the Hausdorff distance and SHIFT measurements, DRLS and DL performed slightly better than the GC method and substantially better than MSM. DL required the least human supervision and was faster to compute. Conclusion: Our study demonstrated the potential feasibility of automated segmentation of the pancreas on MRI images with minimal human supervision at the beginning of image acquisition. The achieved accuracy is promising for organ localization.
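
    The three evaluation metrics named above are standard and can be computed directly from binary masks. The sketch below shows one common way to do so; the function names, the use of SciPy's directed Hausdorff distance on voxel coordinates, and the voxel-spacing handling are assumptions for illustration, not the study's code.

```python
# Illustrative metric implementations: Dice's index (DI), Hausdorff distance
# and center-of-organ shift (SHIFT) between an automatic and a manual mask.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from scipy.ndimage import center_of_mass

def dice_index(auto_mask, manual_mask):
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum())

def hausdorff_distance(auto_mask, manual_mask):
    a = np.argwhere(auto_mask)          # voxel coordinates of each mask
    b = np.argwhere(manual_mask)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def center_shift(auto_mask, manual_mask, spacing=1.0):
    # spacing converts voxel offsets to millimetres (scalar or per-axis array)
    ca = np.array(center_of_mass(auto_mask)) * spacing
    cb = np.array(center_of_mass(manual_mask)) * spacing
    return np.linalg.norm(ca - cb)
```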

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    The medical image analysis field has traditionally been focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art in multi-organ analysis and the associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multiple organs and anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare. Comment: Paper under review.

    LIVER INTENSITY DETERMINATION IN THE 3D ABDOMINAL MR IMAGE USING NEURAL NETWORK

    This study presents an approach to automatically identify the liver intensity range in 3D abdominal MR images using a neural network. The proposed scheme consists of three main stages. First, the T1-weighted MR images of the liver in the portal-venous phase are denoised by applying the anisotropic diffusion algorithm. The histogram of the 3D denoised image is then computed, and a neural network is used to approximate this histogram as a function. The peaks of the approximated function are computed, and the peak corresponding to the liver region is determined; this peak plays an important role in fully automatic liver segmentation. Another salient point of the proposed approach is that the neural network is trained with an effective algorithm called the extreme learning machine, which offers good performance with high learning speed in many applications.
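
    An extreme learning machine is a single-hidden-layer network whose input weights are drawn at random and kept fixed, so that only the output weights need to be solved in closed form by least squares. A minimal regression sketch of this idea, applied to smoothing a 1-D intensity histogram, is shown below; the hidden-layer size, activation and peak-finding step are illustrative assumptions, not the study's configuration.

```python
# Minimal extreme learning machine (ELM) regressor for approximating a
# 1-D histogram (illustrative assumptions, not the paper's exact setup).
import numpy as np

def elm_fit(x, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(1, n_hidden))            # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(x.reshape(-1, 1) @ W + b)         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
    return W, b, beta

def elm_predict(x, W, b, beta):
    return np.tanh(x.reshape(-1, 1) @ W + b) @ beta

# Usage sketch: fit the histogram of a denoised volume, then look for the
# peak of the smooth approximation that corresponds to the liver range.
# counts, edges = np.histogram(volume.ravel(), bins=256)
# centers = 0.5 * (edges[:-1] + edges[1:])
# W, b, beta = elm_fit(centers, counts.astype(float))
# smooth = elm_predict(centers, W, b, beta)
```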

    Deep Networks Based Energy Models for Object Recognition from Multimodality Images

    Object recognition has been extensively investigated in the computer vision area, since it is a fundamental and essential technique in many important applications, such as robotics, autonomous driving, automated manufacturing, and security surveillance. According to the selection criteria, object recognition mechanisms can be broadly categorized into object proposal and classification, eye fixation prediction, and salient object detection. Object proposal aims to capture all potential objects in natural images and then classify them into predefined groups for image description and interpretation. For a given natural image, human perception is normally attracted to the most visually important regions and objects. Therefore, eye fixation prediction attempts to localize interesting points or small regions according to the human visual system (HVS). Based on these interesting points and small regions, salient object detection algorithms propagate the extracted information to achieve a refined segmentation of the whole salient objects. In addition to natural images, object recognition also plays a critical role in clinical practice. The informative insights into the anatomy and function of the human body obtained from multimodality biomedical images such as magnetic resonance imaging (MRI), transrectal ultrasound (TRUS), computed tomography (CT) and positron emission tomography (PET) facilitate precision medicine. Automated object recognition from biomedical images empowers non-invasive diagnosis and treatment via automated tissue segmentation, tumor detection and cancer staging. Conventional recognition methods normally utilize handcrafted features (such as oriented gradients, curvature, Haar features, Haralick texture features, Laws energy features, etc.) depending on the image modality and object characteristics, so it is challenging to build a general model for object recognition. Superior to handcrafted features, deep neural networks (DNN) can extract self-adaptive features for a specific task and hence can be employed in general object recognition models. These DNN features are adjusted semantically and cognitively by tens of millions of parameters, loosely corresponding to mechanisms of the human brain, and therefore lead to more accurate and robust results. Motivated by this, in this thesis we propose DNN-based energy models to recognize objects in multimodality images. The major contributions of this thesis can be summarized as follows: 1. We first proposed a new comprehensive autoencoder model to recognize the position and shape of the prostate from magnetic resonance images. Unlike most autoencoder-based methods, we trained the model on positive samples only, so that the extracted features all come from the prostate. After that, an image energy minimization scheme was applied to further improve the recognition accuracy. The proposed model was compared with three classic classifiers (support vector machine with radial basis function kernel, random forest, and naive Bayes) and demonstrated significant superiority for prostate recognition on magnetic resonance images. We further extended the proposed autoencoder model to salient object detection on natural images, and the experimental validation confirmed accurate and robust detection results. 2. A general multi-context combined deep neural network (MCDN) model was then proposed for object recognition from natural images and biomedical images.
    Under one uniform framework, our model operates in a multi-scale manner. It was applied to salient object detection from natural images as well as prostate recognition from magnetic resonance images, and our experimental validation demonstrated that the proposed model is competitive with current state-of-the-art methods. 3. We designed a novel saliency image energy to finely segment salient objects on the basis of our MCDN model. Region priors were taken into account in the energy function to avoid trivial errors. Our method outperformed state-of-the-art algorithms on five benchmark datasets. In the experiments, we also demonstrated that the proposed saliency image energy can boost the results of other conventional saliency detection methods.
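
    The first contribution, training an autoencoder on positive (prostate) patches only and using reconstruction error as a recognition cue, can be sketched in a few lines. The architecture, patch size and training loop below are illustrative assumptions, not the thesis's actual model.

```python
# Toy patch autoencoder trained on positive (prostate) patches only;
# a low reconstruction error then suggests a patch resembles prostate tissue.
# Illustrative sketch only, not the thesis's architecture.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, patch_dim=32 * 32, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(patch_dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, patch_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, prostate_patches, epochs=50, lr=1e-3):
    """prostate_patches: (N, patch_dim) tensor of positive samples in [0, 1]."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(prostate_patches), prostate_patches)
        loss.backward()
        opt.step()
    return model

def recognition_score(model, patches):
    """Per-patch reconstruction error; lower values indicate prostate-like patches."""
    with torch.no_grad():
        return ((model(patches) - patches) ** 2).mean(dim=1)
```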

    Deep learning for image-based liver analysis — A comprehensive review focusing on malignant lesions

    Deep learning-based methods, in particular convolutional neural networks and fully convolutional networks, are now widely used in the medical image analysis domain. This review focuses on deep learning-based analysis of focal liver lesions, with special interest in hepatocellular carcinoma and metastatic cancer, and of structures such as the parenchyma and the vascular system. We address several neural network architectures used for analyzing anatomical structures and lesions in the liver from various imaging modalities such as computed tomography, magnetic resonance imaging and ultrasound. Image analysis tasks such as segmentation, object detection and classification for the liver, liver vessels and liver lesions are discussed. Based on the qualitative search, 91 papers, including journal publications and conference proceedings, were selected for the survey. The papers reviewed in this work are grouped into eight categories based on the methodologies used. Comparing the evaluation metrics, hybrid models performed better for both the liver and the lesion segmentation tasks, ensemble classifiers performed better for the vessel segmentation tasks, and combined approaches performed better for both the lesion classification and detection tasks. Performance was measured with the Dice score for segmentation and with accuracy for classification and detection, which are the most commonly used metrics.

    An Interactive Automation for Human Biliary Tree Diagnosis Using Computer Vision

    The biliary tree is a network of tubes that connects the liver to the gallbladder, the organ right beneath it. The bile duct is the major tube in the biliary tree. Dilatation of a bile duct is a key indicator of more serious problems in the human body, such as stones and tumors, which are frequently caused by the pancreas or the papilla of Vater. The detection of bile duct dilatation can be challenging for inexperienced or untrained medical personnel in many circumstances, and even professionals are unable to detect it with the naked eye. This research presents a vision-based model for initial diagnosis of the biliary tree. To segment the biliary tree from magnetic resonance imaging (MRI) images, the framework used several image processing approaches. After the image's region of interest was segmented, calculations were performed on it to extract 10 features, including major and minor axes, bile duct area, biliary tree area, compactness, and some textural features (contrast, mean, variance and correlation). This study used a database of images from King Hussein Medical Center in Amman, Jordan, which included 200 MRI images: 100 normal cases and 100 patients with dilated bile ducts. After the features are extracted, various classifiers are used to determine the patients' condition (normal or dilated). The findings demonstrate that the extracted features perform well with all classifiers in terms of accuracy and area under the curve. This study is unique in that it uses an automated approach to segment the biliary tree from MRI images, and in scientifically correlating the extracted features with biliary tree status, which has not been done before in the literature.
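
    Shape descriptors such as compactness and the major and minor axes, together with GLCM texture statistics like contrast and correlation, can be computed with standard tools once the biliary tree mask is available. The sketch below uses scikit-image; the feature set, GLCM parameters and quantization level are assumptions for illustration, not the study's exact pipeline.

```python
# Illustrative shape and texture feature extraction from a binary
# biliary-tree mask and the corresponding MRI slice (scikit-image).
import numpy as np
from skimage.measure import label, regionprops
from skimage.feature import graycomatrix, graycoprops

def shape_features(mask):
    # First labeled region; assumes a single connected component in the mask.
    props = regionprops(label(mask.astype(int)))[0]
    return {
        "area": props.area,
        "major_axis": props.major_axis_length,
        "minor_axis": props.minor_axis_length,
        # Compactness as perimeter^2 / (4*pi*area): 1.0 for a perfect circle.
        "compactness": props.perimeter ** 2 / (4 * np.pi * props.area),
    }

def texture_features(image, mask, levels=64):
    roi = np.where(mask, image, 0).astype(float)
    quantized = (roi / (roi.max() + 1e-8) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[0], levels=levels)
    return {
        "contrast": graycoprops(glcm, "contrast")[0, 0],
        "correlation": graycoprops(glcm, "correlation")[0, 0],
        "mean": roi[mask > 0].mean(),
        "variance": roi[mask > 0].var(),
    }
```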