
    An Automatic Level Set Based Liver Segmentation from MRI Data Sets

    A fast and accurate liver segmentation method is a challenging task in medical image analysis. Liver segmentation is an important process for computer-assisted diagnosis, pre-evaluation of liver transplantation, and therapy planning of liver tumors. Magnetic resonance imaging offers several advantages, such as freedom from ionizing radiation and good contrast visualization of soft tissue. Moreover, innovations in recent technology and image acquisition techniques have made magnetic resonance imaging a major tool in modern medicine. However, the use of magnetic resonance images for liver segmentation has lagged behind applications in the central nervous system and the musculoskeletal system. The reasons are the irregular shape, size, and position of the liver, contrast agent effects, and the similarity of the gray values of neighboring organs. Therefore, in this study, we present a fully automatic liver segmentation method that uses an approximation of level set based contour evolution on T2-weighted magnetic resonance data sets. The method avoids solving partial differential equations and applies only integer operations within a two-cycle segmentation algorithm. The efficiency of the proposed approach is achieved by applying the algorithm to all slices with a constant number of iterations and performing the contour evolution without any user-defined initial contour. The results are evaluated with four different similarity measures and show that the automatic segmentation approach gives successful results
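
    The PDE-free, two-cycle evolution described above is reminiscent of fast two-cycle level set approximations that flip boundary pixels between two lists instead of solving a differential equation. The sketch below is only a rough illustration of that idea, not the authors' algorithm: it assumes a 2D grayscale NumPy image and a coarse nonempty initial mask (e.g. from thresholding), whereas the paper's method needs no user-defined initial contour, and it flips boundary pixels toward whichever region's mean intensity they resemble more.

    import numpy as np

    def evolve_contour(image, mask, iterations=50):
        """Flip boundary pixels of `mask` toward the region (inside/outside)
        whose mean intensity they resemble more; fixed iteration count, no PDEs."""
        mask = mask.astype(bool).copy()
        for _ in range(iterations):
            if not mask.any() or mask.all():
                break
            mu_in = image[mask].mean()      # mean intensity inside the region
            mu_out = image[~mask].mean()    # mean intensity outside
            # 4-neighbourhood shifts (wrap-around at borders ignored for brevity)
            shifted = [np.roll(mask, s, axis=a) for s in (-1, 1) for a in (0, 1)]
            boundary_in = mask & np.logical_or.reduce([~s for s in shifted])
            boundary_out = ~mask & np.logical_or.reduce(shifted)
            closer_to_in = np.abs(image - mu_in) < np.abs(image - mu_out)
            mask[boundary_out & closer_to_in] = True     # switch in: grow the region
            mask[boundary_in & ~closer_to_in] = False    # switch out: shrink the region
        return mask

    A typical call would seed the evolution with a simple intensity threshold, e.g. evolve_contour(slice_img, slice_img > slice_img.mean()), and repeat slice by slice with the same iteration count.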

    Virtual liver biopsy: image processing and 3D visualization


    Machine Learning Techniques for Quantification of Knee Segmentation from MRI

    © 2020 Sujeet More et al. Magnetic resonance imaging (MRI) is precise and efficient for interpreting soft and hard tissues. For the detailed diagnosis of various diseases such as knee rheumatoid arthritis (RA), segmentation of the knee magnetic resonance image is a challenging and complex task that has been explored broadly. However, the accuracy and reproducibility of segmentation approaches may require prior extraction of tissues from MR images. Advances in computational segmentation methods depend on several parameters, such as tissue complexity, image quality, and the acquisition process. This review briefly describes the challenges faced by segmentation techniques applied to magnetic resonance images, followed by an overview of the main categories of segmentation approaches. It also covers the automatic and semiautomatic approaches that are widely used in clinical trial assistance, together with their performance metrics and reported achievements. Finally, the results of different approaches for the MR sequences used to image knee tissues and future directions for segmentation are discussed
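
    The performance metrics cited in such reviews are usually overlap scores between an algorithm's segmentation and a manual reference. As a concrete illustration (not taken from the paper), the Dice coefficient and Jaccard index for two binary masks can be computed as follows.

    import numpy as np

    def dice(pred, ref):
        """Dice similarity coefficient between two binary masks of equal shape."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        inter = np.logical_and(pred, ref).sum()
        return 2.0 * inter / (pred.sum() + ref.sum())   # undefined if both masks are empty

    def jaccard(pred, ref):
        """Jaccard (intersection-over-union) index between two binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        inter = np.logical_and(pred, ref).sum()
        return inter / np.logical_or(pred, ref).sum()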

    Methodology for extensive evaluation of semiautomatic and interactive segmentation algorithms using simulated interaction models

    The performance of semiautomatic and interactive segmentation (SIS) algorithms is usually evaluated by employing a small number of human operators to segment the images. The human operators typically provide the approximate location of objects of interest and their boundaries in an interactive phase, which is followed by an automatic phase in which the segmentation is performed under the constraints of the operator-provided guidance. The segmentation results produced from this small set of interactions do not represent the true capability and potential of the algorithm being evaluated. For example, due to inter-operator variability, human operators may make choices that yield either overestimated or underestimated results. Moreover, their choices may not be realistic with respect to how the algorithm is used in the field, since interaction may be influenced by operator fatigue and lapses in judgement. Other drawbacks of using human operators to assess SIS algorithms include human error, the lack of available expert users, and the expense. A methodology for evaluating segmentation performance is proposed here which uses simulated interaction models to programmatically generate large numbers of interactions, ensuring the presence of interactions throughout the object region. These interactions are used to segment the objects of interest, and the resulting segmentations are then analysed using statistical methods. The large number of interactions generated by simulated interaction models captures the variability existing in the set of user interactions by considering every pixel inside the object region as a potential location for an interaction, with equal probability. Because of the practical limitation imposed by the enormous amount of computation required for all possible interactions, uniform sampling of interactions at regular intervals is used to generate a subset that still represents the diverse pattern of the entire set. Categorising interactions into different groups, based on the position of the interaction inside the object region and the texture properties of the image region where the interaction is located, provides the opportunity for fine-grained analysis of algorithm performance according to these two criteria. The application of statistical hypothesis testing makes the analysis more accurate, rigorous, and reliable than conventional evaluation of semiautomatic segmentation algorithms. The proposed methodology has been demonstrated in two case studies through the implementation of seven different algorithms using three different types of interaction modes, making a total of nine segmentation applications used to assess the efficacy of the methodology. Applying the methodology has revealed fine-grained details about the performance of the segmentation algorithms that currently existing methods could not achieve, due to the absence of a large, unbiased set of interactions in those methods. Practical application of the methodology to a number of algorithms and diverse interaction modes has shown its feasibility and generality, establishing it as an appropriate methodology. Its development into an application for the automatic evaluation of SIS algorithm performance looks very promising for users of image segmentation
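
    As an illustration of the uniform-sampling step described above (an assumption about the general scheme, not the thesis' exact implementation), simulated point interactions can be generated on a regular grid restricted to the object region, so that seeds cover the whole object without enumerating every pixel.

    import numpy as np

    def sample_interactions(object_mask, step=10):
        """Return (row, col) seed points inside object_mask, one per grid cell of size `step`."""
        rows, cols = np.nonzero(object_mask)
        return [(r, c) for r, c in zip(rows, cols) if r % step == 0 and c % step == 0]

    Each sampled seed would then drive one run of the semiautomatic algorithm, the resulting segmentation would be scored against the reference mask (e.g. with a Dice score), and the scores grouped by seed position and local texture would feed the statistical hypothesis tests.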

    Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software.

    Objective: The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. Materials and Methods: MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive; post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to the contrast-enhancing lesion, necrotic portions, and the non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and the Rand statistic. Results: Most of the radiomic features in GBM were highly stable. Over 90% of the 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features showed poor stability (ICC < 0.5). Most first-order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥ 1), while more than 35% of the texture features showed poor NDR (< 1). The features clustered into only 5 groups, indicating that they were highly redundant. Conclusion: The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability, helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed to determine representative signature features before further development of radiomics
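
    As a concrete, illustrative example of the stability analysis (the study's exact ICC variant is not specified here), a two-way random-effects ICC(2,1) in the Shrout and Fleiss sense can be computed per radiomic feature from an n_subjects x n_raters value matrix; features scoring ≥ 0.8 would count as having good stability under the thresholds quoted above.

    import numpy as np

    def icc_2_1(y):
        """Two-way random-effects, absolute-agreement, single-measure ICC.
        `y` is an (n_subjects x n_raters) array of one feature's values."""
        y = np.asarray(y, dtype=float)
        n, k = y.shape
        grand = y.mean()
        row_means = y.mean(axis=1)                                     # per-subject means
        col_means = y.mean(axis=0)                                     # per-rater means
        msr = k * ((row_means - grand) ** 2).sum() / (n - 1)           # between-subject mean square
        msc = n * ((col_means - grand) ** 2).sum() / (k - 1)           # between-rater mean square
        sse = ((y - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
        mse = sse / ((n - 1) * (k - 1))                                # residual mean square
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)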

    CT Coronary Angiography with 100kV tube voltage and a low noise reconstruction filter in non-obese patients: evaluation of radiation dose and diagnostic quality of 2D and 3D image reconstructions using open source software (OsiriX)

    INTRODUCTION AND PURPOSE. Computed tomography coronary angiography (CTCA) has seen a dramatic evolution in the last decade owing to the availability of multislice CT scanners with 64 detector rows and beyond. However, this evolution has been paralleled by an increase in radiation dose to patients, which can reach extremely high levels (>20 mSv) when retrospective ECG-gating techniques are used. In CT angiography, reducing the tube voltage cuts the radiation dose while improving contrast resolution, owing to the lower energy of the X-ray beam and the increased photoelectric effect. Our purpose is twofold: 1) to evaluate the radiation dose of CTCA studies carried out with a tube voltage of 100 kV and a low-noise reconstruction filter, compared with a conventional tube voltage of 120 kV and a standard reconstruction kernel; 2) to assess the impact of the 100 kV acquisition technique on the diagnostic quality of 2D and 3D image reconstructions performed with open source software (OsiriX). MATERIALS AND METHODS. Fifty-one non-obese patients underwent CTCA on a 64-row CT scanner. Of these, 28 were imaged using a tube voltage of 100 kV and a low-noise reconstruction filter, while in the remaining 23 patients a tube voltage of 120 kV and a standard reconstruction kernel were selected. All CTCA datasets were exported via PACS to a Macintosh™ computer (iMac™) running OsiriX 4.0 (64-bit version), and Maximum Intensity Projection (MIP), Curved Planar Reformation (CPR), and Volume Rendering (VR) views of each coronary artery were generated using a dedicated plug-in (CMIV CTA; Linköping University, Sweden). The diagnostic quality of MIP, CPR, and VR reconstructions was assessed visually by two radiologists experienced in cardiac CT using a three-point score (1=poor, 2=good, 3=excellent). Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), intravascular CT density, and effective dose were also calculated for each group. RESULTS. Image quality of VR views was significantly better with the 100 kV than with the 120 kV protocol (2.77±0.43 vs 2.21±0.85, p=0.0332), while that of MIP and CPR reconstructions was comparable (2.59±0.50 vs 2.32±0.75, p=0.3271, and 2.68±0.48 vs 2.32±0.67, p=0.1118, respectively). SNR and CNR were comparable between the two protocols (16.42±4.64 vs 14.78±2.57, p=0.2502, and 13.43±3.77 vs 12.08±2.10, p=0.2486, respectively), but in the 100 kV group aortic root density was higher (655.9±127.2 HU vs 517.2±69.7 HU, p=0.0016) and correlated with VR image quality (rs=0.5409, p=0.0025). Effective dose was significantly lower with the 100 kV than with the 120 kV protocol (7.43±2.69 mSv vs 18.83±3.60 mSv, p<0.0001). CONCLUSIONS. Compared with a standard tube voltage of 120 kV, the use of 100 kV and a low-noise filter leads to a significant reduction of radiation dose in non-obese patients, with equivalent diagnostic quality of 2D reconstructions and higher quality of 3D reconstructions
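
    For reference, the dose and image-quality figures compared above are typically derived with standard formulas; the sketch below makes the usual assumptions (a chest dose-length-product conversion factor of about 0.014 mSv per mGy·cm, and ROI-based noise and density measurements), since the study's exact ROI definitions and conversion factor are not given in the abstract.

    def effective_dose_msv(dlp_mgy_cm, k_chest=0.014):
        """Effective dose estimate: DLP times an anatomy-specific conversion factor (mSv/(mGy*cm))."""
        return dlp_mgy_cm * k_chest

    def snr(vessel_mean_hu, noise_sd_hu):
        """Signal-to-noise ratio: mean intravascular attenuation over image noise."""
        return vessel_mean_hu / noise_sd_hu

    def cnr(vessel_mean_hu, fat_mean_hu, noise_sd_hu):
        """Contrast-to-noise ratio: vessel-minus-fat attenuation difference over image noise."""
        return (vessel_mean_hu - fat_mean_hu) / noise_sd_hu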