
    MRI-only based radiotherapy treatment planning for the rat brain on a Small Animal Radiation Research Platform (SARRP)

    Computed tomography (CT) is the standard imaging modality in radiation therapy treatment planning (RTP). However, magnetic resonance (MR) imaging provides superior soft tissue contrast, increasing the precision of target volume selection. We present MR-only based RTP for the rat brain on a small animal radiation research platform (SARRP) using probabilistic voxel classification with multiple MR sequences. Six rat heads were imaged, each with one CT and five MR sequences. The MR sequences were: T1-weighted, T2-weighted, zero-echo time (ZTE), and two ultra-short echo time sequences with 20 μs (UTE1) and 2 ms (UTE2) echo times. CT data were manually segmented into air, soft tissue, and bone to obtain the RTP reference. Bias-field-corrected MR images were automatically segmented into the same tissue classes using a fuzzy c-means segmentation algorithm with multiple images as input. Similarity between the segmented CT and the automatically segmented MR (ASMR) images was evaluated using the Dice coefficient. Three ASMR images with high similarity indices were used for further RTP. Three beam arrangements were investigated. Dose distributions were compared by analysing dose volume histograms. The highest Dice coefficients were obtained for the ZTE-UTE2 combination and, when ZTE was unavailable, for the T1-UTE1-T2 combination. Both combinations, along with UTE1-UTE2 (a pairing often used to generate ASMR images), were used for further RTP. Using one beam, MR-based RTP underestimated the dose delivered to the target (range: 1.4%-7.6%). When more complex beam configurations were used, the dose calculated using the ZTE-UTE2 combination was the most accurate, with 0.7% deviation from CT, compared to 0.8% for T1-UTE1-T2 and 1.7% for UTE1-UTE2. The presented MR-only workflow for RTP on a SARRP enables both accurate organ delineation and dose calculation using multiple MR sequences. This method can be useful in longitudinal studies, where the cumulative radiation dose from repeated CT could contribute to the total delivered dose.
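    As a rough illustration of two computational steps described above, the sketch below implements a minimal fuzzy c-means on scalar features (a hypothetical one-dimensional stand-in for the multi-sequence voxel intensities used in the paper) and the Dice coefficient used to compare segmentations. The function names, deterministic initialization, and parameters are illustrative assumptions, not the authors' implementation.

```python
def fuzzy_c_means(points, c=3, m=2.0, iters=50):
    """Minimal fuzzy c-means on scalar features. Returns (centers, memberships)."""
    pts = sorted(points)
    # deterministic init: spread the initial centers across the data range
    centers = [pts[i * (len(pts) - 1) // (c - 1)] for i in range(c)]
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = []
        for x in points:
            d = [abs(x - v) + 1e-12 for v in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                      for i in range(c)])
        # center update: weighted mean with weights u_ik^m
        centers = [
            sum(U[k][i] ** m * points[k] for k in range(len(points)))
            / sum(U[k][i] ** m for k in range(len(points)))
            for i in range(c)
        ]
    return centers, U

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))
```

    On three well-separated intensity clusters the centers converge near the cluster means, and a Dice value of 1.0 indicates perfect overlap between two segmentations.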

    CT liver tumor segmentation hybrid approach using neutrosophic sets, fast fuzzy c-means and adaptive watershed algorithm

    Liver tumor segmentation from computed tomography (CT) images is a critical and challenging task due to the fuzziness of the liver pixel range, neighboring organs with intensities similar to the liver, high noise, and the large variance of tumors. Segmentation is a necessary step for the detection, identification, and measurement of objects in CT images. We perform an extensive review of the CT liver segmentation literature.

    Automated liver tissues delineation based on machine learning techniques: A survey, current trends and future orientations

    There is no denying how much machine learning and computer vision have grown in recent years. Their greatest advantages lie in their automation, suitability, and ability to generate astounding results in a matter of seconds in a reproducible manner. This is aided by the ubiquitous advancements in the computing capabilities of current graphical processing units and the highly efficient implementation of such techniques. Hence, in this paper, we survey the key studies published between 2014 and 2020, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic vasculature. We divide the surveyed studies based on the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as either supervised or unsupervised, and further partitioned when the number of works that fall under a certain scheme is significant. Moreover, the different datasets and challenges found in the literature and on websites, containing masks of the aforementioned tissues, are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. The metrics used extensively in the literature are also reviewed, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on vessel segmentation and why this absence needs to be dealt with in an accelerated manner.

    An Automatic Technique for MRI Based Murine Abdominal Fat Measurement

    Because of the well-known relationship between obesity and the high incidence of disease, fat-related research using mouse models is widely pursued in preclinical experiments. In the present study, we developed a technique to automatically measure murine abdominal adipose volume and determine depot locations using Magnetic Resonance Imaging (MRI). Our technique includes an innovative method to detect fat tissue in MR images which not only utilizes T1-weighted intensity information, but also takes advantage of the transverse relaxation time (T2) calculated from multi-echo data. The technique comprises both a fat-optimized MRI acquisition protocol that works well at 7 T and a newly designed post-processing methodology that accomplishes fat extraction and depot recognition without user intervention in the segmentation procedure. The post-processing methodology has been integrated into easy-to-use software that we have made available as a free download. The method was validated by comparing automated results with two independent manual analyses in 26 mice exhibiting different fat ratios from an obesity research project. The comparison confirms close agreement between the results in total adipose tissue size and voxel-by-voxel overlap.
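    The T2 value used by this kind of pipeline is typically estimated per voxel from the multi-echo signal decay. The sketch below shows one standard approach, a log-linear least-squares fit of the mono-exponential model S(TE) = S0 * exp(-TE / T2); the paper's exact fitting procedure may differ, and the function name is an assumption.

```python
import math

def estimate_t2(echo_times_ms, signals):
    """Fit S(TE) = S0 * exp(-TE / T2) by ordinary least squares on ln(S).
    Returns (S0, T2), with T2 in the same units as the echo times."""
    ys = [math.log(s) for s in signals]          # linearize: ln S = ln S0 - TE/T2
    n = len(echo_times_ms)
    mx = sum(echo_times_ms) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(echo_times_ms, ys))
             / sum((x - mx) ** 2 for x in echo_times_ms))
    return math.exp(my - slope * mx), -1.0 / slope
```

    For noise-free mono-exponential data the fit recovers S0 and T2 exactly; with real multi-echo data the log-linear fit is a fast approximation to a full nonlinear fit.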

    A comparative evaluation for liver segmentation from spir images and a novel level set method using signed pressure force function

    Thesis (Doctoral), Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2013. Includes bibliographical references (leaves 118-135). Text in English; abstract in Turkish and English. xv, 145 leaves. Developing a robust method for liver segmentation from magnetic resonance images is a challenging task due to similar intensity values between adjacent organs, the geometrically complex structure of the liver, and the injection of contrast media, which causes all tissues to take on different gray-level values. Artifacts from pulsation and motion, and partial volume effects, further complicate automatic liver segmentation from magnetic resonance images. In this thesis, we present an overview of liver segmentation methods for magnetic resonance images and show comparative results of seven different liver segmentation approaches chosen from deterministic (K-means based), probabilistic (Gaussian model based), supervised neural network (multilayer perceptron based), and deformable model based (level set) segmentation methods. The results of qualitative and quantitative analysis using sensitivity, specificity, and accuracy metrics show that the multilayer perceptron based approach and a level set based approach that uses a distance regularization term and a signed pressure force function are reasonable methods for liver segmentation from spectral pre-saturation inversion recovery (SPIR) images. However, the multilayer perceptron based segmentation method incurs a higher computational cost, and the distance regularization based automatic level set method is very sensitive to the chosen variance of the Gaussian function.
    Our proposed level set based method, which uses a novel signed pressure force function that can control the direction and velocity of the evolving active contour, is faster and avoids several problems of the other applied methods, such as sensitivity to the initial contour or to the variance parameter of the Gaussian kernel in edge-stopping functions, without using any regularization term.
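    To make the signed pressure force (SPF) idea concrete, the sketch below evolves a one-dimensional level set with a classical region-based SPF, spf(I) = (I - (c1 + c2)/2) / max|I - (c1 + c2)/2|, whose value in [-1, 1] sets the direction and speed of the contour. This is an illustrative stand-in for the general SPF mechanism, not the thesis's novel SPF function, and following a common simplification the |grad phi| factor is dropped.

```python
def spf_level_set_1d(image, phi, alpha=1.0, dt=0.5, steps=60):
    """Evolve a 1-D level set phi under a region-based signed pressure force.
    c1/c2 are the mean intensities inside (phi > 0) and outside (phi <= 0).
    Returns the final binary mask [phi > 0]."""
    n = len(image)
    for _ in range(steps):
        inside = [image[i] for i in range(n) if phi[i] > 0]
        outside = [image[i] for i in range(n) if phi[i] <= 0]
        if not inside or not outside:
            break
        c1 = sum(inside) / len(inside)     # mean intensity inside the contour
        c2 = sum(outside) / len(outside)   # mean intensity outside the contour
        mid = (c1 + c2) / 2.0
        denom = max(abs(v - mid) for v in image) or 1.0  # normalize to [-1, 1]
        # evolve phi with the SPF as speed, clamped to keep the update bounded
        phi = [max(-5.0, min(5.0, p + dt * alpha * (v - mid) / denom))
               for p, v in zip(phi, image)]
    return [1 if p > 0 else 0 for p in phi]
```

    Starting from a small seed inside a bright plateau, the contour expands until it captures the whole bright region and then stays put, since the SPF changes sign exactly at the region boundary.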

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    The medical image analysis field has traditionally focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art in multi-organ analysis and the associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multiple organs and anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare.

    A deep learning approach to bone segmentation in CT scans

    This thesis proposes a deep learning approach to bone segmentation in abdominal CT scans. Segmentation is a common initial step in medical image analysis, often fundamental for computer-aided detection and diagnosis systems. The extraction of bones from CT scans is a challenging task which, if done manually by experts, is time-consuming, and which today has no broadly recognized automatic solution. The presented method is based on a convolutional neural network, inspired by the U-Net and trained end-to-end, that performs a semantic segmentation of the data. The training dataset is made up of 21 abdominal CT scans, each containing between 403 and 994 2D transversal images. The images are at full resolution, 512x512 voxels, and each voxel is classified by the network into one of the following classes: background, femoral bones, hips, sacrum, sternum, spine, and ribs. The output is therefore a bone mask in which the bones are recognized and divided into six different classes. On the testing dataset, labeled by experts, the best model achieves a Dice coefficient, averaged over all bone classes, of 0.93. This work demonstrates, to the best of my knowledge for the first time, the feasibility of automatic bone segmentation and classification in CT scans using a convolutional neural network.
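    The reported 0.93 score is a Dice coefficient averaged over the bone classes. For multi-class masks this is conventionally computed one class at a time and then averaged, as sketched below on flat label lists; the function names are illustrative, not taken from the thesis.

```python
def per_class_dice(pred, target, classes):
    """Dice coefficient per label for multi-class masks given as flat label
    lists; a class absent from both masks scores 1.0 by convention."""
    scores = {}
    for c in classes:
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        denom = sum(1 for p in pred if p == c) + sum(1 for t in target if t == c)
        scores[c] = 2.0 * inter / denom if denom else 1.0
    return scores

def mean_dice(pred, target, classes):
    """Dice averaged over the given classes (e.g. the six bone classes)."""
    s = per_class_dice(pred, target, classes)
    return sum(s.values()) / len(s)
```

    Averaging over classes rather than voxels keeps small structures (such as the sternum) from being dominated by large ones (such as the spine) in the overall score.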

    Segmentation and Fracture Detection in CT Images for Traumatic Pelvic Injuries

    In recent decades, more types and greater quantities of medical data have been collected thanks to advances in technology. A large amount of significant and critical information is contained in these medical data. Highly efficient, automated computational methods are urgently needed to process and analyze all available medical data in order to provide physicians with recommendations and predictions for diagnostic decisions and treatment planning. Traumatic pelvic injury is a severe yet common injury in the United States, often caused by motor vehicle accidents or falls. Information contained in pelvic Computed Tomography (CT) images is very important for assessing the severity and prognosis of traumatic pelvic injuries. Each pelvic CT scan includes a large number of slices, and each slice contains a large quantity of data that cannot be thoroughly and accurately analyzed via simple visual inspection with the desired accuracy and speed. Hence, a computer-assisted pelvic trauma decision-making system is needed to assist physicians in making accurate diagnostic decisions and determining treatment plans in a short period of time. Pelvic bone segmentation is a vital step in analyzing pelvic CT images and assisting physicians with diagnostic decisions in traumatic pelvic injuries. In this study, a new hierarchical segmentation algorithm is proposed to automatically extract multiple-level bone structures using a combination of anatomical knowledge and computational techniques. First, morphological operations, image enhancement, and edge detection are performed for preliminary bone segmentation. The proposed algorithm then uses a template-based best shape matching method that provides an entirely automated segmentation process. This is followed by the proposed Registered Active Shape Model (RASM) algorithm, which extracts pelvic bone tissues using more robust training models than the standard ASM algorithm.
    In addition, a novel hierarchical initialization process for RASM is proposed to address the main shortcoming of the standard ASM, namely its high sensitivity to initialization. Two measures, Mean Distance and Mis-segmented Area, are defined to quantify segmentation accuracy. Successful segmentation results indicate the effectiveness and robustness of the proposed algorithm. Segmentation performance is also compared between the proposed method and the Snake method, and a cross-validation process is designed to demonstrate the effectiveness of the training models. 3D pelvic bone models are built after pelvic bone structures are segmented from consecutive 2D CT slices. Automatic and accurate detection of fractures in segmented bones can help physicians assess the severity of traumatic pelvic injuries. The extraction of fracture features (such as the presence and location of fractures), as well as fracture displacement measurement, is vital for assisting physicians in making faster and more accurate decisions. In this project, after bone segmentation, fracture detection is performed using a hierarchical algorithm based on wavelet transformation, adaptive windowing, boundary tracing, and masking. A quantitative measure of fracture severity based on pelvic CT scans is also defined and explored. The results are promising, demonstrating that the proposed method is not only capable of automatically detecting both major and minor fractures, but also has potential for clinical application.
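    A boundary measure of the kind named above can be sketched as a symmetric mean nearest-point distance between two contours; this is a common formulation of a "Mean Distance" segmentation measure, and the thesis's exact definition may differ.

```python
import math

def mean_distance(contour_a, contour_b):
    """Symmetric mean nearest-point distance between two contours,
    each given as a list of (x, y) points: for every point on one contour,
    take the distance to the closest point on the other, then average the
    two directional means."""
    def one_way(a, b):
        return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
    return 0.5 * (one_way(contour_a, contour_b) + one_way(contour_b, contour_a))
```

    Symmetrizing matters because the one-way mean is not a metric: a short contour hugging part of a long one can score near zero in one direction while being far off in the other.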

    Leveraging Supervoxels for Medical Image Volume Segmentation With Limited Supervision

    The majority of existing methods for machine learning-based medical image segmentation are supervised models that require large amounts of fully annotated images. Such datasets are typically not available in the medical domain and are difficult and expensive to generate. Widespread use of machine learning based models for medical image segmentation therefore requires the development of data-efficient algorithms that need only limited supervision. To address these challenges, this thesis presents new machine learning methodology for unsupervised lung tumor segmentation and few-shot learning based organ segmentation. When working in the limited supervision paradigm, exploiting the available information in the data is key. The methodology developed in this thesis leverages automatically generated supervoxels in various ways to exploit the structural information in the images. The work on unsupervised tumor segmentation explores the opportunity of performing clustering at a population level in order to provide the algorithm with as much information as possible. To facilitate this population-level, across-patient clustering, supervoxel representations are exploited to reduce the number of samples, and thereby the computational cost. In the work on few-shot learning-based organ segmentation, supervoxels are used to generate pseudo-labels for self-supervised training. Further, to obtain a model that is robust to the typically large and inhomogeneous background class, a novel anomaly detection-inspired classifier is proposed to ease the modelling of the background. To encourage the resulting segmentation maps to respect edges defined in the input space, a supervoxel-informed feature refinement module is proposed to refine the embedded feature vectors during inference. Finally, to improve trustworthiness, an architecture-agnostic mechanism to estimate model uncertainty in few-shot segmentation is developed. The results demonstrate that supervoxels are versatile tools for leveraging structural information in medical data when training segmentation models with limited supervision.
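    One way supervoxels yield pseudo-labels for self-supervised few-shot training is to sample a supervoxel and treat it as a binary pseudo-foreground mask. The sketch below shows that sampling step only, under the assumption of a flat list of precomputed supervoxel ids; it is an illustrative sketch in the spirit of the approach, not the thesis's pipeline.

```python
import random

def sample_supervoxel_pseudo_label(supervoxel_ids, seed=None):
    """Pick one supervoxel id uniformly at random and return the binary
    pseudo-foreground mask it induces, plus the chosen id. supervoxel_ids
    is a flat list assigning each voxel to a supervoxel."""
    rng = random.Random(seed)
    chosen = rng.choice(sorted(set(supervoxel_ids)))   # sorted for determinism
    mask = [1 if s == chosen else 0 for s in supervoxel_ids]
    return mask, chosen
```

    Because supervoxels follow intensity edges, such pseudo-masks have realistic object-like boundaries, which is what makes them useful as free training targets.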