10 research outputs found

    Automatic Segmentation of the Lumbar Spine from Medical Images

    Segmentation of the lumbar spine in 3D is a necessary step in numerous medical applications, but remains a challenging problem for computational methods due to the complex and varied shape of the anatomy and the noise and other artefacts often present in the images. While manual annotation of anatomical objects such as vertebrae is often carried out with the aid of specialised software, obtaining even a single example can be extremely time-consuming. Automating the segmentation process is the only feasible way to obtain accurate and reliable segmentations on any large scale. This thesis describes an approach for automatic segmentation of the lumbar spine from medical images, specifically those acquired using magnetic resonance imaging (MRI) and computed tomography (CT). The segmentation problem is formulated as one of assigning class labels to local clustered regions of an image (called superpixels in 2D or supervoxels in 3D). Features are introduced in 2D and 3D which can be used to train a classifier for estimating the class labels of the superpixels or supervoxels. Spatial context is introduced by incorporating the class estimates into a conditional random field along with a learned pairwise metric. Inference over the resulting model can be carried out very efficiently, enabling an accurate pixel- or voxel-level segmentation to be recovered from the labelled regions. In contrast to most previous work in the literature, the approach does not rely on explicit prior shape information. It therefore avoids many of the problems associated with such methods, such as the need to construct a representative prior model of anatomical shape from training data and the approximate nature of the optimisation. The general-purpose nature of the proposed method means that it can be used to accurately segment both vertebrae and intervertebral discs from medical images without fundamental changes to the model. Evaluation of the approach shows that it achieves accurate and robust performance in the presence of significant anatomical variation. The median average symmetric surface distances for 2D vertebra segmentation were 0.27 mm on MRI data and 0.02 mm on CT data. For 3D vertebra segmentation the median surface distances were 0.90 mm on MRI data and 0.20 mm on CT data. For 3D intervertebral disc segmentation a median surface distance of 0.54 mm was obtained on MRI data.
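
    The region-labelling stage described in this abstract can be sketched in a few lines. The example below is a minimal illustration only, assuming 2D greyscale slices, SLIC superpixels from scikit-image, simple per-region intensity statistics as features and a random forest classifier; none of these are necessarily the choices made in the thesis, and the CRF refinement with the learned pairwise metric is omitted.

```python
# Sketch: classify SLIC superpixels and map region labels back to pixels.
# Hypothetical stand-in for the pipeline described above; the CRF stage is omitted.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def region_features(image, regions):
    """Mean/std/min/max intensity per superpixel (illustrative features only)."""
    feats = []
    for r in np.unique(regions):
        vals = image[regions == r]
        feats.append([vals.mean(), vals.std(), vals.min(), vals.max()])
    return np.asarray(feats)

def train(images, masks, n_segments=400):
    """images: list of 2D float arrays; masks: integer class labels per pixel."""
    X, y = [], []
    for img, mask in zip(images, masks):
        regions = slic(img, n_segments=n_segments, compactness=0.1,
                       start_label=0, channel_axis=None)
        X.append(region_features(img, regions))
        # Majority vote of the pixel labels inside each superpixel as its training label.
        y.append([np.bincount(mask[regions == r]).argmax() for r in np.unique(regions)])
    return RandomForestClassifier(n_estimators=200).fit(np.vstack(X), np.concatenate(y))

def segment(clf, image, n_segments=400):
    regions = slic(image, n_segments=n_segments, compactness=0.1,
                   start_label=0, channel_axis=None)
    region_labels = clf.predict(region_features(image, regions))
    return region_labels[regions]  # broadcast region labels back to pixel level
```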

    Magnetic resonance image-based brain tumour segmentation methods : a systematic review

    Background: Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital to method development. Purpose: To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared with manual segmentation. Methods: We conducted a systematic review of 572 brain tumour segmentation studies published during 2015–2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method according to the magnetic resonance imaging sequences used, the study population, the technical approach (such as deep learning) and the performance measures reported (such as the Dice score). Statistical tests: We compared median Dice scores for segmenting the whole tumour, tumour core and enhancing tumour. Results: We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in segmentation algorithms, whereas perfusion-weighted and diffusion-weighted magnetic resonance imaging see limited use. Moreover, we found that the U-Net deep learning architecture is cited the most and achieves high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation. Conclusion: U-Net is a promising deep learning architecture for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so that training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where limited datasets are available.
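
    The Dice score used as the headline performance measure in the review is straightforward to compute from label maps. The sketch below assumes a BraTS-style label convention (0 background, 1 necrotic core, 2 oedema, 4 enhancing tumour), which may differ from the conventions of individual studies covered by the review.

```python
# Sketch: Dice overlap for the three standard tumour sub-regions.
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def region_dice(pred_labels, true_labels):
    """Assumed label convention: 0 background, 1 necrotic core, 2 oedema, 4 enhancing."""
    regions = {
        "whole tumour":     lambda m: m > 0,
        "tumour core":      lambda m: np.isin(m, [1, 4]),
        "enhancing tumour": lambda m: m == 4,
    }
    return {name: dice(f(pred_labels), f(true_labels)) for name, f in regions.items()}
```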

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set LNCS 12962 and 12963 constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    è…č郚CTćƒäžŠăźè€‡æ•°ă‚Șăƒ–ă‚žă‚§ă‚Żăƒˆăźă‚»ă‚°ăƒĄăƒłăƒ†ăƒŒă‚·ăƒ§ăƒłăźăŸă‚ăźç”±èšˆçš„æ‰‹æł•ă«é–ąă™ă‚‹ç ”ç©¶

    Computer-aided diagnosis (CAD) uses computer-generated output as an auxiliary tool to support efficient interpretation and accurate diagnosis. Medical image segmentation plays an essential role in CAD in clinical applications. Generally, the task of medical image segmentation involves multiple objects, such as organs or diffuse tumor regions. Moreover, segmenting these regions from abdominal computed tomography (CT) images is particularly difficult because of overlapping intensities and the variability in position and shape of soft tissues. In this thesis, a progressive segmentation framework is proposed to extract liver and tumor regions from CT images more efficiently, comprising coarse segmentation of multiple organs, fine segmentation, and liver tumor segmentation. Benefiting from prior knowledge of the shape and its deformation, a statistical shape model (SSM) is first used to segment multiple organ regions robustly. In building an SSM, the correspondence of landmarks is crucial to the quality of the model. To generate a more representative prototype of the organ surface, a k-means clustering method is proposed. The quality of the SSMs, measured by generalization ability, specificity, and compactness, was improved. We further extend the shape correspondence to multiple objects. A non-rigid iterative closest point surface registration process is proposed to find better-corresponding landmarks across the multi-organ surfaces. The accuracy of surface registration was improved, as was the model quality. Moreover, to localize the abdominal organs simultaneously, we propose a random forest regressor with intensity features to predict the positions of multiple organs in the CT image. The organ regions are then substantially constrained using the trained shape models. The accuracy of coarse segmentation using SSMs was increased by this initial information on organ positions. Consequently, a pixel-wise segmentation based on the classification of supervoxels is applied for the fine segmentation of multiple organs. Intensity and spatial features are extracted from each supervoxel and classified by a trained random forest. The resulting supervoxel boundaries follow the real organs more closely than the previous coarse segmentation. Finally, we developed a hybrid framework for liver tumor segmentation in multiphase images. To distinguish and delineate tumor regions from peripheral tissues, this task is accomplished in two steps: a cascade region-based convolutional neural network (R-CNN) with a refined head is trained to locate the bounding boxes that contain tumors, and phase-sensitive noise filtering is introduced to refine the subsequent segmentation of tumor regions performed by a level-set-based framework. The tumor detection results show that adjacent tumors are successfully separated by the improved cascade R-CNN. The accuracy of tumor segmentation is also improved by the proposed method. Twenty-six cases of multi-phase CT images were used to validate the proposed method for the segmentation of liver tumors. The average precision and recall rates for tumor detection are 76.8% and 84.4%, respectively.
    The intersection over union, true positive rate, and false positive rate for tumor segmentation are 72.7%, 76.2%, and 4.75%, respectively. Doctoral dissertation, Kyushu Institute of Technology; degree number ć·„ćšç”Č珏546ć· (Doctor of Engineering), conferred 25 March 2022. Contents: 1 Introduction | 2 Literature Review | 3 Statistical Shape Model Building | 4 Multi-organ Segmentation | 5 Liver Tumors Segmentation | 6 Summary and Outlook.
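
    Detection precision and recall of the kind quoted above are commonly computed by matching predicted and ground-truth tumour bounding boxes on their intersection over union. The sketch below assumes axis-aligned boxes in (x1, y1, x2, y2) form, greedy one-to-one matching and a single IoU threshold; this is only one of several reasonable evaluation protocols and not necessarily the one used in the thesis.

```python
# Sketch: bounding-box detection precision/recall at a fixed IoU threshold.
import numpy as np

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(pred_boxes, gt_boxes, iou_thr=0.5):
    """Greedily match each detection to an unmatched ground-truth box."""
    matched, tp = set(), 0
    for p in pred_boxes:
        ious = [box_iou(p, g) if i not in matched else 0.0
                for i, g in enumerate(gt_boxes)]
        if ious and max(ious) >= iou_thr:
            matched.add(int(np.argmax(ious)))
            tp += 1
    fp = len(pred_boxes) - tp   # unmatched detections
    fn = len(gt_boxes) - tp     # missed ground-truth tumours
    precision = tp / (tp + fp) if pred_boxes else 0.0
    recall = tp / (tp + fn) if gt_boxes else 0.0
    return precision, recall
```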

    Brain Tumor Detection and Segmentation in Multisequence MRI

    This work deals with brain tumor detection and segmentation in multisequence MR images, with particular focus on high- and low-grade gliomas. Three methods are proposed for this purpose. The first method detects the presence of brain tumor structures in axial and coronal slices. It is based on multi-resolution symmetry analysis and was tested on T1, T2, T1C and FLAIR images. The second method extracts the whole brain tumor region, including tumor core and edema, in FLAIR and T2 images, and is suitable for extracting the whole brain tumor region from both 2D and 3D images. It also uses the symmetry analysis approach, followed by automatic determination of an intensity threshold from the most asymmetric parts. The third method is based on local structure prediction and is able to segment the whole tumor region as well as the tumor core and the active tumor. This method takes advantage of the fact that most medical images feature a high similarity in intensities of nearby pixels and a strong correlation of intensity profiles across different image modalities. One way of dealing with, and even exploiting, this correlation is the use of local image patches. In the same way, there is a high correlation between nearby labels in image annotation, a feature that has been used in the "local structure prediction" of local label patches. A convolutional neural network is chosen as the learning algorithm, as it is known to be suited to dealing with correlation between features.
    All three methods were evaluated on a public data set of 254 multisequence MR volumes, reaching results comparable to state-of-the-art methods in much shorter computing time (on the order of seconds running on a CPU), providing means, for example, to do online updates when aiming at an interactive segmentation.
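
    The symmetry-analysis idea behind the first two methods can be illustrated with a toy example: reflect an axial slice about the midline, measure the absolute difference, and threshold the most asymmetric regions. The sketch below assumes the mid-sagittal plane coincides with the vertical centre line of the image and uses a single resolution with an Otsu threshold, whereas the work described above operates at multiple resolutions.

```python
# Sketch: rough tumour candidate mask from left-right asymmetry of an axial slice.
import numpy as np
from skimage.filters import gaussian, threshold_otsu

def asymmetry_mask(slice2d, sigma=2.0):
    """Assumes the brain midline lies on the vertical centre line of the image."""
    img = gaussian(slice2d.astype(float), sigma=sigma)  # suppress noise first
    mirrored = img[:, ::-1]                  # reflect across the assumed midline
    asym = np.abs(img - mirrored)            # high values mark asymmetric structures
    thr = threshold_otsu(asym)               # automatic intensity threshold
    return asym > thr
```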

    Automatic Segmentation of Cells of Different Types in Fluorescence Microscopy Images

    Recognition of different cell compartments, types of cells, and their interactions is a critical aspect of quantitative cell biology. It provides valuable insight for understanding cellular and subcellular interactions and the mechanisms of biological processes, such as cancer cell dissemination, organ development and wound healing. Quantitative analysis of cell images is also the mainstay of numerous clinical diagnostic and grading procedures, for example in cancer, immunological, infectious, heart and lung disease. Automating the quantification of cellular biological samples requires segmenting different cellular and sub-cellular structures in microscopy images. However, automating this problem has proven to be non-trivial, as it requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and to irregularly shaped structures. This thesis focuses on the development and application of probabilistic graphical models to multi-class cell segmentation. Graphical models can improve segmentation accuracy through their ability to exploit prior knowledge and model inter-class dependencies. Directed acyclic graphs such as trees have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. With trees, however, only a limited set of inter-class constraints can be captured. To overcome this limitation, polytree graphical models are proposed in this thesis, which capture label proximity relations more naturally than tree-based approaches. Polytrees can effectively impose prior knowledge on the inclusion of different classes by capturing both same-level and across-level dependencies. A novel recursive mechanism based on two-pass message passing is developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. Furthermore, since an accurate and sufficiently large ground truth is not always available for training segmentation algorithms, a weakly supervised framework is developed that employs polytrees for multi-class segmentation, reducing the need for training by modeling prior knowledge during segmentation. Generating a hierarchical graph over the superpixels in the image, the labels of nodes are inferred through a novel efficient message-passing algorithm, and the model parameters are optimized with Expectation Maximization (EM). Results of evaluation on the segmentation of simulated data and multiple publicly available fluorescence microscopy datasets indicate that the proposed method outperforms the state of the art. The proposed method has also been assessed in predicting possible segmentation errors and has been shown to outperform trees. This can pave the way to calculating uncertainty measures on the resulting segmentation and guiding subsequent segmentation refinement, which can be useful in the development of an interactive segmentation framework.
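
    The two-pass message passing used to obtain closed-form node posteriors can be illustrated on a toy tree with one root and two children (a special case of a polytree). The labels, potentials and structure below are invented for the demonstration and do not reproduce the hierarchical superpixel model or the weakly supervised EM framework described above.

```python
# Sketch: two-pass sum-product message passing on a three-node tree.
import numpy as np

def normalise(v):
    return v / v.sum()

K = 3  # e.g. hypothetical classes: background / nucleus / cytoplasm
rng = np.random.default_rng(0)
phi = {i: rng.random(K) + 0.1 for i in range(3)}           # unary (evidence) potentials
psi = np.array([[0.8, 0.1, 0.1],
                [0.1, 0.8, 0.1],
                [0.1, 0.1, 0.8]])                            # pairwise psi[x_parent, x_child]

# Upward pass: each child c sends m_{c->0}(x0) = sum_{xc} psi(x0, xc) * phi_c(xc)
m_up = {c: psi @ phi[c] for c in (1, 2)}

# Root posterior combines its own evidence with both upward messages
p_root = normalise(phi[0] * m_up[1] * m_up[2])

# Downward pass: the root sends each child everything except that child's own message
p_child = {}
for c in (1, 2):
    other = 2 if c == 1 else 1
    m_down = psi.T @ (phi[0] * m_up[other])                  # m_{0->c}(xc)
    p_child[c] = normalise(phi[c] * m_down)

print("P(x0):", p_root)
print("P(x1):", p_child[1], "P(x2):", p_child[2])
```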

    Irish Machine Vision and Image Processing Conference Proceedings 2017
