
    Cube-Cut: Vertebral Body Segmentation in MRI-Data through Cubic-Shaped Divergences

    In this article, we present a graph-based method using a cubic template for volumetric segmentation of vertebrae in magnetic resonance imaging (MRI) acquisitions. The user can define the degree of deviation from a regular cube via a smoothness value Delta. The Cube-Cut algorithm generates a directed graph with two terminal nodes (s-t network), where the nodes of the graph correspond to a cubic-shaped subset of the image's voxels. The weights of the graph's terminal edges, which connect every node with a virtual source s or a virtual sink t, represent the affinity of a voxel to the vertebra (source) and to the background (sink). Furthermore, a set of infinite-weight non-terminal edges implements the smoothness term. After graph construction, a minimum s-t cut is calculated in polynomial time, splitting the nodes into two disjoint sets. The segmentation result is then derived from the source set. A quantitative evaluation of a C++ implementation of the algorithm yielded an average Dice Similarity Coefficient (DSC) of 81.33% and a running time of less than a minute. Comment: 23 figures, 2 tables, 43 references, PLoS ONE 9(4): e9338
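    The minimum s-t cut at the heart of Cube-Cut can be illustrated on a toy problem. The sketch below is not the authors' C++ implementation: it runs Edmonds-Karp max-flow in pure Python on a four-voxel "image", with terminal capacities standing in for the vertebra/background affinities and a uniform neighbour weight standing in for the smoothness term; all intensities and weights are made up.

```python
from collections import deque

def max_flow_min_cut(n, cap, s, t):
    """Edmonds-Karp max-flow; returns the source side of the minimum s-t cut."""
    flow = [[0.0] * n for _ in range(n)]

    def bfs():
        # breadth-first search for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        return parent

    while True:
        parent = bfs()
        if parent[t] == -1:          # no augmenting path left: flow is maximal
            break
        v, bottleneck = t, float("inf")
        while v != s:                # find the bottleneck capacity on the path
            bottleneck = min(bottleneck, cap[parent[v]][v] - flow[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                # push the bottleneck along the path
            flow[parent[v]][v] += bottleneck
            flow[v][parent[v]] -= bottleneck
            v = parent[v]
    parent = bfs()                   # nodes still reachable = source side of cut
    return {v for v in range(n) if parent[v] != -1}

# Toy 1-D "image": bright voxels resemble vertebra, dark ones background.
intensity = [0.9, 0.8, 0.2, 0.1]
n = len(intensity) + 2                     # voxels + virtual source + sink
S, T = len(intensity), len(intensity) + 1
cap = [[0.0] * n for _ in range(n)]
smoothness = 0.5                           # uniform neighbour weight (made up)
for i, g in enumerate(intensity):
    cap[S][i] = g                          # affinity to the vertebra (source)
    cap[i][T] = 1.0 - g                    # affinity to the background (sink)
for i in range(len(intensity) - 1):
    cap[i][i + 1] = cap[i + 1][i] = smoothness
labels = [1 if i in max_flow_min_cut(n, cap, S, T) else 0
          for i in range(len(intensity))]
print(labels)  # [1, 1, 0, 0]: the source set yields the segmentation
```

    In the paper the smoothness edges inside the cubic template carry infinite weight; the finite value here merely keeps the toy cut interesting.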

    Automated brain segmentation methods for clinical quality MRI and CT images

    Alzheimer's disease (AD) is a progressive neurodegenerative disorder associated with brain tissue loss. Accurate estimation of this loss is critical for the diagnosis, prognosis, and tracking of the progression of AD. Structural magnetic resonance imaging (sMRI) and X-ray computed tomography (CT) are widely used imaging modalities that help map brain tissue distributions in vivo. As manual image segmentation is tedious and time-consuming, automated segmentation methods are increasingly applied to head MRI and head CT images to estimate brain tissue volumes. However, existing automated methods can be applied only to images with high spatial resolution, and their accuracy on heterogeneous, low-quality clinical images has not been tested. Further, automated brain tissue segmentation methods for CT are not available, although CT is more widely acquired than MRI in the clinical setting. For these reasons, large clinical imaging archives are unusable for research studies. In this work, we identify and develop automated tissue segmentation and brain volumetry methods that can be applied to clinical quality MRI and CT images. In the first project, we surveyed current MRI methods and validated their accuracy when applied to clinical quality images. We then developed CTSeg, a tissue segmentation method for CT images, by adopting the MRI technique that exhibited the highest reliability. CTSeg is an atlas-based statistical modeling method that relies on hand-curated features and cannot be applied to images of subjects with different diseases and age groups. Advanced deep learning-based segmentation methods use hierarchical representations and learn complex features in a data-driven manner. In our final project, we developed a fully automated deep learning segmentation method that uses contextual information to segment clinical quality head CT images.
The application of this method to an AD dataset revealed larger differences between the brain volumes of AD and control subjects. This dissertation demonstrates the potential of applying automated methods to large clinical imaging archives to answer research questions in a variety of studies.
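    As a simplified illustration of CT tissue segmentation and brain volumetry (not CTSeg itself, which is atlas-based), voxels can be classified by approximate Hounsfield-unit (HU) ranges. The thresholds below are rough textbook values, not parameters from this work.

```python
def classify_hu(hu):
    """Assign a coarse tissue class to one CT voxel by its HU value.
    Thresholds are approximate textbook ranges, purely illustrative."""
    if hu < -200:
        return "air"
    if hu < 20:
        return "csf"          # cerebrospinal fluid / fluid-like
    if hu < 60:
        return "brain"        # grey + white matter lumped together
    if hu < 700:
        return "soft_tissue"
    return "bone"

def brain_volume_ml(hu_values, voxel_volume_mm3):
    """Estimate brain volume from a flat list of HU values."""
    n_brain = sum(1 for v in hu_values if classify_hu(v) == "brain")
    return n_brain * voxel_volume_mm3 / 1000.0   # mm^3 -> mL

# toy "scan": six voxels of 1 mm^3 each
scan = [-1000, 5, 35, 40, 120, 1200]
print(brain_volume_ml(scan, 1.0))  # two brain voxels -> 0.002 mL
```

    Real clinical CT requires far more than thresholding (partial-volume effects, noise, skull stripping), which is why the dissertation moves to statistical and deep learning models.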

    Feasibility of magnetic resonance imaging-based radiation therapy for brain tumour treatment

    Purpose: The increasing use of MRI alongside CT images has generated growing interest in deriving radiation attenuation information from MR images alone. The primary aim of this thesis is, therefore, to determine which head tissue compartments need separate HU values in order to obtain sufficient RT planning accuracy. This can serve as input for an MR-based classification, thus enabling pseudo-CT generation in an MR-only RT workflow. Methods: To achieve this, flattened (stratified) CT images (fCT) were generated and compared to the original CT images. Mean (ME) and mean absolute (MAE) errors were used for the fCT quality assessment, as were dose comparisons. 70 CT-based RT plans were generated and the dose distributions compared to those obtained when using the different fCT versions in place of the original CT images. The dose agreement was assessed using DVH and 1%/1mm gamma analysis. Results: The lowest MAE of 59.63 HU was calculated for an fCT8 version. DVH analysis showed small differences, ranging from 3% (water-filled fCT) down to 0.05% depending on the tissue stratification of the fCT version. The 1%/1mm gamma analysis correctly identified plans where insufficiently fine-grained tissue classification was the main reason for dose discrepancy. The best RT planning accuracy was obtained for the fCT5, with segmented air cavities, fat, water-rich tissue, spongy bone and compact bone, and for the fCT8, where the brain tissue was also stratified. Conclusions: The small differences in dose accuracy between CT and fCT images show the feasibility of MR-only RT planning for the brain. Nonetheless, other aspects of the MR-only workflow, such as patient positioning and the impact of, e.g., surgical incisions in the skull, should be subject to further research.
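    The flattening idea can be sketched as follows: each voxel is replaced by one representative HU value for its tissue class, and the MAE against the original CT quantifies what the stratification loses. The class boundaries and representative values below are illustrative, not the thesis's fCT5/fCT8 parameters.

```python
def flatten_ct(hu_values, class_edges, class_values):
    """Map each HU value to the representative HU of its tissue class.
    class_edges: ascending upper bounds; class_values: one HU per class
    (one more entry than class_edges, for values above the last edge)."""
    out = []
    for v in hu_values:
        for edge, rep in zip(class_edges, class_values):
            if v < edge:
                out.append(rep)
                break
        else:
            out.append(class_values[-1])
    return out

def mae(a, b):
    """Mean absolute error between two equally long HU sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

ct = [-1000, -950, -80, 10, 40, 300, 1100]   # toy head-CT voxels
edges = [-200, -30, 80, 400]                  # air | fat | water-rich | spongy
values = [-1000, -90, 30, 250, 1300]          #             ... | compact bone
fct = flatten_ct(ct, edges, values)
print(mae(ct, fct))
```

    Fewer, coarser classes drive the MAE up; adding classes (as in fCT8) drives it down, mirroring the trend reported in the thesis.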

    Task Driven Generative Modeling for Unsupervised Domain Adaptation: Application to X-ray Image Segmentation

    Automatic parsing of anatomical objects in X-ray images is critical to many clinical applications, in particular image-guided intervention and workflow automation. Existing deep network models require a large amount of labeled data. However, obtaining accurate pixel-wise labels in X-ray images relies heavily on skilled clinicians, due to the large overlaps of anatomy and the complex texture patterns. On the other hand, organs in 3D CT scans preserve clearer structures and sharper boundaries and can thus be easily delineated. In this paper, we propose a novel model framework for learning automatic X-ray image parsing from labeled CT scans. Specifically, a Dense Image-to-Image network (DI2I) for multi-organ segmentation is first trained on X-ray-like Digitally Reconstructed Radiographs (DRRs) rendered from 3D CT volumes. Then we introduce a Task Driven Generative Adversarial Network (TD-GAN) architecture to achieve simultaneous style transfer and parsing for unseen real X-ray images. TD-GAN consists of a modified cycle-GAN substructure for pixel-to-pixel translation between DRRs and X-ray images, plus an added module that leverages the pre-trained DI2I to enforce segmentation consistency. The TD-GAN framework is general and can be easily adapted to other learning tasks. In the numerical experiments, we validate the proposed model on 815 DRRs and 153 topograms. While the vanilla DI2I without any adaptation fails completely at segmenting the topograms, the proposed model requires no topogram labels and provides a promising average Dice of 85%, approaching the accuracy of supervised training (88%).
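    The average Dice score reported above is computed per image from binary masks; a minimal sketch with masks as flat 0/1 lists.

```python
def dice(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
    Both masks are flat lists of 0/1 labels of equal length."""
    inter = sum(p & g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2.0 * inter / total if total else 1.0  # two empty masks agree

pred = [1, 1, 1, 0, 0, 0]   # toy predicted mask
gt   = [0, 1, 1, 1, 0, 0]   # toy ground-truth mask
print(dice(pred, gt))       # 2*2 / (3+3) = 0.666...
```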

    Validity of 4DCT determined internal target volumes in radiotherapy for free breathing lung cancer patients

    Background: With both a high incidence and a high death rate, lung cancer accounts for a large burden of disease worldwide. In many cases, these patients receive radiotherapy. Commonly, margins are added to the treatment volume to avoid underdosage due to respiration-induced tumour motion. Four-Dimensional Computed Tomography (4DCT) is an imaging technique capable of capturing lung tumours as they move during respiration, which enables the creation of individualized margins. However, the technique requires regular breathing during the entire scan to avoid breathing motion artefacts, and many patients do not fulfil this requirement. Moreover, there is a substantial risk of encountering irregularities in the breathing pattern during the 4-8 minutes usually needed for treatment delivery. Hence, there is a potential risk of underestimating the tumour volume and its motion, i.e., the Internal Target Volume (ITV), for patients with irregular breathing patterns. We aim to investigate the risk of underestimating a 4DCT-determined ITV due to irregular breathing patterns during a typical treatment period. Method: For 5 patients, the ITV was extracted from a 4DCT scan and compared to the ITV extracted from the sum of 150 cine images (3 x 50). The cine images were acquired over 4 minutes in three different sessions. All ITVs were obtained through segmentation. Results: ITVs obtained from the 4DCT scan were smaller than those from the cine images, and statistical analysis confirmed this at a significance level of 5%. Conclusion: We conclude that the margin required to handle respiration-induced tumour motion can be underestimated for patients with irregular breathing patterns if the ITV is based on a conventional treatment planning 4DCT. The main cause for this is inter-fractional variation.
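    The comparison rests on the fact that the ITV is the union of the tumour masks over all time points, so adding more (cine) frames can only grow it. The masks below are toy sets of voxel indices, not patient data.

```python
def itv(masks):
    """Internal Target Volume as the union of per-frame tumour masks,
    each mask being a set of occupied voxel indices."""
    vol = set()
    for m in masks:
        vol |= m
    return vol

# Toy example: a 4DCT with three phases vs. a longer cine acquisition that
# also catches an irregular-breathing excursion (10 phases in practice).
fourdct_phases = [{1, 2}, {2, 3}, {3, 4}]
cine_frames = [{1, 2}, {2, 3}, {3, 4}, {4, 5, 6}]
itv_4dct = itv(fourdct_phases)
itv_cine = itv(cine_frames)
print(len(itv_4dct), len(itv_cine))  # the cine-based ITV is larger
```

    The extra frame makes the cine ITV a strict superset of the 4DCT ITV, which is precisely the underestimation risk the study quantifies.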

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The imaging data available to radiologists continue to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists in increasing throughput while reducing human error and bias, without compromising the outcome of screening, diagnosis or disease assessment. More intelligent, yet simple, consistent and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training, and despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the tools and techniques already developed, as well as a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that results only in incremental improvements over existing algorithms.
In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools --- localization, segmentation and registration --- and illustrate their use across several medical imaging modalities --- X-ray, computed tomography, ultrasound and magnetic resonance imaging --- and several clinical applications: lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images; automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for assessment of long-limb mechanical axis and knee misalignment; and left and right ventricle localization, segmentation, reconstruction and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment. When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions: they not only have the potential to address the clinical needs but are also sufficiently streamlined to be translated into eventual clinical tools, given proper implementation. G1: Reduce the number of degrees of freedom (DOF) of the designed tool, for example by avoiding inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and aims at reducing complexity.
G2: Use shape-based features to represent the image content most efficiently, for example by using edges instead of, or in addition to, intensities and motion where useful. Edges capture the most useful information in the image and can be used to identify the most important image features; as a result, this guideline ensures more robust performance when key image information is missing. G3: Implement the method efficiently. This guideline focuses on the minimum number of steps required and on avoiding the recalculation of terms that only need to be computed once in an iterative process; an efficient implementation reduces computational effort and improves performance. G4: Commence the workflow from an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in consistent ways, avoiding convergence to local minima while gradually ensuring convergence to the global minimum. These guidelines lead to interactive, semi-automated or fully automated approaches that still enable clinicians to perform final refinements, while reducing overall inter- and intra-observer variability and ambiguity, increasing accuracy and precision, and offering mechanisms that can help provide a more consistent diagnosis in a timely fashion.

    Image Fusion and Axial Labeling of the Spine

    In order to improve the radiological diagnosis of back pain and spine disease, two new algorithms have been developed to aid the 75% of Canadians who will suffer from back pain in a given year. With the medical imaging required for many of these patients, there is potential to improve both patient care and healthcare economics by increasing the accuracy and efficiency of spine diagnosis. A real-time spine image fusion system and an automatic vertebra/disc labeling system have been developed to address this. Both magnetic resonance (MR) and computed tomography (CT) images are often acquired for patients: the MR image highlights soft tissue detail, while the CT image highlights bone detail. It is desirable to present both modalities in a single fused image containing the clinically relevant detail. The fusion problem was encoded in an energy functional balancing three competing goals for the fused image: 1) similarity to the MR image, 2) similarity to the CT image, and 3) smoothness (containing natural transitions). Graph-cut and convex solutions have been developed; they perform similarly to each other and outperform other fusion methods from recent literature. The convex solution achieves real-time performance on modern graphics processing units, allowing interactive control of the fused image. Clinical validation of the convex solution was conducted on 15 patient images: the fused images were shown to increase confidence of diagnosis compared to unregistered MR and CT images, with no change in time for diagnosis, based on readings from 5 radiologists. Spinal vertebrae serve as a reference for the location of surrounding tissues, but vertebrae appear very similar to each other, making it time-consuming for radiologists to keep track of their locations. To automate this, an axial MR labeling algorithm was developed that runs in near real-time.
Probability product kernels and fast integral images, combined with simple geometric rules, were used to classify pixels, slices and vertebrae. Evaluation was conducted on 32 lumbar spine images and 24 cervical spine images; the algorithm demonstrated 99% and 79% accuracy on the lumbar and cervical spine, respectively.
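    The three-term fusion energy described above can be illustrated with a 1-D toy version: quadratic data terms pulling the fused signal toward the MR and CT values, plus a quadratic smoothness penalty, minimised by plain gradient descent. This is only a sketch of the general idea; the thesis uses graph-cut and convex formulations on real images, and all weights, signals and the step size below are made up.

```python
def fuse(mr, ct, alpha=1.0, beta=1.0, lam=0.5, steps=500, lr=0.1):
    """Minimise sum_i [alpha*(f_i-mr_i)^2 + beta*(f_i-ct_i)^2]
    + lam * sum_i (f_{i+1}-f_i)^2 by gradient descent on a 1-D signal."""
    f = [(m + c) / 2.0 for m, c in zip(mr, ct)]   # start between the inputs
    n = len(f)
    for _ in range(steps):
        # gradient of the two data terms
        grad = [2 * alpha * (f[i] - mr[i]) + 2 * beta * (f[i] - ct[i])
                for i in range(n)]
        # gradient of the smoothness term lam*(f[i+1]-f[i])^2
        for i in range(n - 1):
            d = 2 * lam * (f[i + 1] - f[i])
            grad[i] -= d
            grad[i + 1] += d
        f = [f[i] - lr * grad[i] for i in range(n)]
    return f

mr = [0.0, 0.0, 1.0, 1.0]    # toy soft-tissue signal
ct = [0.0, 1.0, 1.0, 0.0]    # toy bone signal
fused = fuse(mr, ct)
print([round(v, 3) for v in fused])
```

    With equal data weights the fused signal settles between the two inputs, smoothed at the transitions; lowering the smoothness weight makes it track the inputs more sharply.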

    Thoracic Cartilage Ultrasound-CT Registration using Dense Skeleton Graph

    Autonomous ultrasound (US) imaging has gained increased interest recently and has been seen as a potential solution to overcome the limitations of free-hand US examinations, such as inter-operator variation. However, it is still challenging to accurately map planned paths from a generic atlas to individual patients, particularly for thoracic applications with high-acoustic-impedance bone structures under the skin. To address this challenge, a graph-based non-rigid registration is proposed to enable transferring planned paths from the atlas to the current setup by explicitly considering subcutaneous bone surface features instead of the skin surface. To this end, the sternum and cartilage branches are segmented using template matching to assist coarse alignment of the US and CT point clouds. Afterward, a directed graph is generated based on the CT template. Then, the self-organizing map using geographical distance is performed twice to extract the optimal graph representations for the CT and US point clouds separately. To evaluate the proposed approach, five cartilage point clouds from distinct patients are employed. The results demonstrate that the proposed graph-based registration can effectively map trajectories from CT to the current setup for displaying US views through the limited intercostal space. The non-rigid registration error in terms of Hausdorff distance (mean±SD) is 9.48±0.27 mm, and the path transfer error in terms of Euclidean distance is 2.21±1.11 mm. Comment: Accepted by IROS2
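    The Hausdorff distance used for the evaluation above measures the worst-case nearest-neighbour mismatch between two registered point clouds; a minimal brute-force sketch with made-up 3-D points.

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 3-D point sets
    (lists of (x, y, z) tuples), computed by brute force."""
    def directed(src, dst):
        # worst case over src of the distance to its nearest point in dst
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

us_pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # toy US cartilage points
ct_pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 3.0)]   # toy CT cartilage points
print(hausdorff(us_pts, ct_pts))  # 3.0: the worst-matched point pair
```

    The brute-force version is O(|A||B|); practical pipelines use k-d trees for the nearest-neighbour queries.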