
    A spatio-temporal atlas of the developing fetal brain with spina bifida aperta [version 2; peer review: 2 approved]

    Background: Spina bifida aperta (SBA) is a birth defect associated with severe anatomical changes in the developing fetal brain. Brain magnetic resonance imaging (MRI) atlases are popular tools for studying neuropathology in brain anatomy, but previous fetal brain MRI atlases have focused on the normal fetal brain. We aimed to develop a spatio-temporal fetal brain MRI atlas for SBA. Methods: We developed a semi-automatic computational method to compute the first spatio-temporal fetal brain MRI atlas for SBA. We used 90 MRIs of fetuses with SBA with gestational ages ranging from 21 to 35 weeks. Isotropic and motion-free 3D reconstructed MRIs were obtained for all the examinations. We propose a protocol for the annotation of anatomical landmarks in brain 3D MRI of fetuses with SBA with the aim of making spatial alignment of abnormal fetal brain MRIs more robust. In addition, we propose a weighted generalized Procrustes method based on the anatomical landmarks for the initialization of the atlas. The proposed weighted generalized Procrustes method can handle temporal regularization and missing annotations. After initialization, the atlas is refined iteratively using non-linear image registration based on the image intensity and the anatomical landmarks. A semi-automatic method is used to obtain a parcellation of our fetal brain atlas into eight tissue types: white matter, ventricular system, cerebellum, extra-axial cerebrospinal fluid, cortical gray matter, deep gray matter, brainstem, and corpus callosum. Results: An intra-rater variability analysis suggests that the seven anatomical landmarks are sufficiently reliable. We find that the proposed atlas outperforms a normal fetal brain atlas for the automatic segmentation of brain 3D MRI of fetuses with SBA. Conclusions: We make publicly available a spatio-temporal fetal brain MRI atlas for SBA at https://doi.org/10.7303/syn25887675. This atlas can support future research on automatic segmentation methods for brain 3D MRI of fetuses with SBA.
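    As a rough illustration of the initialization step, the sketch below (not the authors' code) shows how a weighted generalized Procrustes alignment can be set up so that missing landmark annotations simply receive zero weight; the temporal regularization described above is omitted, and the array shapes and function names are assumptions.

        # Minimal sketch of a weighted generalized Procrustes alignment of landmark sets.
        # Assumed inputs: landmarks of shape (n_subjects, n_landmarks, 3) and
        # weights of shape (n_subjects, n_landmarks); a weight of 0 marks a missing annotation.
        import numpy as np

        def weighted_rigid_align(src, dst, w):
            """Weighted rigid alignment (rotation + translation) of src onto dst."""
            w = w / (w.sum() + 1e-12)
            mu_src = (w[:, None] * src).sum(axis=0)
            mu_dst = (w[:, None] * dst).sum(axis=0)
            A, B = src - mu_src, dst - mu_dst
            H = (w[:, None] * A).T @ B                 # weighted covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return (R @ A.T).T + mu_dst

        def weighted_generalized_procrustes(landmarks, weights, n_iter=10):
            """Iteratively align all subjects to a weighted mean shape."""
            aligned = landmarks.astype(float).copy()
            for _ in range(n_iter):
                w_sum = weights.sum(axis=0)[:, None] + 1e-12
                mean_shape = (weights[..., None] * aligned).sum(axis=0) / w_sum
                aligned = np.stack([weighted_rigid_align(s, mean_shape, w)
                                    for s, w in zip(aligned, weights)])
            return aligned, mean_shape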

    Cloud-Based Benchmarking of Medical Image Analysis

    Medical imaging

    Generative Interpretation of Medical Images


    3D cephalometric landmark detection by multiple stage deep reinforcement learning

    The lengthy time needed for manual landmarking has delayed the widespread adoption of three-dimensional (3D) cephalometry. We here propose an automatic 3D cephalometric annotation system based on multi-stage deep reinforcement learning (DRL) and volume-rendered imaging. This system considers the geometrical characteristics of landmarks and simulates the sequential decision process underlying human professional landmarking patterns. It consists mainly of constructing an appropriate two-dimensional cutaway or 3D model view, then implementing single-stage DRL with gradient-based boundary estimation or multi-stage DRL to determine the 3D coordinates of target landmarks. This system clearly shows sufficient detection accuracy and stability for direct clinical applications, with a low level of detection error and low inter-individual variation (1.96 ± 0.78 mm). Our system, moreover, requires no additional steps of segmentation and 3D mesh-object construction for landmark detection. We believe these system features will enable fast-track cephalometric analysis and planning and expect it to achieve greater accuracy as larger CT datasets become available for training and testing.
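    To make the landmark-search idea concrete, here is a hypothetical, single-agent sketch in the spirit of DRL landmark detection: an agent steps voxel by voxel through a CT volume and is rewarded for moving closer to the target landmark during training. It is a generic illustration, not the multi-stage system described above; the class, action set, and reward shaping are assumptions.

        # Minimal landmark-search environment for a DRL agent (illustrative only).
        import numpy as np

        ACTIONS = np.array([[1, 0, 0], [-1, 0, 0],
                            [0, 1, 0], [0, -1, 0],
                            [0, 0, 1], [0, 0, -1]])   # single-voxel moves along ±x, ±y, ±z

        class LandmarkEnv:
            def __init__(self, volume, target, patch=16):
                self.vol, self.target, self.patch = volume, np.asarray(target), patch
                self.pos = np.array(volume.shape) // 2        # start at the volume centre

            def _observe(self):
                """Crop a cubic intensity patch around the current position (clipped at borders)."""
                lo = np.maximum(self.pos - self.patch // 2, 0)
                hi = lo + self.patch
                return self.vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

            def step(self, action_idx):
                old_dist = np.linalg.norm(self.pos - self.target)
                self.pos = np.clip(self.pos + ACTIONS[action_idx],
                                   0, np.array(self.vol.shape) - 1)
                new_dist = np.linalg.norm(self.pos - self.target)
                reward = old_dist - new_dist                  # closer => positive reward
                done = new_dist < 1.0                         # within one voxel of the landmark
                return self._observe(), reward, done

    A policy network trained on such an environment would then be rolled out on unseen scans, possibly at several scales, which is where a multi-stage design comes in.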

    Novel Approaches to the Representation and Analysis of 3D Segmented Anatomical Districts

    Nowadays, image processing and 3D shape analysis are an integral part of clinical practice and have the potential to support clinicians with advanced analysis and visualization techniques. Both approaches provide visual and quantitative information to medical practitioners, although from different points of view. Indeed, shape analysis is aimed at studying the morphology of anatomical structures, while image processing focuses on the tissue or functional information carried by pixel/voxel intensity levels. Despite the progress achieved by research in both fields, a junction between these two complementary worlds is missing. When working with 3D models to analyze shape features, the information of the volume surrounding the structure is lost, since a segmentation process is needed to obtain the 3D shape model; however, the 3D nature of the anatomical structure is represented explicitly. With volume images, instead, the tissue information related to the imaged volume is the core of the analysis, while the shape and morphology of the structure are only implicitly represented and therefore less clear. The aim of this Thesis work is the integration of these two approaches in order to increase the amount of information available to physicians, allowing a more accurate analysis of each patient. An augmented visualization tool able to provide information on both the anatomical structure's shape and the surrounding volume through a hybrid representation could reduce the gap between the two approaches and provide a more complete anatomical rendering of the subject. To this end, given a segmented anatomical district, we propose a novel mapping of volumetric data onto the segmented surface. The grey-levels of the image voxels are mapped through a volume-surface correspondence map, which defines a grey-level texture on the segmented surface. The resulting texture mapping is coherent with the local morphology of the segmented anatomical structure and provides an enhanced visual representation of the anatomical district. The integration of volume-based and surface-based information in a single 3D representation also supports the identification and characterization of morphological landmarks and pathology evaluations. The main research contributions of the Ph.D. activities and Thesis are:
    • the development of a novel integration algorithm that combines surface-based (segmented 3D anatomical structure meshes) and volume-based (MRI volumes) information, supporting different criteria for the grey-level mapping onto the segmented surface;
    • the development of methodological approaches for using the grey-level mapping together with morphological analysis, with the final goal of solving real clinical tasks, such as the identification of (patient-specific) ligament insertion sites on bones from segmented MR images, the characterization of the local morphology of bones/tissues, and the early diagnosis, classification, and monitoring of musculoskeletal pathologies;
    • the analysis of segmentation procedures, with a focus on the tissue classification process, in order to reduce operator dependency and to overcome the absence of a true gold standard for the evaluation of automatic segmentations;
    • the evaluation and comparison of (unsupervised) segmentation methods, aimed at defining a novel segmentation method for low-field MR images and at the local correction/improvement of a given segmentation.
    The proposed method is simple but effectively integrates information derived from medical image analysis and 3D shape analysis. Moreover, the algorithm is general enough to be applied to different anatomical districts independently of the segmentation method, imaging technique (e.g., CT), or image resolution. The volume information can easily be integrated into different shape analysis applications to solve clinical tasks, taking into consideration not only the morphology of the input shape but also the real context in which it is embedded. The results obtained by this combined analysis have been evaluated through statistical analysis.
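    A minimal sketch of the basic volume-to-surface mapping idea follows; it assumes an axis-aligned voxel grid and nearest-voxel sampling, whereas the Thesis supports several mapping criteria, and the function and parameter names are illustrative.

        # Map volume grey-levels onto a segmented surface: each mesh vertex samples
        # the image intensity at its nearest voxel, producing a per-vertex scalar
        # that can be rendered as a grey-level texture. The world-to-voxel transform
        # is simplified to a spacing/origin pair here.
        import numpy as np

        def map_greylevels_to_surface(vertices, volume,
                                      spacing=(1.0, 1.0, 1.0), origin=(0.0, 0.0, 0.0)):
            """Return one grey-level per vertex by nearest-voxel lookup.

            vertices : (n, 3) array of surface points in world coordinates
            volume   : 3D array of image intensities
            """
            ijk = np.round((vertices - np.asarray(origin)) / np.asarray(spacing))
            ijk = np.clip(ijk, 0, np.array(volume.shape) - 1).astype(int)
            return volume[ijk[:, 0], ijk[:, 1], ijk[:, 2]]

    A more faithful variant could, for instance, sample intensities along each vertex normal and aggregate them, as one example of the alternative mapping criteria mentioned above.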

    Human Pose Estimation with Implicit Shape Models

    This work presents a new approach for estimating 3D human poses based on monocular camera information only. For this, the Implicit Shape Model is augmented with new voting strategies that make it possible to localize 2D anatomical landmarks in the image. The actual 3D pose estimation is then formulated as a particle swarm optimization (PSO) in which projected 3D pose hypotheses are compared with the generated landmark vote distributions.
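    The core of such a formulation can be sketched as a fitness function that projects a 3D pose hypothesis into the image and scores it against per-landmark vote maps; a standard PSO then maximizes this fitness over the pose parameters. The pinhole camera model and all names below are assumptions, not the paper's exact formulation.

        # Score a 3D pose hypothesis against 2D landmark vote distributions (sketch).
        import numpy as np

        def project(points_3d, K):
            """Pinhole projection of (n, 3) camera-frame points with intrinsics K."""
            uv = (K @ points_3d.T).T
            return uv[:, :2] / uv[:, 2:3]

        def pose_fitness(pose_3d, vote_maps, K):
            """Higher is better: sum of vote-map responses at the projected joints."""
            uv = np.round(project(pose_3d, K)).astype(int)
            score = 0.0
            for joint_idx, (u, v) in enumerate(uv):
                h, w = vote_maps[joint_idx].shape
                if 0 <= v < h and 0 <= u < w:
                    score += vote_maps[joint_idx][v, u]
            return score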

    A fully automatic method for vascular tortuosity feature extraction in the supra-aortic region: unraveling possibilities in stroke treatment planning

    Vascular tortuosity of supra-aortic vessels is widely considered one of the main reasons for failure and delays in endovascular treatment of large vessel occlusion in patients with acute ischemic stroke. Characterization of tortuosity is a challenging task due to the lack of objective, robust and effective analysis tools. We present a fully automatic method for arterial segmentation, vessel labelling and tortuosity feature extraction applied to the supra-aortic region. A sample of 566 computed tomography angiography scans from acute ischemic stroke patients (aged 74.8 ± 12.9 years, 51.0% female) was used for training, validation and testing of a segmentation module based on a U-Net architecture (162 cases) and a vessel labelling module powered by a graph U-Net (566 cases). Subsequently, 30 cases were processed to test a tortuosity feature extraction module. Measurements obtained through automatic processing were compared to manual annotations from two observers for a thorough validation of the method. The proposed feature extraction method showed performance comparable to the inter-rater variability observed in the measurement of 33 geometrical and morphological features of the arterial anatomy in the supra-aortic region. This system will contribute to the development of more complex models to advance the treatment of stroke by adding immediate automation, objectivity, repeatability and robustness to the vascular tortuosity characterization of patients.
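    As an illustration of the kind of measurement involved, the sketch below computes one common tortuosity feature, the tortuosity index (arc length over chord length), from an ordered vessel centreline. The published pipeline extracts 33 geometrical and morphological features; this example is not taken from it.

        # Tortuosity index of a vessel centreline: total path length divided by the
        # straight-line distance between its endpoints (1.0 for a perfectly straight vessel).
        import numpy as np

        def tortuosity_index(centerline):
            """centerline: (n, 3) ordered points along a vessel centreline."""
            segments = np.diff(centerline, axis=0)
            arc_length = np.linalg.norm(segments, axis=1).sum()
            chord = np.linalg.norm(centerline[-1] - centerline[0])
            return arc_length / max(chord, 1e-12)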

    Generating Co-occurring Facial Nonmanual Signals in Synthesized American Sign Language

    Translating between English and American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face. In the past, these have posed a difficult challenge for signing avatars. Previous systems were hampered by an inability to portray simultaneously occurring nonmanual signals on the face. This paper presents a method designed to support co-occurring nonmanual signals in ASL. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. Participants identified all of the nonmanual signals even when they co-occurred. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly striking because the two processes move an avatar's brows in competing ways. This breakthrough brings the state of the art one step closer to the goal of an automatic English-to-ASL translator. Conference proceedings from the International Conference on Computer Graphics Theory and Applications and the International Conference on Information Visualization Theory and Applications, Barcelona, Spain, 21-24 February 2013. Edited by Sabine Coquillart, Carlos Andújar, Robert S. Laramee, Andreas Kerren, José Braz. SciTePress, 2013, pp. 407-416.