150 research outputs found

    Automatic segmentation of right ventricle in cardiac cine MR images using a saliency analysis

    PURPOSE: Accurate measurement of the right ventricle (RV) volume is important for assessing ventricular function and serves as a biomarker of the progression of cardiovascular disease. However, the high variability of the RV makes a proper delineation of the myocardial wall difficult. This paper introduces a new automatic method for segmenting the RV volume from short-axis cardiac magnetic resonance (MR) images through a saliency analysis of temporal and spatial observations. METHODS: The RV volume estimation starts by localizing the heart as the region with the most coherent motion during the cardiac cycle. Afterward, the ventricular chambers are identified at the basal level using the isodata algorithm, the right ventricle is extracted, and its centroid is computed. A series of radial intensity profiles, traced from this centroid, is used to search for a salient intensity pattern that models the inner-outer myocardium boundary. This process is applied iteratively toward the apex, using the segmentation of the previous slice as a regularizer. The consecutive 2D segmentations are stacked to obtain the final RV endocardium volume, which is also used to estimate the epicardium. RESULTS: Experiments performed on a public dataset, provided by the RV segmentation challenge in cardiac MRI, demonstrated that the method is highly competitive with the state of the art, obtaining a Dice score of 0.87 and a Hausdorff distance of 7.26 mm, while a whole volume was segmented in about 3 s. CONCLUSIONS: The proposed method provides a useful delineation of the RV shape using only the spatial and temporal information of the cine MR images. This methodology may be used by experts to derive cardiac indicators of right ventricular function.
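    The radial-profile step above can be sketched as follows. This is a minimal illustration only: the number of rays (`n_angles`), the maximum radius, and the choice of the strongest intensity drop as the "salient" boundary pattern are assumptions for the sketch, not the paper's exact formulation; the synthetic disk image stands in for a cardiac slice.

```python
import numpy as np

def radial_boundary(image, centroid, n_angles=64, max_radius=40):
    """Trace radial intensity profiles from the centroid and pick, on each
    ray, the strongest intensity drop as a crude stand-in for the salient
    inner-outer boundary pattern."""
    cy, cx = centroid
    radii = np.arange(max_radius)
    points = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        ys = np.clip((cy + radii * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip((cx + radii * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
        profile = image[ys, xs].astype(float)
        edge = int(np.argmin(np.diff(profile)))   # largest drop along the ray
        points.append((ys[edge], xs[edge]))
    return np.asarray(points)

# synthetic check: a bright disk of radius 20 centred at (50, 50)
img = np.zeros((100, 100))
yy, xx = np.mgrid[:100, :100]
img[(yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2] = 1.0
pts = radial_boundary(img, (50, 50))
dists = np.hypot(pts[:, 0] - 50.0, pts[:, 1] - 50.0)
```

    On the synthetic disk, the recovered boundary points all lie close to the true radius of 20 pixels; the paper's method additionally regularizes each slice with the segmentation of the previous one.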

    Neural Implicit Surface Reconstruction of Freehand 3D Ultrasound Volume with Geometric Constraints

    Three-dimensional (3D) freehand ultrasound (US) is a widely used imaging modality that allows non-invasive imaging of medical anatomy without radiation exposure. Surface reconstruction of US volume is vital to acquire the accurate anatomical structures needed for modeling, registration, and visualization. However, traditional methods cannot produce a high-quality surface due to image noise. Despite improvements in smoothness, continuity, and resolution from deep learning approaches, research on surface reconstruction in freehand 3D US is still limited. This study introduces FUNSR, a self-supervised neural implicit surface reconstruction method to learn signed distance functions (SDFs) from US volumes. In particular, FUNSR iteratively learns the SDFs by moving the 3D queries sampled around volumetric point clouds to approximate the surface, guided by two novel geometric constraints: sign consistency constraint and onsurface constraint with adversarial learning. Our approach has been thoroughly evaluated across four datasets to demonstrate its adaptability to various anatomical structures, including a hip phantom dataset, two vascular datasets and one publicly available prostate dataset. We also show that smooth and continuous representations greatly enhance the visual appearance of US data. Furthermore, we highlight the potential of our method to improve segmentation performance, and its robustness to noise distribution and motion perturbation.Comment: Preprin
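    The "moving queries to the surface" idea has a simple closed-form geometric target that can illustrate the supervision signal: pull each query along the direction of its nearest point-cloud sample by its unsigned distance. This sketch does not reproduce FUNSR's neural SDF, sign-consistency loss, or adversarial on-surface constraint; it only shows the geometric identity such methods learn to approximate, with a random point cloud standing in for a US volume.

```python
import numpy as np

def pull_targets(queries, cloud):
    """For each query point q, return its unsigned distance d to the nearest
    cloud point p and the 'pulled' location q - d * (q - p)/||q - p||,
    which lands exactly on that nearest surface sample."""
    diffs = queries[:, None, :] - cloud[None, :, :]      # (Q, N, 3)
    d2 = np.einsum('qnd,qnd->qn', diffs, diffs)          # squared distances
    nearest = cloud[d2.argmin(axis=1)]                   # (Q, 3)
    vec = queries - nearest
    dist = np.linalg.norm(vec, axis=1)
    unit = vec / np.maximum(dist, 1e-12)[:, None]        # avoid zero division
    pulled = queries - dist[:, None] * unit
    return dist, pulled

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))     # stand-in for the volumetric point cloud
queries = rng.normal(size=(32, 3))    # queries sampled around it
dist, pulled = pull_targets(queries, cloud)
# each pulled query should coincide with some cloud point
gap = np.linalg.norm(pulled[:, None, :] - cloud[None, :, :], axis=2).min(axis=1)
```

    A learned SDF replaces the nearest-neighbor lookup with a network whose value and gradient define the pull, which is what allows a smooth, continuous surface rather than the raw noisy samples.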

    MyoPS: A Benchmark of Myocardial Pathology Segmentation Combining Three-Sequence Cardiac Magnetic Resonance Images

    Assessment of myocardial viability is essential in the diagnosis and treatment management of patients suffering from myocardial infarction, and classification of myocardial pathology is key to this assessment. This work defines a new medical image analysis task: myocardial pathology segmentation (MyoPS) combining three-sequence cardiac magnetic resonance (CMR) images, first proposed in the MyoPS challenge held in conjunction with MICCAI 2020. The challenge provided 45 paired and pre-aligned CMR images, allowing algorithms to combine the complementary information from the three CMR sequences for pathology segmentation. In this article, we provide details of the challenge, survey the works of fifteen participants, and interpret their methods according to five aspects: preprocessing, data augmentation, learning strategy, model architecture, and post-processing. In addition, we analyze the results with respect to different factors, in order to examine the key obstacles, explore potential solutions, and provide a benchmark for future research. We conclude that while promising results have been reported, the research is still at an early stage, and more in-depth exploration is needed before successful clinical application. The MyoPS data and evaluation tool remain publicly available upon registration via the challenge homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20/).

    Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation: The M&Ms Challenge

    The emergence of deep learning has considerably advanced the state-of-the-art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have all too often been trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that generalize across different clinical centres, imaging conditions, or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired using four different scanner vendors in six hospitals across three countries (Spain, Canada and Germany), which we provide as open access to the community to enable future research in the field.


    Nonlinear Filtering Algorithms for Multitarget Tracking

    Tracking multiple targets with uncertain target dynamics is a difficult problem, especially with nonlinear state and/or measurement equations. Random finite set theory provides a rigorous foundation for multitarget tracking: it offers a framework for representing the full multitarget posterior, in contrast to other conventional approaches. However, the computational complexity of the multitarget recursion grows exponentially with the number of targets. The Probability Hypothesis Density (PHD) filter, which propagates only the first moment of the multitarget posterior, requires much less computation. This thesis addresses some essential issues in practical multitarget tracking, such as tracking maneuvering targets, tracking stealthy targets, and multitarget tracking in a distributed framework. With maneuvering targets, detecting and tracking changes in the target motion model also becomes important, and an effective solution to this problem using a multiple-model-based PHD filter is proposed. The proposed filter has the advantage over other methods that it can track a time-varying number of targets in nonlinear/non-Gaussian systems. Recent developments in stealthy military aircraft and cruise missiles have emphasized the need to track low-SNR targets. The conventional approach of thresholding the measurements throws away potential information and thus performs poorly on dim targets. The problem becomes even more complicated when multiple dim targets are present in the surveillance region. A PHD-filter-based recursive track-before-detect approach is proposed in this thesis to track multiple dim targets in a computationally efficient way. This thesis also investigates multiple-target tracking using a network of sensors. Generally, sensor networks have limited energy, communication capability, and computational power. The crucial consideration is what information needs to be transmitted over the network in order to perform online estimation of the current state of the monitored system while attempting to minimize communication overhead. Finally, a novel continuous-approximation approach for nonlinear/non-Gaussian Bayesian tracking based on spline interpolation is presented. The resulting filter has the advantage over the widely known discrete particle-based approximation that it does not suffer from degeneracy problems and retains an accurate density over the state space. The filter is general enough to be applicable to nonlinear/non-Gaussian systems, and the density can even be multi-modal.
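    For context, one measurement update of the standard linear-Gaussian Gaussian-mixture PHD filter (the Vo-Ma form) can be sketched as below. This is not the thesis's multiple-model or track-before-detect variant: birth and spawn terms, pruning/merging, and the nonlinear extensions are omitted, and the detection probability `p_d` and clutter intensity `kappa` are illustrative values.

```python
import numpy as np

def gmphd_update(weights, means, covs, zs, H, R, p_d=0.9, kappa=1e-3):
    """One linear-Gaussian GM-PHD measurement update (Vo & Ma form);
    birth/spawn components and pruning are omitted for brevity."""
    # missed-detection terms: predicted components, downweighted by (1 - p_d)
    new_w = [(1.0 - p_d) * w for w in weights]
    new_m = list(means)
    new_P = list(covs)
    for z in zs:
        ws, ms, Ps = [], [], []
        for w, m, P in zip(weights, means, covs):
            S = H @ P @ H.T + R                        # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
            nu = z - H @ m                             # innovation
            lik = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) \
                  / np.sqrt(np.linalg.det(2.0 * np.pi * S))
            ws.append(p_d * w * lik)
            ms.append(m + K @ nu)
            Ps.append((np.eye(len(m)) - K @ H) @ P)
        denom = kappa + sum(ws)                        # clutter + detections
        new_w.extend(wi / denom for wi in ws)
        new_m.extend(ms)
        new_P.extend(Ps)
    return new_w, new_m, new_P

# one 1-D component, one measurement right on top of it
w, m, P = gmphd_update([1.0], [np.array([0.0])], [np.array([[1.0]])],
                       [np.array([0.0])], H=np.array([[1.0]]),
                       R=np.array([[1.0]]))
expected_targets = sum(w)   # PHD mass approximates the expected target count
```

    The sum of the mixture weights estimates the number of targets, which is how the filter tracks a time-varying target count without explicit data association.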

    MODELLING OF ENERGY STORAGE FOR INDOOR/OUTDOOR LIGHT ENERGY HARVESTING

    With the IoT, billions of physical sensor/actuator nodes will be connected to the internet. Because of their massive quantity, the only sustainable way to power them is through environmental energy harvesting (EH). The major subsystems of an indoor/outdoor light EH system are the PV cell, which harvests light energy, and the energy storage device, which stores the harvested energy. This project focused primarily on the energy storage and the wireless sensor node. The storage devices used to power the wireless sensor node included NiMH cells. The wireless sensor node consisted of a temperature and humidity sensor that transmitted data via radio modules. To characterize the battery in the energy harvesting system, several parameters had to be acquired: the series resistance, the parallel resistance, and the parallel capacitance. These were obtained by pulse-charging a battery at 1C for 1 minute and then applying several formulas. The data from the discharge experiment were also fed into a simulation model, and comparisons between experimental and simulated data were made to verify whether the battery model stayed within acceptable error bounds.
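    The parameter extraction from a pulse test can be sketched with a first-order Thevenin model: the instantaneous voltage step gives the series resistance, the slow settling gives the parallel (polarisation) resistance, and the RC time constant gives the parallel capacitance. The voltage and current values below are made-up illustrative numbers, not measurements from the project, and the exact formulas used in the project may differ.

```python
def thevenin_params(i_pulse, v_ocv, v_instant, v_steady, tau):
    """First-order Thevenin parameters from a current-pulse response:
    R_s from the instantaneous voltage step, R_p from the slow settling,
    C_p from the RC time constant tau = R_p * C_p."""
    r_s = (v_instant - v_ocv) / i_pulse     # ohmic series resistance (ohm)
    r_p = (v_steady - v_instant) / i_pulse  # polarisation resistance (ohm)
    c_p = tau / r_p                         # parallel capacitance (farad)
    return r_s, r_p, c_p

# hypothetical 1C pulse (2.0 A on a 2 Ah cell) with made-up voltages
r_s, r_p, c_p = thevenin_params(2.0, 3.60, 3.70, 3.74, tau=30.0)
```

    With these illustrative numbers the sketch yields R_s = 0.05 ohm, R_p = 0.02 ohm, and C_p = 1500 F; in practice tau is read off the exponential relaxation of the measured voltage curve.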