
    Learning to segment fetal brain tissue from noisy annotations

    Automatic fetal brain tissue segmentation can enhance the quantitative assessment of brain development at this critical stage. Deep learning methods represent the state of the art in medical image segmentation and have also achieved impressive results in brain segmentation. However, effective training of a deep learning model to perform this task requires a large number of training images to represent the rapid development of the transient fetal brain structures. On the other hand, manual multi-label segmentation of a large number of 3D images is prohibitive. To address this challenge, we segmented 272 training images, covering 19-39 gestational weeks, using an automatic multi-atlas segmentation strategy based on deformable registration and probabilistic atlas fusion, and manually corrected large errors in those segmentations. Since this process generated a large training dataset with noisy segmentations, we developed a novel label smoothing procedure and a loss function to train a deep learning model with smoothed noisy segmentations. Our proposed methods properly account for the uncertainty in tissue boundaries. We evaluated our method on 23 manually segmented test images of a separate set of fetuses. Results show that our method achieves an average Dice similarity coefficient of 0.893 and 0.916 for the transient structures of younger and older fetuses, respectively. Our method generated results that were significantly more accurate than several state-of-the-art methods, including nnU-Net, which achieved the closest results to ours. Our trained model can serve as a valuable tool to enhance the accuracy and reproducibility of fetal brain analysis in MRI.
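
    As a rough illustration of the idea of training on smoothed noisy labels (a minimal sketch, not the authors' exact procedure: the Gaussian smoothing radius, the renormalization, and the soft-Dice form below are assumptions), one could soften the hard, possibly noisy one-hot masks near tissue boundaries and train against the resulting class probabilities:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_labels(one_hot, sigma=1.0):
    """Soften hard (possibly noisy) one-hot labels near tissue boundaries.

    one_hot: (C, D, H, W) binary masks, one channel per tissue class.
    Returns per-voxel class probabilities of the same shape.
    """
    soft = np.stack([gaussian_filter(c.astype(np.float32), sigma) for c in one_hot])
    soft /= soft.sum(axis=0, keepdims=True) + 1e-8   # renormalize across classes
    return soft

def soft_dice_loss(pred, target, eps=1e-6):
    """Dice-style loss that accepts soft (smoothed) targets.

    pred, target: (C, D, H, W) arrays of class probabilities.
    Returns one loss value per class.
    """
    inter = (pred * target).sum(axis=(1, 2, 3))
    denom = pred.sum(axis=(1, 2, 3)) + target.sum(axis=(1, 2, 3))
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```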

    High performance computing for 3D image segmentation

    Digital image processing is a very popular and still very promising field of science, which has been successfully applied to numerous areas and problems, reaching fields like forensic analysis, security systems, multimedia processing, aerospace, automotive, and many more. A very important part of the image processing area is image segmentation. This refers to the task of partitioning a given image into multiple regions and is typically used to locate and mark objects and boundaries in input scenes. After segmentation the image represents a set of data far more suitable for further algorithmic processing and decision making. Image segmentation algorithms are a very broad field and have received a significant amount of research interest. A good example of an area in which image processing plays a constantly growing role is the field of medical solutions. The expectations and demands presented in this branch of science are very high and difficult to meet for the applied technology. The problems are challenging and the potential benefits are significant and clearly visible. For over thirty years image processing has been applied to different problems and questions in medicine, and practitioners have exploited the rich possibilities that it offers. As a result, the field of medicine has seen significant improvements in the interpretation of examined medical data. Clearly, medical knowledge has also evolved significantly over these years, as has the medical equipment that serves doctors and researchers. The common computer hardware present in homes, offices and laboratories is also constantly evolving and changing. All of these factors have shaped modern image processing techniques and established the ways in which they are currently used and developed. Modern medical image processing is centered around 3D images with high spatial and temporal resolution, which can provide a tremendous amount of data for medical practitioners. Processing such large sets of data is not an easy task and requires high computational power. Furthermore, additional computational power is no longer as easily gained as in recent years, as the growth in the capabilities of a single processing unit is very limited - a trend towards multi-unit processing and parallelization of the workload is clearly visible. Therefore, in order to continue the development of more complex and more advanced image processing techniques, a new direction is necessary. A very interesting family of image segmentation algorithms, which has been gaining a lot of attention in the last three decades, is called Deformable Models. They are based on the concept of placing a geometrical object in the scene of interest and deforming it until it assumes the shape of the objects of interest. This process is usually guided by several forces, which originate in mathematical functions, features of the input images and other constraints of the deformation process, like object curvature or continuity. Highly desirable features of Deformable Models include their great capability for customization and specialization for different tasks, and their extensibility with various approaches for incorporating prior knowledge. This set of characteristics makes Deformable Models a very efficient approach, capable of delivering results in competitive times and with very good segmentation quality, robust to noisy and incomplete data.
However, despite the large amount of work carried out in this area, Deformable Models still suffer from a number of drawbacks. Those that have received the most attention include sensitivity to the initial position and shape of the model, sensitivity to noise and flawed input data, and the need for user supervision over the process. The work described in this thesis aims at addressing the problems of modern image segmentation that arise from the combination of the above-mentioned factors: the significant growth of image volume sizes and the growing complexity of image processing algorithms, coupled with the change in processor development and the turn towards multi-processing units instead of growing bus speeds and the number of operations per second of a single processing unit. We present our innovative model for 3D image segmentation, called the Whole Mesh Deformation model, which holds a set of very desirable features that successfully address the above-mentioned requirements. Our model has been designed specifically for execution on parallel architectures and with the purpose of working well with very large 3D images created by modern medical acquisition devices. Our solution is based on Deformable Models and is characterized by a very effective and precise segmentation capability. The proposed Whole Mesh Deformation (WMD) model uses a 3D mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times. The model offers a very good ability to handle topology changes and allows effective parallelization of the workflow, which makes it a very good choice for large data-sets. In this thesis we present a precise model description, followed by experiments on artificial images and real medical data.
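
    As a rough sketch of the general deformable-model idea this work builds on (a simplified per-node update, not the Whole Mesh Deformation model itself; the force terms, coefficients, and closed-loop neighbour structure are illustrative assumptions):

```python
import numpy as np

def deform(nodes, image_force, alpha=0.1, beta=0.2, steps=100):
    """Toy deformable-model iteration: every node moves under an external image
    force plus an internal smoothness force pulling it toward its neighbours.

    nodes: (N, 3) node coordinates, assumed to form a closed loop.
    image_force: callable mapping (N, 3) positions to (N, 3) forces, e.g. the
                 gradient of an edge map sampled at the node positions.
    """
    for _ in range(steps):
        external = image_force(nodes)                          # attraction to boundaries
        internal = 0.5 * (np.roll(nodes, 1, axis=0) +
                          np.roll(nodes, -1, axis=0)) - nodes  # continuity/curvature term
        nodes = nodes + alpha * external + beta * internal
    return nodes
```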

    Speeding up active mesh segmentation by local termination of nodes.

    This article outlines a procedure for speeding up the segmentation of images using active mesh systems. Active meshes and other deformable models are very popular in image segmentation due to their ability to capture weak or missing boundary information; however, where strong edges exist, computations are still carried out after mesh nodes have settled on the boundary. This can lead to extra computational time whilst the system continues to deform completed regions of the mesh. We propose a local termination procedure, reducing these unnecessary computations and shortening segmentation time with minimal loss of quality.
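
    A hedged sketch of what such local termination could look like (the displacement threshold, the patience counter, and the simple force update below are assumptions, not the paper's exact criterion):

```python
import numpy as np

def deform_with_local_termination(nodes, image_force, alpha=0.1,
                                  eps=1e-3, patience=5, max_iters=200):
    """Deformation loop that 'switches off' individual nodes once their displacement
    has stayed below eps for `patience` consecutive iterations, so settled parts of
    the mesh stop consuming computation while the rest keeps deforming.

    In a real implementation the force would only be evaluated at active nodes.
    """
    active = np.ones(len(nodes), dtype=bool)
    still = np.zeros(len(nodes), dtype=int)
    for _ in range(max_iters):
        if not active.any():              # every node has locally terminated
            break
        step = alpha * image_force(nodes)
        step[~active] = 0.0               # frozen nodes no longer move
        nodes = nodes + step
        small = np.linalg.norm(step, axis=1) < eps
        still = np.where(small, still + 1, 0)
        active &= still < patience        # terminate nodes that have settled on the boundary
    return nodes
```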

    Towards segmentation and spatial alignment of the human embryonic brain using deep learning for atlas-based registration

    We propose an unsupervised deep learning method for atlas based registration to achieve segmentation and spatial alignment of the embryonic brain in a single framework. Our approach consists of two sequential networks with a specifically designed loss function to address the challenges in 3D first trimester ultrasound. The first part learns the affine transformation and the second part learns the voxelwise nonrigid deformation between the target image and the atlas. We trained this network end-to-end and validated it against a ground truth on synthetic datasets designed to resemble the challenges present in 3D first trimester ultrasound. The method was tested on a dataset of human embryonic ultrasound volumes acquired at 9 weeks gestational age, which showed alignment of the brain in some cases and gave insight in open challenges for the proposed method. We conclude that our method is a promising approach towards fully automated spatial alignment and segmentation of embryonic brains in 3D ultrasound
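
    The two-stage design (a global affine stage followed by a voxelwise nonrigid deformation stage, both warping the atlas toward the target) could be sketched roughly as follows in PyTorch; the tiny stand-in networks and the way the two warps are composed are assumptions, not the authors' architecture or loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineThenDeformable(nn.Module):
    """Two-stage registration sketch: an affine stage followed by a voxelwise
    displacement stage, both applied to the atlas via grid_sample."""

    def __init__(self):
        super().__init__()
        # Tiny stand-ins for the two networks; real models would be far larger (e.g. U-Nets).
        self.affine_head = nn.Sequential(
            nn.Conv3d(2, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 12))
        self.flow_net = nn.Conv3d(2, 3, kernel_size=3, padding=1)

    def forward(self, target, atlas):
        # target, atlas: (B, 1, D, H, W) volumes.
        theta = self.affine_head(torch.cat([target, atlas], dim=1)).view(-1, 3, 4)
        grid = F.affine_grid(theta, target.shape, align_corners=False)
        atlas_affine = F.grid_sample(atlas, grid, align_corners=False)

        # Voxelwise displacement field, expressed in normalized grid coordinates.
        flow = self.flow_net(torch.cat([target, atlas_affine], dim=1))
        warped = F.grid_sample(atlas_affine, grid + flow.permute(0, 2, 3, 4, 1),
                               align_corners=False)
        return warped, theta, flow
```

    A full implementation would typically initialize the affine branch near the identity transform and regularize the displacement field; those details are omitted in this sketch.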

    Hierarchical Object Parsing from Structured Noisy Point Clouds

    Object parsing and segmentation from point clouds are challenging tasks because the relevant data is available only as thin structures along object boundaries or other features, and is corrupted by large amounts of noise. To handle this kind of data, flexible shape models are desired that can accurately follow the object boundaries. Popular models such as Active Shape and Active Appearance models lack the necessary flexibility for this task, while recent approaches such as the Recursive Compositional Models make model simplifications in order to obtain computational guarantees. This paper investigates a hierarchical Bayesian model of shape and appearance in a generative setting. The input data is explained by an object parsing layer, which is a deformation of a hidden PCA shape model with a Gaussian prior. The paper also introduces a novel efficient inference algorithm that uses informed data-driven proposals to initialize local searches for the hidden variables. Applied to the problem of object parsing from structured point clouds such as edge detection images, the proposed approach obtains state-of-the-art parsing errors on two standard datasets without using any intensity information.
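
    A minimal sketch of the kind of hidden PCA shape model with a Gaussian prior that such a generative formulation relies on (the landmark parameterization, mode count, and prior form here are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

class PCAShapeModel:
    """Gaussian PCA shape prior: a shape is the mean plus a linear combination
    of principal modes, with standard-normal coefficients."""

    def __init__(self, training_shapes, n_modes=10):
        # training_shapes: (M, 2K) rows of flattened (x, y) landmark coordinates.
        self.mean = training_shapes.mean(axis=0)
        centered = training_shapes - self.mean
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        self.modes = vt[:n_modes]                                   # (n_modes, 2K)
        self.scales = s[:n_modes] / np.sqrt(len(training_shapes) - 1)

    def sample(self, rng):
        b = rng.standard_normal(len(self.scales))                   # Gaussian prior on coefficients
        return self.mean + (b * self.scales) @ self.modes

    def log_prior(self, b):
        return -0.5 * np.sum(b ** 2)                                # up to an additive constant
```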

    Learning the dynamics and time-recursive boundary detection of deformable objects

    We propose a principled framework for recursively segmenting deformable objects across a sequence of frames. We demonstrate the usefulness of this method on left ventricular segmentation across a cardiac cycle. The approach combines a technique for learning the system dynamics with particle-based smoothing and non-parametric belief propagation on a loopy graphical model capturing the temporal periodicity of the heart. The dynamic system state is a low-dimensional representation of the boundary, and boundary estimation involves incorporating curve evolution into recursive state estimation. By formulating the problem as one of state estimation, the segmentation at each particular time is based not only on the data observed at that instant, but also on predictions based on past and future boundary estimates. Although the paper focuses on left ventricle segmentation, the method generalizes to temporally segmenting any deformable object.
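
    The recursive state-estimation view can be illustrated with a generic particle predict-update-resample step over a low-dimensional boundary state (a sketch only; the dynamics, likelihood, and noise level are placeholders, and the paper's smoothing and belief-propagation machinery is not reproduced here):

```python
import numpy as np

def particle_filter_step(particles, weights, dynamics, likelihood, rng):
    """One predict-update-resample cycle for a low-dimensional boundary state.

    particles: (P, d) samples of the boundary's low-dimensional representation.
    weights:   (P,) importance weights carried over from the previous frame.
    dynamics:  callable propagating particles one frame ahead (the learned dynamics).
    likelihood: callable scoring each particle against the current frame's data.
    """
    particles = dynamics(particles) + 0.01 * rng.standard_normal(particles.shape)  # predict
    weights = weights * likelihood(particles)                                      # update
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)               # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```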