19 research outputs found

    Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning

    Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for the comprehensive characterisation of deformable registration algorithms. Continuous evaluation remains possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for the training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art of medical image registration. This paper describes the datasets, tasks, evaluation methods and results of the challenge, as well as further analyses of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects were identified that push medical image registration to new state-of-the-art performance. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
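    Registration accuracy in evaluations of this kind is commonly summarised by the Dice overlap between warped and fixed label masks. A minimal sketch of the metric on binary masks (the toy masks below are illustrative, not challenge data):

```python
import numpy as np

def dice_score(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy example: two partially overlapping squares on a 2D grid
fixed = np.zeros((10, 10), dtype=bool)
moved = np.zeros((10, 10), dtype=bool)
fixed[2:6, 2:6] = True   # 16 voxels
moved[4:8, 4:8] = True   # 16 voxels, 4 of them shared
print(dice_score(fixed, moved))  # → 0.25
```

    In 3D the same function applies unchanged to volumetric masks; challenge-style pipelines simply average it over anatomical labels.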

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the results of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    Longitudinal monitoring of metastatic breast cancer through PET image registration and segmentation based on trained and untrained networks

    Metastatic breast cancer requires constant monitoring. During follow-up care, PET images are regularly acquired and interpreted according to specific guidelines, such as PERCIST, to decide whether or not the treatment should be adapted. However, PERCIST focuses on only one lesion, taken to represent the whole tumor burden. The objective of this PhD thesis is to help physicians monitor metastatic breast cancer patients with longitudinal PET images and to improve tumor evaluation by providing tools that consider all regions showing high uptake. Our first contribution is a method for the automatic segmentation of active organs (brain, bladder, etc.). Our second contribution formulates the segmentation of lesions in the follow-up examination as an image registration problem. The longitudinal whole-body PET registration problem is addressed with our novel method, MIRRBA (Medical Image Registration Regularized By Architecture), which combines the strengths of conventional and deep-learning-based approaches within a Deep Image Prior (DIP) setup. We validated the three types of approaches (conventional, deep learning and MIRRBA) on a private longitudinal PET dataset obtained in the context of the EPICURE study. Finally, our third contribution is the evaluation of biomarkers extracted from the lesion segmentations obtained through registration. We thus propose a new automated tool to improve the monitoring of metastatic breast cancer.


    Characterization of the mechanical behavior of the abdominal aortic aneurysm (AAA) wall using a particle model

    Fourcade, CMA. (2018). Caracterización del comportamiento mecánico de la pared del aneurisma aórtico abdominal (AAA) mediante un modelo de partículas (TFG). http://hdl.handle.net/10251/110048

    Active Organs Segmentation in Metastatic Breast Cancer Images combining Superpixels and Deep Learning Methods

    Hypothesis: In the clinical follow-up of metastatic breast cancer patients, semi-automatic measurements are performed on 18FDG PET/CT images to monitor the evolution of the main metastatic sites. Apart from being time-consuming and prone to subjective approximations, semi-automatic tools cannot distinguish cancerous regions from active organs, which also present a high 18FDG uptake. In this work, we develop and compare fully automatic deep-learning-based methods for segmenting the main active organs (brain, heart, bladder) in full-body PET images.
    Methods: We combine deep-learning-based approaches with superpixel segmentation methods. In particular, we integrate a SLIC superpixel segmentation at different depths of a convolutional neural network, i.e. as input and within the optimization process. Superpixels reduce the resolution of the images, keeping the boundaries of the larger target organs sharp while the lesions, which are mostly smaller, are blurred. Results are compared with a deep learning segmentation network alone. The methods are cross-validated on full-body PET images of 36 patients from the ongoing EPICUREseinmeta study. The similarity between the manually defined ground-truth organ masks and the results is evaluated with the Dice score. Moreover, since these methods are preliminary to tumor segmentation, the precision of the networks is assessed by counting the number of voxels labelled as "active organ" that actually belong to a lesion.
    Results: Although the methods achieve similarly high Dice scores (0.96 ± 0.006), the ones using superpixels are more precise (on average 6, 16 and 27 selected voxels belonging to a tumor, for the CNN integrating superpixels as input, in the optimization, and not using them, respectively).
    Conclusion: Combining deep learning with superpixels makes it possible to segment organs presenting a high 18FDG uptake on PET images without selecting cancerous lesions. This improves the precision of the semi-automatic tools monitoring the evolution of breast cancer metastases.
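    The SLIC algorithm underlying this approach is essentially k-means clustering in a joint position/intensity feature space. A simplified, self-contained sketch of that idea (not the authors' implementation; the grid seeding and compactness weighting follow the SLIC recipe, and all parameter values are illustrative):

```python
import numpy as np

def slic_like(image, grid=2, compactness=0.1, iters=10):
    """Simplified SLIC-style superpixels: k-means on (row, col, intensity)
    features, with `compactness` weighting the spatial coordinates."""
    h, w = image.shape
    rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    feats = np.stack([compactness * rr.ravel() / h,
                      compactness * cc.ravel() / w,
                      image.ravel()], axis=1)
    # seed cluster centres on a regular spatial grid, as SLIC does
    sr = np.linspace(h / (2 * grid), h - h / (2 * grid), grid).astype(int)
    sc = np.linspace(w / (2 * grid), w - w / (2 * grid), grid).astype(int)
    centers = feats[[r * w + c for r in sr for c in sc]].copy()
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(0)
    return labels.reshape(h, w)

# toy image: one bright square (an "organ") on a dark background
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
labels = slic_like(img)
inside = set(labels[5:15, 5:15].ravel().tolist())
outside = set(labels[img == 0].ravel().tolist())
```

    Because the intensity contrast dominates the compactness-weighted spatial terms, superpixel boundaries snap to the square's sharp edge: the bright region ends up in its own cluster, disjoint from the background clusters, which is exactly the boundary-preserving behaviour the paragraph above relies on.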

    Comparison between threshold-based and deep learning-based bone segmentation on whole-body CT images

    Objectives: Bone segmentation can help bone disease diagnosis or post-treatment assessment, but manual segmentation is a time-consuming and tedious task in clinical practice. In this work, three automatic methods to segment bone structures on whole-body CT images were compared.
    Methods: A threshold-based approach with morphological operations and two deep learning methods using a 3D U-Net with different losses, one with a cross-entropy/Dice loss and the second with a Hausdorff distance/Dice loss, were developed. Ground-truth bone segmentations were generated by manually correcting the results obtained with the threshold-based method. The automatic bone segmentations were evaluated using the Dice score and the Hausdorff distance. A visual evaluation was also performed by a medical expert.
    Results: Dice scores of 0.953, 0.986 and 0.978 were achieved by the threshold-based method and the two deep learning methods, respectively. The visual evaluation showed that the deep learning method with the Hausdorff distance/Dice loss performed best.
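    A threshold-plus-morphology baseline of this kind can be sketched in a few lines. This is a hand-rolled 2D toy, not the paper's actual pipeline: the HU threshold, the square structuring element, and the np.roll edge wrap are all simplifying assumptions.

```python
import numpy as np

def dilate(mask, r=1):
    """Binary dilation with a (2r+1) x (2r+1) square structuring element.
    np.roll wraps at the borders; acceptable for this toy, not for real volumes."""
    out = np.zeros_like(mask)
    for dr in range(-r, r + 1):
        for dc in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dr, axis=0), dc, axis=1)
    return out

def erode(mask, r=1):
    """Erosion as the dual of dilation on the complement."""
    return ~dilate(~mask, r)

def segment_bone(ct, hu_threshold=200, r=1):
    """Threshold-based sketch: keep voxels above an HU threshold, then
    apply a morphological closing (dilate, then erode) to fill small gaps."""
    return erode(dilate(ct > hu_threshold, r), r)

# toy slice: a dense "bone" block (700 HU) with a one-pixel hole,
# on a soft-tissue background (30 HU); the closing fills the hole
ct = np.full((12, 12), 30.0)
ct[3:9, 3:9] = 700.0
ct[5, 5] = 30.0
mask = segment_bone(ct)
```

    In practice such masks are then compared against ground truth with the Dice score and the Hausdorff distance, as in the evaluation above; libraries like SciPy provide the same morphology with proper border handling.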