Extraction of volumetric indices from echocardiography: which deep learning solution for clinical use?
Deep learning-based methods have spearheaded the automatic analysis of
echocardiographic images, taking advantage of the publication of multiple open
access datasets annotated by experts (CAMUS being one of the largest public
databases). However, these models are still considered unreliable by clinicians
due to unresolved issues concerning i) the temporal consistency of their
predictions, and ii) their ability to generalize across datasets. In this
context, we propose a comprehensive comparison between the current best
performing methods in medical/echocardiographic image segmentation, with a
particular focus on temporal consistency and cross-dataset aspects. We
introduce a new private dataset, named CARDINAL, of apical two-chamber and
apical four-chamber sequences, with reference segmentation over the full
cardiac cycle. We show that the proposed 3D nnU-Net outperforms alternative 2D
and recurrent segmentation methods. We also report that the best models trained
on CARDINAL, when tested on CAMUS without any fine-tuning, still manage to
perform competitively with respect to prior methods. Overall, the experimental
results suggest that with sufficient training data, 3D nnU-Net could become the
first automated tool to finally meet the standards of an everyday clinical
device.
Comment: 10 pages, accepted for FIMH 2023
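The volumetric indices this paper targets reduce to cavity volumes at end-diastole and end-systole, and the ejection fraction derived from them. As a minimal sketch of that final step only (not the paper's pipeline, which relies on dedicated segmentation networks; the function names, the plain voxel-counting volume estimate, and the unit default are all illustrative assumptions, whereas clinical practice uses calibrated formulas such as Simpson's biplane rule):

```python
import numpy as np

def lv_volume(mask, voxel_volume=1.0):
    """Approximate cavity volume by counting foreground voxels.

    `voxel_volume` is the physical volume of one voxel; the default of
    1.0 is a placeholder, not a real probe calibration.
    """
    return float(mask.sum()) * voxel_volume

def ejection_fraction(ed_mask, es_mask, voxel_volume=1.0):
    """EF = (EDV - ESV) / EDV from end-diastole / end-systole masks."""
    edv = lv_volume(ed_mask, voxel_volume)
    esv = lv_volume(es_mask, voxel_volume)
    return (edv - esv) / edv

# Toy binary masks: 100 foreground voxels at end-diastole, 40 at end-systole.
ed = np.ones((10, 10), dtype=np.uint8)   # 100 voxels
es = np.zeros((10, 10), dtype=np.uint8)
es[:4, :] = 1                            # 40 voxels
print(ejection_fraction(ed, es))         # -> 0.6
```

Note that the ejection fraction is a ratio, so the voxel calibration cancels; only the volumes themselves need physical units.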
Echocardiography Segmentation with Enforced Temporal Consistency
Convolutional neural networks (CNNs) have demonstrated their ability to segment 2D cardiac ultrasound images. However, despite recent successes in which the intra-observer variability on end-diastole and end-systole images has been matched, CNNs still struggle to leverage temporal information to provide accurate and temporally consistent segmentation maps across the whole cycle. Such consistency is required to accurately describe the cardiac function, a necessary step in diagnosing many cardiovascular diseases. In this paper, we propose a framework to learn the 2D+time apical long-axis cardiac shape such that the segmented sequences can benefit from temporal and anatomical consistency constraints. Our method is a post-processing step that takes as input segmented echocardiographic sequences produced by any state-of-the-art method and processes them in two steps to (i) identify spatio-temporal inconsistencies according to the overall dynamics of the cardiac sequence and (ii) correct these inconsistencies. The identification and correction of cardiac inconsistencies rely on a constrained autoencoder trained to learn a physiologically interpretable embedding of cardiac shapes, in which we can both detect and fix anomalies. We tested our framework on 98 full-cycle sequences from the CAMUS dataset, which are available alongside this paper. Our temporal regularization method not only improves the accuracy of the segmentation across the whole sequences, but also enforces temporal and anatomical consistency.
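The two-step scheme above (detect frames that deviate from the sequence's overall dynamics, then project them back onto a learned shape space) can be illustrated with a linear shape model in place of the paper's constrained autoencoder. Everything below is a hedged stand-in: the PCA subspace, the reconstruction-error threshold, and the function names are illustrative assumptions, not the published method.

```python
import numpy as np

def fit_linear_shape_model(shapes, k=2):
    """Fit a k-dimensional PCA subspace to flattened shape vectors.

    shapes: (n_frames, d) array; a linear stand-in for the paper's
    constrained-autoencoder embedding.
    """
    mean = shapes.mean(axis=0)
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:k]

def correct_sequence(shapes, mean, components, threshold):
    """Flag frames with high reconstruction error and replace them
    with their projection onto the learned subspace."""
    recon = mean + (shapes - mean) @ components.T @ components
    err = np.linalg.norm(shapes - recon, axis=1)
    fixed = shapes.copy()
    fixed[err > threshold] = recon[err > threshold]
    return fixed, err
```

A frame consistent with the sequence reconstructs almost exactly (error near zero), while an anomalous frame incurs a large residual and is snapped back to the closest plausible shape, which is the same detect-then-correct logic as the autoencoder framework.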
Cardiac Segmentation With Strong Anatomical Guarantees
Neural Teleportation
In this paper, we explore a process called neural teleportation, a mathematical consequence of applying quiver representation theory to neural networks. Neural teleportation moves a network to a new position in the weight space while preserving its function. This phenomenon follows directly from the definitions of representation theory applied to neural networks, and it turns out to be a very simple operation with remarkable properties. We shed light on the surprising and counter-intuitive consequences neural teleportation has on the loss landscape. In particular, we show that teleportation can be used to explore loss level curves, that it changes the local loss landscape, sharpens global minima, and boosts back-propagated gradients at any moment during the learning process.
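The function-preservation claim can be checked numerically in a simple special case: for a ReLU network, scaling a hidden unit's incoming weights and bias by a positive factor and its outgoing weights by the inverse leaves the input-output map unchanged, since relu(c·z) = c·relu(z) for c > 0. This is a minimal sketch of that positive-scaling case for one hidden layer; the paper's quiver-theoretic formulation is more general, and the function names here are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, w1, b1, w2, b2):
    """One-hidden-layer ReLU network: x -> relu(x W1 + b1) W2 + b2."""
    return relu(x @ w1 + b1) @ w2 + b2

def teleport(w1, b1, w2, cob):
    """Rescale each hidden unit by a positive change-of-basis factor.

    cob: (hidden,) positive scalars. Incoming weights and bias are
    multiplied by cob, outgoing weights divided by it, so the network
    lands at a new point in weight space computing the same function.
    """
    return w1 * cob, b1 * cob, w2 / cob[:, None]
```

Because relu commutes with positive scaling, the factor applied before the nonlinearity is exactly cancelled by the division after it, unit by unit, even though the weights themselves (and hence the local loss landscape) have changed.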