
    Roto-Translation Covariant Convolutional Networks for Medical Image Analysis

    We propose a framework for rotation and translation covariant deep learning using SE(2) group convolutions. The group product of the special Euclidean motion group SE(2) describes how a concatenation of two roto-translations results in a net roto-translation. We encode this geometric structure into convolutional neural networks (CNNs) via SE(2) group convolutional layers, which fit into the standard 2D CNN framework and allow the networks to handle rotated input samples generically, without the need for data augmentation. We introduce three layers: a lifting layer, which lifts a 2D (vector-valued) image to an SE(2)-image, i.e., 3D (vector-valued) data whose domain is SE(2); a group convolution layer from and to an SE(2)-image; and a projection layer from an SE(2)-image to a 2D image. The lifting and group convolution layers are SE(2) covariant (the output roto-translates with the input). The final projection layer, a maximum intensity projection over rotations, makes the full CNN rotation invariant. We show on three different problems in histopathology, retinal imaging, and electron microscopy that the proposed group CNNs achieve state-of-the-art performance without rotation-based data augmentation, and outperform standard CNNs that do rely on such augmentation. (8 pages, 2 figures, 1 table; accepted at MICCAI 2018.)
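
    The lifting and projection layers described above can be illustrated with a small NumPy/SciPy sketch (not the authors' implementation): the image is correlated with rotated copies of a single 2D filter to produce an SE(2)-image, and a maximum over the orientation axis gives the rotation-invariant projection. The kernel size, number of orientations, and function names are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import rotate
        from scipy.signal import correlate2d

        def lift_to_se2(image, kernel, n_orientations=8):
            """Lift a 2D image to an SE(2)-image by filtering with rotated kernels."""
            responses = []
            for i in range(n_orientations):
                angle = 360.0 * i / n_orientations
                # Rotate the filter rather than the image; reshape=False keeps its size.
                k_rot = rotate(kernel, angle, reshape=False, order=1)
                responses.append(correlate2d(image, k_rot, mode="same"))
            return np.stack(responses, axis=0)        # shape (n_orientations, H, W)

        def project_max_over_rotations(se2_image):
            """Rotation-invariant projection: maximum intensity over orientations."""
            return se2_image.max(axis=0)              # shape (H, W)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            img = rng.random((64, 64))
            ker = rng.random((5, 5))
            se2 = lift_to_se2(img, ker)
            print(se2.shape, project_max_over_rotations(se2).shape)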

    Domain-Adversarial Learning for Multi-Centre, Multi-Vendor, and Multi-Disease Cardiac MR Image Segmentation

    Cine cardiac magnetic resonance (CMR) has become the gold standard for the non-invasive evaluation of cardiac function. In particular, it allows the accurate quantification of functional parameters including the chamber volumes and ejection fraction. Deep learning has shown the potential to automate the requisite cardiac structure segmentation. However, the lack of robustness of deep learning models has hindered their widespread clinical adoption. Due to differences in the data characteristics, neural networks trained on data from a specific scanner are not guaranteed to generalise well to data acquired at a different centre or with a different scanner. In this work, we propose a principled solution to this problem of domain shift. Domain-adversarial learning is used to train a domain-invariant 2D U-Net using labelled and unlabelled data. This approach is evaluated on both seen and unseen domains from the M&Ms challenge dataset, and the domain-adversarial approach shows improved performance compared with standard training. Additionally, we show that the domain information cannot be recovered from the learned features. (Accepted at the STACOM workshop at MICCAI 2020.)
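
    A hedged PyTorch sketch of the gradient-reversal mechanism commonly used for this kind of domain-adversarial training: the segmentation network trains as usual, while a small domain classifier receives encoder features through a layer that flips the gradient sign, pushing the encoder towards features from which the acquisition domain cannot be predicted. The module names and sizes are assumptions, not the paper's code.

        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            """Identity in the forward pass; negated, scaled gradient in the backward pass."""
            @staticmethod
            def forward(ctx, x, lam):
                ctx.lam = lam
                return x.view_as(x)

            @staticmethod
            def backward(ctx, grad_output):
                return -ctx.lam * grad_output, None

        def grad_reverse(x, lam=1.0):
            return GradReverse.apply(x, lam)

        class DomainHead(nn.Module):
            """Predicts which domain (centre/scanner) a feature map came from."""
            def __init__(self, in_channels, n_domains):
                super().__init__()
                self.net = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(in_channels, n_domains))

            def forward(self, features, lam=1.0):
                # Reversing the gradient here makes the encoder maximise the
                # domain-classification loss while the head minimises it.
                return self.net(grad_reverse(features, lam))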

    Combination of searches for heavy spin-1 resonances using 139 fb⁻¹ of proton-proton collision data at √s = 13 TeV with the ATLAS detector

    A combination of searches for new heavy spin-1 resonances decaying into different pairings of W, Z, or Higgs bosons, as well as directly into leptons or quarks, is presented. The data sample used corresponds to 139 fb⁻¹ of proton-proton collisions at √s = 13 TeV collected during 2015–2018 with the ATLAS detector at the CERN Large Hadron Collider. Analyses selecting quark pairs (qq, bb, tt, and tb) or third-generation leptons (τν and ττ) are included in such a combination for the first time. A simplified model predicting a spin-1 heavy vector-boson triplet is used. Cross-section limits are set at the 95% confidence level and are compared with predictions for the benchmark model. These limits are also expressed in terms of constraints on couplings of the heavy vector-boson triplet to quarks, leptons, and the Higgs boson. The complementarity of the various analyses increases the sensitivity to new physics, and the resulting constraints are stronger than those from any individual analysis considered. The data exclude a heavy vector-boson triplet with mass below 5.8 TeV in a weakly coupled scenario, below 4.4 TeV in a strongly coupled scenario, and up to 1.5 TeV in the case of production via vector-boson fusion.

    Progressively growing convolutional networks for end-to-end deformable image registration

    Deformable image registration is often a slow process when using conventional methods. To speed up deformable registration, there is growing interest in using convolutional neural networks. They are comparatively fast and can be trained to estimate full-resolution deformation fields directly from pairs of images. Because deep learning-based registration methods often require rigid or affine pre-registration of the images, they do not perform true end-to-end image registration. To address this, we propose a progressive training method for end-to-end image registration with convolutional networks. The network is first trained to find large deformations at a low resolution using a smaller part of the full architecture. The network is then gradually expanded during training by adding higher resolution layers that allow the network to learn more fine-grained deformations from higher resolution data. By starting at a lower resolution, the network is able to learn larger deformations more quickly at the start of training, making pre-registration redundant. We apply this method to pulmonary CT data, and use it to register inhalation to exhalation images. We train the network using the CREATIS pulmonary CT data set, and apply the trained network to register the DIRLAB pulmonary CT data set. By computing the target registration error at corresponding landmarks, we show that the error for end-to-end registration is significantly reduced by using progressive training, while retaining sub-second registration times.
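
    An illustrative PyTorch sketch (not the authors' code) of the progressive idea: a small network is first trained on low-resolution image pairs, then higher-resolution blocks are appended and training continues, so large deformations are learned early and fine detail later. The supervised loss against a known deformation field, the layer sizes, and the data iterator are assumptions made to keep the sketch short.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Stage(nn.Module):
            """One resolution level predicting a residual 2-channel displacement field."""
            def __init__(self, in_ch):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 2, 3, padding=1))

            def forward(self, x):
                return self.net(x)

        def resize(t, size):
            return F.interpolate(t, size=(size, size), mode="bilinear", align_corners=False)

        def train_progressively(batches, sizes=(32, 64, 128), steps_per_stage=200):
            """`batches` yields full-resolution (fixed, moving, true_dvf) tensors,
            shaped (N, 1, H, W), (N, 1, H, W) and (N, 2, H, W)."""
            stages = nn.ModuleList()
            for size in sizes:
                # Grow the network: later stages also see the coarser prediction.
                stages.append(Stage(in_ch=2 if len(stages) == 0 else 4))
                opt = torch.optim.Adam(stages.parameters(), lr=1e-3)
                for _ in range(steps_per_stage):
                    fixed, moving, true_dvf = next(batches)
                    pred = None
                    for s, stage in zip(sizes, stages):
                        x = torch.cat([resize(fixed, s), resize(moving, s)], dim=1)
                        if pred is not None:
                            x = torch.cat([x, resize(pred, s)], dim=1)
                        residual = stage(x)
                        pred = residual if pred is None else resize(pred, s) + residual
                    # Rescaling of displacement magnitudes across resolutions is
                    # omitted for brevity.
                    loss = F.mse_loss(pred, resize(true_dvf, size))
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
            return stages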

    Progressively trained convolutional neural networks for deformable image registration

    Deep learning-based methods for deformable image registration are attractive alternatives to conventional registration methods because of their short registration times. However, these methods often fail to estimate larger displacements in complex deformation fields, for which a multi-resolution strategy is required. In this article, we propose to train neural networks progressively to address this problem. Instead of training a large convolutional neural network on the registration task all at once, we initially train smaller versions of the network on lower resolution versions of the images and deformation fields. During training, we progressively expand the network with additional layers that are trained on higher resolution data. We show that this way of training allows a network to learn larger displacements without sacrificing registration accuracy and that the resulting network is less sensitive to large misregistrations compared to training the full network all at once. We generate a large number of ground truth examples by applying random synthetic transformations to a training set of images, and test the network on the problem of intrapatient lung CT registration. We analyze the learned representations in the progressively growing network to assess how the progressive learning strategy influences training. Finally, we show that a progressive training procedure leads to improved registration accuracy when learning large and complex deformations.
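
    The synthetic ground-truth generation mentioned above can be sketched in a few lines of NumPy/SciPy: a smooth random displacement field is sampled and used to warp a real image, giving an (image, warped image, field) training triple. This is an illustration only; the grid size, displacement range, and smoothing are assumptions, not the paper's settings.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates, zoom

        def random_smooth_field(shape, max_disp=8.0, grid=4, rng=None):
            """Random displacement field: coarse noise upsampled to image size."""
            rng = np.random.default_rng() if rng is None else rng
            coarse = rng.uniform(-max_disp, max_disp, size=(2, grid, grid))
            field = np.stack([zoom(c, (shape[0] / grid, shape[1] / grid), order=3)
                              for c in coarse])
            return gaussian_filter(field, sigma=(0, 2, 2))   # shape (2, H, W)

        def warp(image, field):
            """Warp `image` by the (dy, dx) displacement field."""
            ys, xs = np.meshgrid(np.arange(image.shape[0]),
                                 np.arange(image.shape[1]), indexing="ij")
            coords = np.stack([ys + field[0], xs + field[1]])
            return map_coordinates(image, coords, order=1, mode="nearest")

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            img = rng.random((128, 128))
            dvf = random_smooth_field(img.shape, rng=rng)
            example = (img, warp(img, dvf), dvf)     # one synthetic training example
            print(example[1].shape, dvf.shape)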

    Learning domain-invariant representations of histological images

    Histological images present high appearance variability due to inconsistent latent parameters related to the preparation and scanning procedure of histological slides, as well as the inherent biological variability of tissues. Machine-learning models are trained with images from a limited set of domains, and are expected to generalize to images from unseen domains. Methodological design choices have to be made in order to yield domain invariance and proper generalization. In digital pathology, standard approaches focus either on ad-hoc normalization of the latent parameters based on prior knowledge, such as staining normalization, or aim at anticipating new variations of these parameters via data augmentation. Since every histological image originates from a unique data distribution, we propose to consider every histological slide in the training data as a domain, and investigate the alternative approach of domain-adversarial training to learn features that are invariant to this available domain information. We carried out a comparative analysis with staining normalization and data augmentation on two different tasks: generalization to images acquired in unseen pathology labs for mitosis detection, and generalization to unseen organs for nuclei segmentation. We report that the utility of each method depends on the type of task and the type of data variability present at training and test time. The proposed framework for domain-adversarial training is able to improve generalization performance on top of conventional methods.
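
    The "every slide is a domain" idea amounts to attaching a domain label, equal to the slide index, to every training patch and feeding those labels to an adversarial branch (for example a gradient-reversal head like the one sketched for the cardiac MR entry above). The directory layout and file naming in this short sketch are hypothetical.

        from pathlib import Path

        def build_patch_index(slide_dirs):
            """Map every patch file to (patch_path, label_path, domain_id).

            Each slide directory gets its own domain id, so the domain
            classifier has as many classes as there are training slides.
            """
            index = []
            for domain_id, slide_dir in enumerate(sorted(slide_dirs)):
                for patch in sorted(Path(slide_dir).glob("*.png")):
                    index.append((patch, patch.with_suffix(".txt"), domain_id))
            return index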

    Deformable image registration using convolutional neural networks

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.
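
    A hedged PyTorch sketch of the kind of network head described here: a small 3D CNN ingests a fixed/moving pair and outputs three coarse maps, interpreted as the x, y, and z displacements of a thin plate spline control grid. The layer sizes are invented for illustration, and the TPS interpolation itself is omitted.

        import torch
        import torch.nn as nn

        class TPSGridNet(nn.Module):
            def __init__(self, grid=(4, 4, 4)):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(2, 8, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(grid))
                # Three output channels: x, y, and z control-point displacements.
                self.head = nn.Conv3d(16, 3, kernel_size=1)

            def forward(self, fixed, moving):
                x = torch.cat([fixed, moving], dim=1)    # (N, 2, D, H, W)
                return self.head(self.features(x))       # (N, 3, 4, 4, 4)

        if __name__ == "__main__":
            f = torch.randn(1, 1, 32, 32, 32)
            m = torch.randn(1, 1, 32, 32, 32)
            print(TPSGridNet()(f, m).shape)               # torch.Size([1, 3, 4, 4, 4])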

    Domain-adversarial neural networks to address the appearance variability of histopathology images

    Preparing and scanning histopathology slides consists of several steps, each with a multitude of parameters. The parameters can vary between pathology labs and within the same lab over time, resulting in significant variability of the tissue appearance that hampers the generalization of automatic image analysis methods. Typically, this is addressed with ad-hoc approaches such as staining normalization that aim to reduce the appearance variability. In this paper, we propose a systematic solution based on domain-adversarial neural networks. We hypothesize that removing the domain information from the model representation leads to better generalization. We tested our hypothesis for the problem of mitosis detection in breast cancer histopathology images and made a comparative analysis with two other approaches. We show that combining color augmentation with domain-adversarial training is a better alternative than standard approaches to improve the generalization of deep learning methods.
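
    One common way to realise the colour augmentation mentioned in this abstract is to perturb images in the haematoxylin-eosin-DAB (HED) stain space; the abstract does not state which colour space the authors used, so the sketch below, including the perturbation ranges, is an assumption for illustration.

        import numpy as np
        from skimage.color import rgb2hed, hed2rgb

        def hed_color_augment(rgb, sigma=0.05, bias=0.02, rng=None):
            """Randomly scale and shift each stain channel of an RGB image in [0, 1]."""
            rng = np.random.default_rng() if rng is None else rng
            hed = rgb2hed(rgb)
            alpha = rng.uniform(1 - sigma, 1 + sigma, size=(1, 1, 3))
            beta = rng.uniform(-bias, bias, size=(1, 1, 3))
            return np.clip(hed2rgb(hed * alpha + beta), 0.0, 1.0)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            patch = rng.random((64, 64, 3))              # stand-in for an H&E patch
            augmented = hed_color_augment(patch, rng=rng)
            print(augmented.shape, augmented.min() >= 0.0, augmented.max() <= 1.0)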

    Inferring a third spatial dimension from 2D histological images

    Histological images are obtained by transmitting light through a tissue specimen that has been stained in order to produce contrast. This process results in 2D images of a specimen that has a three-dimensional structure. In this paper, we propose a method to infer how the stains are distributed in the direction perpendicular to the surface of the slide for a given 2D image, in order to obtain a 3D representation of the tissue. This inference is achieved by decomposition of the staining concentration maps under constraints that ensure realistic decomposition and reconstruction of the original 2D images. Our study shows that it is possible to generate realistic 3D images, making this method a potential tool for data augmentation when training deep learning models.
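
    For context, "staining concentration maps" can be obtained from an RGB image with the Beer-Lambert model and standard colour deconvolution, as in the NumPy sketch below. The paper's depth decomposition of these maps is not reproduced here, and the reference stain vectors are commonly used values, not taken from the paper.

        import numpy as np

        # Commonly used reference stain vectors (rows: haematoxylin, eosin, residual).
        STAINS = np.array([[0.65, 0.70, 0.29],
                           [0.07, 0.99, 0.11],
                           [0.27, 0.57, 0.78]])

        def stain_concentrations(rgb, stains=STAINS, eps=1e-6):
            """Return per-pixel stain concentration maps for an RGB image in [0, 1]."""
            od = -np.log10(np.clip(rgb, eps, 1.0))        # optical density, (H, W, 3)
            conc = od.reshape(-1, 3) @ np.linalg.inv(stains)
            return conc.reshape(rgb.shape)                # (H, W, 3) concentrations

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            patch = rng.random((32, 32, 3))
            print(stain_concentrations(patch).shape)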
