Lung Segmentation from Chest X-rays using Variational Data Imputation
Pulmonary opacification is inflammation of the lungs caused by many
respiratory ailments, including the novel coronavirus disease 2019 (COVID-19).
Chest X-rays (CXRs) with such opacifications render regions of lungs
imperceptible, making it difficult to perform automated image analysis on them.
In this work, we focus on segmenting lungs from such abnormal CXRs as part of a
pipeline aimed at automated risk scoring of COVID-19 from CXRs. We treat the
high opacity regions as missing data and present a modified CNN-based image
segmentation network that utilizes a deep generative model for data imputation.
We train this model on normal CXRs with extensive data augmentation and
demonstrate that the model extends to cases with extreme abnormalities.
Comment: Accepted to be presented at the first Workshop on the Art of Learning
with Missing Values (Artemiss) hosted by the 37th International Conference on
Machine Learning (ICML). Source code, training data and the trained models
are available here: https://github.com/raghavian/lungVAE
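The abstract's core idea of treating high-opacity regions as missing data can be illustrated with a minimal numpy sketch. The threshold value and the mean-fill step below are illustrative placeholders only; the paper itself uses a deep generative model (a VAE) for the imputation.

```python
import numpy as np

def mask_high_opacity(cxr, threshold=0.8):
    """Mark high-intensity (opaque) pixels as missing (NaN).

    `threshold` is an illustrative cutoff, not a value from the paper.
    """
    masked = cxr.astype(float).copy()
    masked[masked > threshold] = np.nan
    return masked

def impute_missing(masked):
    """Toy imputation: fill missing pixels with the mean of observed pixels.
    The paper replaces this step with a learned generative model."""
    filled = masked.copy()
    filled[np.isnan(filled)] = np.nanmean(masked)
    return filled

rng = np.random.default_rng(0)
cxr = rng.uniform(0.0, 1.0, size=(8, 8))   # stand-in for a normalized CXR
imputed = impute_missing(mask_high_opacity(cxr))
assert not np.isnan(imputed).any()          # every pixel is filled in
```

A segmentation network can then be trained on the imputed image, so that opacified regions no longer dominate its predictions.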
Covid-19 Diagnosis Based on CT Images Through Deep Learning and Data Augmentation
Coronavirus disease 2019 (Covid-19) has caused suffering around the world. Many researchers have applied deep learning methods to CT images, but this line of work is limited by the scarcity of datasets, which are not easy to obtain. In this study, we use data augmentation to compensate for this weakness. In the first part, we use a traditional DenseNet-169, and the results show that data augmentation improves both training speed and accuracy. In the second part, we combine Self-Trans and DenseNet-169, and the results show that with data augmentation, many model performance metrics improve. In the third part, we use UNet++, which reaches an accuracy of 0.8645. Beyond this, we believe GANs and CNNs may also make a difference.
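Data augmentation of the kind the abstract describes can be sketched with simple geometric transforms. The flips and rotations below are a minimal, hypothetical example; the study's actual augmentation pipeline is not specified here.

```python
import numpy as np

def augment(image, rng):
    """Apply simple geometric augmentations: a random horizontal flip
    and a random 90-degree rotation. Illustrative only; a real CT
    pipeline would typically use richer transforms."""
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)
    k = int(rng.integers(0, 4))
    return np.rot90(out, k)

rng = np.random.default_rng(42)
ct = np.arange(16.0).reshape(4, 4)          # stand-in for a CT slice
batch = [augment(ct, rng) for _ in range(8)]
# these augmentations only rearrange pixels: shape and values are preserved
assert all(a.shape == ct.shape for a in batch)
assert all(np.array_equal(np.sort(a.ravel()), np.sort(ct.ravel())) for a in batch)
```

Because each augmented slice is a cheap transform of an existing one, a small labeled dataset can be stretched into many effective training examples.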
Mechanical MNIST: A benchmark dataset for mechanical metamodels
Metamodels, or models of models, map defined model inputs to defined model outputs. Typically, metamodels are constructed by generating a dataset through sampling a direct model and training a machine learning algorithm to predict a limited number of model outputs from varying model inputs. When metamodels are constructed to be computationally cheap, they are an invaluable tool for applications ranging from topology optimization, to uncertainty quantification, to multi-scale simulation. By nature, a given metamodel will be tailored to a specific dataset. However, the most pragmatic metamodel type and structure will often be general to larger classes of problems. At present, the most pragmatic metamodel selection for dealing with mechanical data has not been thoroughly explored. Drawing inspiration from the benchmark datasets available to the computer vision research community, we introduce a benchmark dataset (Mechanical MNIST) for constructing metamodels of heterogeneous material undergoing large deformation. We then show examples of how our benchmark dataset can be used, and establish baseline metamodel performance. Because our dataset is readily available, it will enable the direct quantitative comparison between different metamodeling approaches in a pragmatic manner. We anticipate that it will enable the broader community of researchers to develop improved metamodeling techniques for mechanical data that will surpass the baseline performance that we show here.
Accepted manuscript
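The sample-then-fit workflow the abstract describes (sample a direct model, then train a cheap predictor on the samples) can be sketched in a few lines. The `direct_model` function below is a hypothetical stand-in for an expensive simulation, and the polynomial surrogate is one simple choice of metamodel, not the paper's.

```python
import numpy as np

def direct_model(x):
    """Hypothetical stand-in for an expensive direct simulation."""
    return np.sin(3.0 * x) + 0.5 * x**2

# Step 1: sample the direct model to build a training dataset.
x_train = np.linspace(-1.0, 1.0, 50)
y_train = direct_model(x_train)

# Step 2: fit a cheap metamodel (here, a degree-8 polynomial surrogate).
metamodel = np.poly1d(np.polyfit(x_train, y_train, deg=8))

# Step 3: the metamodel now predicts outputs at unseen inputs cheaply.
x_test = np.linspace(-1.0, 1.0, 17)
err = np.max(np.abs(metamodel(x_test) - direct_model(x_test)))
assert err < 1e-2   # surrogate closely tracks the direct model
```

A benchmark dataset like Mechanical MNIST plays the role of the sampled `(x_train, y_train)` pairs, so that different surrogate families can be compared on identical data.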
Style Augmentation improves Medical Image Segmentation
Due to the limitation of available labeled data, medical image segmentation
is a challenging task for deep learning. Traditional data augmentation
techniques have been shown to improve segmentation network performances by
optimizing the usage of few training examples. However, current augmentation
approaches for segmentation do not tackle the strong texture bias of
convolutional neural networks, observed in several studies. This work shows on
the MoNuSeg dataset that style augmentation, which is already used in
classification tasks, helps reduce texture overfitting and improves
segmentation performance.
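The intuition behind style augmentation, randomizing texture statistics while leaving spatial structure (shape cues) intact, can be sketched with an AdaIN-like toy transform. The statistic ranges below are illustrative assumptions, not values from the paper, which uses learned style transfer.

```python
import numpy as np

def style_augment(content, rng):
    """AdaIN-style toy augmentation: keep the spatial layout but replace
    the image's global mean/std with randomly drawn 'style' statistics.
    Ranges are illustrative, not taken from the paper."""
    mu, sigma = content.mean(), content.std()
    style_mu = rng.uniform(0.3, 0.7)
    style_sigma = rng.uniform(0.1, 0.3)
    normalized = (content - mu) / (sigma + 1e-8)
    return normalized * style_sigma + style_mu

rng = np.random.default_rng(7)
img = rng.uniform(size=(16, 16))     # stand-in for a histology tile
aug = style_augment(img, rng)
# intensity statistics change, but the spatial ordering of pixels does not
assert aug.shape == img.shape
assert np.array_equal(np.argsort(img.ravel()), np.argsort(aug.ravel()))
```

Training on many such restylings encourages the network to rely on shape rather than texture, which is the bias the abstract targets.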
Landmarks Augmentation with Manifold-Barycentric Oversampling
The training of Generative Adversarial Networks (GANs) requires a large
amount of data, stimulating the development of new augmentation methods to
alleviate the challenge. Oftentimes, these methods either fail to produce
enough new data or expand the dataset beyond the original manifold. In this
paper, we propose a new augmentation method that guarantees to keep the new
data within the original data manifold thanks to the optimal transport theory.
The proposed algorithm finds cliques in the nearest-neighbors graph and, at
each sampling iteration, randomly draws one clique to compute the Wasserstein
barycenter with random uniform weights. These barycenters then become the new
natural-looking elements that one could add to the dataset. We apply this
approach to the problem of landmarks detection and augment the available
annotation in both unpaired and in semi-supervised scenarios. Additionally, the
idea is validated on cardiac data for the task of medical segmentation. Our
approach reduces the overfitting and improves the quality metrics beyond the
original data outcome and beyond the result obtained with popular modern
augmentation methods.
Comment: 11 pages, 4 figures, 3 tables. I.B. and N.B. contributed equally.
D.V.D. is the corresponding author.
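The sampling step the abstract describes can be sketched concretely: for landmark sets that are in point-to-point correspondence, the Wasserstein barycenter reduces to a weighted average of corresponding points. The sketch below assumes the clique has already been found on the nearest-neighbors graph; the clique-finding itself and the optimal-transport machinery for unmatched points are omitted.

```python
import numpy as np

def clique_barycenter(landmark_sets, weights):
    """Barycenter of landmark sets in known correspondence: for matched
    point sets, the Wasserstein barycenter is the weighted average of
    corresponding points. Clique selection is assumed done upstream."""
    stacked = np.stack(landmark_sets)            # (k, n_landmarks, 2)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize to a convex combination
    return np.tensordot(w, stacked, axes=1)      # (n_landmarks, 2)

rng = np.random.default_rng(1)
clique = [rng.normal(size=(5, 2)) for _ in range(3)]   # one sampled clique
w = rng.uniform(size=3)                                # random uniform weights
new_sample = clique_barycenter(clique, w)              # new synthetic annotation
assert new_sample.shape == (5, 2)
# a convex combination stays coordinate-wise inside its inputs' bounding box
lo = np.min(np.stack(clique), axis=0)
hi = np.max(np.stack(clique), axis=0)
assert np.all(new_sample >= lo - 1e-12) and np.all(new_sample <= hi + 1e-12)
```

Because each barycenter is a convex combination of real annotations, the synthetic samples stay on (or near) the original data manifold, which is the property the method guarantees.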