ICoNIK: Generating Respiratory-Resolved Abdominal MR Reconstructions Using Neural Implicit Representations in k-Space
Motion-resolved reconstruction for abdominal magnetic resonance imaging (MRI)
remains a challenge due to the trade-off between residual motion blurring
caused by discretized motion states and undersampling artefacts. In this work,
we propose to generate blurring-free motion-resolved abdominal reconstructions
by learning a neural implicit representation directly in k-space (NIK). Using
measured sampling points and a data-derived respiratory navigator signal, we
train a network to generate continuous signal values. To aid the regularization
of sparsely sampled regions, we introduce an additional informed correction
layer (ICo), which leverages information from neighboring regions to correct
NIK's prediction. Our proposed generative reconstruction methods, NIK and
ICoNIK, outperform standard motion-resolved reconstruction techniques and
provide a promising solution to address motion artefacts in abdominal MRI.
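The core idea of NIK, learning a continuous k-space signal as a function of sampling coordinates and a navigator signal, can be sketched as a small coordinate MLP. This is a hypothetical illustration, not the authors' architecture: the network `NIK`, its layer sizes, and the coordinate layout `(kx, ky, navigator)` are assumptions.

```python
import torch
import torch.nn as nn

class NIK(nn.Module):
    """Hypothetical sketch of a neural implicit k-space representation:
    an MLP maps a (kx, ky, navigator) coordinate to a complex k-space
    value, predicted as separate real and imaginary parts."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # real and imaginary parts
        )

    def forward(self, coords):
        out = self.net(coords)
        return torch.complex(out[..., 0], out[..., 1])

# Training fits the network to measured samples at their coordinates:
coords = torch.rand(1024, 3)   # (kx, ky, respiratory navigator), illustrative
signal = torch.randn(1024, dtype=torch.complex64)  # measured k-space values
model = NIK()
loss = (model(coords) - signal).abs().pow(2).mean()
```

Once trained, the network can be queried at arbitrary coordinates and motion states, which is what enables motion-resolved reconstruction without discretizing the respiratory signal into bins.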
A recurrent neural network for classification of unevenly sampled variable stars
Astronomical surveys of celestial sources produce streams of noisy time
series measuring flux versus time ("light curves"). Unlike in many other
physical domains, however, large (and source-specific) temporal gaps in data
arise naturally due to intranight cadence choices as well as diurnal and
seasonal constraints. With nightly observations of millions of variable stars
and transients from upcoming surveys, efficient and accurate discovery and
classification techniques on noisy, irregularly sampled data must be employed
with minimal human-in-the-loop involvement. Machine learning for inference
tasks on such data traditionally requires the laborious hand-coding of
domain-specific numerical summaries of raw data ("features"). Here we present a
novel unsupervised autoencoding recurrent neural network (RNN) that makes
explicit use of sampling times and known heteroskedastic noise properties. When
trained on optical variable star catalogs, this network produces supervised
classification models that rival other best-in-class approaches. We find that
autoencoded features learned on one time-domain survey perform nearly as well
when applied to another survey. These networks can continue to learn from new
unlabeled observations and may be used in other unsupervised tasks such as
forecasting and anomaly detection.

Comment: 23 pages, 14 figures. The published version is at Nature Astronomy (https://www.nature.com/articles/s41550-017-0321-z). Source code for models, experiments, and figures at https://github.com/bnaul/IrregularTimeSeriesAutoencoderPaper (Zenodo Code DOI: 10.5281/zenodo.1045560).
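The abstract's key mechanism, feeding sampling times and per-point uncertainties directly into a recurrent autoencoder, can be sketched as follows. This is an illustrative simplification, not the published model: the class name, GRU sizes, and the exact input layout `(flux, dt, sigma)` are assumptions.

```python
import torch
import torch.nn as nn

class LightCurveAutoencoder(nn.Module):
    """Hypothetical sketch: a GRU autoencoder whose inputs include the
    inter-observation time gaps (dt) and measurement uncertainties
    (sigma), so irregular sampling and heteroskedastic noise are handled
    explicitly instead of by interpolating onto a regular grid."""
    def __init__(self, hidden=64, embed=16):
        super().__init__()
        self.encoder = nn.GRU(3, hidden, batch_first=True)   # (flux, dt, sigma)
        self.to_embed = nn.Linear(hidden, embed)
        self.decoder = nn.GRU(embed + 2, hidden, batch_first=True)  # embed + (dt, sigma)
        self.out = nn.Linear(hidden, 1)

    def forward(self, flux, dt, sigma):
        x = torch.stack([flux, dt, sigma], dim=-1)
        _, h = self.encoder(x)
        z = self.to_embed(h[-1])                 # fixed-length light-curve embedding
        T = flux.shape[1]
        dec_in = torch.cat([z.unsqueeze(1).expand(-1, T, -1),
                            torch.stack([dt, sigma], dim=-1)], dim=-1)
        y, _ = self.decoder(dec_in)
        return self.out(y).squeeze(-1)

# An uncertainty-weighted reconstruction loss downweights noisy points:
flux, dt, sigma = torch.randn(4, 50), torch.rand(4, 50), 0.1 + torch.rand(4, 50)
model = LightCurveAutoencoder()
recon = model(flux, dt, sigma)
loss = (((recon - flux) / sigma) ** 2).mean()
```

The embedding `z` is the learned "feature" vector: it replaces hand-coded summaries and can be fed to any downstream classifier, which is how the unsupervised network yields supervised classification models.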
Model based learning for accelerated, limited-view 3D photoacoustic tomography
Recent advances in deep learning for tomographic reconstructions have shown
great potential to create accurate and high quality images with a considerable
speed-up. In this work we present a deep neural network that is specifically
designed to provide high resolution 3D images from restricted photoacoustic
measurements. The network is designed to represent an iterative scheme and
incorporates gradient information of the data fit to compensate for limited
view artefacts. Due to the high complexity of the photoacoustic forward
operator, we separate training and computation of the gradient information. A
suitable prior for the desired image structures is learned as part of the
training. The resulting network is trained and tested on a set of segmented
vessels from lung CT scans and then applied to in vivo photoacoustic
measurement data.
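The abstract describes a network shaped like an iterative scheme that consumes gradient information of the data fit. A minimal sketch of one such learned update, assuming a generic unrolled-gradient design (the class name, CNN, and volume sizes are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class LearnedGradientStep(nn.Module):
    """Hypothetical sketch of one iteration of a learned gradient scheme:
    x_{k+1} = x_k + CNN([x_k, grad]), where grad stands for the data-fit
    gradient A^T(A x_k - y). The gradient is supplied precomputed,
    mirroring the paper's separation of gradient computation from
    training, motivated by the expensive photoacoustic forward operator A."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, x, grad):
        return x + self.net(torch.cat([x, grad], dim=1))

# Unrolled reconstruction: apply the learned step a few times.
x = torch.zeros(1, 1, 16, 16, 16)          # current 3D image estimate
grad = torch.randn(1, 1, 16, 16, 16)       # placeholder for A^T(A x - y)
step = LearnedGradientStep()
x = step(x, grad)
```

Because the CNN is trained on segmented vessel volumes, its update acts as a learned prior for the expected image structures, compensating for the missing angles in limited-view data.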