6,907 research outputs found
Analysis of Deep Complex-Valued Convolutional Neural Networks for MRI Reconstruction
Many real-world signal sources are complex-valued, having real and imaginary
components. However, the vast majority of existing deep learning platforms and
network architectures do not support the use of complex-valued data. MRI data
is inherently complex-valued, so existing approaches discard the richer
algebraic structure of the complex data. In this work, we investigate
end-to-end complex-valued convolutional neural networks - specifically, for
image reconstruction in lieu of two-channel real-valued networks. We apply this
to magnetic resonance imaging reconstruction for the purpose of accelerating
scan times and determine the performance of various promising complex-valued
activation functions. We find that complex-valued CNNs with complex-valued
convolutions provide superior reconstructions compared to real-valued
convolutions with the same number of trainable parameters, over a variety of
network architectures and datasets.
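The algebra behind complex-valued convolutions is just the complex product: one complex convolution can be assembled from four real-valued ones, which is what distinguishes it from a plain two-channel real network. A minimal NumPy sketch (illustrative, not the paper's architecture; the 1-D signal and filter are toy values):

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex convolution assembled from four real-valued convolutions,
    following (a + ib)(c + id) = (ac - bd) + i(ad + bc)."""
    a, b = x.real, x.imag
    c, d = w.real, w.imag
    real = np.convolve(a, c, mode="valid") - np.convolve(b, d, mode="valid")
    imag = np.convolve(a, d, mode="valid") + np.convolve(b, c, mode="valid")
    return real + 1j * imag

x = np.array([1 + 2j, 3 - 1j, 0 + 1j, 2 + 0j])   # toy complex signal
w = np.array([1 - 1j, 0.5 + 0.5j])               # toy complex filter
out = complex_conv1d(x, w)
```

The result matches NumPy's native complex `np.convolve`, but the decomposition shows how the operation maps onto real-valued deep learning primitives.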
Cell Detection in Microscopy Images with Deep Convolutional Neural Network and Compressed Sensing
The ability to automatically detect certain types of cells or cellular
subunits in microscopy images is of significant interest to a wide range of
biomedical research and clinical practices. Cell detection methods have evolved
from employing hand-crafted features to deep learning-based techniques. The
essential idea of these methods is that their cell classifiers or detectors are
trained in the pixel space, where the locations of target cells are labeled. In
this paper, we seek a different route and propose a convolutional neural
network (CNN)-based cell detection method that uses encoding of the output
pixel space. For the cell detection problem, the output space is the sparsely
labeled pixel locations indicating cell centers. We employ random projections
to encode the output space to a compressed vector of fixed dimension. Then, a
CNN regresses this compressed vector from the input pixels. Furthermore, it is
possible to stably recover sparse cell locations in the output pixel space from
the predicted compressed vector using ℓ1-norm optimization. In the past,
output space encoding using compressed sensing (CS) has been used in
conjunction with linear and non-linear predictors. To the best of our
knowledge, this is the first successful use of CNN with CS-based output space
encoding. We conducted extensive experiments on several benchmark datasets,
where the proposed CNN + CS framework (referred to as CNNCS) achieved the
highest or at least top-3 performance in terms of F1-score, compared with other
state-of-the-art methods.
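The encode/decode pair described above can be sketched without the CNN: a random Gaussian matrix compresses the sparse label vector, and an ℓ1 solver recovers it. Here a plain ISTA loop stands in for whatever solver the authors use; all sizes and values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 3                      # label length, compressed length, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = [1.0, -2.0, 1.5]   # sparse "cell centers"

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random projection used as the encoder
y = A @ x                                 # compressed vector a CNN would regress

def ista(A, y, lam=0.01, iters=3000):
    """Iterative soft-thresholding for min 0.5*||A z - y||^2 + lam*||z||_1."""
    L = np.linalg.norm(A, 2) ** 2         # step size from the Lipschitz constant
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        g = z - A.T @ (A @ z - y) / L     # gradient step on the quadratic term
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return z

x_hat = ista(A, y)                        # recovered sparse locations
```

The point of the construction is that the network only has to predict a short dense vector (`m` values) instead of a huge sparse pixel map (`n` values); ℓ1 recovery handles the decoding.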
Highly Scalable Image Reconstruction using Deep Neural Networks with Bandpass Filtering
To increase the flexibility and scalability of deep neural networks for image
reconstruction, a framework is proposed based on bandpass filtering. For many
applications, sensing measurements are performed indirectly. For example, in
magnetic resonance imaging, data are sampled in the frequency domain. The
introduction of bandpass filtering enables leveraging known imaging physics
while ensuring that the final reconstruction is consistent with actual
measurements to maintain reconstruction accuracy. We demonstrate this flexible
architecture for reconstructing subsampled datasets of MRI scans. The resulting
high subsampling rates increase the speed of MRI acquisitions and enable the
visualization of rapid hemodynamics.
Comment: 9 pages, 10 figures
Deep Convolutional Compressed Sensing for LiDAR Depth Completion
In this paper we consider the problem of estimating a dense depth map from a
set of sparse LiDAR points. We use techniques from compressed sensing and the
recently developed Alternating Direction Neural Networks (ADNNs) to create a
deep recurrent auto-encoder for this task. Our architecture internally performs
an algorithm for extracting multi-level convolutional sparse codes from the
input which are then used to make a prediction. Our results demonstrate that
with only two layers and 1800 parameters we are able to outperform all
previously published results, including deep networks with orders of magnitude
more parameters.
One-dimensional Deep Image Prior for Time Series Inverse Problems
We extend the Deep Image Prior (DIP) framework to one-dimensional signals.
DIP uses a randomly initialized convolutional neural network (CNN) to solve
linear inverse problems by optimizing over weights to fit the observed
measurements. Our main finding is that properly tuned one-dimensional
convolutional architectures provide an excellent Deep Image Prior for various
types of temporal signals including audio, biological signals, and sensor
measurements. We show that our network can be used in a variety of recovery
tasks including missing value imputation, blind denoising, and compressed
sensing from random Gaussian projections. The key challenge is how to avoid
overfitting by carefully tuning early stopping, total variation, and weight
decay regularization. Our method requires up to 4 times fewer measurements than
Lasso and outperforms NLM-VAMP for random Gaussian measurements on audio
signals, has similar imputation performance to a Kalman state-space model on a
variety of data, and outperforms wavelet filtering in removing additive noise
from air-quality sensor readings.
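The DIP recipe — fit a randomly initialized network to the observed signal and stop early before it memorizes the noise — can be sketched with a tiny fully-connected stand-in for the paper's 1-D convolutional architecture. All sizes, learning rate, and step count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 64)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * rng.normal(size=64)      # the observed measurements

z = rng.normal(size=32)                        # fixed random input code
W1 = 0.1 * rng.normal(size=(64, 32))           # randomly initialized weights
W2 = 0.1 * rng.normal(size=(64, 64))

def forward(W1, W2, z):
    h = np.maximum(W1 @ z, 0.0)                # ReLU hidden layer
    return W2 @ h, h

out0, _ = forward(W1, W2, z)                   # untrained output, for reference

lr, steps = 5e-3, 800                          # a small step budget = early stopping
for _ in range(steps):
    out, h = forward(W1, W2, z)
    err = out - noisy                          # fit only the observed signal
    gW2 = np.outer(err, h)                     # gradient of 0.5 * ||err||^2
    gh = W2.T @ err
    W2 -= lr * gW2
    W1 -= lr * np.outer(gh * (h > 0), z)

fit, _ = forward(W1, W2, z)                    # the "prior-regularized" estimate
```

In the paper's setting the convolutional structure itself supplies the prior; this dense stand-in only illustrates the optimize-weights-to-fit-measurements loop and the role of stopping early.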
A Deep Information Sharing Network for Multi-contrast Compressed Sensing MRI Reconstruction
In multi-contrast magnetic resonance imaging (MRI), compressed sensing theory
can accelerate imaging by sampling fewer measurements within each contrast. The
conventional optimization-based models suffer several limitations: strict
assumption of shared sparse support, time-consuming optimization and "shallow"
models with difficulties in encoding the rich patterns hiding in massive MRI
data. In this paper, we propose the first deep learning model for
multi-contrast MRI reconstruction. We achieve information sharing through
feature sharing units, which significantly reduces the number of parameters.
The feature sharing unit is combined with a data fidelity unit to comprise an
inference block. These inference blocks are cascaded with dense connections,
which allows for information transmission across different depths of the
network efficiently. Our extensive experiments on various multi-contrast MRI
datasets show that the proposed model outperforms both state-of-the-art
single-contrast and multi-contrast MRI methods in accuracy and efficiency. We
show that the improved reconstruction quality brings clear benefits to the
downstream medical image analysis stage. Furthermore, the robustness of the
proposed model to non-registered inputs shows its potential in real MRI
applications.
Comment: 13 pages, 16 figures, 3 tables
Deep Learning Methods for Parallel Magnetic Resonance Image Reconstruction
Following the success of deep learning in a wide range of applications,
neural network-based machine learning techniques have received interest as a
means of accelerating magnetic resonance imaging (MRI). A number of ideas
inspired by deep learning techniques from computer vision and image processing
have been successfully applied to non-linear image reconstruction in the spirit
of compressed sensing for both low dose computed tomography and accelerated
MRI. The additional integration of multi-coil information to recover missing
k-space lines in the MRI reconstruction process is still studied less
frequently, even though it is the de facto standard for currently used
accelerated MR acquisitions. This manuscript provides an overview of the recent
machine learning approaches that have been proposed specifically for improving
parallel imaging. A general background introduction to parallel MRI is given
that is structured around the classical view of image space and k-space based
methods. Both linear and non-linear methods are covered, followed by a
discussion of recent efforts to further improve parallel imaging using machine
learning, and specifically using artificial neural networks. Image-domain based
techniques that introduce improved regularizers are covered as well as k-space
based methods, where the focus is on better interpolation strategies using
neural networks. Issues and open problems are discussed as well as recent
efforts for producing open datasets and benchmarks for the community.
Comment: 14 pages, 7 figures
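The k-space interpolation idea the review covers — learning, from a fully sampled calibration region, linear weights that predict missing k-space lines from acquired neighbouring lines across coils — can be sketched with toy single-harmonic coil sensitivities. Real methods such as GRAPPA use small 2-D kernels; everything below (sizes, sensitivities, row choices) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
ny, nx, nc = 32, 32, 4                        # grid size, number of coils
img = rng.normal(size=(ny, nx))               # toy real-valued object

# Smooth coil sensitivities (a single spatial harmonic along y); smoothness is
# what makes each coil's k-space a small convolution of the true k-space.
a = rng.normal(size=nc) + 1j * rng.normal(size=nc)
b = rng.normal(size=nc) + 1j * rng.normal(size=nc)
phase = np.exp(2j * np.pi * np.arange(ny) / ny)[None, :, None]
sens = a[:, None, None] + b[:, None, None] * phase          # (nc, ny, 1)
kspace = np.fft.fft2(sens * img[None], axes=(1, 2))         # per-coil k-space

def samples(k, rows):
    """For each row r, use the rows directly above and below (all coils) as
    features and row r itself as the target, one sample per kx column."""
    src = [np.concatenate([k[:, r - 1, :], k[:, r + 1, :]]).T for r in rows]
    tgt = [k[:, r, :].T for r in rows]
    return np.concatenate(src), np.concatenate(tgt)

# Calibrate on fully sampled central rows, then fill a missing row.
S, T = samples(kspace, [13, 15, 17])
W, *_ = np.linalg.lstsq(S, T, rcond=None)     # linear interpolation weights

r = 21                                        # a skipped row outside the ACS
src = np.concatenate([kspace[:, r - 1, :], kspace[:, r + 1, :]]).T
row_hat = (src @ W).T                         # predicted k-space row, all coils
```

With these smooth sensitivities the linear relation is exact, so the calibrated weights reproduce the missing row; the review's neural-network interpolators generalize exactly this step.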
CRDN: Cascaded Residual Dense Networks for Dynamic MR Imaging with Edge-enhanced Loss Constraint
Dynamic magnetic resonance (MR) imaging has generated great research
interest, as it can provide both spatial and temporal information for clinical
diagnosis. However, slow imaging speed or long scanning time is still one of
the challenges for dynamic MR imaging. Most existing methods reconstruct
dynamic MR images from incomplete k-space data under the guidance of compressed
sensing (CS) or low-rank theory, which suffer from long iterative
reconstruction time. Recently, deep learning has shown great potential in
accelerating dynamic MR. Our previous work proposed a dynamic MR imaging method
with both k-space and spatial prior knowledge integrated via multi-supervised
network training. Nevertheless, a certain degree of smoothing remained in the
reconstructed images at high acceleration factors. In this work, we propose
cascaded residual dense networks for dynamic MR imaging with an edge-enhanced
loss constraint, dubbed CRDN. Specifically, the cascaded residual dense networks
fully exploit the hierarchical features from all the convolutional layers with
both local and global feature fusion. We further utilize the total variation
(TV) loss function, which has edge-enhancement properties, for training the
networks.
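The TV loss mentioned above is simple to state: the sum of absolute differences between neighbouring pixels, which is zero on flat regions, small for a clean edge, and large for noise. An anisotropic NumPy version (an illustrative formulation, not necessarily the authors' exact one):

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total variation: sum of absolute differences between
    vertically and horizontally neighbouring pixels."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

flat = np.ones((8, 8))                         # perfectly smooth region
step = np.zeros((8, 8)); step[:, 4:] = 1.0     # one sharp vertical edge
noisy = flat + 0.1 * np.random.default_rng(0).normal(size=(8, 8))
```

A flat patch costs nothing, a single clean edge costs exactly its 8 unit jumps, and noise is penalized everywhere — which is why minimizing TV during training suppresses noise while keeping edges sharp.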
DeepIoT: Compressing Deep Neural Network Structures for Sensing Systems with a Compressor-Critic Framework
Recent advances in deep learning motivate the use of deep neural networks in
sensing applications, but their excessive resource needs on constrained
embedded devices remain an important impediment. A recently explored solution
space lies in compressing (approximating or simplifying) deep neural networks
in some manner before use on the device. We propose a new compression solution,
called DeepIoT, that makes two key contributions in that space. First, unlike
current solutions geared for compressing specific types of neural networks,
DeepIoT presents a unified approach that compresses all commonly used deep
learning structures for sensing applications, including fully-connected,
convolutional, and recurrent neural networks, as well as their combinations.
Second, unlike solutions that either sparsify weight matrices or assume linear
structure within weight matrices, DeepIoT compresses neural network structures
into smaller dense matrices by finding the minimum number of non-redundant
hidden elements, such as filters and dimensions required by each layer, while
keeping the performance of sensing applications the same. Importantly, it does
so using an approach that obtains a global view of parameter redundancies,
which is shown to produce superior compression. We conduct experiments with
five different sensing-related tasks on Intel Edison devices. DeepIoT
outperforms all compared baseline algorithms with respect to execution time and
energy consumption by a significant margin. It reduces the size of deep neural
networks by 90% to 98.9%. It is thus able to shorten execution time by 71.4% to
94.5%, and decrease energy consumption by 72.2% to 95.7%. These improvements
are achieved without loss of accuracy. The results underscore the potential of
DeepIoT for advancing the exploitation of deep neural networks on
resource-constrained embedded devices.
Comment: Published in SenSys 2017. Code is available at
https://github.com/yscacaca/DeepIo
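The structural idea — dropping whole hidden units so the remaining weights form smaller *dense* matrices, rather than sparsifying individual entries — can be illustrated on a toy network. DeepIoT learns unit importance with its compressor-critic framework; the zero-row importance score below is only a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy two-layer net (16 -> 32 -> 4) where half the hidden units are redundant.
W1 = rng.normal(size=(32, 16))
W2 = rng.normal(size=(4, 32))
W1[16:] = 0.0                                # these hidden units contribute nothing

def forward(x, W1, W2):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Drop whole hidden units: remove rows of W1 together with the matching
# columns of W2, leaving smaller dense matrices with identical behaviour.
keep = np.abs(W1).sum(axis=1) > 0.0          # stand-in for a learned importance score
W1s, W2s = W1[keep], W2[:, keep]

x = rng.normal(size=16)
```

Because entire units are removed, the compressed layers need no sparse-matrix support at inference time, which is what makes this style of compression attractive on embedded hardware.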
Compressed Sensing with Deep Image Prior and Learned Regularization
We propose a novel method for compressed sensing recovery using untrained
deep generative models. Our method is based on the recently proposed Deep Image
Prior (DIP), wherein the convolutional weights of the network are optimized to
match the observed measurements. We show that this approach can be applied to
solve any differentiable linear inverse problem, outperforming previous
unlearned methods. Unlike various learned approaches based on generative
models, our method does not require pre-training over large datasets. We
further introduce a novel learned regularization technique, which incorporates
prior information on the network weights. This reduces reconstruction error,
especially for noisy measurements. Finally, we prove that, using the DIP
optimization approach, moderately overparameterized single-layer networks can
perfectly fit any signal despite the non-convex nature of the fitting problem.
This theoretical result provides justification for early stopping
- …