HYDRA: Hybrid Deep Magnetic Resonance Fingerprinting
Purpose: Magnetic resonance fingerprinting (MRF) methods typically rely on
dictionary matching to map the temporal MRF signals to quantitative tissue
parameters. Such approaches suffer from inherent discretization errors, as well
as high computational complexity as the dictionary size grows. To alleviate
these issues, we propose a HYbrid Deep magnetic ResonAnce fingerprinting
approach, referred to as HYDRA.
Methods: HYDRA involves two stages: a model-based signal restoration phase
and a learning-based parameter restoration phase. Signal restoration is
implemented using low-rank based de-aliasing techniques while parameter
restoration is performed using a deep nonlocal residual convolutional neural
network. The designed network is trained on synthesized MRF data simulated with
the Bloch equations and fast imaging with steady state precession (FISP)
sequences. In test mode, it takes a temporal MRF signal as input and produces
the corresponding tissue parameters.
Results: We validated our approach on both synthetic data and anatomical data
generated from a healthy subject. The results demonstrate that, in contrast to
conventional dictionary-matching based MRF techniques, our approach
significantly improves inference speed by eliminating the time-consuming
dictionary matching operation, and alleviates discretization errors by
outputting continuous-valued parameters. We further avoid the need to store a
large dictionary, thus reducing memory requirements.
Conclusions: Our approach demonstrates advantages in terms of inference
speed, accuracy, and storage requirements over competing MRF methods.
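The discretization error of dictionary matching can be made concrete with a toy sketch. The snippet below substitutes a mono-exponential decay for the Bloch/FISP simulation, so the signal model, sampling times, and T1 grid are illustrative assumptions, not HYDRA's actual pipeline:

```python
import numpy as np

# Toy MRF-like signal model: a mono-exponential decay stands in for the
# Bloch/FISP simulation (an illustrative assumption, not the paper's model).
t = np.linspace(0.01, 3.0, 50)              # sampling times
signal = lambda T1: np.exp(-t / T1)

# Dictionary matching quantizes T1 to a grid, so the estimate carries an
# inherent discretization error on the order of the grid spacing.
grid = np.linspace(0.5, 2.5, 21)            # 0.1-step T1 dictionary
D = np.stack([signal(T1) for T1 in grid])
Dn = D / np.linalg.norm(D, axis=1, keepdims=True)

def match(y):
    # normalized inner-product matching, as in conventional MRF
    return grid[np.argmax(Dn @ y)]

true_T1 = 1.234
est = match(signal(true_T1))                # snaps to a nearby grid atom
```

A regression network in the spirit of HYDRA instead outputs a continuous-valued T1, removing this quantization floor along with the need to store `D`.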
Compressed Sensing in Resource-Constrained Environments: From Sensing Mechanism Design to Recovery Algorithms
Compressed Sensing (CS) is an emerging field built on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. CS is therefore promising for environments where signal acquisition is extremely difficult or costly, e.g., a resource-constrained platform such as a smartphone, or a band-limited environment such as a visual sensor network (VSN). Sensing on these platforms faces several challenges, including the need for active user involvement, computational and storage limitations, and low transmission capacity. This dissertation focuses on the study of CS in resource-constrained environments.
First, we address how to design sensing mechanisms that better adapt to the resource-limited smartphone platform. We propose the compressed phone sensing (CPS) framework and study two challenging issues: the energy drainage caused by continuous sensing, which may impede the normal functionality of the smartphone, and the requirement of active user input for data collection, which may place a high burden on the user.
Second, we propose a CS reconstruction algorithm for recovering frames/images in VSNs. An efficient algorithm, NonLocal Douglas-Rachford (NLDR), is developed. NLDR exploits self-similarity in images via nonlocal means (NL) filtering. We further formulate the nonlocal estimation as a low-rank matrix approximation problem and solve the resulting constrained optimization problem with the Douglas-Rachford splitting method.
Third, we extend the NLDR algorithm to surveillance video processing in VSNs and propose the recursive Low-rank and Sparse estimation through Douglas-Rachford splitting (rLSDR) method, which decomposes each video frame into a low-rank background component and a sparse component corresponding to moving objects. The spatial and temporal low-rank features of the video, e.g., the nonlocal similar patches within a single frame and the low-rank background residing across multiple frames, are successfully exploited.
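The low-rank proximal step at the heart of Douglas-Rachford-style recovery is singular value thresholding. The sketch below is a minimal numpy illustration of that single step, not the dissertation's NLDR/rLSDR solver; the matrix sizes, noise level, and threshold are arbitrary assumptions:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau * (nuclear
    norm), i.e. the low-rank step inside a Douglas-Rachford iteration."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold the spectrum

rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 "background"
noisy = L + 0.01 * rng.standard_normal((20, 20))
Lhat = svt(noisy, 0.5)        # small noise singular values fall below tau
```

Because the noise spectrum sits far below the threshold while the three background singular values sit far above it, the thresholded estimate recovers the rank-3 structure.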
A Non-Local Structure Tensor Based Approach for Multicomponent Image Recovery Problems
Non-Local Total Variation (NLTV) has emerged as a useful tool in variational
methods for image recovery problems. In this paper, we extend the NLTV-based
regularization to multicomponent images by taking advantage of the Structure
Tensor (ST) resulting from the gradient of a multicomponent image. The proposed
approach allows us to penalize the non-local variations, jointly for the
different components, through various matrix norms.
To facilitate the choice of the hyper-parameters, we adopt a constrained convex
optimization approach in which we minimize the data fidelity term subject to a
constraint involving the ST-NLTV regularization. The resulting convex
optimization problem is solved with a novel epigraphical projection method.
This formulation can be efficiently implemented thanks to the flexibility
offered by recent primal-dual proximal algorithms. Experiments are carried out
for multispectral and hyperspectral images. The results demonstrate the
interest of introducing a non-local structure tensor regularization and show
that the proposed approach leads to significant improvements in terms of
convergence speed over current state-of-the-art methods.
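The structure-tensor coupling can be illustrated by summing, over pixels, the nuclear norm of the per-pixel Jacobian of a multicomponent image. The sketch below uses plain local finite differences rather than the paper's nonlocal weights and epigraphical solver, so it is only a simplified stand-in for the ST-NLTV penalty:

```python
import numpy as np

def st_nuclear_norm(u):
    """Sum over pixels of the nuclear norm of the per-pixel (C x 2) Jacobian
    of a multicomponent image u of shape (H, W, C) -- a local, simplified
    stand-in for the ST-NLTV penalty (no nonlocal weights)."""
    gx = np.diff(u, axis=1, append=u[:, -1:, :])   # horizontal differences
    gy = np.diff(u, axis=0, append=u[-1:, :, :])   # vertical differences
    J = np.stack([gx, gy], axis=-1)                # (H, W, C, 2) Jacobians
    s = np.linalg.svd(J, compute_uv=False)         # batched singular values
    return s.sum()                                 # summed nuclear norms

flat = np.ones((8, 8, 3))                          # no variation anywhere
edge = np.ones((8, 8, 3)); edge[:, 4:, :] = 2.0    # one edge, aligned across channels
tv_flat, tv_edge = st_nuclear_norm(flat), st_nuclear_norm(edge)
```

Coupling the components through the singular values favors edges that are aligned across channels, which is the point of penalizing the components jointly rather than separately.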
Deep Hyperspectral Prior: Denoising, Inpainting, Super-Resolution
Deep learning algorithms have demonstrated state-of-the-art performance in
various tasks of image restoration. This was made possible through the ability
of CNNs to learn from large exemplar sets. However, the latter becomes an issue
for hyperspectral image processing where datasets commonly consist of just a
few images. In this work, we propose a new approach to denoising, inpainting,
and super-resolution of hyperspectral image data using intrinsic properties of
a CNN without any training. The performance of the given algorithm is shown to
be comparable to the performance of trained networks, while its application is
not restricted by the availability of training data. This work is an extension
of the original "deep prior" algorithm to the HSI domain and 3D convolutional networks.
Comment: Published in ICCV 2019 Workshop
Non-local Low-rank Cube-based Tensor Factorization for Spectral CT Reconstruction
Spectral computed tomography (CT) reconstructs material-dependent attenuation
images from projections acquired in multiple narrow energy windows, which is
valuable for material identification and decomposition. Unfortunately, the
multi-energy projection dataset always contains strong, complicated noise,
resulting in projections with a low signal-to-noise ratio (SNR). Very recently,
the spatial-spectral cube matching frame (SSCMF) was proposed to explore
non-local spatial-spectral similarities for spectral CT. The method constructs
a group by clustering a series of non-local spatial-spectral cubes. The small
spatial patch size of such a group prevents SSCMF from fully encoding sparsity
and low-rank properties. In addition, the hard-thresholding and collaborative
filtering operations in SSCMF are too coarse to recover image features and
spatial edges. Moreover, because all steps operate on a 4-D group, the
computational and memory load may be unaffordable in practice.
To avoid the above limitation and further improve image quality, we first
formulate a non-local cube-based tensor instead of the group to encode the
sparsity and low-rank properties. Then, as a new regularizer,
Kronecker-Basis-Representation (KBR) tensor factorization is employed into a
basic spectral CT reconstruction model to enhance the ability of extracting
image features and protecting spatial edges, generating the non-local low-rank
cube-based tensor factorization (NLCTF) method. Finally, the split-Bregman
strategy is adopted to solve the NLCTF model. Both numerical simulations and
realistic preclinical mouse studies are performed to validate and assess the
NLCTF algorithm. The results show that the NLCTF method outperforms the other
competitors.
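The split-Bregman strategy adopted above can be illustrated on a much simpler problem. The sketch below applies it to 1-D total-variation denoising; the model, weights, and iteration count are illustrative assumptions, far from the NLCTF tensor-factorization setting:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv1d(f, mu=10.0, lam=5.0, iters=100):
    """Split-Bregman for min_u mu/2 ||u - f||^2 + ||D u||_1 (1-D TV):
    alternate a linear solve in u, a shrinkage in the split variable d,
    and a Bregman update of b."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)              # (n-1, n) difference matrix
    A = mu * np.eye(n) + lam * D.T @ D
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        d = soft(D @ u + b, 1.0 / lam)          # prox of the l1 term
        b = b + D @ u - d                       # Bregman update
    return u

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(30), np.ones(30)])
noisy = clean + 0.1 * rng.standard_normal(60)
den = split_bregman_tv1d(noisy)                 # flattens noise, keeps the step
```

The splitting decouples the quadratic data term (a linear solve) from the non-smooth regularizer (a closed-form shrinkage), which is exactly what makes the strategy attractive for larger models like NLCTF.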
Transform Learning for Magnetic Resonance Image Reconstruction: From Model-based Learning to Building Neural Networks
Magnetic resonance imaging (MRI) is widely used in clinical practice, but it
has been traditionally limited by its slow data acquisition. Recent advances in
compressed sensing (CS) techniques for MRI reduce acquisition time while
maintaining high image quality. Whereas classical CS assumes the images are
sparse in known analytical dictionaries or transform domains, methods using
learned image models for reconstruction have become popular. The model could be
pre-learned from datasets, or learned simultaneously with the reconstruction,
i.e., blind CS (BCS). Besides the well-known synthesis dictionary model, recent
advances in transform learning (TL) provide an efficient alternative framework
for sparse modeling in MRI. TL-based methods enjoy numerous advantages,
including exact solutions for the sparse coding, transform update, and
clustering steps, cheap computation, and convergence guarantees, and they
provide high-quality results
in MRI compared to popular competing methods. This paper provides a review of
some recent works in MRI reconstruction from limited data, with focus on the
recent TL-based methods. A unified framework for incorporating various TL-based
models is presented. We discuss the connections between transform learning and
convolutional or filter bank models and corresponding multi-layer extensions,
with connections to deep learning. Finally, we discuss recent trends in MRI,
open problems, and future directions for the field.
Comment: Accepted to IEEE Signal Processing Magazine, Special Issue on Computational MRI: Compressed Sensing and Beyond
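One advantage of the transform model noted above, exact sparse coding, is easy to see in isolation: the sparse code of a signal is just a hard-thresholding of its transform coefficients, in closed form. The sketch below uses an orthonormal DCT as a stand-in for a learned transform; the transform, sparsity level, and test signal are illustrative assumptions:

```python
import numpy as np

n = 8
# Orthonormal DCT-II matrix, standing in for a learned sparsifying transform W
i, j = np.arange(n)[:, None], np.arange(n)[None, :]
W = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * i / (2 * n))
W[0] /= np.sqrt(2.0)

def tl_sparse_code(W, x, s):
    """Exact transform-domain sparse coding: keep the s largest-magnitude
    entries of W @ x. Closed form, in contrast to the NP-hard synthesis
    sparse-coding problem."""
    z = W @ x
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-s:]
    out[keep] = z[keep]
    return out

alpha = np.zeros(n); alpha[[1, 5]] = [3.0, -2.0]   # 2-sparse code
x = W.T @ alpha                                    # signal sparse under W
z = tl_sparse_code(W, x, 2)                        # recovers alpha exactly
```

Because the thresholding is applied in the transform domain, no iterative pursuit is needed; this is one reason the TL subproblems are cheap.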
Image Restoration Using Joint Statistical Modeling in Space-Transform Domain
This paper presents a novel strategy for high-fidelity image restoration by
characterizing both local smoothness and nonlocal self-similarity of natural
images in a unified statistical manner. The main contributions are threefold.
First, from the perspective of image statistics, a joint statistical modeling
(JSM) in an adaptive hybrid space-transform domain is established, which offers
a powerful mechanism of combining local smoothness and nonlocal self-similarity
simultaneously to ensure a more reliable and robust estimation. Second, a new
form of the minimization functional for solving the image inverse problem is
formulated using JSM under a regularization-based framework. Finally, to make
JSM tractable and robust, a new split-Bregman-based algorithm is developed to
efficiently solve the severely underdetermined inverse problem, together with
a theoretical proof of convergence. Extensive experiments on image
inpainting, image deblurring and mixed Gaussian plus salt-and-pepper noise
removal applications verify the effectiveness of the proposed algorithm.
Comment: 14 pages, 18 figures, 7 tables; to be published in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT). A high-resolution PDF and code can be found at: http://idm.pku.edu.cn/staff/zhangjian/IRJSM
Convolutional Sparse Coding for Compressed Sensing CT Reconstruction
Over the past few years, dictionary learning (DL)-based methods have been
successfully used in various image reconstruction problems. However,
traditional DL-based computed tomography (CT) reconstruction methods are
patch-based and ignore the consistency of pixels in overlapped patches. In
addition, the features learned by these methods always contain shifted versions
of the same features. In recent years, convolutional sparse coding (CSC) has
been developed to address these problems. In this paper, inspired by several
successful applications of CSC in the field of signal processing, we explore
the potential of CSC in sparse-view CT reconstruction. By directly working on
the whole image, without the necessity of dividing the image into overlapped
patches in DL-based methods, the proposed methods can maintain more details and
avoid artifacts caused by patch aggregation. With predetermined filters, an
alternating scheme is developed to optimize the objective function. Extensive
experiments with simulated and real CT data were performed to validate the
effectiveness of the proposed methods. Qualitative and quantitative results
demonstrate that the proposed methods achieve better performance than several
existing state-of-the-art methods.
Comment: Accepted by IEEE TM
Group Sparsity Residual Constraint for Image Denoising
Group-based sparse representation has shown great potential in image
denoising. However, most existing methods consider only the nonlocal
self-similarity (NSS) prior of the noisy input image; that is, similar patches
are collected only from the degraded input, which makes the quality of image
denoising largely depend on the input itself. As a result, such methods suffer
from a common drawback: the denoising performance may degrade quickly with
increasing noise levels. In this paper we propose a new prior
model, called group sparsity residual constraint (GSRC). Unlike the
conventional group-based sparse representation denoising methods, two kinds of
prior, namely, the NSS priors of noisy and pre-filtered images, are used in
GSRC. In particular, we integrate these two NSS priors through the mechanism of
sparsity residual, and thus, the task of image denoising is converted to the
problem of reducing the group sparsity residual. To this end, we first obtain a
good estimation of the group sparse coefficients of the original image by
pre-filtering, and then the group sparse coefficients of the noisy image are
used to approximate this estimation. To improve the accuracy of the nonlocal
similar patch selection, an adaptive patch search scheme is designed.
Furthermore, to better fuse these two NSS priors, an effective iterative
shrinkage algorithm is developed to solve the proposed GSRC model. Experimental
results demonstrate that the proposed GSRC modeling outperforms many
state-of-the-art denoising methods in terms of the objective and the perceptual
metrics
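The sparsity-residual idea reduces to a few lines: rather than shrinking the noisy coefficients toward zero, shrink their residual from the pre-filtered estimate. The sketch below works on a single made-up coefficient vector instead of the paper's patch groups, so the numbers and threshold are purely illustrative assumptions:

```python
import numpy as np

def gsrc_shrink(alpha_noisy, beta, t):
    """Shrink the sparsity residual (alpha_noisy - beta) toward zero instead
    of shrinking alpha_noisy itself; beta comes from a pre-filtered image."""
    r = alpha_noisy - beta
    return beta + np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

a0    = np.array([5.0, -3.0,  0.0,  0.0])   # "true" group coefficients
beta  = np.array([4.9, -3.05, 0.0,  0.0])   # estimate from pre-filtered image
noisy = np.array([5.7, -3.8,  0.6, -0.7])   # coefficients of the noisy image

gsrc  = gsrc_shrink(noisy, beta, t=1.0)
plain = np.sign(noisy) * np.maximum(np.abs(noisy) - 1.0, 0.0)  # shrink toward 0
```

Plain shrinkage biases every large coefficient by the full threshold, while residual shrinkage falls back on the pre-filtered estimate, so `gsrc` lands closer to `a0` here; this is the mechanism GSRC exploits at the group level.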
MAGIC: Manifold and Graph Integrative Convolutional Network for Low-Dose CT Reconstruction
Low-dose computed tomography (LDCT) effectively alleviates the radiation
problem but degrades imaging quality. In this paper, we
propose a novel LDCT reconstruction network that unrolls the iterative scheme
and performs in both image and manifold spaces. Because patch manifolds of
medical images have low-dimensional structures, we can build graphs from the
manifolds. Then, we simultaneously leverage the spatial convolution to extract
the local pixel-level features from the images and incorporate the graph
convolution to analyze the nonlocal topological features in manifold space.
Experiments show that our proposed method outperforms state-of-the-art methods
in both quantitative and qualitative aspects. In addition, aided by a
projection loss component, our proposed method also demonstrates superior
performance for semi-supervised learning: with only 10% (40 slices) of the
training data labeled, the network can remove most noise while preserving
details.
Comment: 17 pages, 17 figures. Submitted for possible publication
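The nonlocal graph convolution in such a network can be sketched generically: build a graph over similar patches, then propagate features with a normalized adjacency. The snippet below is a single GCN-style layer on a toy 4-node graph with made-up weights, not MAGIC's trained architecture:

```python
import numpy as np

def graph_conv(A, H, W):
    """One GCN-style propagation step, relu(A_norm @ H @ W), with
    A_norm = D^{-1/2} (A + I) D^{-1/2} (symmetric normalization)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    return np.maximum(A_norm @ H @ W, 0.0)      # ReLU nonlinearity

# Toy path graph linking 4 "similar patches"; one scalar feature per node
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.array([[1.0], [0.0], [0.0], [1.0]])
out = graph_conv(A, H, np.eye(1))               # features spread to neighbors
```

After one step, nodes with zero features receive signal from their graph neighbors; stacking such layers is what lets a network aggregate nonlocal information along the patch manifold.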