Simultaneous use of Individual and Joint Regularization Terms in Compressive Sensing: Joint Reconstruction of Multi-Channel Multi-Contrast MRI Acquisitions
Purpose: A time-efficient strategy to acquire high-quality multi-contrast
images is to reconstruct undersampled data with joint regularization terms that
leverage common information across contrasts. However, these terms can cause
leakage of uncommon features among contrasts, compromising diagnostic utility.
The goal of this study is to develop a compressive sensing method for
multi-channel multi-contrast magnetic resonance imaging (MRI) that optimally
utilizes shared information while preventing feature leakage.
Theory: Joint regularization terms (group sparsity and colour total variation)
are used to exploit common features across images, while individual sparsity and
total variation terms are also used to prevent leakage of distinct features across
contrasts. The multi-channel multi-contrast reconstruction problem is solved
via a fast algorithm based on the Alternating Direction Method of Multipliers (ADMM).
Methods: The proposed method was compared against reconstructions that use only
individual or only joint regularization terms. Comparisons were performed
on single-channel simulated and multi-channel in-vivo datasets in terms of
reconstruction quality and neuroradiologist reader scores.
Results: The proposed method demonstrates rapid convergence and improved
image quality for both simulated and in-vivo datasets. Furthermore, while
reconstructions that use only joint regularization terms are prone to
feature leakage, the proposed method reliably avoids leakage via
simultaneous use of joint and individual terms.
Conclusion: The proposed compressive sensing method performs fast
reconstruction of multi-channel multi-contrast MRI data with improved image
quality. It offers reliability against feature leakage in joint
reconstructions, thereby holding great promise for clinical use.
Comment: 13 pages, 13 figures. Submitted for possible publication
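As a rough illustration of how individual and joint penalties can be combined in such a reconstruction, the sketch below runs a proximal-gradient iteration with an element-wise shrinkage (individual sparsity) followed by an across-contrast group shrinkage (joint group sparsity). It is only a minimal sketch under assumed inputs and parameter names; it omits the total variation terms, coil sensitivities, and the ADMM splitting used in the actual method.

```python
import numpy as np

def soft_threshold(x, tau):
    """Magnitude shrinkage: prox of the individual sparsity term (valid for complex data)."""
    mag = np.abs(x)
    return x * np.maximum(1.0 - tau / np.maximum(mag, 1e-12), 0.0)

def group_soft_threshold(X, tau):
    """Row-wise shrinkage across contrasts: prox of the joint group-sparsity (L2,1) term.
    X has shape (n_pixels, n_contrasts); each row is one coefficient across contrasts."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def joint_cs_recon(Y, masks, lam_ind=0.01, lam_joint=0.02, n_iter=100, step=1.0):
    """Proximal-gradient sketch of joint multi-contrast reconstruction.
    Y:     undersampled k-space data, shape (n_contrasts, ny, nx)
    masks: binary sampling masks, same shape as Y."""
    n_c, ny, nx = Y.shape
    X = np.zeros((n_c, ny, nx), dtype=complex)
    for _ in range(n_iter):
        # Gradient step on the per-contrast data-fidelity term ||M F x - y||^2
        grad = np.stack([
            np.fft.ifft2(masks[c] * (masks[c] * np.fft.fft2(X[c]) - Y[c]))
            for c in range(n_c)
        ])
        Z = X - step * grad
        Z = soft_threshold(Z, step * lam_ind)                # individual sparsity, per contrast
        Zf = Z.reshape(n_c, -1).T                            # (n_pixels, n_contrasts)
        Zf = group_soft_threshold(Zf, step * lam_joint)      # joint sparsity, across contrasts
        X = Zf.T.reshape(n_c, ny, nx)
    return X
```

Applying the two proximal operators back to back is itself a simplification of how the combined objective would be handled in a full ADMM solver.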
Hyperspectral Image Restoration via Total Variation Regularized Low-rank Tensor Decomposition
Hyperspectral images (HSIs) are often corrupted by a mixture of several types
of noise during the acquisition process, e.g., Gaussian noise, impulse noise,
dead lines, stripes, and many others. Such complex noise could degrade the
quality of the acquired HSIs, limiting the precision of the subsequent
processing. In this paper, we present a novel tensor-based HSI restoration
approach by fully identifying the intrinsic structures of the clean HSI part
and the mixed noise part respectively. Specifically, for the clean HSI part, we
use tensor Tucker decomposition to describe the global correlation among all
bands, and an anisotropic spatial-spectral total variation (SSTV)
regularization to characterize the piecewise smooth structure in both spatial
and spectral domains. For the mixed noise part, we adopt the $\ell_1$-norm
regularization to detect the sparse noise, including stripes, impulse noise,
and dead pixels. Although TV regularization can remove Gaussian noise, a
Frobenius-norm term is further used to model heavy Gaussian noise in some
real-world scenarios. Then, we develop an efficient algorithm
for solving the resulting optimization problem by using the augmented Lagrange
multiplier (ALM) method. Finally, extensive experiments on simulated and
real-world noisy HSIs are carried out to demonstrate the superiority of the
proposed method over existing state-of-the-art ones.
Comment: 15 pages, 20 figures
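To make the split into a Tucker low-rank clean component and a sparse noise component concrete, the sketch below alternates a truncated higher-order SVD with entry-wise soft-thresholding. It is a minimal, assumption-laden sketch: it drops the SSTV and Frobenius terms and the ALM updates, and the ranks, parameter names, and stopping rule are illustrative rather than the authors' implementation.

```python
import numpy as np

def soft_threshold(x, tau):
    """Entry-wise shrinkage: prox of the L1 term modelling sparse noise."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def truncated_hosvd(T, ranks):
    """Truncated higher-order SVD, used here as a simple surrogate for the Tucker model."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    # Project the tensor onto the leading subspaces mode by mode, then expand back
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    approx = core
    for mode, U in enumerate(factors):
        approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx

def restore_hsi(Y, ranks=(40, 40, 5), lam=0.1, n_iter=20):
    """Alternating sketch of Y ≈ X + S with X Tucker low-rank (clean HSI) and S sparse noise.
    Y has shape (height, width, bands)."""
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        X = truncated_hosvd(Y - S, ranks)   # low-rank estimate of the clean image
        S = soft_threshold(Y - X, lam)      # sparse noise: stripes, impulses, dead pixels
    return X, S
```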
Multitask Diffusion Adaptation over Networks
Adaptive networks are suitable for decentralized inference tasks, e.g., to
monitor complex natural phenomena. Recent research works have intensively
studied distributed optimization problems in the case where the nodes have to
estimate a single optimum parameter vector collaboratively. However, there are
many important applications that are multitask-oriented in the sense that there
are multiple optimum parameter vectors to be inferred simultaneously, in a
collaborative manner, over the area covered by the network. In this paper, we
employ diffusion strategies to develop distributed algorithms that address
multitask problems by minimizing an appropriate mean-square error criterion
with $\ell_2$-regularization. The stability and convergence of the algorithm in
the mean and in the mean-square sense are analyzed. Simulations are conducted to
verify the theoretical findings, and to illustrate how the distributed strategy
can be used in several useful applications related to spectral sensing, target
localization, and hyperspectral data unmixing.
Comment: 29 pages, 11 figures, submitted for publication
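A toy sketch of the general idea — an LMS adaptation step at each node plus an L2 term that pulls the node's estimate toward its neighbours — is given below. This is a simplified, assumed variant (no combination step, uniform regularizer weights), not the algorithm analyzed in the paper.

```python
import numpy as np

def multitask_diffusion_lms(U, d, A, mu=0.01, eta=0.05):
    """Simplified multitask LMS over a network.
    U:  regressors, shape (n_nodes, n_samples, n_features)
    d:  desired responses, shape (n_nodes, n_samples)
    A:  symmetric 0/1 adjacency matrix, shape (n_nodes, n_nodes)
    mu: LMS step size; eta: strength of the L2 term coupling neighbouring tasks.
    Returns per-node weight estimates of shape (n_nodes, n_features)."""
    n_nodes, n_samples, n_features = U.shape
    W = np.zeros((n_nodes, n_features))
    for i in range(n_samples):
        W_next = W.copy()
        for k in range(n_nodes):
            u, dk = U[k, i], d[k, i]
            err = dk - u @ W[k]
            grad = -err * u                                  # instantaneous MSE gradient
            neighbours = np.flatnonzero(A[k])
            if neighbours.size:                              # pull w_k toward its neighbours
                grad += eta * (W[k] - W[neighbours].mean(axis=0))
            W_next[k] = W[k] - mu * grad
        W = W_next
    return W
```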
Evaluating 35 Methods to Generate Structural Connectomes Using Pairwise Classification
There is no consensus on how to construct structural brain networks from
diffusion MRI. How variations in pre-processing steps affect network
reliability and its ability to distinguish subjects remains opaque. In this
work, we address this issue by comparing 35 structural connectome-building
pipelines. We vary diffusion reconstruction models, tractography algorithms and
parcellations. Next, we classify structural connectome pairs as either
belonging to the same individual or not. Connectome weights and eight
topological derivative measures form our feature set. For experiments, we use
three test-retest datasets from the Consortium for Reliability and
Reproducibility (CoRR), comprising a total of 105 individuals. We also compare
pairwise classification results to a commonly used parametric test-retest
measure, the Intraclass Correlation Coefficient (ICC).
Comment: Accepted for MICCAI 2017, 8 pages, 3 figures
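A minimal sketch of the pairwise-classification protocol is shown below: it builds same-subject versus different-subject pairs from connectome matrices and trains a logistic-regression classifier on absolute edge-weight differences. The feature choice, the classifier, and all names are illustrative assumptions; the study itself evaluates 35 pipelines and also includes eight topological derivative measures in the feature set.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def pair_features(C1, C2):
    """Feature vector for a connectome pair: absolute edge-weight differences
    over the upper triangle (one illustrative choice of pairwise features)."""
    iu = np.triu_indices(C1.shape[0], k=1)
    return np.abs(C1[iu] - C2[iu])

def build_pairs(connectomes, subject_ids):
    """connectomes: list of (n_rois, n_rois) matrices; subject_ids: parallel list.
    Label 1 if both scans come from the same individual, else 0."""
    X, y = [], []
    for i, j in combinations(range(len(connectomes)), 2):
        X.append(pair_features(connectomes[i], connectomes[j]))
        y.append(int(subject_ids[i] == subject_ids[j]))
    return np.array(X), np.array(y)

# Illustrative usage with random data (real inputs would be test-retest scans)
rng = np.random.default_rng(0)
n_rois, n_subjects, n_scans = 68, 10, 2
subject_ids = [s for s in range(n_subjects) for _ in range(n_scans)]
connectomes = [rng.random((n_rois, n_rois)) for _ in subject_ids]
X, y = build_pairs(connectomes, subject_ids)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```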
Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis
Database theory and database practice are typically the domain of computer
scientists who adopt what may be termed an algorithmic perspective on their
data. This perspective is very different from the more statistical perspective
adopted by statisticians, scientific computers, machine learners, and others who
work on what may be broadly termed statistical data analysis. In this article,
I will address fundamental aspects of this algorithmic-statistical disconnect,
with an eye to bridging the gap between these two very different approaches. A
concept that lies at the heart of this disconnect is that of statistical
regularization, a notion that has to do with how robust the output of an
algorithm is to the noise properties of the input data. Although it is nearly
completely absent from computer science, which historically has taken the input
data as given and modeled algorithms discretely, regularization in one form or
another is central to nearly every application domain that applies algorithms
to noisy data. By using several case studies, I will illustrate, both
theoretically and empirically, the nonobvious fact that approximate
computation, in and of itself, can implicitly lead to statistical
regularization. This and other recent work suggests that, by exploiting in a
more principled way the statistical properties implicit in worst-case
algorithms, one can in many cases satisfy the bicriteria of having algorithms
that are scalable to very large-scale databases and that also have good
inferential or predictive properties.
Comment: To appear in the Proceedings of the 2012 ACM Symposium on Principles of Database Systems (PODS 2012)
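A small numerical illustration of the central claim — that approximate computation can itself act as a regularizer — is early stopping of gradient descent on an ill-conditioned least-squares problem: the approximate answer behaves much like an explicitly ridge-regularized one, while the exact answer amplifies noise. This toy example is not drawn from the article; the problem sizes and parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 80
# Ill-conditioned design matrix: singular values decaying over three orders of magnitude
U, _ = np.linalg.qr(rng.normal(size=(n, p)))
V, _ = np.linalg.qr(rng.normal(size=(p, p)))
s = 10.0 ** np.linspace(0, -3, p)
A = U @ np.diag(s) @ V.T
x_true = rng.normal(size=p)
b = A @ x_true + 0.1 * rng.normal(size=n)

def gradient_descent(A, b, step=0.5, n_steps=200):
    """Early-stopped gradient descent on the *unregularized* least-squares objective."""
    x = np.zeros(A.shape[1])
    for _ in range(n_steps):
        x -= step * A.T @ (A @ x - b)
    return x

def ridge(A, b, lam=1e-2):
    """Explicitly L2-regularized (ridge) solution, for comparison."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]   # exact least-squares answer
x_early = gradient_descent(A, b)                 # approximate (early-stopped) answer
x_ridge = ridge(A, b)                            # explicitly regularized answer

for label, x_hat in [("exact LS", x_exact), ("early-stopped GD", x_early), ("ridge", x_ridge)]:
    print(f"{label:18s} error vs x_true: {np.linalg.norm(x_hat - x_true):.2f}")
```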