Distributed and parallel sparse convex optimization for radio interferometry with PURIFY
Next generation radio interferometric telescopes are entering an era of big
data with extremely large data sets. While these telescopes can observe the sky
at higher sensitivity and resolution than before, computational challenges in
image reconstruction need to be overcome to realize the potential of
forthcoming telescopes. New methods in sparse image reconstruction and convex
optimization techniques (cf. compressive sensing) have been shown to produce
higher fidelity reconstructions of simulations and real observations than
traditional methods. This article presents distributed and parallel algorithms
and implementations to perform sparse image reconstruction, addressing
practical considerations that are important when implementing these algorithms
for big data. We benchmark the algorithms presented, showing that they are
considerably faster than their serial equivalents. We then pre-sample gridding
kernels to scale the distributed algorithms to larger data sizes, showing
application times for 1 Gb to 2.4 Tb data sets over 25 to 100 nodes for up to
50 billion visibilities, and find that the run-times for the distributed
algorithms range from 100 milliseconds to 3 minutes per iteration. This work
is an important step towards the computationally scalable and efficient
algorithms and implementations needed to image observations of both extended
and compact sources from next generation radio interferometers such as the
SKA. The algorithms are implemented in the latest versions of the SOPT
(https://github.com/astro-informatics/sopt) and PURIFY
(https://github.com/astro-informatics/purify) software packages (version
3.1.0), which have been released alongside this article.
Comments: 25 pages, 5 figures.
Dictionary-based Tensor Canonical Polyadic Decomposition
To ensure interpretability of extracted sources in tensor decomposition, we
introduce in this paper a dictionary-based tensor canonical polyadic
decomposition which constrains one factor to belong exactly to a known
dictionary. A new formulation of sparse coding is proposed which enables
dictionary-based canonical polyadic decomposition of high-dimensional tensors.
The benefits of using a dictionary in tensor decomposition models are explored
both in terms of parameter identifiability and estimation accuracy. The
performance of the proposed algorithms is evaluated on the decomposition of
simulated data and on the unmixing of hyperspectral images.
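As a rough illustration of the idea (not the paper's sparse-coding formulation, which is more refined), the sketch below runs a naive alternating least-squares CPD of a 3-way tensor in which the mode-1 factor is, at every iteration, snapped back to the best-matching atoms of a known dictionary; the atom-matching rule and all names are illustrative assumptions.

```python
# Minimal sketch, assuming a 3-way tensor T (I x J x K), a dictionary D
# (I x N), and rank R; the projection step is a simple illustrative choice.
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R)."""
    R = B.shape[1]
    return np.stack([np.kron(B[:, r], C[:, r]) for r in range(R)], axis=1)

def dictionary_cpd(T, D, R, n_iter=50, seed=0):
    """ALS-based CPD with the mode-1 factor constrained to atoms of D."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_iter):
        # Unconstrained least-squares update of A, then projection of each
        # column onto its most correlated dictionary atom.
        A_ls = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        idx = np.argmax(np.abs(Dn.T @ A_ls), axis=0)
        A = D[:, idx]
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C, idx
```

On a synthetic tensor built from known atoms, `dictionary_cpd(T, D, R)` should recover the generating atom indices in `idx` up to a permutation of the rank-one terms, which is the interpretability benefit the dictionary constraint is meant to buy.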
Sparse permutation invariant covariance estimation
The paper proposes a method for constructing a sparse estimator for the
inverse covariance (concentration) matrix in high-dimensional settings. The
estimator uses a penalized normal likelihood approach and forces sparsity by
using a lasso-type penalty. We establish a rate of convergence in the Frobenius
norm as both data dimension and sample size are allowed to grow, and
show that the rate depends explicitly on how sparse the true concentration
matrix is. We also show that a correlation-based version of the method exhibits
better rates in the operator norm. We further derive a fast iterative algorithm
for computing the estimator, which relies on the popular Cholesky decomposition
of the inverse but produces a permutation-invariant estimator. The method is
compared to other estimators on simulated data and on a real data example of
tumor tissue classification using gene expression data. Published in the
Electronic Journal of Statistics (http://www.i-journals.org/ejs/,
http://dx.doi.org/10.1214/08-EJS176) by the Institute of
Mathematical Statistics (http://www.imstat.org).
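As a hedged illustration of lasso-penalized inverse covariance estimation, the snippet below fits scikit-learn's GraphicalLasso, a closely related penalized normal-likelihood estimator, to synthetic Gaussian data; it is not the paper's Cholesky-based, permutation-invariant algorithm, and the data and penalty level are illustrative assumptions.

```python
# Minimal sketch, assuming synthetic Gaussian data; GraphicalLasso is a
# related lasso-penalized likelihood estimator, not the paper's method.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 10

# Build a sparse true concentration (inverse covariance) matrix.
Theta = np.eye(p)
Theta[0, 1] = Theta[1, 0] = 0.4
Theta[2, 3] = Theta[3, 2] = -0.3
Sigma = np.linalg.inv(Theta)

# Draw n samples and fit with an l1 (lasso) penalty on the precision matrix.
X = rng.multivariate_normal(np.zeros(p), Sigma, size=200)
model = GraphicalLasso(alpha=0.1).fit(X)
Theta_hat = model.precision_

# Larger alpha yields a sparser estimated concentration matrix.
print(np.round(Theta_hat, 2))
```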