Concatenated image completion via tensor augmentation and completion
This paper proposes a novel framework called concatenated image completion
via tensor augmentation and completion (ICTAC), which recovers missing entries
of color images with high accuracy. Typical images are second- or third-order
tensors (2D/3D), depending on whether they are grayscale or color, so tensor
completion algorithms are ideal for their recovery. The proposed framework
performs image completion by concatenating copies of a single image that has
missing entries into a third-order tensor, applying a dimensionality
augmentation technique to the tensor, utilizing a tensor completion algorithm
for recovering its missing entries, and finally extracting the recovered image
from the tensor. The solution relies on two key components that have been
recently proposed to take advantage of the tensor train (TT) rank: A tensor
augmentation tool called ket augmentation (KA) that represents a low-order
tensor by a higher-order tensor, and the algorithm tensor completion by
parallel matrix factorization via tensor train (TMac-TT), which has been
demonstrated to outperform state-of-the-art tensor completion algorithms.
Simulation results for color image recovery show the clear advantage of our
framework against current state-of-the-art tensor completion algorithms.
Comment: 7 pages, 6 figures, submitted to ICSPCS 201
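The KA step can be illustrated with a minimal numpy sketch (an illustrative reading of ket augmentation for a 2^n x 2^n grayscale image, not the authors' code): each level pairs one row bit with one column bit, so the image becomes an order-n tensor with modes of size 4.

```python
import numpy as np

def ket_augmentation(img, n):
    # img: (2**n, 2**n) grayscale image -> order-n tensor with modes of
    # size 4. Split each axis into n binary levels, interleave the row
    # and column levels, then merge each (row-bit, col-bit) pair into
    # one mode of size 4, so mode i indexes the 2x2 block at level i.
    t = img.reshape([2] * n + [2] * n)              # axes (r1..rn, c1..cn)
    order = [i for p in zip(range(n), range(n, 2 * n)) for i in p]
    t = t.transpose(order)                          # (r1, c1, r2, c2, ...)
    return t.reshape([4] * n)

img = np.arange(16.0).reshape(4, 4)                 # toy image, n = 2
ka = ket_augmentation(img, 2)
# ka[0] collects the top-left 2x2 block of the image.
```

The transform is invertible, so after the higher-order tensor is completed the recovered image is read back out by reversing the reshapes.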
Multi-dimensional imaging data recovery via minimizing the partial sum of tubal nuclear norm
In this paper, we investigate tensor recovery problems within the tensor
singular value decomposition (t-SVD) framework. We propose the partial sum of
the tubal nuclear norm (PSTNN) of a tensor. The PSTNN is a surrogate of the
tensor tubal multi-rank. We build two PSTNN-based minimization models for two
typical tensor recovery problems, i.e., the tensor completion and the tensor
principal component analysis. We give two algorithms based on the alternating
direction method of multipliers (ADMM) to solve the proposed PSTNN-based tensor
recovery models. Experimental results on synthetic and real-world data
demonstrate the superiority of the proposed PSTNN-based models.
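One common reading of the PSTNN (a numpy sketch under my assumptions, not the paper's code) sums, for each frontal slice in the Fourier domain of the t-SVD, the singular values after the N largest ones, so only the tail of the spectrum is penalized:

```python
import numpy as np

def pstnn(T, N=1):
    # PSTNN surrogate: FFT along the third (tube) mode, then for each
    # frontal slice in the Fourier domain sum the singular values
    # *after* the N largest (the "partial sum").
    That = np.fft.fft(T, axis=2)
    total = 0.0
    for k in range(T.shape[2]):
        s = np.linalg.svd(That[:, :, k], compute_uv=False)
        total += s[N:].sum()
    return total

# A tensor whose frontal slices share one rank-1 matrix has tubal
# multi-rank at most 1 in every Fourier slice, so PSTNN with N = 1
# vanishes (up to numerical error).
rng = np.random.default_rng(0)
u, v = rng.standard_normal((5, 1)), rng.standard_normal((1, 5))
T = np.repeat((u @ v)[:, :, None], 4, axis=2)
val = pstnn(T, N=1)
```

Because the leading singular values are untouched, minimizing this surrogate shrinks only the small singular values, unlike the plain tubal nuclear norm.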
Tensor completion using enhanced multiple modes low-rank prior and total variation
In this paper, we propose a novel model to recover a low-rank tensor by
simultaneously performing double nuclear norm regularized low-rank matrix
factorizations on all-mode matricizations of the underlying tensor. A
block successive upper-bound minimization algorithm is applied to solve the
model. Subsequence convergence of our algorithm can be established, and our
algorithm converges to coordinate-wise minimizers under mild conditions.
Several experiments on three types of public data sets show that our algorithm
can recover a variety of low-rank tensors from significantly fewer samples than
the other tensor completion methods tested.
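The mode-n matricizations that the model above factorizes can be sketched in a few lines of numpy (an illustrative helper, not the paper's implementation):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization: the chosen mode becomes the rows and the
    # remaining modes are flattened into the columns. A model of the
    # kind described above applies a low-rank matrix factorization to
    # every such unfolding of the underlying tensor.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = np.arange(24).reshape(2, 3, 4)
# Each unfolding has one mode as rows and the product of the rest as columns.
shapes = [unfold(T, m).shape for m in range(3)]
```

Penalizing all unfoldings jointly is what lets the method exploit low-rankness along every mode at once.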
An Iterative Reweighted Method for Tucker Decomposition of Incomplete Multiway Tensors
We consider the problem of low-rank decomposition of incomplete multiway
tensors. Since many real-world data lie on an intrinsically low dimensional
subspace, tensor low-rank decomposition with missing entries has applications
in many data analysis problems such as recommender systems and image
inpainting. In this paper, we focus on Tucker decomposition which represents an
Nth-order tensor in terms of N factor matrices and a core tensor via
multilinear operations. To exploit the underlying multilinear low-rank
structure in high-dimensional datasets, we propose a group-based log-sum
penalty functional to impose structured sparsity on the core tensor, which
leads to a compact representation with the smallest core tensor. The method for
Tucker decomposition is developed by iteratively minimizing a surrogate
function that majorizes the original objective function, which results in an
iterative reweighted process. In addition, to reduce the computational
complexity, an over-relaxed monotone fast iterative shrinkage-thresholding
technique is adapted and embedded in the iterative reweighted process. The
proposed method is able to determine the model complexity (i.e. multilinear
rank) in an automatic way. Simulation results show that the proposed algorithm
offers competitive performance compared with other existing algorithms.
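The iterative-reweighting principle can be shown on a scalar toy problem (a minimal sketch of the majorize-minimize idea, not the paper's group penalty on the core tensor): the concave log-sum term is majorized by a weighted l1 penalty whose weights are recomputed from the current iterate, and each pass reduces to soft-thresholding.

```python
import numpy as np

def reweighted_threshold(x, lam=0.5, eps=1e-2, iters=10):
    # Majorize-minimize on 0.5*||z - x||^2 + lam * sum(log(|z_i| + eps)):
    # at each pass the log-sum term is majorized by a weighted l1 penalty
    # with weights w_i = 1 / (|z_i| + eps), whose proximal step is
    # soft-thresholding of x with per-entry thresholds lam * w_i.
    z = x.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(z) + eps)
        z = np.sign(x) * np.maximum(np.abs(x) - lam * w, 0.0)
    return z

z = reweighted_threshold(np.array([3.0, 0.1, -2.0, 0.05]))
# Small entries get large weights and are driven exactly to zero,
# while large entries are shrunk only slightly.
```

Applied group-wise to slices of the core tensor, the same reweighting zeroes out whole groups and thereby prunes the multilinear rank automatically.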
Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space
Consider a case where all of the elements in some contiguous slices of
tensor data are missing.
In this case, nuclear-norm and total-variation regularization methods
usually fail to recover the missing elements.
The key problem is capturing some delay/shift-invariant structure.
In this study, we consider a low-rank model in an embedded space of a tensor.
For this purpose, we extend a delay embedding for a time series to a
"multi-way delay-embedding transform" for a tensor, which takes a given
incomplete tensor as the input and outputs a higher-order incomplete Hankel
tensor.
The higher-order tensor is then recovered by Tucker-based low-rank tensor
factorization.
Finally, an estimated tensor can be obtained by using the inverse multi-way
delay embedding transform of the recovered higher-order tensor.
Our experiments showed that the proposed method successfully recovered
missing slices for some color images and functional magnetic resonance images.
Comment: accepted for CVPR201
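The one-dimensional building block of the multi-way transform can be sketched as follows (an illustrative numpy helper for the classical delay embedding; the paper's MDT applies this duplication along every mode of a tensor):

```python
import numpy as np

def delay_embed(x, tau):
    # Delay embedding of a 1-D series into a Hankel matrix: column j is
    # the length-tau window x[j:j+tau]. Duplicating entries this way is
    # what exposes shift-invariant structure as low rank in the
    # embedded space.
    n = len(x) - tau + 1
    return np.stack([x[j:j + tau] for j in range(n)], axis=1)

H = delay_embed(np.array([1, 2, 3, 4, 5]), tau=3)
# Anti-diagonals of H are constant, i.e. H is a Hankel matrix.
```

Because each element appears in several windows, an entirely missing slice of the original data is still partially covered in the embedded tensor, which is why the low-rank model there can recover it.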
Enhanced nonconvex low-rank approximation of tensor multi-modes for tensor completion
Higher-order low-rank tensors arise in many data processing applications and
have attracted great interest. Inspired by low-rank approximation theory,
researchers have proposed a series of effective tensor completion methods.
However, most of these methods directly consider the global low-rankness of
underlying tensors, which is not sufficient for a low sampling rate; in
addition, a single nuclear norm or its relaxation is usually adopted to
approximate the rank function, which can lead to a suboptimal solution that
deviates from the original one. To alleviate these problems, in this paper, we
propose a novel low-rank approximation of tensor multi-modes (LRATM), in which
a double nonconvex norm is designed to represent the underlying
joint-manifold drawn from the modal factorization factors of the underlying
tensor. A block successive upper-bound minimization method-based algorithm is
designed to efficiently solve the proposed model, and it can be demonstrated
that our numerical scheme converges to the coordinatewise minimizers. Numerical
results on three types of public multi-dimensional datasets show that our
algorithm can recover a variety of low-rank tensors from significantly fewer
samples than the compared methods.
Comment: arXiv admin note: substantial text overlap with arXiv:2004.0874
Convolutional Imputation of Matrix Networks
A matrix network is a family of matrices, with relatedness modeled by a
weighted graph. We consider the task of completing a partially observed matrix
network. We assume a novel sampling scheme where a fraction of matrices might
be completely unobserved. How can we recover the entire matrix network from
incomplete observations? This mathematical problem arises in many applications
including medical imaging and social networks.
To recover the matrix network, we propose a structural assumption that the
matrices have a graph Fourier transform which is low-rank. We formulate a
convex optimization problem and prove an exact recovery guarantee for the
optimization problem. Furthermore, we numerically characterize the exact
recovery regime for varying rank and sampling rate and discover a new phase
transition phenomenon. Then we give an iterative imputation algorithm to
efficiently solve the optimization problem and complete large scale matrix
networks. We demonstrate the algorithm with a variety of applications such as
MRI and Facebook user network.
Comment: Accepted by ICML 201
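The low-rank-in-the-spectral-domain assumption can be sketched in numpy (my illustrative reading of a graph Fourier transform over the network axis, not the authors' code: node names and shapes here are assumptions):

```python
import numpy as np

def graph_fourier(matrices, L):
    # A matrix network stacks matrices A[k] on the nodes of a graph; the
    # graph Fourier transform projects along the node axis onto the
    # eigenvectors of the graph Laplacian L (eigh returns them with
    # eigenvalues ascending). The recovery model assumes the transformed
    # slices are low-rank.
    w, U = np.linalg.eigh(L)
    A = np.stack(matrices, axis=2)    # shape (m, n, num_nodes)
    return A @ U                      # contract the node axis with U

# Two identical matrices on a 2-node path graph: all energy lands in the
# zero-frequency (constant-eigenvector) component.
L = np.array([[1.0, -1.0], [-1.0, 1.0]])
M = np.arange(6.0).reshape(2, 3)
Ghat = graph_fourier([M, M], L)
```

Smoothness of the matrices across the graph concentrates the transform in a few spectral slices, which is what makes recovery of completely unobserved matrices possible.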
Learning from Binary Multiway Data: Probabilistic Tensor Decomposition and its Statistical Optimality
We consider the problem of decomposing a higher-order tensor with binary
entries. Such data problems arise frequently in applications such as
neuroimaging, recommender systems, topic modeling, and sensor network
localization. We propose a multilinear Bernoulli model, develop a
rank-constrained likelihood-based estimation method, and obtain the theoretical
accuracy guarantees. In contrast to continuous-valued problems, the binary
tensor problem exhibits an interesting phase transition phenomenon according to
the signal-to-noise ratio. The error bound for the parameter tensor estimation
is established, and we show that the obtained rate is minimax optimal under the
considered model. Furthermore, we develop an alternating optimization algorithm
with convergence guarantees. The efficacy of our approach is demonstrated
through both simulations and analyses of multiple data sets on the tasks of
tensor completion and clustering.
Comment: 35 pages, 7 figures, 4 table
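The generative side of a multilinear Bernoulli model can be sketched in numpy (an illustrative order-3 Tucker parameterization with a logistic link; dimensions and names are assumptions, not the paper's code):

```python
import numpy as np

def sample_binary_tensor(core, U1, U2, U3, seed=0):
    # Multilinear Bernoulli model: the parameter tensor Theta has Tucker
    # structure Theta = core x_1 U1 x_2 U2 x_3 U3, and each observed
    # entry is an independent Bernoulli(sigmoid(Theta_ijk)) draw.
    Theta = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
    p = 1.0 / (1.0 + np.exp(-Theta))
    rng = np.random.default_rng(seed)
    return (rng.random(p.shape) < p).astype(int), Theta

rng = np.random.default_rng(1)
core = rng.standard_normal((2, 2, 2))
U1, U2, U3 = (rng.standard_normal((4, 2)) for _ in range(3))
Y, Theta = sample_binary_tensor(core, U1, U2, U3)
```

Estimation then runs the other way: given the binary observations Y, maximize the Bernoulli likelihood over Theta subject to the rank constraint.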
Robust Low-Rank Tensor Ring Completion
Low-rank tensor completion recovers missing entries based on different tensor
decompositions. Due to its outstanding performance in exploiting
higher-order data structure, the low-rank tensor ring has been applied to tensor
completion. To further address its sensitivity to a sparse component, as in
tensor principal component analysis, we propose robust tensor ring
completion (RTRC), which separates the latent low-rank tensor component from the
sparse component with a limited number of measurements. The low-rank tensor component is
constrained by the weighted sum of nuclear norms of its balanced unfoldings,
while the sparse component is regularized by its l1 norm. We analyze the RTRC
model and give an exact recovery guarantee. The alternating direction method
of multipliers is used to divide the problem into several sub-problems with
fast solutions. In numerical experiments, we verify the recovery condition of
the proposed method on synthetic data, and show the proposed method outperforms
the state-of-the-art ones in terms of both accuracy and computational
complexity in a number of real-world tasks, i.e., light-field image
recovery, shadow removal from face images, and background extraction from color
video.
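The balanced unfoldings whose nuclear norms are summed can be illustrated with a circular unfolding helper (a numpy sketch under my assumptions about the unfolding convention, not the paper's code):

```python
import numpy as np

def circular_unfold(T, start, num_modes):
    # Circular ("balanced") unfolding used with tensor ring models:
    # rotate the modes so that `num_modes` consecutive modes beginning
    # at `start` index the rows and the remaining modes index the
    # columns. A model of the RTRC kind penalizes a weighted sum of
    # nuclear norms of such unfoldings.
    d = T.ndim
    perm = [(start + i) % d for i in range(d)]
    Tp = np.transpose(T, perm)
    rows = int(np.prod(Tp.shape[:num_modes]))
    return Tp.reshape(rows, -1)

M = circular_unfold(np.zeros((2, 3, 4, 5)), start=1, num_modes=2)
# Rows cover modes 1 and 2 (3 * 4 = 12), columns cover modes 3 and 0.
```

Splitting the modes roughly in half keeps each unfolding close to square, which is what makes its rank a useful proxy for the tensor ring rank.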
Measuring the Effects of Scalar and Spherical Colormaps on Ensembles of DMRI Tubes
We report the results of an empirical study on color encodings of ensemble
scalar and orientation data for visualizing diffusion magnetic resonance imaging (DMRI) tubes.
The experiment tested six scalar colormaps for average fractional anisotropy
(FA) tasks (grayscale, blackbody, diverging, isoluminant-rainbow,
extended-blackbody, and coolwarm) and four three-dimensional (3D) directional
encodings for tract tracing tasks (uniform gray, absolute, eigenmap, and Boy's
surface embedding). We found that extended-blackbody, coolwarm, and blackbody
remain the best three approaches for identifying ensemble average in 3D.
Isoluminant-rainbow coloring led to the same ensemble mean accuracy as other
colormaps. However, more than 50% of the answers consistently had higher
estimates of the ensemble average, independent of the mean values. Hue, not
luminance, influences ensemble estimates of mean values. For ensemble
orientation-tracing tasks, we found that the Boy's surface embedding (greatest
spatial resolution and contrast) and absolute color (lowest spatial resolution
and contrast) schemes led to more accurate answers than the eigenmaps scheme
(medium resolution and contrast), acting as an uncanny-valley phenomenon of
visualization design in terms of accuracy.