Reconstruction of Incomplete Images Using Low-Rank Model Approximation Methods
The paper is concerned with the task of reconstructing missing pixels in images perturbed by impulse noise in a transmission channel. Such a task can be formulated in the context of image interpolation on an irregular grid or of approximating an incomplete image with low-rank factor decomposition models. We compared four algorithms based on low-rank matrix or tensor decomposition models: SVT, SmNMF-MC, FCSA-TC, and SPC-QV. Numerical experiments were carried out for various cases of incomplete images, obtained by removing random pixels or regular grid lines from test images. The best performance is obtained when nonnegativity and smoothness constraints, the latter in the form of weighted averaging filtering, are imposed on the estimated low-rank factors.
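The SVT algorithm compared above admits a compact description: alternate soft-thresholding of the singular values with a dual ascent step enforced only on the observed entries. A minimal NumPy sketch (function name, threshold, and step size are illustrative choices, not the paper's settings):

```python
import numpy as np

def svt_complete(M, mask, tau=1.0, step=1.2, iters=1000):
    """Singular Value Thresholding for matrix completion (illustrative sketch).

    M    : matrix holding the observed values (anything at unobserved entries)
    mask : boolean array, True where the entry of M is observed
    tau  : shrinkage threshold applied to the singular values
    """
    Y = np.zeros_like(M, dtype=float)
    X = Y
    for _ in range(iters):
        # Soft-threshold the singular values of the dual variable Y.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)
        # Dual ascent step, applied only on the observed entries.
        Y = Y + step * mask * (M - X)
    return X
```

A step size in (0, 2) is the standard convergence condition for this iteration; at the fixed point the observed entries are reproduced exactly while the nuclear-norm shrinkage keeps the completed matrix low-rank.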
Exploring Numerical Priors for Low-Rank Tensor Completion with Generalized CP Decomposition
Tensor completion is important to many areas such as computer vision, data
analysis, and signal processing. Methods that enforce low-rank structure on the
completed tensor, a category known as low-rank tensor completion, have recently
been studied extensively. While such methods have attained great success, none
has considered exploiting numerical priors of tensor elements. Ignoring numerical
priors causes loss of important information regarding the data, and therefore
prevents the algorithms from reaching optimal accuracy. This work attempts to
construct a new methodological framework called GCDTC (Generalized CP
Decomposition Tensor Completion) for leveraging numerical priors and achieving
higher accuracy in tensor completion. In this newly introduced framework, a
generalized form of CP Decomposition is applied to low-rank tensor completion.
This paper also proposes an algorithm known as SPTC (Smooth Poisson Tensor
Completion) for nonnegative integer tensor completion as an instantiation of
the GCDTC framework. A series of experiments on real-world data indicated that
SPTC could produce results superior in completion accuracy to the current
state of the art.
Comment: 11 pages, 4 figures, 3 pseudocode algorithms, and 1 table
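One way to make the "numerical prior" idea concrete is its simplest order-2 instance: nonnegative count-like data fitted under a Poisson/KL loss instead of least squares, via masked multiplicative updates. This is a hedged illustration of the principle only, not the paper's SPTC algorithm; all names and settings here are invented for the example:

```python
import numpy as np

def poisson_nmf_complete(X, mask, rank=2, iters=500, eps=1e-9):
    """Masked nonnegative factorization under a Poisson (KL) loss (sketch).

    X    : nonnegative data matrix (e.g. counts), zeros at unobserved entries
    mask : boolean array, True where X is observed
    Returns factors W, H with W @ H approximating X on the observed entries.
    """
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(iters):
        # Standard weighted KL-NMF multiplicative updates, restricted to mask.
        R = X / (W @ H + eps)                      # elementwise data/model ratio
        W *= ((mask * R) @ H.T) / (mask @ H.T + eps)
        R = X / (W @ H + eps)
        H *= (W.T @ (mask * R)) / (W.T @ mask + eps)
    return W, H
```

The multiplicative form keeps the factors nonnegative by construction, which is what makes a Poisson likelihood a natural fit for integer-valued data.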
Tensor Completion for Weakly-dependent Data on Graph for Metro Passenger Flow Prediction
Low-rank tensor decomposition and completion have attracted significant
interest from academia given the ubiquity of tensor data. However, the low-rank
structure is a global property, which will not be fulfilled when the data
presents complex and weak dependencies given specific graph structures. One
particular application that motivates this study is the spatiotemporal data
analysis. As shown in the preliminary study, weak dependencies can worsen
low-rank tensor completion performance. In this paper, we propose a novel
low-rank CANDECOMP / PARAFAC (CP) tensor decomposition and completion framework
by introducing the l1-norm penalty and Graph Laplacian penalty to model
the weak dependency on the graph. We further propose an efficient estimation
algorithm based on Block Coordinate Descent. A
case study based on the metro passenger flow data in Hong Kong is conducted to
demonstrate improved performance over regular tensor completion methods.
Comment: Accepted at AAAI 2020
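The Graph Laplacian penalty used above has a simple closed form: for a factor matrix U and combinatorial Laplacian L = D - A, the quantity trace(U^T L U) equals the sum of squared differences between factor rows across graph edges, so minimizing it smooths the factor along the graph. A minimal sketch (helper names are illustrative):

```python
import numpy as np

def graph_laplacian(A):
    # Combinatorial Laplacian L = D - A for a symmetric adjacency matrix A.
    return np.diag(A.sum(axis=1)) - A

def laplacian_penalty(U, L):
    # trace(U^T L U) = sum over edges (i, j) of w_ij * ||u_i - u_j||^2,
    # i.e. zero when connected nodes share identical factor rows.
    return float(np.trace(U.T @ L @ U))
```

In a block coordinate descent scheme, this term contributes the gradient 2 L U to the update of the graph-mode factor.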
Nonlinear System Identification via Tensor Completion
Function approximation from input and output data pairs constitutes a
fundamental problem in supervised learning. Deep neural networks are currently
the most popular method for learning to mimic the input-output relationship of
a general nonlinear system, as they have proven to be very effective in
approximating complex highly nonlinear functions. In this work, we show that
identifying a general nonlinear function y = f(x_1, ..., x_N) from
input-output examples can be formulated as a tensor completion problem, and that
under certain conditions provably correct nonlinear system identification is
possible. Specifically, we model the interactions between the N input
variables and the scalar output of a system by a single N-way tensor, and
set up a weighted low-rank tensor completion problem with smoothness
regularization which we tackle using a block coordinate descent algorithm. We
extend our method to the multi-output setting and the case of partially
observed data, which cannot be readily handled by neural networks. Finally, we
demonstrate the effectiveness of the approach using several regression tasks
including some standard benchmarks and a challenging student grade prediction
task.
Comment: AAAI 2020
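The tensor formulation can be illustrated by quantizing each of the N input variables into a small number of levels and averaging the observed outputs that fall into each cell; the result is a partially observed N-way tensor to which low-rank completion can then be applied. A rough sketch (the quantization scheme and names are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def samples_to_tensor(X, y, bins):
    """Map (input, output) samples onto a partially observed N-way tensor.

    X    : (num_samples, N) real-valued inputs
    y    : (num_samples,) scalar outputs
    bins : number of quantization levels per input variable
    Returns the tensor of mean outputs per cell and a boolean observation mask.
    """
    N = X.shape[1]
    # Quantize each input variable to an index in [0, bins).
    lo, hi = X.min(axis=0), X.max(axis=0)
    idx = np.clip(((X - lo) / (hi - lo + 1e-12) * bins).astype(int), 0, bins - 1)
    total = np.zeros((bins,) * N)
    count = np.zeros((bins,) * N)
    for row, val in zip(idx, y):
        total[tuple(row)] += val
        count[tuple(row)] += 1
    mask = count > 0
    tensor = np.where(mask, total / np.maximum(count, 1), 0.0)
    return tensor, mask
```

Cells never visited by a sample stay unobserved, which is exactly the missing-data pattern that the weighted low-rank completion step is meant to fill in.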
SVDinsTN: An Integrated Method for Tensor Network Representation with Efficient Structure Search
Tensor network (TN) representation is a powerful technique for data analysis
and machine learning. It practically involves a challenging TN structure search
(TN-SS) problem, which aims to search for the optimal structure to achieve a
compact representation. Existing TN-SS methods mainly adopt a bi-level
optimization method that leads to excessive computational costs due to repeated
structure evaluations. To address this issue, we propose an efficient
integrated (single-level) method named SVD-inspired TN decomposition
(SVDinsTN), eliminating the need for repeated, tedious structure evaluations. By
inserting a diagonal factor for each edge of the fully-connected TN, we
calculate TN cores and diagonal factors simultaneously, with factor sparsity
revealing the most compact TN structure. Experimental results on real-world
data demonstrate that SVDinsTN achieves approximately 10^2 to 10^3 times
acceleration in runtime compared to the existing TN-SS methods while
maintaining a comparable level of representation ability.
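The role of the inserted diagonal factors can be illustrated in the simplest two-node case, where they coincide with singular values: diagonal entries that shrink toward zero indicate that the rank of the corresponding edge can be reduced. A toy sketch of that special case (not the SVDinsTN algorithm itself):

```python
import numpy as np

def effective_edge_rank(M, tol=1e-8):
    # For a two-node network, the edge's diagonal factor is the vector of
    # singular values; entries below a relative tolerance can be pruned,
    # revealing the smallest rank needed for a compact representation.
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```

In the full method, sparsity is encouraged on the diagonal factor of every edge of the fully-connected network, so a single optimization pass exposes which edges can be thinned or removed.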