64 research outputs found
Dynamic Tensor Product Regression
In this work, we initiate the study of \emph{Dynamic Tensor Product
Regression}. One has matrices $A_1 \in \mathbb{R}^{n_1 \times d_1}, \ldots, A_q \in \mathbb{R}^{n_q \times d_q}$
and a label vector $b \in \mathbb{R}^{n_1 \cdots n_q}$, and the goal is to solve the
regression problem with the design matrix $A$ being the tensor product of the
matrices $A_1, \ldots, A_q$, i.e.
$\min_{x \in \mathbb{R}^{d_1 \cdots d_q}} \|(A_1 \otimes \cdots \otimes A_q)x - b\|_2$.
At each time step, one matrix $A_i$ receives a sparse change, and
the goal is to maintain a sketch of the tensor product $A_1 \otimes \cdots \otimes A_q$
so that the regression solution can be updated quickly.
Recomputing the solution from scratch for each round is very slow and so it is
important to develop algorithms which can quickly update the solution with the
new design matrix. Our main result is a dynamic tree data structure where any
update to a single matrix can be propagated quickly throughout the tree. We
show that our data structure can be used to solve dynamic versions of not only
Tensor Product Regression, but also Tensor Product Spline regression (which is
a generalization of ridge regression) and for maintaining Low Rank
Approximations for the tensor product.
Comment: NeurIPS 2022
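Since the abstract only states the problem, a small NumPy illustration of the underlying algebra may help. This is a static toy (not the paper's dynamic tree data structure), relying only on the standard identity $\operatorname{pinv}(A_1 \otimes A_2) = \operatorname{pinv}(A_1) \otimes \operatorname{pinv}(A_2)$; all variable names are ours:

```python
import numpy as np

# Toy tensor product regression: min_x ||(A1 (x) A2) x - b||_2,
# solved without ever materializing the Kronecker product.

rng = np.random.default_rng(0)
n1, d1, n2, d2 = 30, 4, 25, 5
A1 = rng.standard_normal((n1, d1))
A2 = rng.standard_normal((n2, d2))
b = rng.standard_normal(n1 * n2)

# Naive solution: materialize the (n1*n2) x (d1*d2) design matrix.
x_naive, *_ = np.linalg.lstsq(np.kron(A1, A2), b, rcond=None)

# Structured solution: with row-major flattening,
# (A1 (x) A2) vec(X) = vec(A1 @ X @ A2.T), so the least-squares
# solution is vec(pinv(A1) @ B @ pinv(A2).T) with B = b reshaped.
B = b.reshape(n1, n2)
X = np.linalg.pinv(A1) @ B @ np.linalg.pinv(A2).T
x_fast = X.reshape(-1)

assert np.allclose(x_naive, x_fast)
```

The paper's contribution is making the sketch of the tensor product cheap to update when a single factor $A_i$ changes sparsely, rather than recomputing a solve like the one above from scratch.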
Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models
In the context of kernel machines, polynomial and Fourier features are
commonly used to provide a nonlinear extension to linear models by mapping the
data to a higher-dimensional space. Unless one considers the dual formulation
of the learning problem, which renders exact large-scale learning infeasible,
the tensor-product structure of these features makes the number of model
parameters grow exponentially with the dimensionality of the data, prohibiting
the treatment of high-dimensional problems. One possible approach to
circumvent this exponential scaling
is to exploit the tensor structure present in the features by constraining the
model weights to be an underparametrized tensor network. In this paper we
quantize, i.e. further tensorize, polynomial and Fourier features. Based on
this feature quantization we propose to quantize the associated model weights,
yielding quantized models. We show that, for the same number of model
parameters, the resulting quantized models have a higher bound on the
VC-dimension than their non-quantized counterparts, at no additional
computational cost while learning from identical features. We verify
experimentally how this additional tensorization regularizes the learning
problem by prioritizing the most salient features in the data and how it
provides models with increased generalization capabilities. We finally
benchmark our approach on a large regression task, achieving state-of-the-art
results on a laptop computer.
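To make "quantization, i.e. further tensorization" concrete: a length-$2^k$ pure-power polynomial feature vector factorizes exactly into $k$ two-dimensional Kronecker factors. The sketch below is our own minimal construction in the spirit of the paper, not its exact feature map:

```python
import numpy as np
from functools import reduce

# Quantized polynomial features: [1, x, x^2, ..., x^(2^k - 1)]
# factorizes exactly as a Kronecker product of k size-2 factors
# [1, x^(2^j)], so the long feature vector never has to be formed.

def quantized_poly_features(x: float, k: int) -> list[np.ndarray]:
    return [np.array([1.0, x ** (2 ** j)]) for j in range(k)]

x, k = 0.9, 3
factors = quantized_poly_features(x, k)

# Kronecker product (highest power first) reproduces the full map.
full = reduce(np.kron, reversed(factors))
assert np.allclose(full, x ** np.arange(2 ** k))
```

Because every mode now has size two instead of $2^k$, the associated weight tensor can be constrained to a longer tensor network with tiny modes, which is the structure the paper then exploits for the model weights.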
Streaming Semidefinite Programs: $O(\sqrt{n})$ Passes, Small Space and Fast Runtime
We study the problem of solving semidefinite programs (SDP) in the streaming
model. Specifically, $m$ constraint matrices $A_1, \ldots, A_m$ and a target
matrix $C$, all of size $n \times n$, together with a vector $b \in \mathbb{R}^m$
are streamed to us one-by-one. The goal is to find a matrix
$X \in \mathbb{R}^{n \times n}$ such that $\langle C, X \rangle$ is maximized,
subject to $\langle A_i, X \rangle = b_i$ for all $i \in [m]$ and $X \succeq 0$.
Previous algorithmic studies of SDP
primarily focus on \emph{time-efficiency}, and all of them require a
prohibitively large $\Omega(mn^2)$ space in order to store \emph{all the
constraints}. Such space consumption is necessary for fast algorithms as it is
the size of the input. In this work, we design an interior point method (IPM)
that uses $\widetilde{O}(m^2 + n^2)$ space, which is strictly sublinear in the
regime $n \gg m$. Our algorithm takes $O(\sqrt{n})$ passes, which is standard
for IPM. Moreover, when $m$ is much smaller than $n$, our algorithm also
matches the time complexity of the state-of-the-art SDP solvers. To
achieve such a sublinear space bound, we design a novel sketching method that
enables one to compute a spectral approximation to the Hessian matrix in
$\widetilde{O}(m^2)$ space. To the best of our knowledge, this is the first
method that successfully applies a sketching technique to improve SDP
algorithms in terms of space (and also time).
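The space-saving idea (spectrally approximating an $m \times m$ Hessian-type matrix from sketched constraints rather than from the full $m \times n^2$ input) can be illustrated generically. The toy below uses a plain Gaussian sketch, not the paper's specific construction; every name and constant is our own choice:

```python
import numpy as np

# Spectral approximation of a Gram/Hessian-type matrix H = M M^T,
# where M holds m flattened n x n constraints as rows (m x n^2).
# Keeping only the sketched rows M S^T (m x s, with s << n^2) still
# yields an m x m matrix spectrally close to H.

rng = np.random.default_rng(1)
m, n, s = 10, 40, 400          # s = O(m / eps^2) suffices in theory
M = rng.standard_normal((m, n * n))

H = M @ M.T                    # exact m x m matrix

S = rng.standard_normal((s, n * n)) / np.sqrt(s)
MS = M @ S.T                   # m x s sketched constraints
H_tilde = MS @ MS.T            # m x m spectral approximation of H

# All eigenvalues of H^{-1/2} H_tilde H^{-1/2} should be near 1.
w, V = np.linalg.eigh(H)
H_inv_half = V @ np.diag(w ** -0.5) @ V.T
eigs = np.linalg.eigvalsh(H_inv_half @ H_tilde @ H_inv_half)
print(eigs.min(), eigs.max())  # e.g. roughly within [0.7, 1.3]
```

In a streaming pass, each arriving constraint can be sketched and discarded, which is what makes sublinear space plausible; the paper's method additionally has to fit inside the IPM iteration structure.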
- …