A Constructive Algorithm for Decomposing a Tensor into a Finite Sum of Orthonormal Rank-1 Terms
We propose a constructive algorithm that decomposes an arbitrary real tensor
into a finite sum of orthonormal rank-1 outer products. The algorithm, named
TTr1SVD, works by converting the tensor into a tensor-train rank-1 (TTr1)
series via the singular value decomposition (SVD). TTr1SVD naturally
generalizes the SVD to the tensor regime with properties such as uniqueness for
a fixed order of indices, orthogonal rank-1 outer product terms, and easy
truncation error quantification. Using an outer product column table it also
allows, for the first time, a complete characterization of all tensors
orthogonal to the original tensor. Incidentally, this leads to a strikingly
simple constructive proof showing that the maximum rank of a real 2x2x2 tensor
over the real field is 3. We also derive a conversion of the
TTr1 decomposition into a Tucker decomposition with a sparse core tensor.
Numerical examples illustrate each of the favorable properties of the TTr1
decomposition.
Comment: Added subsection on orthogonal complement tensors. Added constructive
proof of maximal CP-rank of a 2x2x2 tensor. Added perturbation of singular
values result. Added conversion of the TTr1 decomposition to the Tucker
decomposition. Added example that demonstrates how the rank behaves when
subtracting rank-1 terms. Added example with exponentially decaying singular
values.
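The core recursion of the TTr1 idea can be sketched in NumPy for a third-order tensor: unfold the tensor, take an SVD, then apply a second SVD to each reshaped right singular vector, yielding orthogonal rank-1 outer products. This is a minimal sketch of the scheme as described above; the function names are illustrative and this is not the authors' reference implementation.

```python
import numpy as np

def ttr1_terms(T):
    """Decompose a real third-order tensor into rank-1 outer-product terms
    via a tensor-train of SVDs (TTr1-style sketch)."""
    n1, n2, n3 = T.shape
    # First SVD on the mode-1 unfolding of shape (n1, n2*n3).
    U1, s1, V1t = np.linalg.svd(T.reshape(n1, n2 * n3), full_matrices=False)
    terms = []
    for i in range(len(s1)):
        # Reshape each right singular vector into an (n2, n3) matrix
        # and SVD it again; this splits the remaining two modes.
        M = V1t[i].reshape(n2, n3)
        U2, s2, V2t = np.linalg.svd(M, full_matrices=False)
        for j in range(len(s2)):
            # Each term is sigma * (u ⊗ v ⊗ w) with orthonormal factors.
            terms.append((s1[i] * s2[j], U1[:, i], U2[:, j], V2t[j]))
    return terms

def reconstruct(terms, shape):
    """Sum the rank-1 outer products back into a full tensor."""
    T = np.zeros(shape)
    for sigma, u, v, w in terms:
        T += sigma * np.einsum('i,j,k->ijk', u, v, w)
    return T
```

Truncation-error quantification then follows as for the matrix SVD: dropping the terms with the smallest products of singular values gives an easily bounded approximation error.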
Algorithms and Adaptivity Gaps for Stochastic k-TSP
Given a metric $(V,d)$ and a root $\rho \in V$, the classic
\textsf{k-TSP} problem is to find a tour originating at the root $\rho$
of minimum length that visits at least $k$ nodes in $V$. In this work,
motivated by applications where the input to an optimization problem is
uncertain, we study two stochastic versions of \textsf{k-TSP}.
In Stoch-Reward $k$-TSP, originally defined by Ene-Nagarajan-Saket [ENS17],
each vertex $v$ in the given metric contains a stochastic reward $R_v$.
The goal is to adaptively find a tour of minimum expected length that collects
at least reward $k$; here "adaptively" means our next decision may depend on
previous outcomes. Ene et al. give an $O(\log k)$-approximation adaptive
algorithm for this problem, and leave open whether there is an
$O(1)$-approximation algorithm. We resolve their open question and, moreover,
give an $O(1)$-approximation \emph{non-adaptive} algorithm for this problem.
We also introduce and obtain similar results for the Stoch-Cost $k$-TSP
problem. In this problem each vertex $v$ has a stochastic cost $C_v$, and the
goal is to visit and select at least $k$ vertices to minimize the expected
\emph{sum} of tour length and cost of selected vertices. This problem
generalizes the Price of Information framework [Singla18] from deterministic
probing costs to metric probing costs.
Our techniques are based on two crucial ideas: "repetitions" and "critical
scaling". We show using Freedman's and Jogdeo-Samuels' inequalities that for
our problems, if we truncate the random variables at an ideal threshold and
repeat, then their expected values form a good surrogate. Unfortunately, this
ideal threshold is adaptive as it depends on how far we are from achieving our
target $k$, so we truncate at various scales and identify a
"critical" scale.
Comment: ITCS 202
Time delay in the strong field limit for null and timelike signals and its simple interpretation
Gravitational lensing can happen not only for null signals but also for timelike
signals such as neutrinos and massive gravitational waves in some theories
beyond GR. In this work we study the time delay between different relativistic
images formed by signals with arbitrary asymptotic velocity in general
static and spherically symmetric spacetimes. A perturbative method is used to
calculate the total travel time in the strong field limit, which is found to be
a quasi-series in a small parameter that measures the deviation of the impact
parameter $b$ from its critical value $b_c$. The coefficients of the series are
completely fixed by the behaviour of the metric functions near the particle
sphere and only the first term of the series contains a weak logarithmic
divergence. The time delay to the leading non-trivial order
was shown to equal the particle sphere circumference divided by the local
signal velocity and multiplied by the winding number and the redshift factor.
By assuming the Sgr A* supermassive black hole is a Hayward one, we were able
to validate the quasi-series form of the total time, and to reveal the effects
of the Hayward spacetime parameter, the signal velocity and the source/detector
coordinate difference on the time delay. It is found that as the spacetime
parameter increases from 0 to its critical value, both the critical impact
parameter and the time delay decrease. For slow signals the resulting variation
of the time delay can reach the level of seconds, and its measurement can then
be used to constrain the value of the spacetime parameter. For an
ultra-relativistic neutrino or gravitational wave, however, the variation of
the time delay is too small to be resolved. The dependence of the time delay on
the source/detector coordinate difference shows that, to temporally resolve the
two sequences of images from opposite sides of the lens, this difference has to
be larger than a certain value.
Comment: 24 pages, 3 figures
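The leading-order interpretation quoted above (particle-sphere circumference divided by local signal velocity, multiplied by winding number and redshift factor) is easy to sanity-check numerically. Below is a sketch for the simplest case only, a Schwarzschild lens and a null signal, where the rule reduces to a delay of $2\pi b_c/c$ per extra winding with $b_c = 3\sqrt{3}\,GM/c^2$; the Sgr A* mass used is an assumed round value, and the Hayward parameter of the paper is not modeled here.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0    # speed of light, m s^-1
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_loop_delay(mass_kg):
    # Photon sphere at r_c = 3GM/c^2; circumference 2*pi*r_c, local speed c,
    # redshift factor sqrt(3) at r_c. Delay per extra winding:
    # sqrt(3) * 2*pi*r_c / c = 2*pi*b_c/c with b_c = 3*sqrt(3)*GM/c^2.
    b_c = 3 * math.sqrt(3) * G * mass_kg / c**2
    return 2 * math.pi * b_c / c

# Assumed round mass for Sgr A* (~4e6 solar masses): delay of order minutes.
dt = schwarzschild_loop_delay(4.0e6 * M_SUN)
```

For null signals around a Sgr A*-mass lens this gives a per-winding delay of roughly ten minutes, consistent in order of magnitude with relativistic-image time delays discussed in the strong-field-limit literature.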
Object-oriented Neural Programming (OONP) for Document Understanding
We propose Object-oriented Neural Programming (OONP), a framework for
semantically parsing documents in specific domains. Basically, OONP reads a
document and parses it into a predesigned object-oriented data structure
(referred to as ontology in this paper) that reflects the domain-specific
semantics of the document. An OONP parser models semantic parsing as a decision
process: a neural net-based Reader sequentially goes through the document, and
during the process it builds and updates an intermediate ontology to summarize
its partial understanding of the text it covers. OONP supports a rich family of
operations (both symbolic and differentiable) for composing the ontology, and a
wide variety of forms (both symbolic and differentiable) for representing the
state and the document. An OONP parser can be trained with supervision of
different forms and strengths, including supervised learning (SL),
reinforcement learning (RL) and a hybrid of the two. Our experiments on both
synthetic and real-world document parsing tasks have shown that OONP can learn
to handle fairly complicated ontologies with training data of modest size.
Comment: accepted by ACL 201
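The decision-process view described above, a Reader stepping through a document and issuing actions that build an intermediate ontology, can be caricatured in a few lines of Python. The class and action names here are invented for illustration and are not OONP's actual interface; a real parser would predict the actions with a neural net rather than receive them.

```python
from dataclasses import dataclass, field

@dataclass
class OntologyObject:
    # Toy stand-in for a typed object in the predesigned ontology.
    obj_type: str
    attrs: dict = field(default_factory=dict)

class ToyReader:
    """Sequentially applies symbolic actions ('new' creates an object,
    'set' fills an attribute on the most recent one), mirroring how an
    intermediate ontology summarizes partial understanding of the text."""
    def __init__(self):
        self.ontology = []

    def step(self, action, *args):
        if action == "new":
            self.ontology.append(OntologyObject(args[0]))
        elif action == "set":
            key, value = args
            self.ontology[-1].attrs[key] = value
        else:
            raise ValueError(f"unknown action: {action}")
```

In OONP the analogous operations come in both symbolic and differentiable forms, which is what lets the same framework be trained with supervised learning, reinforcement learning, or a hybrid of the two.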