
    A Constructive Algorithm for Decomposing a Tensor into a Finite Sum of Orthonormal Rank-1 Terms

    We propose a constructive algorithm that decomposes an arbitrary real tensor into a finite sum of orthonormal rank-1 outer products. The algorithm, named TTr1SVD, works by converting the tensor into a tensor-train rank-1 (TTr1) series via the singular value decomposition (SVD). TTr1SVD naturally generalizes the SVD to the tensor regime, with properties such as uniqueness for a fixed order of indices, orthogonal rank-1 outer product terms, and easy quantification of the truncation error. Using an outer product column table, it also allows, for the first time, a complete characterization of all tensors orthogonal to the original tensor. Incidentally, this leads to a strikingly simple constructive proof that the maximum rank of a real $2 \times 2 \times 2$ tensor over the real field is 3. We also derive a conversion of the TTr1 decomposition into a Tucker decomposition with a sparse core tensor. Numerical examples illustrate each of the favorable properties of the TTr1 decomposition.

    Comment: Added subsection on orthogonal complement tensors. Added constructive proof of the maximal CP-rank of a $2 \times 2 \times 2$ tensor. Added perturbation of singular values result. Added conversion of the TTr1 decomposition to the Tucker decomposition. Added example that demonstrates how the rank behaves when subtracting rank-1 terms. Added example with exponentially decaying singular values.
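    As a concrete picture of the reshape-and-SVD recursion that produces the TTr1 series, here is a minimal NumPy sketch. It is not the paper's reference implementation; the function names and the choice to keep every singular value are ours.

    ```python
    import numpy as np

    def ttr1svd(tensor):
        """TTr1-style recursion sketch: returns (weight, [factor vectors])
        pairs whose weighted outer products sum back to the tensor."""
        dims = tensor.shape
        if len(dims) == 1:
            return [(1.0, [tensor])]
        # Unfold along the first mode and take an SVD.
        mat = tensor.reshape(dims[0], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        terms = []
        for i in range(len(s)):
            # Recursively decompose each right singular vector,
            # reshaped back into a (d-1)-way tensor.
            for w, factors in ttr1svd(vt[i].reshape(dims[1:])):
                terms.append((s[i] * w, [u[:, i]] + factors))
        return terms

    def reconstruct(terms, shape):
        """Sum the weighted rank-1 outer products into a full tensor."""
        out = np.zeros(shape)
        for w, factors in terms:
            term = factors[0]
            for f in factors[1:]:
                term = np.multiply.outer(term, f)
            out += w * term
        return out

    t = np.random.rand(2, 2, 2)
    terms = ttr1svd(t)
    print(len(terms), np.allclose(reconstruct(terms, t.shape), t))
    ```

    Each term is a weight times an outer product of unit vectors, and summing all terms reproduces the tensor exactly, mirroring the untruncated TTr1 series; truncating the smallest weights gives the easy error quantification mentioned above.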

    Algorithms and Adaptivity Gaps for Stochastic k-TSP

    Given a metric $(V,d)$ and a $\textsf{root} \in V$, the classic \textsf{k-TSP} problem is to find a tour originating at the $\textsf{root}$ of minimum length that visits at least $k$ nodes in $V$. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of \textsf{k-TSP}. In Stoch-Reward $k$-TSP, originally defined by Ene-Nagarajan-Saket [ENS17], each vertex $v$ in the given metric $(V,d)$ contains a stochastic reward $R_v$. The goal is to adaptively find a tour of minimum expected length that collects at least reward $k$; here "adaptively" means our next decision may depend on previous outcomes. Ene et al. give an $O(\log k)$-approximation adaptive algorithm for this problem, and left open whether there is an $O(1)$-approximation algorithm. We resolve their open question and, moreover, give an $O(1)$-approximation \emph{non-adaptive} algorithm for this problem. We also introduce and obtain similar results for the Stoch-Cost $k$-TSP problem. In this problem each vertex $v$ has a stochastic cost $C_v$, and the goal is to visit and select at least $k$ vertices so as to minimize the expected \emph{sum} of the tour length and the cost of the selected vertices. This problem generalizes the Price of Information framework [Singla18] from deterministic probing costs to metric probing costs. Our techniques rest on two crucial ideas: "repetitions" and "critical scaling". Using Freedman's and Jogdeo-Samuels' inequalities, we show that for our problems, if we truncate the random variables at an ideal threshold and repeat, then their expected values form a good surrogate. Unfortunately, this ideal threshold is adaptive, since it depends on how far we are from achieving the target $k$, so we instead truncate at several different scales and identify a "critical" scale.

    Comment: ITCS 2020
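    The truncation surrogate can be made concrete with a small Monte-Carlo sketch: truncating a stochastic reward $R$ at a threshold $\tau$ and taking $E[\min(R, \tau)]$ yields a deterministic proxy for the reward, and since the ideal $\tau$ depends on the remaining target (hence is adaptive), one evaluates the proxy at geometrically spaced scales. The exponential reward distribution and all names below are hypothetical stand-ins, purely for illustration; this is not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def truncated_mean(sample_reward, tau, n=100_000):
        """Monte-Carlo estimate of E[min(R, tau)], the truncated
        expectation used as a deterministic surrogate for R."""
        draws = sample_reward(rng, n)
        return np.minimum(draws, tau).mean()

    # Hypothetical stochastic reward: exponential with mean 1.
    reward = lambda rng, n: rng.exponential(scale=1.0, size=n)

    # "Critical scaling": since the ideal threshold is adaptive,
    # evaluate the surrogate at geometrically spaced truncation scales.
    k = 8.0  # remaining reward target
    for tau in [k / 2**j for j in range(6)]:
        m = truncated_mean(reward, tau)
        print(f"tau = {tau:6.3f}   E[min(R, tau)] = {m:.4f}")
    ```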

    Time delay in the strong field limit for null and timelike signals and its simple interpretation

    Gravitational lensing can occur not only for null signals but also for timelike signals such as neutrinos and, in some theories beyond GR, massive gravitational waves. In this work we study the time delay between different relativistic images formed by signals with arbitrary asymptotic velocity $v$ in general static and spherically symmetric spacetimes. A perturbative method is used to calculate the total travel time in the strong field limit, which is found to be a quasi-series in the small parameter $a = 1 - b_c/b$, where $b$ is the impact parameter and $b_c$ is its critical value. The coefficients of the series are completely fixed by the behaviour of the metric functions near the particle sphere $r_c$, and only the first term of the series contains a weak logarithmic divergence. The time delay $\Delta t_{n,m}$ to the leading non-trivial order is shown to equal the particle sphere circumference divided by the local signal velocity and multiplied by the winding number and the redshift factor. Assuming the Sgr A* supermassive black hole is a Hayward one, we validate the quasi-series form of the total time and reveal the effects of the spacetime parameter $l$, the signal velocity $v$, and the source/detector coordinate difference $\Delta\phi_{sd}$ on the time delay. It is found that as $l$ increases from 0 to its critical value $l_c$, both $r_c$ and $\Delta t_{n,m}$ decrease. The variation of $\Delta t_{n+1,n}$ as $l$ runs from 0 to $l_c$ can be as large as $7.2 \times 10^1$ s, so its measurement can be used to constrain the value of $l$. For ultra-relativistic neutrinos or gravitational waves, however, the variation of $\Delta t_{n,m}$ is too small to be resolved. The dependence of $\Delta t_{n,-n}$ on $\Delta\phi_{sd}$ shows that to temporally resolve the two sequences of images from opposite sides of the lens, $|\Delta\phi_{sd} - \pi|$ has to be larger than a certain value.

    Comment: 24 pages, 3 figures
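    The leading-order interpretation lends itself to a quick numeric check. The sketch below assumes a Schwarzschild lens (the $l = 0$ limit of the Hayward metric) and a photon ($v = c$) with an approximate Sgr A* mass; taking the redshift factor as $1/\sqrt{1 - r_s/r_c}$ at the photon sphere is our reading of the abstract's formula, not necessarily the paper's exact expression.

    ```python
    import math

    # Leading-order delay between consecutive relativistic images:
    # Delta t ~ (winding number) x (particle-sphere circumference)
    #           / (local signal velocity) x (redshift factor).
    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    M_sun = 1.989e30     # kg
    M = 4.0e6 * M_sun    # approximate Sgr A* mass

    rs = 2 * G * M / c**2                      # Schwarzschild radius
    r_c = 1.5 * rs                             # photon sphere, 3GM/c^2
    redshift = 1 / math.sqrt(1 - rs / r_c)     # assumed redshift factor

    # One extra winding (n+1 vs n) adds one circumference of travel:
    dt = 2 * math.pi * r_c / c * redshift
    print(f"Delta t_(n+1,n) ~ {dt:.0f} s")     # ~640 s, roughly ten minutes
    ```

    Against this baseline of several hundred seconds, a variation of order $10^1$ s as $l$ runs over its allowed range is a few-percent effect, which makes the proposed constraint on $l$ plausible in principle for light but hopeless for the much smaller ultra-relativistic corrections.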

    Object-oriented Neural Programming (OONP) for Document Understanding

    We propose Object-oriented Neural Programming (OONP), a framework for semantically parsing documents in specific domains. OONP reads a document and parses it into a predesigned object-oriented data structure (referred to as the ontology in this paper) that reflects the domain-specific semantics of the document. An OONP parser models semantic parsing as a decision process: a neural net-based Reader sequentially goes through the document, and along the way builds and updates an intermediate ontology that summarizes its partial understanding of the text covered so far. OONP supports a rich family of operations (both symbolic and differentiable) for composing the ontology, and a wide variety of forms (both symbolic and differentiable) for representing the state and the document. An OONP parser can be trained with supervision of different forms and strengths, including supervised learning (SL), reinforcement learning (RL), and a hybrid of the two. Our experiments on both synthetic and real-world document parsing tasks show that OONP can learn to handle fairly complicated ontologies with training data of modest size.

    Comment: accepted by ACL 2018
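    The decision-process view of parsing is easy to picture with a toy example: a Reader walks over the tokens and emits actions ("create object", "set property") that incrementally build the ontology. In the sketch below a trivial rule-based policy stands in for the neural Reader, and all class, action, and property names are ours, for illustration only.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Obj:
        kind: str
        props: dict = field(default_factory=dict)

    @dataclass
    class Ontology:
        objects: list = field(default_factory=list)

        def new_object(self, kind):
            self.objects.append(Obj(kind))

        def set_prop(self, key, value):
            if self.objects:
                self.objects[-1].props[key] = value

    def read(tokens):
        """Toy Reader loop: one action decision per token, updating the
        intermediate ontology as partial understanding accumulates."""
        onto = Ontology()
        for tok in tokens:
            # A neural Reader would score actions from its state; here a
            # trivial policy: capitalized tokens open a PERSON object,
            # digits fill an "age" property, other tokens are skipped.
            if tok.istitle():
                onto.new_object("PERSON")
                onto.set_prop("name", tok)
            elif tok.isdigit():
                onto.set_prop("age", int(tok))
        return onto

    print(read("Alice is 34 and Bob is 58".split()))
    ```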