
    Space complexity in on-line computation

    A technique is developed for determining space complexity in on-line computation. It is shown that each of the following functions requires linear space: (i) the conversion of binary numbers into ternary numbers, (ii) the multiplication of integers, and (iii) the translation of arithmetic expressions in infix notation into Polish notation.
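    A minimal sketch of the intuition behind result (i), assuming most-significant-bit-first input (my illustration, not the paper's proof technique): every ternary output digit can depend on bits that have yet to arrive, so an on-line converter can commit no output early and must carry state that grows linearly with the input.

```python
# Sketch: why on-line binary-to-ternary conversion needs linear space.
# No ternary digit can be emitted until the last input bit is seen, so
# the converter must retain state proportional to the input length.

def binary_to_ternary_online(bits):
    """Consume binary digits one at a time; output ternary at the end."""
    value = 0                      # this accumulator grows linearly in #bits
    for b in bits:                 # on-line phase: one input symbol per step
        value = value * 2 + b      # no output can be committed yet
    if value == 0:
        return [0]
    digits = []
    while value:                   # final phase: emit ternary digits
        value, r = divmod(value, 3)
        digits.append(r)
    return digits[::-1]

print(binary_to_ternary_online([1, 0, 1, 1]))  # 11 in binary -> [1, 0, 2]
```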

    Fast Back-Projection for Non-Line of Sight Reconstruction

    Recent works have demonstrated non-line of sight (NLOS) reconstruction by using the time-resolved signal from multiply scattered light. These works combine ultrafast imaging systems with computation, which back-projects the recorded space-time signal to build a probabilistic map of the hidden geometry. Unfortunately, this computation is slow, becoming a bottleneck as the imaging technology improves. In this work, we propose a new back-projection technique for NLOS reconstruction which is up to a thousand times faster than previous work, with almost no quality loss. We base our technique on the observation that the hidden geometry probability map can be built as the intersection of the three-bounce space-time manifolds defined by the light illuminating the hidden geometry and the visible point receiving the scattered light from that hidden geometry. This allows us to pose the reconstruction of the hidden geometry as the voxelization of these space-time manifolds, which has lower theoretical complexity and is easily implementable on the GPU. We demonstrate the efficiency and quality of our technique compared against previous methods on both captured and synthetic data.
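    For reference, the slow baseline that this work accelerates can be sketched as a naive unfiltered back-projection (a hedged sketch; the histogram layout H[l, s, t], the normalized units, and the function names are my assumptions, not the paper's code):

```python
# Naive back-projection: accumulate the time-resolved signal H[l, s, t]
# into each voxel whose three-bounce path length matches the time bin.
import numpy as np

c = 1.0                                   # speed of light (normalized units)
bin_width = 0.05                          # temporal resolution of the sensor

def backproject(H, lasers, sensors, voxels):
    """H: (L, S, T) histogram; lasers/sensors/voxels: (N, 3) points."""
    prob = np.zeros(len(voxels))
    T = H.shape[2]
    for li, l in enumerate(lasers):
        d_lv = np.linalg.norm(voxels - l, axis=1)       # laser -> voxel
        for si, s in enumerate(sensors):
            d_vs = np.linalg.norm(voxels - s, axis=1)   # voxel -> sensor
            t_bin = ((d_lv + d_vs) / (c * bin_width)).astype(int)
            valid = t_bin < T
            prob[valid] += H[li, si, t_bin[valid]]      # vote for this voxel
    return prob                                         # probabilistic map
```

    The triple loop over voxels, laser points, and sensor points is what makes this a bottleneck; the proposed method instead voxelizes the space-time manifolds directly on the GPU.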

    Query DAGs: A Practical Paradigm for Implementing Belief-Network Inference

    We describe a new paradigm for implementing inference in belief networks, which consists of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks (we show how they can be generated using clustering and conditioning algorithms). The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The intended value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms due to the simplicity of the Q-DAG evaluation algorithm. Interestingly enough, Q-DAGs were found to serve other purposes: simple techniques for reducing Q-DAGs tend to subsume relatively complex optimization techniques for belief-network inference, such as network pruning and computation caching.
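    The evaluation step is simple enough to sketch directly. Below is a hedged illustration of a Q-DAG-style evaluator (the node encoding and names are mine, not the paper's API): nodes hold numbers, evidence symbols, or +/× operations, and a single pass in topological order answers the query in time linear in the DAG size.

```python
# A toy Q-DAG-style evaluator: one linear pass over topologically
# ordered nodes, exactly a standard arithmetic-expression evaluation.
from dataclasses import dataclass

@dataclass
class Node:
    kind: str                 # 'const' | 'evidence' | 'add' | 'mul'
    value: float = 0.0        # used by 'const'
    var: str = ''             # used by 'evidence'
    children: tuple = ()      # indices of earlier nodes only

def evaluate(nodes, evidence):
    """nodes is topologically ordered, so one pass suffices."""
    val = [0.0] * len(nodes)
    for i, n in enumerate(nodes):
        if n.kind == 'const':
            val[i] = n.value
        elif n.kind == 'evidence':
            val[i] = 1.0 if evidence[n.var] else 0.0   # evidence indicator
        elif n.kind == 'add':
            val[i] = sum(val[c] for c in n.children)
        elif n.kind == 'mul':
            v = 1.0
            for c in n.children:
                v *= val[c]
            val[i] = v
    return val[-1]            # query value at the output node

# P(e) for a tiny two-state variable with prior (0.3, 0.7):
qdag = [Node('const', value=0.3), Node('evidence', var='x0'),
        Node('const', value=0.7), Node('evidence', var='x1'),
        Node('mul', children=(0, 1)), Node('mul', children=(2, 3)),
        Node('add', children=(4, 5))]
print(evaluate(qdag, {'x0': True, 'x1': False}))   # -> 0.3
```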

    Improved Deterministic Connectivity in Massively Parallel Computation

    A long line of research about connectivity in the Massively Parallel Computation model has culminated in the seminal works of Andoni et al. [FOCS'18] and Behnezhad et al. [FOCS'19]. They provide a randomized algorithm for low-space MPC with O(m) total space and round complexity O(log D + log log_{m/n} n), conjectured to be optimal, for graphs on n vertices with m edges and diameter D. Surprisingly, a recent result of Coy and Czumaj [STOC'22] shows how to achieve the same deterministically. Unfortunately, however, their algorithm suffers from large local computation time. We present a deterministic connectivity algorithm that matches all the parameters of the randomized algorithm and, in addition, significantly reduces the local computation time to nearly linear. Our derandomization method is based on reducing the amount of randomness needed to allow for a simpler efficient search. While similar randomness-reduction approaches have been used before, our result is not only strikingly simpler, but it is the first to have efficient local computation. This is why we believe it can serve as a starting point for the systematic development of computation-efficient derandomization approaches in low-memory MPC.
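    The following toy simulation is purely illustrative and is not the paper's algorithm: it only mimics the synchronous round structure of MPC with simple minimum-label propagation, which needs O(D) rounds, whereas the algorithms above achieve O(log D + log log_{m/n} n).

```python
# Toy round-based connectivity: every vertex repeatedly adopts the
# minimum label in its closed neighborhood, one synchronous "round"
# per iteration. NOT the low-space MPC algorithm described above.

def connected_components(n, edges):
    label = list(range(n))
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    changed, rounds = True, 0
    while changed:                            # O(D) rounds for diameter D
        changed, rounds = False, rounds + 1
        new = label[:]                        # synchronous update
        for u in range(n):
            m = min([label[u]] + [label[v] for v in adj[u]])
            if m < new[u]:
                new[u], changed = m, True
        label = new
    return label, rounds

labels, rounds = connected_components(6, [(0, 1), (1, 2), (3, 4)])
print(labels, rounds)   # vertices 0-2 share label 0; 3-4 share 3; 5 alone
```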

    Unary Pushdown Automata and Straight-Line Programs

    We consider decision problems for deterministic pushdown automata over a unary alphabet (udpda, for short). Udpda are a simple computation model that accepts exactly the unary regular languages, but can be exponentially more succinct than finite-state automata. We complete the complexity landscape for udpda by showing that emptiness (and thus universality) is P-hard, equivalence and compressed membership problems are P-complete, and inclusion is coNP-complete. Our upper bounds are based on a translation theorem between udpda and straight-line programs over the binary alphabet (SLPs). We show that the characteristic sequence of any udpda can be represented as a pair of SLPs, one for the prefix and one for the lasso, that have size linear in the size of the udpda and can be computed in polynomial time. Hence, decision problems on udpda are reduced to decision problems on SLPs. Conversely, any SLP can be converted in logarithmic space into a udpda, and this forms the basis for our lower bound proofs. We show coNP-hardness of the ordered matching problem for SLPs, from which we derive coNP-hardness for inclusion. In addition, we complete the complexity landscape for unary nondeterministic pushdown automata by showing that the universality problem is Π₂P-hard, using a new class of integer expressions. Our techniques have applications beyond udpda. We show that our results imply Π₂P-completeness for a natural fragment of Presburger arithmetic and coNP lower bounds for compressed matching problems with one-character wildcards.
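    The succinctness gap that the translation theorem exploits is easy to demonstrate. A small sketch (my own toy encoding, not the paper's construction): n + 1 rules describe the string a^(2^n), and properties such as length can be computed directly on the grammar without ever expanding it.

```python
# Straight-line program (SLP) succinctness: exponentially long strings
# from linearly many rules, queried without expansion.

def slp_length(rules):
    """rules[i] is either a terminal character or a pair of earlier rule
    indices; returns the length of the string derived by the last rule."""
    length = []
    for r in rules:
        if isinstance(r, str):
            length.append(1)
        else:
            i, j = r
            length.append(length[i] + length[j])   # |X| = |left| + |right|
    return length[-1]

# X_0 = 'a'; X_i = X_{i-1} X_{i-1}  -- derives a^(2^30) from 31 rules
rules = ['a'] + [(i, i) for i in range(30)]
print(slp_length(rules))   # 2**30 = 1073741824, never materialized
```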

    New Frameworks for Offline and Streaming Coreset Constructions

    A coreset for a set of points is a small subset of weighted points that approximately preserves important properties of the original set. Specifically, if P is a set of points, Q is a set of queries, and f: P × Q → ℝ is a cost function, then a set S ⊆ P with weights w: P → [0, ∞) is an ε-coreset for some parameter ε > 0 if Σ_{s∈S} w(s)f(s,q) is a (1+ε) multiplicative approximation to Σ_{p∈P} f(p,q) for all q ∈ Q. Coresets are used to solve fundamental problems in machine learning under various big data models of computation. Many of the coresets suggested in the recent decade used, or could have used, a general framework for constructing coresets whose size depends quadratically on what is known as the total sensitivity t. In this paper we improve this bound from O(t^2) to O(t log t). Thus our results imply more space-efficient solutions to a number of problems, including projective clustering, k-line clustering, and subspace approximation. Moreover, we generalize the notion of sensitivity sampling to sup-sampling, which supports non-multiplicative approximations, negative cost functions, and more. The main technical result is a generic reduction to the sample complexity of learning a class of functions with bounded VC dimension. We show that obtaining a (ν, α)-sample for this class of functions with appropriate parameters ν and α suffices to achieve space-efficient ε-coresets. Our result implies more efficient coreset constructions for a number of interesting problems in machine learning; we show applications to k-median/k-means, k-line clustering, j-subspace approximation, and the integer (j,k)-projective clustering problem.
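    As a rough illustration of the sensitivity-sampling framework the paper improves (a generic sketch under my own simplified assumptions, not the new O(t log t) construction): points are drawn with probability proportional to an upper bound on their sensitivity and reweighted by the inverse probability, which keeps the estimated cost unbiased for every query.

```python
# Generic sensitivity sampling: high-sensitivity points (those that can
# dominate the cost for some query) are sampled more often, then
# down-weighted so cost estimates stay unbiased.
import numpy as np

def sensitivity_coreset(P, sens, size, rng=np.random.default_rng(0)):
    """P: (n, d) points; sens: per-point sensitivity upper bounds."""
    t = sens.sum()                       # total sensitivity
    prob = sens / t
    idx = rng.choice(len(P), size=size, p=prob)
    weights = 1.0 / (size * prob[idx])   # E[sum w_i f(p_i, q)] = sum f(p, q)
    return P[idx], weights

# Toy use with a crude 1-means sensitivity bound: points far from the
# mean can dominate the cost, so they get higher sampling probability.
P = np.vstack([np.random.default_rng(1).normal(size=(100, 2)),
               [[50.0, 50.0]]])          # one far outlier
d2 = ((P - P.mean(axis=0))**2).sum(axis=1)
sens = 1.0 / len(P) + d2 / d2.sum()      # k-means-style upper bound
S, w = sensitivity_coreset(P, sens, size=20)
print(S.shape, w.sum())                  # weights roughly sum to n
```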

    Regular expression constrained sequence alignment revisited

    Imposing constraints in the form of a finite automaton or a regular expression is an effective way to incorporate additional a priori knowledge into sequence alignment procedures. With this motivation, the Regular Expression Constrained Sequence Alignment Problem was introduced, along with an O(n^2 t^4) time and O(n^2 t^2) space algorithm for solving it, where n is the length of the input strings and t is the number of states in the input non-deterministic automaton. A faster O(n^2 t^3) time algorithm for the same problem was subsequently proposed. In this article, we further speed up the algorithms for Regular Language Constrained Sequence Alignment by reducing their worst-case time complexity bound to O(n^2 t^3 / log t). This is done by establishing an optimal bound on the size of Straight-Line Programs solving the maxima computation subproblem of the basic dynamic programming algorithm. We also study another solution based on a Steiner tree computation. While it does not improve the worst case, our simulations show that both approaches are efficient in practice, especially when the input automata are dense.
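    To make the role of the automaton states concrete, here is a simplified product dynamic program I wrote for illustration (a deterministic automaton over alignment columns, not the article's optimized O(n^2 t^3 / log t) algorithm): the usual alignment table gains a state dimension, and each alignment column drives a transition on the pair of symbols it aligns.

```python
# Constrained alignment DP over states (i, j, q): alignment position
# (i, j) plus automaton state q. Each column (match/mismatch or gap)
# advances both the alignment and the automaton.
NEG = float('-inf')

def constrained_align(A, B, states, start, accept, delta, match=1,
                      mismatch=-1, gap=-2):
    """delta maps (state, (x, y)) -> state, where x or y may be '-'.
    Returns the best score of an alignment whose column sequence is
    accepted by the automaton, or -inf if none exists."""
    n, m = len(A), len(B)
    D = {(0, 0, start): 0}
    for i in range(n + 1):
        for j in range(m + 1):
            for q in states:
                cur = D.get((i, j, q), NEG)
                if cur == NEG:
                    continue
                if i < n and j < m:              # match/mismatch column
                    q2 = delta.get((q, (A[i], B[j])))
                    if q2 is not None:
                        s = cur + (match if A[i] == B[j] else mismatch)
                        key = (i + 1, j + 1, q2)
                        D[key] = max(D.get(key, NEG), s)
                if i < n:                        # gap in B
                    q2 = delta.get((q, (A[i], '-')))
                    if q2 is not None:
                        key = (i + 1, j, q2)
                        D[key] = max(D.get(key, NEG), cur + gap)
                if j < m:                        # gap in A
                    q2 = delta.get((q, ('-', B[j])))
                    if q2 is not None:
                        key = (i, j + 1, q2)
                        D[key] = max(D.get(key, NEG), cur + gap)
    return max((D.get((n, m, q), NEG) for q in accept), default=NEG)
```

    A deterministic transition function keeps the sketch short; the problem as stated uses non-deterministic automata, and handling pairs or sets of states is roughly where the t^3 and t^4 factors above come from.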

    A Low Complexity Discrete Radiosity Method

    Rather than using Monte Carlo sampling techniques or patch projections to compute radiosity, it is possible to use a discretization of a scene into voxels and perform some discrete geometry calculus to quickly compute visibility information. In such a framework, the radiosity method may be as precise as a patch-based radiosity using hemicube computation for form factors, but it lowers the overall theoretical complexity to O(N log N) + O(N), where the O(N) term is largely dominant in practice. Hence, the apparent complexity is linear in both time and space with respect to the number of voxels in the scene. This method does not require the storage of pre-computed form factors, since they are computed on the fly in an efficient way. The algorithm described does not use 3D discrete line traversal and is not similar to simple ray tracing. In its present form, the voxel-based radiosity equation assumes the ideal diffuse case and uses solid angles similarly to the hemicube.
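    For context, the gathering iteration at the heart of most radiosity solvers can be sketched as follows (a generic Jacobi-style relaxation with a stored form-factor matrix; the article instead computes these factors on the fly from voxel solid angles):

```python
# Iterative radiosity gathering: relax B = E + rho * (F @ B) until the
# element radiosities settle (converges when reflectivities < 1).
import numpy as np

def solve_radiosity(E, rho, F, iters=100):
    """E: emitted radiosity, rho: reflectivity, F: form-factor matrix."""
    B = E.copy()
    for _ in range(iters):
        B = E + rho * (F @ B)     # gather light from every other element
    return B

# Two facing elements that each see only the other:
E = np.array([1.0, 0.0])          # element 0 emits, element 1 does not
rho = np.array([0.5, 0.5])
F = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(solve_radiosity(E, rho, F)) # -> about [1.333, 0.667]
```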