
    Revisiting Random Points: Combinatorial Complexity and Algorithms

    Consider a set $P$ of $n$ points picked uniformly and independently from $[0,1]^d$ for a constant dimension $d$ -- such a point set is extremely well behaved in many aspects. For example, for a fixed $r \in [0,1]$, we prove a new concentration result on the number of pairs of points of $P$ at a distance at most $r$ -- we show that this number lies in an interval that contains only $O(n \log n)$ numbers. We also present simple linear time algorithms to construct the Delaunay triangulation, Euclidean MST, and the convex hull of the points of $P$. The MST algorithm is an interesting divide-and-conquer algorithm which might be of independent interest. We also provide a new proof that the expected complexity of the Delaunay triangulation of $P$ is linear -- the new proof is simpler and more direct, and might be of independent interest. Finally, we present a simple $\tilde{O}(n^{4/3})$ time algorithm for the distance selection problem for $d = 2$.
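
    A minimal sketch, not taken from the paper, of why distance-based statistics of such random points are cheap to compute: bucketing the points into a grid of cell side $r$ leaves $O(1)$ points per cell in expectation, so all pairs at distance at most $r$ can be counted in near-linear time. The function name and the restriction to $d = 2$ are illustrative assumptions.

    ```python
    # Illustrative sketch (not the paper's algorithm): count pairs of 2D points
    # at Euclidean distance <= r using a uniform grid with cell side r.
    import random
    import math
    from collections import defaultdict

    def count_close_pairs(points, r):
        """Count unordered pairs of 2D points at distance <= r."""
        cells = defaultdict(list)
        for p in points:
            cells[(int(p[0] // r), int(p[1] // r))].append(p)

        count = 0
        for (cx, cy), bucket in cells.items():
            # Pairs inside the same cell.
            for i in range(len(bucket)):
                for j in range(i + 1, len(bucket)):
                    if math.dist(bucket[i], bucket[j]) <= r:
                        count += 1
            # Pairs in "forward" neighbouring cells, so each pair is seen once.
            for dx, dy in [(1, -1), (1, 0), (1, 1), (0, 1)]:
                for p in bucket:
                    for q in cells.get((cx + dx, cy + dy), ()):
                        if math.dist(p, q) <= r:
                            count += 1
        return count

    if __name__ == "__main__":
        n, r = 10_000, 0.01
        pts = [(random.random(), random.random()) for _ in range(n)]
        print(count_close_pairs(pts, r))
    ```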

    Low-rank updates and a divide-and-conquer method for linear matrix equations

    Linear matrix equations, such as the Sylvester and Lyapunov equations, play an important role in various applications, including the stability analysis and dimensionality reduction of linear dynamical control systems and the solution of partial differential equations. In this work, we present and analyze a new algorithm, based on tensorized Krylov subspaces, for quickly updating the solution of such a matrix equation when its coefficients undergo low-rank changes. We demonstrate how our algorithm can be utilized to accelerate the Newton method for solving continuous-time algebraic Riccati equations. Our algorithm also forms the basis of a new divide-and-conquer approach for linear matrix equations with coefficients that feature hierarchical low-rank structure, such as HODLR, HSS, and banded matrices. Numerical experiments demonstrate the advantages of divide-and-conquer over existing approaches, in terms of computational time and memory consumption.
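
    A minimal sketch of the low-rank-update structure the abstract describes, under the simplifying assumption that only the right-hand side changes, and using dense solves for clarity; the paper's method instead approximates the correction in tensorized Krylov subspaces and also covers low-rank changes of the coefficient matrices. The matrix sizes and SciPy-based setup below are illustrative.

    ```python
    # Illustrative sketch: updating the solution of a Sylvester equation
    # A X + X B = C when the right-hand side changes by a low-rank term
    # C -> C + U V^T.  By linearity the correction dX solves
    # A dX + dX B = U V^T, so only the (low-rank) correction equation is solved.
    import numpy as np
    from scipy.linalg import solve_sylvester

    rng = np.random.default_rng(0)
    n, m, k = 200, 150, 3

    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well conditioned for the demo
    B = rng.standard_normal((m, m)) + m * np.eye(m)
    C = rng.standard_normal((n, m))

    X = solve_sylvester(A, B, C)                      # baseline solution

    U = rng.standard_normal((n, k))                   # low-rank change of the RHS
    V = rng.standard_normal((m, k))

    dX = solve_sylvester(A, B, U @ V.T)               # correction equation
    X_new = X + dX

    # Check against solving the updated equation from scratch.
    X_ref = solve_sylvester(A, B, C + U @ V.T)
    print(np.linalg.norm(X_new - X_ref) / np.linalg.norm(X_ref))
    ```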

    Online Updating of Statistical Inference in the Big Data Setting

    We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
    Comment: Submitted to Technometrics
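
    A minimal sketch of the online-updating idea for the linear model case: the least-squares estimate depends on the data only through X'X and X'y, so these statistics can be accumulated batch by batch without revisiting historical data. The class below is an illustrative simplification, not the paper's estimator, which also addresses rank-deficient batches and the estimating-equation setting.

    ```python
    # Illustrative sketch: online updating of an OLS estimate from streaming batches.
    import numpy as np

    class OnlineOLS:
        def __init__(self, p):
            self.xtx = np.zeros((p, p))   # accumulated X'X
            self.xty = np.zeros(p)        # accumulated X'y

        def update(self, X, y):
            """Fold a new batch (X, y) into the accumulated statistics."""
            self.xtx += X.T @ X
            self.xty += X.T @ y

        def coef(self):
            # Pseudo-inverse guards against rank deficiency in early batches.
            return np.linalg.pinv(self.xtx) @ self.xty

    rng = np.random.default_rng(1)
    beta = np.array([1.0, -2.0, 0.5])
    model = OnlineOLS(p=3)
    for _ in range(100):                  # 100 streaming batches
        X = rng.standard_normal((50, 3))
        y = X @ beta + 0.1 * rng.standard_normal(50)
        model.update(X, y)
    print(model.coef())                   # close to beta
    ```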

    The efficiencies of generating cluster states with weak non-linearities

    We propose a scalable approach to building cluster states of matter qubits using coherent states of light. Recent work on the subject relies on the use of single photonic qubits in the measurement process. These schemes can be made robust to detector loss, spontaneous emission and cavity mismatching, but as a consequence the overhead costs grow rapidly, in particular when considering single photon loss. In contrast, our approach uses continuous variables and highly efficient homodyne measurements. We present a two-qubit scheme, with a simple bucket measurement system yielding an entangling operation with success probability 1/2. Then we extend this to a three-qubit interaction, increasing this probability to 3/4. We discuss the important issues of the overhead cost and the time scaling. This leads to a "no-measurement" approach to building cluster states, making use of geometric phases in phase space.
    Comment: 21 pages, to appear in a special issue of New J. Phys. on "Measurement-Based Quantum Information Processing"
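
    A back-of-the-envelope reading of the quoted success probabilities, under the assumption (not stated in the abstract) that a failed entangling attempt can simply be repeated: the number of attempts per successful operation is then geometric with mean 1/p.

    ```latex
    % Expected attempts per successful entangling operation, assuming
    % independent repeat-until-success attempts with success probability p.
    \[
      \mathbb{E}[\text{attempts}] = \frac{1}{p}, \qquad
      \frac{1}{p}\Big|_{p=1/2} = 2, \qquad
      \frac{1}{p}\Big|_{p=3/4} = \tfrac{4}{3}.
    \]
    ```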