    Reducing “Structure from Motion”: a general framework for dynamic vision. 2. Implementation and experimental assessment

    Get PDF
    For Part 1, see ibid., pp. 933-942 (1998). A number of methods have been proposed in the literature for estimating scene structure and ego-motion from a sequence of images using dynamical models. Although all methods may be derived from a “natural” dynamical model within a unified framework, from an engineering perspective there are a number of trade-offs that lead to different strategies depending upon the application and the goals one is targeting. We characterize and compare the properties of each model so that an engineer may choose the one best suited to a specific application. We analyze the properties of filters derived from each dynamical model under a variety of experimental conditions and assess the accuracy of the estimates, their robustness to measurement noise, their sensitivity to initial conditions and visual angle, the effects of the bas-relief ambiguity and occlusions, and their dependence upon the number of image measurements and the sampling rate.

    Reducing "Structure From Motion": a General Framework for Dynamic Vision - Part 2: Experimental Evaluation

    Get PDF
    A number of methods have been proposed in the literature for estimating scene structure and ego-motion from a sequence of images using dynamical models. Although all methods may be derived from a "natural" dynamical model within a unified framework, from an engineering perspective there are a number of trade-offs that lead to different strategies depending upon the specific application and the goals one is targeting. Which one is the winning strategy? In this paper we analyze the properties of the dynamical models that originate from each strategy under a variety of experimental conditions. For each model we assess the accuracy of the estimates, their robustness to measurement noise, their sensitivity to initial conditions and visual angle, the effects of the bas-relief ambiguity and occlusions, and their dependence upon the number of image measurements and the sampling rate.
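
    The two listings above compare filters built from different dynamical models but do not specify them; as a rough, hedged illustration of the kind of estimator being evaluated, the sketch below implements a generic extended Kalman filter predict/update cycle over a state that could hold point depths stacked with ego-motion parameters. The state layout and the function arguments (a dynamics map f, a projection-style measurement map h, and their Jacobians) are assumptions made for illustration, not the authors' specific models.

```python
# Generic EKF predict/update skeleton of the kind used for structure-from-motion
# state estimation. The state layout (point depths + ego-motion) and the
# dynamics/measurement models are illustrative assumptions, not the particular
# dynamical models compared in the paper.
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x : (n,) state estimate (e.g., point depths stacked with ego-motion)
    P : (n, n) state covariance
    z : (m,) measurement (e.g., tracked image-point coordinates)
    f, h : dynamics and measurement functions
    F_jac, H_jac : their Jacobians, evaluated at the current estimate
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                 # innovation
    S = H @ P_pred @ H.T + R          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    # Tiny smoke test with a 1D random-walk state observed directly.
    f = lambda x: x
    h = lambda x: x
    jac = lambda x: np.eye(1)
    x, P = np.array([0.0]), np.eye(1)
    for z in [0.9, 1.1, 1.0]:
        x, P = ekf_step(x, P, np.array([z]), f, h, jac, jac,
                        Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
    print("estimate:", x)
```

    Each competing strategy in the paper would correspond to a different choice of state parametrization and of the dynamics/measurement pair passed in, which is where the trade-offs under evaluation arise.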

    Approximate unitary t-designs by short random quantum circuits using nearest-neighbor and long-range gates

    Full text link
    We prove that $\mathrm{poly}(t)\cdot n^{1/D}$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a $D$-dimensional lattice with $n$ qudits are approximate $t$-designs in various measures. These include the "monomial" measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $\mathrm{poly}(t)\cdot n$ due to Brandao-Harrow-Horodecki (BHH) for $D=1$. We also improve the "scrambling" and "decoupling" bounds for spatially local random circuits due to Brown and Fawzi. One consequence of our result is that, assuming the polynomial hierarchy (PH) is infinite and that certain counting problems are $\#\mathrm{P}$-hard on average, sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under the assumption that PH is infinite. However, showing the hardness of approximate sampling with this strategy requires that the quantum circuits have a property called "anti-concentration", meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Thus our result improves the depth required for this level of anti-concentration from linear to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent proposal by the Google Quantum AI group to perform such a sampling task with 49 qubits on a two-dimensional lattice, and it confirms their conjecture that $O(\sqrt{n})$ depth suffices for anti-concentration. We also prove that anti-concentration is possible in depth $O(\log(n)\log\log(n))$ using a different model.
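
    As a concrete, small-scale picture of the circuit family in the $D=1$ case, the sketch below assembles a brickwork circuit of Haar-random two-qubit nearest-neighbor gates on a line of qubits and estimates the collision probability of the output distribution. Restricting to qubits (rather than general qudits), using dense state vectors, and the particular sizes and seed are illustrative assumptions; the depth bounds and the design/anti-concentration guarantees are the paper's claims and are not verified by this toy script.

```python
# Brickwork random circuit on a 1D line of qubits: alternating layers of
# Haar-random two-qubit gates on nearest-neighbor pairs. This only illustrates
# the circuit family; it does not certify the t-design or anti-concentration
# properties proved in the paper.
import numpy as np

def haar_unitary(dim, rng):
    """Haar-random unitary via QR of a complex Ginibre matrix."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(a)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases            # rescale columns so the distribution is Haar

def apply_two_qubit(state, gate, i, n):
    """Apply a 4x4 gate to qubits (i, i+1) of an n-qubit state vector."""
    full = np.kron(np.kron(np.eye(2**i), gate), np.eye(2**(n - i - 2)))
    return full @ state

def brickwork_circuit_state(n, depth, rng):
    """Run a depth-`depth` brickwork circuit on |0...0> and return the state."""
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0
    for layer in range(depth):
        start = layer % 2        # even layers: (0,1),(2,3),...; odd layers: (1,2),(3,4),...
        for i in range(start, n - 1, 2):
            state = apply_two_qubit(state, haar_unitary(4, rng), i, n)
    return state

rng = np.random.default_rng(0)
probs = np.abs(brickwork_circuit_state(n=8, depth=12, rng=rng))**2
# Anti-concentration roughly means this stays close to ~2/2^n, as for a 2-design.
print("collision probability:", np.sum(probs**2))
```

    For $D>1$ the same layered construction runs over a lattice; the paper's contribution is that sub-linear depth in $n$ already suffices for the design and anti-concentration properties.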

    Sampling random graph homomorphisms and applications to network data analysis

    Full text link
    A graph homomorphism is a map between two graphs that preserves adjacency relations. We consider the problem of sampling a random graph homomorphism from a graph $F$ into a large network $\mathcal{G}$. We propose two complementary MCMC algorithms for sampling random graph homomorphisms and establish bounds on their mixing times and on the concentration of their time averages. Based on our sampling algorithms, we propose a novel framework for network data analysis that circumvents some of the drawbacks of methods based on independent and neighborhood sampling. Various time averages of the MCMC trajectory give us various computable observables, including well-known ones such as the homomorphism density and the average clustering coefficient, as well as their generalizations. Furthermore, we show that these network observables are stable with respect to a suitably renormalized cut distance between networks. We provide various examples and simulations demonstrating our framework on synthetic networks. We also apply our framework to network clustering and classification problems using the Facebook100 dataset and Word Adjacency Networks of a set of classic novels. Comment: 51 pages, 33 figures, 2 tables.
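
    The abstract does not spell out the two chains, so the sketch below shows one standard way such a sampler can be realized: a Glauber-style update that resamples the image of a single vertex of $F$ uniformly among vertices of $\mathcal{G}$ compatible with the images of its neighbors, together with a time average of an observable along the trajectory. The update rule, helper names, and toy graphs are assumptions for illustration and need not match the authors' two algorithms or their mixing-time analysis.

```python
# Glauber-style MCMC over homomorphisms x : V(F) -> V(G): repeatedly pick a
# vertex of F and resample its image uniformly among vertices of G compatible
# with the current images of its F-neighbors. One standard chain of this type;
# the paper's two algorithms and mixing-time bounds are not reproduced here.
import random

def glauber_step(x, F_adj, G_adj, rng):
    """One update of the chain. x maps vertices of F to vertices of G."""
    v = rng.choice(list(F_adj))
    # Candidate images: vertices of G adjacent (in G) to every neighbor's image.
    candidates = set(G_adj)                 # start from all vertices of G
    for u in F_adj[v]:
        candidates &= G_adj[x[u]]
    if candidates:
        x[v] = rng.choice(sorted(candidates))
    return x

def time_average_observable(x0, F_adj, G_adj, steps, observable, rng):
    """Time-average an observable of the homomorphism along the chain."""
    x = dict(x0)
    total = 0.0
    for _ in range(steps):
        x = glauber_step(x, F_adj, G_adj, rng)
        total += observable(x)
    return total / steps

# Toy example: F is a triangle, G is a 5-cycle with the chord 0-2 (adjacency as sets).
F_adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
G_adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
rng = random.Random(0)
x0 = {0: 0, 1: 1, 2: 2}                     # a valid starting homomorphism of the triangle
avg = time_average_observable(x0, F_adj, G_adj, 10_000,
                              observable=lambda x: len(set(x.values())) == 3, rng=rng)
print("fraction of injective samples:", avg)
```

    In the proposed framework, such time averages of suitable observables play the role of computable network statistics, for example homomorphism densities and clustering-type quantities.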