
    Efficient Simulation for Branching Linear Recursions

    We consider a linear recursion of the form
    $$R^{(k+1)} \stackrel{\mathcal{D}}{=} \sum_{i=1}^{N} C_i R^{(k)}_i + Q,$$
    where $(Q, N, C_1, C_2, \dots)$ is a real-valued random vector with $N \in \mathbb{N} = \{0, 1, 2, \dots\}$, $\{R^{(k)}_i\}_{i \in \mathbb{N}}$ is a sequence of i.i.d. copies of $R^{(k)}$, independent of $(Q, N, C_1, C_2, \dots)$, and $\stackrel{\mathcal{D}}{=}$ denotes equality in distribution. For suitable vectors $(Q, N, C_1, C_2, \dots)$, and provided the initial distribution of $R^{(0)}$ is well behaved, the process $R^{(k)}$ is known to converge to the endogenous solution of the corresponding stochastic fixed-point equation, which appears in the analysis of information ranking algorithms, e.g., PageRank, and in the complexity analysis of divide-and-conquer algorithms, e.g., Quicksort. Naive Monte Carlo simulation of $R^{(k)}$ based on the branching recursion has complexity that grows exponentially in $k$, hence the need for more efficient methods. In this paper we propose an iterative bootstrap algorithm that has linear complexity and can be used to approximately sample $R^{(k)}$, and we show the consistency of estimators based on the proposed algorithm.
    Comment: submitted to WSC 201
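    As a rough illustration of the pool-based (bootstrap) iteration described above, the sketch below maintains a pool of m approximate samples of $R^{(k)}$ and builds each sample of $R^{(k+1)}$ by drawing a fresh vector $(Q, N, C_1, \dots, C_N)$ and resampling the needed copies of $R^{(k)}$ from the pool with replacement. The distributions in draw_generic_vector and the pool size m are illustrative assumptions, not choices made in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)


    def draw_generic_vector():
        # Placeholder distributions for (Q, N, C_1, ..., C_N); the paper does not
        # fix these, so this choice is purely illustrative.
        q = rng.uniform(0.0, 1.0)          # additive term Q
        n = rng.poisson(2)                 # number of offspring N
        c = rng.uniform(0.0, 0.3, size=n)  # weights C_1, ..., C_N
        return q, c


    def bootstrap_iterate(pool):
        """One bootstrap iteration: approximate samples of R^(k+1) built from a
        pool of approximate samples of R^(k), reusing pool members (drawn with
        replacement) as the i.i.d. copies R_1^(k), ..., R_N^(k) instead of
        expanding the full branching tree."""
        m = len(pool)
        new_pool = np.empty(m)
        for j in range(m):
            q, c = draw_generic_vector()
            r = pool[rng.integers(0, m, size=len(c))]
            new_pool[j] = q + np.dot(c, r)
        return new_pool


    # k iterations cost O(k * m * E[N]) work in total, i.e. linear in k,
    # versus the exponential cost of simulating the full weighted branching tree.
    m, k = 10_000, 20
    pool = np.zeros(m)  # start from R^(0) = 0
    for _ in range(k):
        pool = bootstrap_iterate(pool)

    print("approximate E[R^(k)]:", pool.mean())
    ```

    The resulting pool can then be used as approximate i.i.d. samples of $R^{(k)}$ for estimating expectations or tail probabilities, which is the setting in which the paper studies consistency of the estimators.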

    Distributed Low-rank Subspace Segmentation

    Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. Low-Rank Representation (LRR), a convex formulation of the subspace segmentation problem, is provably and empirically accurate on small problems but does not scale to the massive sizes of modern vision datasets. Moreover, past work aimed at scaling up low-rank matrix factorization is not applicable to LRR given its non-decomposable constraints. In this work, we propose a novel divide-and-conquer algorithm for large-scale subspace segmentation that can cope with LRR's non-decomposable constraints and maintain LRR's strong recovery guarantees. This has immediate implications for the scalability of subspace segmentation, which we demonstrate on a benchmark face recognition dataset and in simulations. We then introduce novel applications of LRR-based subspace segmentation to large-scale semi-supervised learning for multimedia event detection, concept detection, and image tagging. In each case, we obtain state-of-the-art results and order-of-magnitude speedups.
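    For context, the LRR program referred to above is commonly stated in the literature as the convex problem below; the exact variant and solver used in the large-scale setting of this paper may differ.
    $$\min_{Z,\,E}\ \|Z\|_* + \lambda\,\|E\|_{2,1} \qquad \text{subject to} \qquad X = XZ + E,$$
    where $X$ stacks the (possibly corrupted) data points as columns, $\|Z\|_*$ is the nuclear norm encouraging a low-rank representation, $\|E\|_{2,1}$ penalizes column-wise corruptions, and the segmentation is typically obtained by spectral clustering on the affinity matrix $|Z| + |Z^\top|$. The constraint $X = XZ + E$ couples every column of $Z$ through the full data matrix, which is the non-decomposable structure that prevents naive row- or column-wise splitting of the problem.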