14,860 research outputs found

    Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation

    Volterra and polynomial regression models play a major role in nonlinear system identification and inference tasks. Exciting applications ranging from neuroscience to genome-wide association analysis build on these models with the additional requirement of parsimony. This requirement has high interpretative value, but unfortunately it cannot be met by least-squares-based or kernel regression methods. To this end, compressed sampling (CS) approaches, already successful in linear regression settings, can offer a viable alternative. The viability of CS for sparse Volterra and polynomial models is the core theme of this work. A common sparse regression task is initially posed for the two models. Building on (weighted) Lasso-based schemes, an adaptive RLS-type algorithm is developed for sparse polynomial regressions. The identifiability of polynomial models is critically challenged by dimensionality. However, following the CS principle, when these models are sparse, they can be recovered from far fewer measurements. To quantify the sufficient number of measurements for a given level of sparsity, restricted isometry properties (RIP) are investigated in commonly met polynomial regression settings, generalizing known results for their linear counterparts. The merits of the novel (weighted) adaptive CS algorithms for sparse polynomial modeling are verified through synthetic as well as real data tests for genotype-phenotype analysis. Comment: 20 pages, to appear in IEEE Trans. on Signal Processing.
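
    To make the posed sparse regression task concrete, here is a minimal sketch (not the paper's weighted, adaptive RLS-type algorithm) that expands the inputs into second-order polynomial/Volterra-style features and fits a sparse coefficient vector with an off-the-shelf Lasso; the dimensions, sparsity level, noise, and regularization weight below are illustrative assumptions.

    # Minimal sketch: sparse second-order polynomial regression via a plain Lasso.
    # Illustrative only -- not the paper's weighted/adaptive RLS-type algorithm.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_samples, n_inputs = 200, 8                 # assumed problem size
    X = rng.standard_normal((n_samples, n_inputs))

    # Second-order polynomial (Volterra-like) feature expansion:
    # linear terms plus all pairwise/quadratic products.
    Phi = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

    # Ground-truth model that is sparse in the expanded feature space.
    n_features = Phi.shape[1]
    true_coef = np.zeros(n_features)
    support = rng.choice(n_features, size=5, replace=False)   # 5-sparse, assumed
    true_coef[support] = rng.standard_normal(support.size)
    y = Phi @ true_coef + 0.01 * rng.standard_normal(n_samples)

    # The l1 penalty promotes parsimony; alpha is an illustrative choice.
    model = Lasso(alpha=0.01, max_iter=10_000).fit(Phi, y)
    recovered = np.flatnonzero(np.abs(model.coef_) > 1e-3)
    print("true support:     ", np.sort(support))
    print("recovered support:", np.sort(recovered))

    Here the expanded regressor matrix Phi plays the role of the measurement matrix; how many rows it needs for reliable recovery at a given sparsity level is the question the abstract's RIP analysis addresses.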

    On the Power of Adaptivity in Sparse Recovery

    The goal of (stable) sparse recovery is to recover a k-sparse approximation x* of a vector x from linear measurements of x. Specifically, the goal is to recover x* such that ||x - x*||_p <= C min_{k-sparse x'} ||x - x'||_q for some constant C and norm parameters p and q. It is known that, for p = q = 1 or p = q = 2, this task can be accomplished using m = O(k log(n/k)) non-adaptive measurements [CRT06] and that this bound is tight [DIPW10, FPRU10, PW11]. In this paper we show that if one is allowed to perform measurements that are adaptive, then the number of measurements can be considerably reduced. Specifically, for C = 1 + eps and p = q = 2 we show:
    - A scheme with m = O((1/eps) k log log(n eps/k)) measurements that uses O(log* k log log(n eps/k)) rounds. This is a significant improvement over the best possible non-adaptive bound.
    - A scheme with m = O((1/eps) k log(k/eps) + k log(n/k)) measurements that uses two rounds. This improves over the best possible non-adaptive bound.
    To the best of our knowledge, these are the first results of this type. As an independent application, we show how to solve the problem of finding a duplicate in a data stream of n items drawn from {1, 2, ..., n-1} using O(log n) bits of space and O(log log n) passes, improving over the best possible space complexity achievable using a single pass. Comment: 18 pages; appearing at FOCS 2011.
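
    To illustrate what adaptivity buys in this setting, the toy sketch below (an assumed example, not either of the paper's schemes) recovers an exactly 1-sparse, noiseless vector by binary search: each round takes a single linear measurement whose support is chosen based on the outcomes of the previous rounds, so roughly log2(n) adaptive measurements suffice, whereas a non-adaptive strategy must fix all measurement vectors up front.

    # Toy illustration of adaptive linear measurements (not the paper's scheme):
    # binary search for the support of an exactly 1-sparse, noiseless vector x.
    # Each "measurement" is an inner product <a, x>, and the support of a is
    # chosen from earlier results, i.e. the measurements are adaptive.
    import numpy as np

    def measure(x, indices):
        """One linear measurement: the sum of x over the chosen indices."""
        a = np.zeros_like(x)
        a[indices] = 1.0
        return float(a @ x)

    def recover_1_sparse(x):
        """Locate the single nonzero entry of x in about log2(len(x)) rounds."""
        lo, hi = 0, len(x)                       # candidate interval [lo, hi)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            # Measure the left half; a nonzero reading means the spike is there.
            if measure(x, np.arange(lo, mid)) != 0.0:
                hi = mid
            else:
                lo = mid
        return lo, measure(x, np.arange(lo, lo + 1))

    if __name__ == "__main__":
        n = 1024                                 # assumed dimension
        x = np.zeros(n)
        x[317] = 2.5                             # the single spike
        print(recover_1_sparse(x))               # -> (317, 2.5)

    The paper's schemes handle general k-sparse approximation with the C = 1 + eps guarantee in few rounds; the sketch only conveys the key point that later measurements may depend on earlier outcomes.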