
    A Distributed Tracking Algorithm for Reconstruction of Graph Signals

    The rapid development of signal processing on graphs provides a new perspective for processing large-scale data associated with irregular domains. In many practical applications, massive data sets must be handled over complex networks in which most nodes have limited computing power, so designing efficient distributed algorithms is critical. This paper focuses on the distributed reconstruction of a time-varying bandlimited graph signal from observations sampled at a subset of selected nodes. A distributed least square reconstruction (DLSR) algorithm is proposed to recover the unknown signal iteratively by allowing neighboring nodes to communicate with one another and make fast updates. DLSR uses a decay scheme to annihilate the out-of-band energy that arises during reconstruction, which is inevitably caused by the transmission delay in distributed systems. A proof of convergence and error bounds for DLSR are provided, showing that the algorithm can track time-varying graph signals and perfectly reconstruct time-invariant signals. The DLSR algorithm is evaluated numerically on synthetic data and real-world sensor network data, verifying its ability to track slowly time-varying graph signals. Comment: 30 pages, 9 figures, 2 tables, journal paper
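
    The iteration underlying this kind of reconstruction can be sketched compactly. The snippet below is a minimal, centralized version of iterative least-squares reconstruction of a bandlimited graph signal from node samples (correct at the sampled nodes, then re-project onto the bandlimited subspace); it is not the paper's distributed DLSR with its decay scheme, and every size and name in it (N, K, mu, the random graph) is an illustrative assumption.

```python
import numpy as np

# --- toy setup (all names and sizes here are illustrative, not from the paper) ---
rng = np.random.default_rng(0)
N, K = 60, 8                        # number of nodes, bandwidth (low-frequency modes)

# random symmetric adjacency matrix and combinatorial graph Laplacian
A = (rng.random((N, N)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A

# bandlimited ground truth: a combination of the K lowest Laplacian eigenvectors
_, U = np.linalg.eigh(L)
U_K = U[:, :K]
x_true = U_K @ rng.standard_normal(K)

# noiseless observations on a sampled node subset S
S = rng.choice(N, size=30, replace=False)
D = np.zeros(N); D[S] = 1.0         # sampling mask
y = D * x_true                      # zeros off the sampled set

# iterative least-squares reconstruction:
#   x <- P_B(x + mu * D * (y - x)),  with P_B the projection onto the bandlimited subspace
P_B = U_K @ U_K.T
x = np.zeros(N)
mu = 1.0
for _ in range(200):
    x = P_B @ (x + mu * D * (y - x))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```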

    Estimation of Markov Chain via Rank-Constrained Likelihood

    This paper studies the estimation of low-rank Markov chains from empirical trajectories. We propose a non-convex estimator based on rank-constrained likelihood maximization. Statistical upper bounds are provided for the Kullback-Leibler divergence and the $\ell_2$ risk between the estimator and the true transition matrix. The estimator reveals a compressed state space of the Markov chain. We also develop a novel DC (difference of convex functions) programming algorithm to tackle the rank-constrained non-smooth optimization problem. Convergence results are established. Experiments show that the proposed estimator achieves better empirical performance than other popular approaches. Comment: Accepted at ICML 201
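
    As a point of reference for what a low-rank transition-matrix estimate looks like, here is a hedged sketch that forms the empirical MLE from a trajectory, truncates it to rank r via SVD, and crudely re-projects to row-stochastic form. This spectral baseline is not the paper's rank-constrained likelihood maximization or its DC programming algorithm, and all names and data in it are illustrative.

```python
import numpy as np

def lowrank_transition_estimate(trajectory, n_states, rank):
    """Crude low-rank estimate of a transition matrix from one trajectory.

    Spectral-truncation baseline (empirical MLE + rank-r SVD, then a rough
    projection back to row-stochastic form); not the paper's DC algorithm.
    """
    # empirical transition counts and row-normalized MLE
    C = np.zeros((n_states, n_states))
    for s, s_next in zip(trajectory[:-1], trajectory[1:]):
        C[s, s_next] += 1.0
    P_emp = C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)

    # best rank-r approximation in Frobenius norm
    U, sing, Vt = np.linalg.svd(P_emp, full_matrices=False)
    P_low = (U[:, :rank] * sing[:rank]) @ Vt[:rank]

    # crude repair so each row is again a probability distribution
    P_low = np.clip(P_low, 0.0, None)
    P_low /= np.maximum(P_low.sum(axis=1, keepdims=True), 1e-12)
    return P_low

# usage on a synthetic rank-2 chain with 6 states (illustrative only)
rng = np.random.default_rng(1)
P_true = np.kron(np.array([[0.9, 0.1], [0.1, 0.9]]), np.full((3, 3), 1 / 3))
traj = [0]
for _ in range(20000):
    traj.append(rng.choice(6, p=P_true[traj[-1]]))
P_hat = lowrank_transition_estimate(traj, n_states=6, rank=2)
print(np.round(P_hat, 2))
```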

    Waring-Goldbach problem in short intervals

    Let $k\geq 2$ and $s$ be positive integers, and let $\theta\in(0,1)$ be a real number. In this paper, we establish that if $s>k(k+1)$ and $\theta>0.55$, then every sufficiently large natural number $n$, subject to certain congruence conditions, can be written as $n=p_1^k+\cdots+p_s^k$, where the $p_i$ $(1\leq i\leq s)$ are primes in the interval $\left((\frac{n}{s})^{\frac{1}{k}}-n^{\frac{\theta}{k}},\,(\frac{n}{s})^{\frac{1}{k}}+n^{\frac{\theta}{k}}\right]$. The second result of this paper shows that if $s>\frac{k(k+1)}{2}$ and $\theta>0.55$, then almost all integers $n$, subject to certain congruence conditions, have the above representation. Comment: 18 pages
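
    To make the statement concrete, the first theorem specialized to $k=2$ and the smallest admissible $s$ (an integer $s>k(k+1)=6$, hence $s=7$) reads as follows; the congruence conditions and the range $\theta>0.55$ are exactly as in the abstract above.

```latex
% First theorem with k = 2, s = 7 (any fixed \theta > 0.55):
% every sufficiently large n satisfying the relevant congruence conditions admits
n = p_1^2 + p_2^2 + \cdots + p_7^2,
\qquad
p_i \in \Bigl( \bigl(\tfrac{n}{7}\bigr)^{1/2} - n^{\theta/2},\;
               \bigl(\tfrac{n}{7}\bigr)^{1/2} + n^{\theta/2} \Bigr],
\quad 1 \le i \le 7 .
```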

    Accelerating Stochastic Composition Optimization

    Consider the stochastic composition optimization problem, where the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, the accelerated stochastic compositional proximal gradient (ASC-PG) method, which updates based on queries to the sampling oracle using two different timescales. ASC-PG is the first proximal gradient method for the stochastic composition problem that can handle a nonsmooth regularization penalty. We show that ASC-PG exhibits faster convergence than the best known algorithms and achieves the optimal sample-error complexity in several important special cases. We further demonstrate the application of ASC-PG to reinforcement learning and conduct numerical experiments.
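
    To illustrate the two-timescale structure, here is a minimal sketch of a stochastic compositional proximal gradient update on a toy problem, minimizing 0.5*||E_w[A_w x - b_w]||^2 + lam*||x||_1: a fast-moving auxiliary variable tracks the inner expectation while the slower x-update takes a proximal (soft-thresholding) step. This follows the generic compositional-gradient template rather than ASC-PG's exact accelerated updates, and all data, names, and step-size choices below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# toy compositional problem (data is illustrative, not from the paper):
#   minimize_x  0.5 * || E_w[A_w x - b_w] ||^2  +  lam * ||x||_1
# i.e. f(y) = 0.5*||y||^2 composed with g(x) = E_w[A_w x - b_w], plus an l1 penalty.
rng = np.random.default_rng(0)
d, m = 20, 15
A_mean = rng.standard_normal((m, d))
b_mean = rng.standard_normal(m)
lam = 0.01

x = np.zeros(d)
y = np.zeros(m)                      # running estimate of the inner expectation g(x)
for t in range(1, 5001):
    alpha = 0.1 / t**0.75            # slower timescale: step size for the x-update
    beta = 1.0 / np.sqrt(t)          # faster timescale: tracking weight for y

    # one stochastic query of the inner function and its Jacobian
    A_w = A_mean + 0.1 * rng.standard_normal((m, d))
    b_w = b_mean + 0.1 * rng.standard_normal(m)
    g_sample = A_w @ x - b_w

    # track g(x) with the larger weight, then take a proximal gradient step on x
    y = (1.0 - beta) * y + beta * g_sample
    grad = A_w.T @ y                 # chain rule: J_g^T * grad f(y), with grad f(y) = y
    x = soft_threshold(x - alpha * grad, alpha * lam)

print("final x (sparse):", np.round(x, 3))
```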