
    Filtering Random Graph Processes Over Random Time-Varying Graphs

    Graph filters play a key role in processing the graph spectra of signals supported on the vertices of a graph. However, despite their widespread use, graph filters have been analyzed only in the deterministic setting, ignoring the impact of stochasticity in both the graph topology and the signal itself. To bridge this gap, we examine the statistical behavior of the two key filter types, finite impulse response (FIR) and autoregressive moving average (ARMA) graph filters, when operating on random time-varying graph signals (or random graph processes) over random time-varying graphs. Our analysis shows that (i) in expectation, the filters behave as the corresponding deterministic filters operating on the expected graph with the expected signal as input, and (ii) there are meaningful upper bounds on the variance of the filter output. We conclude the paper by proposing two novel ways of exploiting randomness to improve (joint graph-time) noise cancellation, as well as to reduce the computational complexity of graph filtering. As demonstrated by numerical results, these methods outperform the disjoint average-and-denoise algorithm and yield an up to fourfold complexity reduction, with very little difference from the optimal solution.
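
    A minimal sketch (not the authors' code) of claim (i): a K-tap FIR graph filter applied over independent random graph realizations and random input signals, compared with the same filter on the expected graph with the expected signal as input. The Erdos-Renyi-style graph model, filter taps, and all constants below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, p, trials = 30, 0.2, 2000
taps = np.array([1.0, 0.5, 0.25, 0.125])        # FIR coefficients h_0..h_{K-1}
x_mean = rng.standard_normal(N)                 # expected value of the input signal

def fir_filter(S, x, h):
    # y = sum_k h_k S^k x for a fixed graph shift operator S
    y, Skx = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx
    return y

def random_adj(rng, N, p):
    # one realization of an undirected Erdos-Renyi adjacency matrix
    A = np.triu((rng.random((N, N)) < p).astype(float), 1)
    return A + A.T

# Expected shift operator (expected adjacency) of the random graph model
S_mean = p * (np.ones((N, N)) - np.eye(N))

# Monte Carlo average of the random filter output; a fresh, independent graph
# realization is drawn at every shift, i.e. the time-varying setting
acc = np.zeros(N)
for _ in range(trials):
    x = x_mean + 0.1 * rng.standard_normal(N)   # random input with mean x_mean
    y, Skx = np.zeros(N), x.copy()
    for hk in taps:
        y += hk * Skx
        Skx = random_adj(rng, N, p) @ Skx
    acc += y
acc /= trials

# In expectation the random filter matches the deterministic FIR filter on the
# expected graph with the expected signal (small for large `trials`)
print(np.linalg.norm(acc - fir_filter(S_mean, x_mean, taps)))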

    Distributed Adaptive Learning of Graph Signals

    The aim of this paper is to propose distributed strategies for adaptive learning of signals defined over graphs. Assuming the graph signal to be bandlimited, the method enables distributed reconstruction, with guaranteed performance in terms of mean-square error, and tracking from a limited number of sampled observations taken from a subset of vertices. A detailed mean-square analysis is carried out and illustrates the role played by the sampling strategy in the performance of the proposed method. Finally, some useful strategies for distributed selection of the sampling set are provided. Several numerical results validate our theoretical findings and illustrate the performance of the proposed method for distributed adaptive learning of signals defined over graphs. Comment: To appear in IEEE Transactions on Signal Processing, 201
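
    A sketch of the underlying idea, not the paper's implementation: a centralized LMS-style recursion that reconstructs a bandlimited graph signal from noisy samples observed on a subset of vertices, by projecting the innovation onto the bandlimited subspace. The graph, bandwidth B, sampled set, step size mu, and noise level are illustrative assumptions; the paper's contribution is the distributed version of such a scheme.

import numpy as np

rng = np.random.default_rng(1)
N, B, mu, sigma = 20, 5, 0.5, 0.1

# Random undirected graph, its Laplacian, and the GFT basis (eigenvectors)
A = np.triu((rng.random((N, N)) < 0.3).astype(float), 1); A = A + A.T
L = np.diag(A.sum(1)) - A
_, U = np.linalg.eigh(L)
U_B = U[:, :B]                                   # first B graph frequencies
P_B = U_B @ U_B.T                                # projector onto the bandlimited subspace

x_true = U_B @ rng.standard_normal(B)            # bandlimited target signal
sampled = rng.choice(N, size=10, replace=False)  # observed vertex subset
D = np.zeros((N, N)); D[sampled, sampled] = 1.0  # sampling mask

x_hat = np.zeros(N)
for t in range(500):
    y = D @ (x_true + sigma * rng.standard_normal(N))   # noisy samples
    x_hat = x_hat + mu * P_B @ D @ (y - x_hat)           # LMS-type update
print(np.linalg.norm(x_hat - x_true))                    # steady-state error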

    Forecasting Time Series with VARMA Recursions on Graphs

    Graph-based techniques have emerged as a choice for dealing with dimensionality issues in modeling multivariate time series. However, there is as yet no complete understanding of how the underlying structure can be exploited to ease this task. This work provides contributions in this direction by considering the forecasting of a process evolving over a graph. We make use of the (approximate) time-vertex stationarity assumption, i.e., time-varying graph signals whose first- and second-order statistical moments are invariant over time and correlated to a known graph topology. The latter is combined with VAR and VARMA models to tackle the dimensionality issues present in predicting the temporal evolution of multivariate time series. We find that by projecting the data to the graph spectral domain: (i) the multivariate model estimation reduces to fitting a number of uncorrelated univariate ARMA models, and (ii) an optimal low-rank data representation can be exploited to further reduce the estimation costs. In the case that the multivariate process can be observed at a subset of nodes, the proposed models extend naturally to Kalman filtering on graphs, allowing for optimal tracking. Numerical experiments with both synthetic and real data validate the proposed approach and highlight its benefits over state-of-the-art alternatives. Comment: Submitted to the IEEE Transactions on Signal Processing
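
    A sketch under simplifying assumptions (AR(1) per mode instead of a full ARMA, synthetic data), not the paper's implementation, of point (i): projecting each snapshot onto the graph Fourier basis turns the multivariate forecasting problem into independent univariate fits, one per graph frequency. The graph model, data model, and AR order are illustrative.

import numpy as np

rng = np.random.default_rng(2)
N, T = 15, 200

# Graph Laplacian eigenvectors give the graph Fourier transform (GFT)
A = np.triu((rng.random((N, N)) < 0.3).astype(float), 1); A = A + A.T
L = np.diag(A.sum(1)) - A
_, U = np.linalg.eigh(L)

# Synthetic time-vertex process: each spectral mode follows its own AR(1)
coeffs_true = rng.uniform(0.3, 0.9, N)
X_hat = np.zeros((N, T))                        # spectral-domain time series
for t in range(1, T):
    X_hat[:, t] = coeffs_true * X_hat[:, t - 1] + 0.1 * rng.standard_normal(N)
X = U @ X_hat                                   # vertex-domain observations

# (i) project the data to the graph spectral domain
S = U.T @ X
# (ii) fit one AR(1) coefficient per mode by least squares, then forecast
a = np.sum(S[:, 1:] * S[:, :-1], axis=1) / np.sum(S[:, :-1] ** 2, axis=1)
forecast = U @ (a * S[:, -1])                   # one-step-ahead prediction
print(forecast.shape)                           # (N,) predicted next snapshot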

    Adaptive Graph Signal Processing: Algorithms and Optimal Sampling Strategies

    The goal of this paper is to propose novel strategies for adaptive learning of signals defined over graphs, which are observed over a (randomly time-varying) subset of vertices. We recast two classical adaptive algorithms in the graph signal processing framework, namely, the least mean squares (LMS) and the recursive least squares (RLS) adaptive estimation strategies. For both methods, a detailed mean-square analysis illustrates the effect of random sampling on the adaptive reconstruction capability and the steady-state performance. Then, several probabilistic sampling strategies are proposed to design the sampling probability at each node in the graph, with the aim of optimizing the trade-off between steady-state performance, graph sampling rate, and convergence rate of the adaptive algorithms. Finally, a distributed RLS strategy is derived and shown to converge to its centralized counterpart. Numerical simulations carried out over both synthetic and real data illustrate the good performance of the proposed sampling and reconstruction strategies for (possibly distributed) adaptive learning of signals defined over graphs. Comment: Submitted to IEEE Transactions on Signal Processing, September 201
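
    A simplified, centralized sketch (identity noise covariance, fixed per-node sampling probabilities; not the paper's code) of an RLS-style recursion in the graph spectral domain, where each vertex is observed at every step with its own probability. All names and constants below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
N, B, beta, sigma, T = 20, 4, 0.95, 0.1, 400

# Random graph, Laplacian, and bandlimited GFT basis
A = np.triu((rng.random((N, N)) < 0.3).astype(float), 1); A = A + A.T
L = np.diag(A.sum(1)) - A
_, U = np.linalg.eigh(L)
U_B = U[:, :B]                                  # first B graph frequencies

x_true = U_B @ rng.standard_normal(B)           # bandlimited signal to track
p = rng.uniform(0.2, 0.8, N)                    # per-node sampling probabilities

Psi = 1e-3 * np.eye(B)                          # regularized initial matrix
psi = np.zeros(B)
for t in range(T):
    d = (rng.random(N) < p).astype(float)       # random sampling mask at time t
    y = d * (x_true + sigma * rng.standard_normal(N))   # noisy sampled observations
    Psi = beta * Psi + U_B.T @ (d[:, None] * U_B)        # U_B^T D[t] U_B update
    psi = beta * psi + U_B.T @ y
    a = np.linalg.solve(Psi, psi)               # spectral coefficient estimate
x_hat = U_B @ a
print(np.linalg.norm(x_hat - x_true))           # reconstruction error after T steps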