
    A kepstrum approach to filtering, smoothing and prediction

    The kepstrum (or complex cepstrum) method is revisited and applied to the problem of spectral factorization, where the spectrum is estimated directly from observations. The solution to this problem in turn leads to a new approach to optimal filtering, smoothing and prediction using Wiener theory. Unlike previous approaches to adaptive and self-tuning filtering, the technique, when implemented, does not require a priori information on the type or order of the signal-generating model. And unlike other approaches - with the exception of spectral subtraction - no state-space or polynomial model is necessary. In this first paper, results are restricted to stationary signals and additive white noise.
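The cepstral route to spectral factorization can be sketched in a few lines: take the cepstrum of the log power spectrum, keep its causal part, and exponentiate to obtain the minimum-phase factor. A minimal sketch assuming NumPy and an evenly sampled, strictly positive spectrum (the function name is illustrative, not from the paper):

```python
import numpy as np

def cepstral_spectral_factor(power_spectrum):
    """Minimum-phase spectral factorization via the complex cepstrum.

    Given samples of a strictly positive power spectrum S(w), returns
    H(w) with |H(w)|^2 = S(w) and H minimum-phase.
    """
    N = len(power_spectrum)
    # cepstrum of the log power spectrum (real and even for a real spectrum)
    c = np.fft.ifft(np.log(power_spectrum)).real
    # fold onto the causal part: half of c[0], full causal taps,
    # and half of the Nyquist tap when N is even
    w = np.zeros(N)
    w[0] = 0.5
    w[1:(N + 1) // 2] = 1.0
    if N % 2 == 0:
        w[N // 2] = 0.5
    causal = c * w
    # exponentiate back to the frequency domain
    return np.exp(np.fft.fft(causal))
```

Feeding in the spectrum of a known minimum-phase FIR filter recovers that filter's frequency response, which is a quick way to sanity-check an implementation.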

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools for processing and analysis of the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering and image segmentation.
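The basic pipeline described above - build a pixel graph whose weights reflect image structure, transform into the graph spectral domain, filter, and transform back - can be sketched as follows. This is a generic illustration with NumPy, not the article's specific method; the 4-connected grid, Gaussian edge weights, and hard low-pass cutoff are all assumptions:

```python
import numpy as np

def graph_lowpass_denoise(patch, sigma=0.2, keep=0.5):
    """Smooth an image patch by low-pass filtering in the graph spectral
    domain.  Pixels are nodes on the 4-connected grid; edge weights decay
    with intensity difference so that strong edges are preserved."""
    h, w = patch.shape
    n = h * w
    x = patch.ravel()
    W = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            p = i * w + j
            for di, dj in ((0, 1), (1, 0)):        # right and down neighbours
                ii, jj = i + di, j + dj
                if ii < h and jj < w:
                    q = ii * w + jj
                    wt = np.exp(-(x[p] - x[q]) ** 2 / sigma ** 2)
                    W[p, q] = W[q, p] = wt
    L = np.diag(W.sum(axis=1)) - W                 # combinatorial Laplacian
    lam, U = np.linalg.eigh(L)                     # graph Fourier basis
    xf = U.T @ x                                   # graph Fourier transform
    xf[int(keep * n):] = 0.0                       # drop high graph frequencies
    return (U @ xf).reshape(h, w)
```

Because the constant signal spans the Laplacian's null space, a flat patch passes through unchanged, while irregular detail concentrated in high graph frequencies is attenuated.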

    The adaptive patched cubature filter and its implementation

    There are numerous contexts where one wishes to describe the state of a randomly evolving system. Effective solutions combine models that quantify the underlying uncertainty with available observational data to form scientifically reasonable estimates of the uncertainty in the system state. Stochastic differential equations are often used to model the underlying system mathematically. The Kusuoka-Lyons-Victoir (KLV) approach is a higher-order particle method for approximating the weak solution of a stochastic differential equation; it uses a weighted set of scenarios to approximate the evolving probability distribution to a high order of accuracy. The algorithm can be performed by integrating along a number of carefully selected bounded-variation paths. Iterated application of the KLV method tends to increase the number of particles. Once this is addressed, together with local dynamic recombination, which simplifies the support of the discrete measure without harming the accuracy of the approximation, the KLV method becomes suitable for the filtering problem in contexts where one wishes to maintain an accurate description of the ever-evolving conditioned measure. In addition to the alternating application of the KLV method and recombination, we exploit the smooth nature of the likelihood function and the high-order accuracy of the approximations to lead some of the particles immediately to the next observation time, and to build into the algorithm a form of automatic high-order adaptive importance sampling. Comment: to appear in Communications in Mathematical Sciences. arXiv admin note: substantial text overlap with arXiv:1311.675
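The recombination step mentioned above - shrinking the support of a discrete measure while preserving its accuracy - can be illustrated in one dimension by moment matching: replace a large weighted particle cloud by a handful of support points whose weights reproduce the empirical moments. This is only a toy sketch of the idea, far simpler than the paper's local dynamic recombination, and it does not enforce positivity of the new weights:

```python
import numpy as np

def recombine(points, weights, order=2):
    """Toy 1-D recombination: replace a weighted particle cloud by
    order+1 support points whose weights reproduce the empirical
    moments up to `order` (a Vandermonde moment-matching solve)."""
    moments = np.array([np.sum(weights * points ** k)
                        for k in range(order + 1)])
    # pick order+1 spread-out support points from the sorted cloud
    idx = np.linspace(0, len(points) - 1, order + 1).astype(int)
    support = np.sort(points)[idx]
    # V[k, j] = support[j] ** k; solve V @ new_w = moments
    V = np.vander(support, order + 1, increasing=True).T
    new_weights = np.linalg.solve(V, moments)
    return support, new_weights
```

After recombination the measure has only order+1 atoms, yet every polynomial of degree up to `order` integrates to the same value as under the original cloud.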

    Integrated adaptive filtering and design for control experiments of flexible structures

    A novel method is presented for identifying a state space model and a state estimator for linear stochastic systems from input and output data. The method is based primarily on the relations between the state space model and the finite difference model for linear stochastic systems derived through projection filters. It is proven that least-squares identification of a finite difference model converges to the model derived from the projection filters. System pulse-response samples are computed from the coefficients of the finite difference model. In estimating the corresponding state estimator gain, a z-domain method is used: first the deterministic component of the output is subtracted out, and then the state estimator gain is obtained by whitening the remaining signal. An experimental example is used to illustrate the feasibility of the method.
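Two of the steps above - least-squares identification of a finite difference model from input/output data, and computation of pulse-response samples from its coefficients - can be sketched for a simple single-input single-output ARX model. This is an illustrative reduction with NumPy, not the paper's projection-filter construction:

```python
import numpy as np

def fit_arx(u, y, p):
    """Least-squares fit of a finite difference (ARX) model of order p:
    y[t] = sum_i a[i]*y[t-1-i] + sum_i b[i]*u[t-1-i]."""
    rows = [np.concatenate([y[t - p:t][::-1], u[t - p:t][::-1]])
            for t in range(p, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[p:], rcond=None)
    return theta[:p], theta[p:]          # output and input coefficients

def pulse_response(a, b, n):
    """First n pulse-response samples implied by the ARX coefficients,
    obtained by driving the fitted model with a unit pulse."""
    p = len(a)
    u = np.zeros(n); u[0] = 1.0
    h = np.zeros(n)
    for t in range(n):
        h[t] = sum(a[i] * h[t - 1 - i] for i in range(p) if t - 1 - i >= 0) \
             + sum(b[i] * u[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
    return h
```

On noise-free data generated by a known first-order model the least-squares fit recovers the coefficients exactly, and the pulse response is the familiar geometric sequence.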

    Forecasting Time Series with VARMA Recursions on Graphs

    Graph-based techniques have emerged as a way to deal with the dimensionality issues in modeling multivariate time series. However, there is as yet no complete understanding of how the underlying structure can be exploited to ease this task. This work contributes in this direction by considering the forecasting of a process evolving over a graph. We make use of the (approximate) time-vertex stationarity assumption, i.e., time-varying graph signals whose first and second order statistical moments are invariant over time and correlated to a known graph topology. The latter is combined with VAR and VARMA models to tackle the dimensionality issues present in predicting the temporal evolution of multivariate time series. We find that by projecting the data onto the graph spectral domain: (i) the multivariate model estimation reduces to fitting a number of uncorrelated univariate ARMA models, and (ii) an optimal low-rank data representation can be exploited to further reduce the estimation costs. In the case that the multivariate process can be observed at a subset of nodes, the proposed models extend naturally to Kalman filtering on graphs, allowing for optimal tracking. Numerical experiments with both synthetic and real data validate the proposed approach and highlight its benefits over state-of-the-art alternatives. Comment: submitted to the IEEE Transactions on Signal Processing
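Finding (i) above - that projecting onto the graph spectral domain decouples the multivariate fit into independent univariate models - can be sketched with the simplest member of the family, one AR(1) model per graph frequency. A minimal NumPy illustration (AR(1) instead of the paper's VARMA recursions; the function name is hypothetical):

```python
import numpy as np

def graph_ar1_forecast(X, L):
    """Project a time-vertex signal onto the graph Fourier basis, fit one
    univariate AR(1) per graph frequency by least squares, and forecast
    one step ahead.  X has shape (T, N): T time samples over N nodes of a
    graph with Laplacian L."""
    _, U = np.linalg.eigh(L)          # graph Fourier basis (columns of U)
    Xf = X @ U                        # per-frequency time series, (T, N)
    coeffs = np.array([
        0.0 if np.allclose(Xf[:-1, k], 0) else
        (Xf[:-1, k] @ Xf[1:, k]) / (Xf[:-1, k] @ Xf[:-1, k])
        for k in range(Xf.shape[1])   # one scalar AR(1) fit per frequency
    ])
    next_f = coeffs * Xf[-1]          # one-step forecast per frequency
    return U @ next_f                 # back to the vertex domain
```

The decoupling is exact for signals that are genuinely AR(1) in each graph frequency: the N-variate fit reduces to N scalar regressions.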

    A probabilistic interpretation of set-membership filtering: application to polynomial systems through polytopic bounding

    Set-membership estimation is usually formulated in the context of set-valued calculus, and no probabilistic calculations are necessary. In this paper, we show that set-membership estimation can be equivalently formulated in the probabilistic setting by employing sets of probability measures. Inference in set-membership estimation is thus carried out by computing expectations with respect to the updated set of probability measures P, as in the probabilistic case. In particular, it is shown that inference can be performed by solving a particular semi-infinite linear programming problem, which is a special case of the truncated moment problem in which only the zeroth-order moment (i.e., the support) is known. By writing the dual of this semi-infinite linear programming problem, it is shown that, if the nonlinearities in the measurement and process equations are polynomial and if the bounding sets for the initial state and the process and measurement noises are described by polynomial inequalities, then an approximation of the semi-infinite linear programming problem can be obtained efficiently using the theory of sum-of-squares polynomial optimization. We then derive a smart greedy procedure to compute a polytopic outer approximation of the true membership set, by computing the minimum-volume polytope that outer-bounds the set of all means computed with respect to P.
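The predict/intersect structure underlying set-membership filtering is easiest to see in the scalar linear case, where the membership set is an interval rather than a polytope. A minimal sketch (a deliberately simplified interval filter, not the paper's sum-of-squares polytopic construction; all names are illustrative):

```python
def set_membership_step(lo, hi, a, w_bound, y, v_bound):
    """One step of a scalar set-membership filter.  Dynamics
    x' = a*x + w with |w| <= w_bound; measurement y = x' + v with
    |v| <= v_bound.  The interval [lo, hi] contains the current state;
    the returned interval contains every state consistent with both the
    dynamics and the new measurement."""
    # prediction: image of [lo, hi] under x -> a*x, inflated by the
    # process-noise bound
    lo_p, hi_p = sorted((a * lo, a * hi))
    lo_p, hi_p = lo_p - w_bound, hi_p + w_bound
    # set of states consistent with the measurement
    lo_m, hi_m = y - v_bound, y + v_bound
    # intersection; an empty result (lo > hi) flags inconsistent bounds
    return max(lo_p, lo_m), min(hi_p, hi_m)
```

Each update can only shrink (never grow) the set relative to pure prediction, which is the interval analogue of the outer-bounding polytope becoming tighter as measurements arrive.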