    Stable Nonlinear Identification From Noisy Repeated Experiments via Convex Optimization

    This paper introduces new techniques for using convex optimization to fit input-output data to a class of stable nonlinear dynamical models. We present an algorithm that guarantees consistent estimates of models in this class when a small set of repeated experiments with suitably independent measurement noise is available. Stability of the estimated models is guaranteed without any assumptions on the input-output data. We first present a convex optimization scheme for identifying stable state-space models from empirical moments. Next, we provide a method for using repeated experiments to remove the effect of noise on these moment and model estimates. The technique is demonstrated on a simple simulated example.
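    As a loose illustration of the two-step recipe this abstract describes (average repeated experiments to clean up the empirical moments, then fit a model by convex optimization under a stability constraint), here is a minimal sketch. It uses a linear state-space model as a stand-in for the paper's nonlinear model class, and a spectral-norm bound on the dynamics matrix as a convex condition that is sufficient, though conservative, for stability; the function name, the `margin` parameter, and the use of cvxpy are all illustrative assumptions, not the paper's method.

```python
# Hedged sketch, not the paper's algorithm: average repeated noisy
# experiments, then fit x[t+1] ~ A x[t] + B u[t] by convex optimization
# with a spectral-norm stability constraint.
import numpy as np
import cvxpy as cp

def fit_stable_model(X_runs, U_runs, margin=0.99):
    """X_runs: list of (T, n) state trajectories from repeated experiments;
    U_runs: list of (T, m) input trajectories (same input, independent noise)."""
    # With independent, zero-mean measurement noise across experiments,
    # averaging drives the empirical moments toward their noiseless values.
    X = np.mean(X_runs, axis=0)
    U = np.mean(U_runs, axis=0)
    n, m = X.shape[1], U.shape[1]

    A = cp.Variable((n, n))
    B = cp.Variable((n, m))
    residual = X[1:].T - A @ X[:-1].T - B @ U[:-1].T
    # ||A||_2 <= margin < 1 is convex and sufficient (not necessary)
    # for asymptotic stability of x[t+1] = A x[t].
    prob = cp.Problem(cp.Minimize(cp.sum_squares(residual)),
                      [cp.norm(A, 2) <= margin])
    prob.solve()
    return A.value, B.value
```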

    Nonlinear system modeling based on constrained Volterra series estimates

    A simple nonlinear system modeling algorithm, designed to work with limited a priori knowledge and short data records, is examined. It creates an empirical Volterra series-based model of a system using an $\ell_q$-constrained least squares algorithm with $q \geq 1$. If the system $m(\cdot)$ is a continuous and bounded map with a finite memory no longer than some known $\tau$, then (for a $D$-parameter model and a number of measurements $N$) the difference between the resulting model of the system and the best possible theoretical one is guaranteed to be of order $\sqrt{N^{-1}\ln D}$, even for $D \geq N$. The performance of models obtained for $q = 1, 1.5$ and $2$ is tested on the Wiener-Hammerstein benchmark system. The results suggest that the models obtained for $q > 1$ are better suited to characterize the nature of the system, while the sparse solutions obtained for $q = 1$ yield smaller error values in terms of input-output behavior.
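    The estimator described above is concrete enough to sketch. The following is a minimal illustration, assuming a second-order Volterra expansion with memory $\tau$ and an $\ell_q$-ball constraint solved with cvxpy; the constraint radius `c`, the helper names, and the second-order truncation are assumptions made for the example, not the paper's exact setup.

```python
# Hedged sketch: l_q-constrained least-squares fit of a truncated
# (second-order) Volterra model with memory tau.
import numpy as np
import cvxpy as cp

def volterra_regressors(u, tau):
    """Row t: [1, u[t], ..., u[t-tau+1], all pairwise lag products]."""
    rows = []
    for t in range(tau - 1, len(u)):
        lag = u[t - tau + 1 : t + 1][::-1]               # u[t], u[t-1], ...
        quad = np.outer(lag, lag)[np.triu_indices(tau)]  # 2nd-order kernel terms
        rows.append(np.concatenate(([1.0], lag, quad)))
    return np.array(rows)

def fit_volterra_lq(u, y, tau, q=1.0, c=10.0):
    Phi = volterra_regressors(u, tau)
    theta = cp.Variable(Phi.shape[1])
    # The l_q ball is convex for q >= 1; q = 1 promotes sparse kernels,
    # matching the paper's observation about sparse solutions.
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y[tau - 1:] - Phi @ theta)),
                      [cp.pnorm(theta, q) <= c])
    prob.solve()
    return theta.value
```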

    Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection

    A recursive algorithm named Zero-point Attracting Projection (ZAP) was recently proposed for sparse signal reconstruction. Compared with reference algorithms, ZAP demonstrates good performance in recovery precision and robustness. However, no theoretical analysis of the algorithm, not even a proof of its convergence, has been available. In this work, a rigorous proof of the convergence of ZAP is provided and a condition for convergence is put forward. Based on the theoretical analysis, it is further proved that ZAP is unbiased and can approach the sparse solution arbitrarily closely with a proper choice of step size. Furthermore, the case of inaccurate measurements in a noisy scenario is also discussed. It is proved that the disturbance power reduces the recovery precision linearly, which is predictable but not preventable. The reconstruction deviation for $p$-compressible signals is also provided. Finally, numerical simulations are performed to verify the theoretical analysis.
    Comment: 29 pages, 6 figures
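    To make the iteration concrete, here is a minimal ZAP-style sketch: a gradient step on a zero-attracting penalty followed by projection onto the affine solution set $\{x : Ax = b\}$. An $\ell_1$ penalty (subgradient $\mathrm{sign}(x)$) stands in for the paper's zero-point attracting function, and the step size and iteration count are illustrative, so this is an assumption-laden reading of the algorithm rather than the authors' exact formulation.

```python
# Hedged ZAP-style sketch: zero-attracting step, then projection back
# onto the measurement-consistent set {x : A x = b}.
import numpy as np

def zap(A, b, kappa=1e-3, iters=20000):
    pinv = A.T @ np.linalg.inv(A @ A.T)   # projector ingredients for {Ax = b}
    x = pinv @ b                          # least-norm feasible starting point
    for _ in range(iters):
        x = x - kappa * np.sign(x)        # attract coefficients toward zero
        x = x - pinv @ (A @ x - b)        # project back onto the solution set
    return x

# Tiny usage example: recover a 3-sparse vector from 30 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
x_hat = zap(A, A @ x_true)
```

    With a constant step size, an iteration of this kind settles within a neighborhood of the sparse solution whose radius shrinks with the step size, which is consistent with the abstract's claim that ZAP approaches the sparse solution arbitrarily closely for a proper choice of step size.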

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th to Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1