
    Compressive system identification of LTI and LTV ARX models: The limited data set case

    In this paper, we consider identifying Auto-Regressive with eXternal input (ARX) models for both Linear Time-Invariant (LTI) and Linear Time-Variant (LTV) systems. We aim to perform the identification from the smallest possible number of observations. This goal is inspired by the field of Compressive Sensing (CS), and for this reason we call the problem Compressive System Identification (CSI). In the case of LTI ARX systems, a system with a large number of inputs and an unknown input delay on each channel can require a model structure with a large number of parameters, unless input delay estimation is performed. Since the complexity of input delay estimation increases exponentially in the number of inputs, this can be difficult for high-dimensional systems. We show that in cases where the LTI system has possibly many inputs with different unknown delays, simultaneous ARX identification and input delay estimation is possible from few observations, even though this results in an apparently ill-conditioned identification problem. We discuss identification guarantees and support our proposed method with simulations. We also consider identifying LTV ARX models. In particular, we consider systems whose parameters change only at a few time instants, in a piecewise-constant manner, where neither the change moments nor the number of changes is known a priori. The main technical novelty of our approach lies in casting the identification problem as recovery of a block-sparse signal from an underdetermined set of linear equations. We suggest a random sampling approach for LTV identification, address the issue of identifiability, and again support our approach with illustrative simulations.
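
    The block-sparse recovery at the heart of the LTV approach can be sketched numerically. The following toy example (illustrative dimensions only, not the paper's algorithm or data) recovers a block-sparse parameter vector from an underdetermined linear system using block soft-thresholding, a standard proximal-gradient (ISTA-style) solver for this problem class:

    ```python
    import numpy as np

    # Toy sketch: recover a block-sparse parameter vector theta from an
    # underdetermined system Phi @ theta = y, the form the abstract casts
    # LTV ARX identification into. All sizes here are hypothetical.
    rng = np.random.default_rng(0)

    n_blocks, block_len = 20, 4          # 80 unknowns grouped into blocks
    n = n_blocks * block_len
    m = 40                               # fewer equations than unknowns
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)

    theta_true = np.zeros(n)
    active = [3, 11]                     # only 2 blocks are nonzero
    for b in active:
        theta_true[b*block_len:(b+1)*block_len] = rng.standard_normal(block_len) + 2.0
    y = Phi @ theta_true

    lam = 0.01
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant of gradient
    theta = np.zeros(n)
    for _ in range(5000):
        g = theta - step * Phi.T @ (Phi @ theta - y)     # gradient step
        g = g.reshape(n_blocks, block_len)
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        shrink = np.maximum(1.0 - step * lam / np.maximum(norms, 1e-12), 0.0)
        theta = (g * shrink).reshape(n)                  # block soft-threshold

    recovered = np.where(
        np.linalg.norm(theta.reshape(n_blocks, block_len), axis=1) > 0.1)[0]
    print(sorted(recovered.tolist()))    # blocks identified as active
    ```

    The block soft-thresholding step drives entire inactive blocks to zero, which is what distinguishes block-sparse recovery from plain entrywise ℓ1 minimization.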

    Sparsity and Incoherence in Compressive Sampling

    We consider the problem of reconstructing a sparse signal $x^0 \in \mathbb{R}^n$ from a limited number of linear measurements. Given $m$ randomly selected samples of $Ux^0$, where $U$ is an orthonormal matrix, we show that $\ell_1$ minimization recovers $x^0$ exactly when the number of measurements exceeds $m \ge \mathrm{Const} \cdot \mu^2(U) \cdot S \cdot \log n$, where $S$ is the number of nonzero components in $x^0$ and $\mu(U)$ is the largest entry in $U$, properly normalized: $\mu(U) = \sqrt{n} \cdot \max_{k,j} |U_{k,j}|$. The smaller $\mu$, the fewer samples needed. The result holds for "most" sparse signals $x^0$ supported on a fixed (but arbitrary) set $T$. Given $T$, if the signs of the nonzero entries of $x^0$ on $T$ and the observed values of $Ux^0$ are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
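
    The coherence $\mu(U)$ in the bound above is straightforward to compute. A minimal sketch for the orthonormal DFT matrix, which attains the smallest possible coherence $\mu = 1$ since every entry has magnitude $1/\sqrt{n}$ (the constant in the sample bound below is a hypothetical value for illustration):

    ```python
    import cmath
    import math

    # Coherence mu(U) = sqrt(n) * max_{k,j} |U_{k,j}| for the n x n
    # orthonormal DFT matrix.
    n = 8
    U = [[cmath.exp(-2j * cmath.pi * k * j / n) / math.sqrt(n) for j in range(n)]
         for k in range(n)]

    mu = math.sqrt(n) * max(abs(U[k][j]) for k in range(n) for j in range(n))
    print(round(mu, 6))   # -> 1.0  (the minimal, i.e. best, coherence)

    # The abstract's bound m >= Const * mu^2 * S * log(n): with S = 4 nonzeros
    # and a hypothetical Const = 2, the required sample count scales as
    m_bound = 2 * mu**2 * 4 * math.log(n)
    ```

    For maximally incoherent $U$ the bound reduces to $m \gtrsim S \log n$, far fewer than the $n$ samples of conventional sampling.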

    Compressive imaging spectrometers using coded apertures

    We describe a novel method to track targets in a large field of view. This method simultaneously images multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. Potential encoding schemes include spatial shift, rotation, and magnification. We discuss each of these encoding schemes, but the main emphasis of the paper and all examples are based on one-dimensional spatial shift encoding. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. Finally, we include simulation and experimental results demonstrating our novel tracking method.
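
    A toy version of one-dimensional shift encoding (hypothetical parameters and decoding rule, not the system described above) illustrates the core idea: distinct per-sub-field shifts make the target's sub-field and position uniquely recoverable from peak locations in superposition space, without reconstructing the full field of view:

    ```python
    # K sub-fields of width W pixels are shifted and superimposed onto one
    # detector. Two exposures with different shift sets disambiguate the
    # (sub-field, offset) pair, provided the shift differences are distinct.
    W = 100                       # detector width in pixels (hypothetical)
    shifts_a = [0, 13, 41]        # per-sub-field shifts, exposure A
    shifts_b = [0, 29, 7]         # per-sub-field shifts, exposure B

    def peak(sub_field, offset, shifts):
        """Detector position where a point target lands in the superposition."""
        return (offset + shifts[sub_field]) % W

    def decode(p_a, p_b):
        """Recover (sub_field, offset) from the two measured peak positions."""
        for i in range(len(shifts_a)):
            offset = (p_a - shifts_a[i]) % W
            if (offset + shifts_b[i]) % W == p_b:
                return i, offset
        return None

    true_field, true_offset = 1, 37
    p_a = peak(true_field, true_offset, shifts_a)
    p_b = peak(true_field, true_offset, shifts_b)
    print(decode(p_a, p_b))       # -> (1, 37)
    ```

    Decoding is unique here because the differences `shifts_b[i] - shifts_a[i]` are distinct modulo `W`, so exactly one sub-field hypothesis is consistent with both exposures.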

    Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems

    This article presents numerical recipes for simulating high-temperature and non-equilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto a state-space manifold having reduced dimensionality and possessing a Kähler potential of multi-linear form. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low-dimensional Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given, and methods for quantum state optimization by Dantzig selection are given. Comment: 104 pages, 13 figures, 2 tables
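
    The compressive-sampling step mentioned above, reconstructing a trajectory from sparse random projections, can be sketched generically. The example below (illustrative dimensions, not the article's data) uses orthogonal matching pursuit as a simple stand-in solver; the article itself works with Dantzig-selector-based optimization:

    ```python
    import numpy as np

    # Recover a sparse vector x0 from a few random projections y = A @ x0.
    rng = np.random.default_rng(1)
    n, m, S = 100, 40, 5                       # ambient dim, projections, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x0 = np.zeros(n)
    support = rng.choice(n, size=S, replace=False)
    x0[support] = rng.standard_normal(S) + 3.0
    y = A @ x0                                 # the random projections

    # Orthogonal matching pursuit: greedily pick the column best matching the
    # residual, then re-fit the coefficients on the chosen support.
    residual, chosen = y.copy(), []
    while np.linalg.norm(residual) > 1e-9 and len(chosen) < 3 * S:
        corr = np.abs(A.T @ residual)
        corr[chosen] = 0.0                     # never re-pick a column
        chosen.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        residual = y - A[:, chosen] @ coef

    x_hat = np.zeros(n)
    x_hat[chosen] = coef
    print(np.linalg.norm(x_hat - x0))          # near zero: recovery succeeded
    ```

    With m well above the sparsity limit, recovery is exact; pushing m down toward that limit is where the breakdown the article observes sets in.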

    Multiscale Geometric Image Processing

    Since their introduction a little more than 10 years ago, wavelets have revolutionized image processing. Wavelet-based algorithms define the state of the art for applications including image coding (JPEG2000), restoration, and segmentation. Despite their success, wavelets have significant shortcomings in their treatment of edges: they do not parsimoniously capture even the simplest geometric structure in images, and wavelet-based processing algorithms often produce images with ringing around the edges.

    Wavelet-domain Approximation and Compression of Piecewise Smooth Images

    The wavelet transform provides a sparse representation for smooth images, enabling efficient approximation and compression using techniques such as zerotrees. Unfortunately, this sparsity does not extend to piecewise smooth images, where edge discontinuities separating smooth regions persist along smooth contours. This lack of sparsity hampers the efficiency of wavelet-based approximation and compression. On the class of images containing smooth C² regions separated by edges along smooth C² contours, for example, the asymptotic rate-distortion (R-D) performance of zerotree-based wavelet coding is limited to D(R) ~ 1/R, well below the optimal rate of 1/R². In this paper, we develop a geometric modeling framework for wavelets that addresses this shortcoming. The framework can be interpreted either as 1) an extension of the "zerotree model" for wavelet coefficients that explicitly accounts for edge structure at fine scales, or as 2) a new atomic representation that synthesizes images using a sparse combination of wavelets and wedgeprints, anisotropic atoms that are adapted to edge singularities. Our approach enables a new type of quadtree pruning for piecewise smooth images, using zerotrees in uniformly smooth regions and wedgeprints in regions containing geometry. Using this framework, we develop a prototype image coder with near-optimal asymptotic R-D performance D(R) ~ (log R)²/R² for piecewise smooth C²/C² images. In addition, we extend the algorithm to compress natural images, exploring the practical problems that arise and attaining promising results in terms of mean-square error and visual quality.
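
    The sparsity gap driving these rates can be illustrated numerically. The sketch below (Haar wavelets and toy images chosen for simplicity; the paper's framework is not tied to Haar) counts significant 2-D wavelet coefficients for a smooth image versus the same image cut by an edge, where many large coefficients persist along the edge at every scale:

    ```python
    import numpy as np

    def haar2(img, levels):
        """Orthonormal separable 2-D Haar transform, recursing on the LL band."""
        out = img.astype(float).copy()
        n = img.shape[0]
        for _ in range(levels):
            a = out[:n, :n]
            for axis in (0, 1):   # 1-D Haar along rows, then columns
                lo = (a.take(range(0, n, 2), axis) + a.take(range(1, n, 2), axis)) / np.sqrt(2)
                hi = (a.take(range(0, n, 2), axis) - a.take(range(1, n, 2), axis)) / np.sqrt(2)
                a = np.concatenate([lo, hi], axis)
            out[:n, :n] = a
            n //= 2
        return out

    N = 64
    xx, yy = np.meshgrid(np.arange(N), np.arange(N))
    smooth = np.sin(2 * np.pi * xx / N) * np.sin(2 * np.pi * yy / N)  # smooth image
    edge = smooth + (yy > xx).astype(float)   # same image plus a diagonal edge

    thr = 0.5
    n_smooth = int(np.sum(np.abs(haar2(smooth, 5)) > thr))
    n_edge = int(np.sum(np.abs(haar2(edge, 5)) > thr))
    print(n_smooth, n_edge)   # the edge image needs many more large coefficients
    ```

    The extra coefficients cluster along the edge at every scale, which is exactly the structure the zerotree model misses and wedgeprints are designed to capture.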
