
    Sparse Modelling and Multi-exponential Analysis

    The research fields of harmonic analysis, approximation theory and computer algebra are seemingly different domains, studied by seemingly separate research communities; in fact, they are connected in many ways. The connection between harmonic analysis and approximation theory is not accidental: several constructions, among which wavelets and Fourier series, provide major insights into central problems in approximation theory. The intimate connection between approximation theory and computer algebra is even older: polynomial interpolation is a long-studied and important problem in both symbolic and numeric computing, in the former to counter expression swell and in the latter to construct a simple data model. A common underlying problem statement in many applications is that of determining the number of components, and for each component the value of the frequency, damping factor, amplitude and phase, in a multi-exponential model. It occurs, for instance, in magnetic resonance and infrared spectroscopy, vibration analysis, seismic data analysis, electronic odour recognition, keystroke recognition, nuclear science, music signal processing, transient detection, motor fault diagnosis, electrophysiology, drug clearance monitoring and glucose tolerance testing, to name just a few. The general technique of multi-exponential modelling is closely related to what is commonly known as the Padé-Laplace method in approximation theory, and to the technique of sparse interpolation in the field of computer algebra. In harmonic analysis, the same problem is also solved using a stochastic perturbation method. The problem of multi-exponential modelling is an inverse problem and may therefore be severely ill-posed, depending on the relative location of the frequencies and phases. Besides the reliability of the estimated parameters, the sparsity of the multi-exponential representation has become important. A representation is called sparse if it is a combination of only a few elements instead of all available generating elements. In sparse interpolation, the aim is to determine all the parameters from only a small number of data samples, and with a complexity proportional to the number of terms in the representation. Despite the close connections between these fields, there is a clear lack of communication in the scientific literature. The aim of this seminar is to bring together researchers from the three fields mentioned, along with scientists from the varied application domains.
    Output Type: Meeting Report
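    As a concrete illustration of the sparse-interpolation viewpoint described above, the following is a minimal sketch of the classical Prony method, which recovers all model parameters of an n-term exponential sum from only 2n uniform samples. The function name prony and the toy data are illustrative assumptions, not an algorithm taken from the seminar report.

```python
import numpy as np

# Sketch (an assumption for illustration): recover an n-term model
# f_k = sum_j c_j * z_j**k, where each node
# z_j = exp((-damping_j + 2*pi*1j*frequency_j) * dt) encodes a damping
# factor and frequency, and c_j = amplitude_j * exp(1j*phase_j) encodes
# an amplitude and phase, from just 2n uniform samples.
def prony(samples, n_terms):
    f = np.asarray(samples, dtype=complex)
    # Hankel system for the monic Prony polynomial whose roots are z_j:
    # f[k+n] + p[n-1]*f[k+n-1] + ... + p[0]*f[k] = 0,  k = 0..n-1
    H = np.array([f[k:k + n_terms] for k in range(n_terms)])
    p = np.linalg.solve(H, -f[n_terms:2 * n_terms])
    z = np.roots(np.concatenate(([1.0], p[::-1])))
    # Vandermonde least squares for the complex amplitudes c_j
    V = np.vander(z, N=len(f), increasing=True).T
    c, *_ = np.linalg.lstsq(V, f, rcond=None)
    return z, c

# Toy usage: two damped complex exponentials, four samples suffice.
z_true = np.array([0.9 * np.exp(2j * np.pi * 0.10),
                   0.7 * np.exp(2j * np.pi * 0.30)])
k = np.arange(4)
f = 2.0 * z_true[0] ** k + 0.5 * z_true[1] ** k
z_est, c_est = prony(f, 2)  # recovers z_true and amplitudes (2.0, 0.5)
```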

    Deep Gaussian Markov Random Fields for Graph-Structured Dynamical Systems

    Probabilistic inference in high-dimensional state-space models is computationally challenging. For many spatiotemporal systems, however, prior knowledge about the dependency structure of the state variables is available. We leverage this structure to develop a computationally efficient approach to state estimation and learning in graph-structured state-space models with (partially) unknown dynamics and limited historical data. Building on recent methods that combine ideas from deep learning with principled inference in Gaussian Markov random fields (GMRFs), we reformulate graph-structured state-space models as Deep GMRFs defined by simple spatial and temporal graph layers. This results in a flexible spatiotemporal prior that can be learned efficiently from a single time sequence via variational inference. Under linear Gaussian assumptions, we retain a closed-form posterior, which can be sampled efficiently using the conjugate gradient method, scaling favourably compared to classical Kalman-filter-based approaches.
    Comment: NeurIPS 2023; camera-ready version
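    As a rough illustration of the sampling claim in this abstract, the sketch below draws an exact sample from a linear-Gaussian GMRF posterior with the conjugate gradient method, using the standard perturb-and-solve construction. The chain-graph prior, noise level, and variable names are assumptions for illustration; the paper's Deep GMRF layers are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)

# Toy one-dimensional chain graph (an assumption, not the paper's model):
# a single first-order difference "layer" G defines the prior precision
# Q = G^T G, which stays sparse and is never inverted or factorised.
n = 200
G = sp.eye(n, format="csr") - sp.eye(n, k=-1, format="csr")
Q = (G.T @ G).tocsr()

sigma2 = 0.5                      # observation noise variance
y = rng.normal(size=n)            # stand-in observations, y = x + noise
Q_post = Q + sp.eye(n) / sigma2   # posterior precision under y = x + noise

# Perturb-and-solve (Papandreou & Yuille, 2010): solving
#   Q_post x = y/sigma2 + G^T e1 + e2/sqrt(sigma2),  e1, e2 ~ N(0, I),
# with conjugate gradients yields an exact sample x ~ N(mu, Q_post^{-1}).
e1, e2 = rng.normal(size=n), rng.normal(size=n)
b = y / sigma2 + G.T @ e1 + e2 / np.sqrt(sigma2)
x_sample, info = cg(Q_post, b)
assert info == 0  # 0 means CG converged
```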

    Reconstruction from anisotropic random measurements

    Random matrices are widely used in sparse recovery problems, and the relevant properties of matrices with i.i.d. entries are well understood. The current paper discusses the recently introduced Restricted Eigenvalue (RE) condition, which is among the most general assumptions on the matrix guaranteeing recovery. We prove a reduction principle showing that the RE condition can be guaranteed by checking the restricted isometry on a certain family of low-dimensional subspaces. This principle allows us to establish the RE condition for several broad classes of random matrices with dependent entries, including random matrices with subgaussian rows and non-trivial covariance structure, as well as matrices with independent rows and uniformly bounded entries.
    Comment: 30 pages
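    For reference, a common formulation of the RE condition, in the spirit of Bickel, Ritov and Tsybakov (2009), is sketched below; the notation is an assumption and may differ from this paper's exact statement.

```latex
% Restricted Eigenvalue condition RE(s, k_0) for an n x p design
% matrix X (one common formulation; not necessarily the paper's):
\[
  K(s, k_0, X) \;=\;
  \min_{\substack{J \subseteq \{1,\dots,p\} \\ |J| \le s}}
  \;
  \min_{\substack{v \ne 0 \\ \|v_{J^c}\|_1 \le k_0 \|v_J\|_1}}
  \frac{\|X v\|_2}{\sqrt{n}\,\|v_J\|_2} \;>\; 0,
\]
% where v_J denotes the restriction of v to the index set J.
```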