
    Sampling and Reconstruction of Spatial Signals

    Digital processing of a signal f may start from sampling it on a discrete set Γ, f → (f(γ))_{γ∈Γ}. Sampling theory is one of the most basic and fascinating topics in applied mathematics and in the engineering sciences. Its best known form is the uniform sampling theorem for band-limited/wavelet signals, which gives a framework for converting analog signals into sequences of numbers. Over the past decade, sampling theory has undergone a strong revival, and the standard sampling paradigm has been extended to non-bandlimited signals, including signals in reproducing kernel spaces (RKSs), signals with finite rate of innovation (FRI), and sparse signals, as well as to nontraditional sampling methods such as phaseless sampling. In this dissertation, we first consider sampling and Galerkin reconstruction in a reproducing kernel space. The fidelity of perceptual signals, such as acoustic and visual signals, might not be well measured by least squares. In the first part of this dissertation, we introduce a fidelity measure depending on a given sampling scheme and propose a Galerkin method in a Banach space setting for signal reconstruction. We show that the proposed Galerkin method provides a quasi-optimal approximation and that the corresponding Galerkin equations can be solved by an iterative approximation-projection algorithm in a reproducing kernel subspace of L^p.

    A spatially distributed network contains a large number of agents with limited sensing, data processing, and communication capabilities. Recent technological advances have opened up possibilities to deploy spatially distributed networks for signal sampling and reconstruction. We introduce a graph structure for a distributed sampling and reconstruction system by coupling agents in a spatially distributed network with innovative positions of signals. We split a distributed sampling and reconstruction system into a family of overlapping smaller subsystems, and we show that the sensing matrix is stable if and only if its quasi-restrictions to those subsystems have uniform l_2 stability. This new stability criterion could be pivotal for the design of a distributed sampling and reconstruction system that is robust against the addition, replacement, and impairment of agents, as we only need to check the uniform stability of the affected subsystems. We also propose an exponentially convergent distributed algorithm for signal reconstruction, which provides a suboptimal approximation to the original signal in the presence of bounded sampling noise.

    Phase retrieval (phaseless sampling and reconstruction) arises in various fields of science and engineering. It consists of reconstructing a signal of interest from its magnitude measurements. Sampling in shift-invariant spaces is a realistic model for signals with smooth spectrum. We consider phaseless sampling and reconstruction of real-valued signals in a shift-invariant space from their magnitude measurements on the whole Euclidean space and from their phaseless samples taken on a discrete set with finite sampling density. We establish an equivalence between the nonseparability of signals in a shift-invariant space and their phase retrievability from phaseless samples taken on the whole Euclidean space. We also associate an undirected graph with a signal and use the connectivity of the graph to characterize the nonseparability of high-dimensional signals. Under the local complement property assumption on a shift-invariant space, we find a discrete set with finite sampling density such that signals in shift-invariant spaces that are determined by their magnitude measurements on the whole Euclidean space can be reconstructed in a stable way from their phaseless samples taken on that discrete set. We also propose a reconstruction algorithm that provides a suboptimal approximation to the original signal when only its noisy phaseless samples are available.
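
    The abstract does not spell out the Banach-space Galerkin method or the distributed algorithm, so the following is only a point of reference: a minimal numpy sketch of plain least-squares reconstruction of a signal in a shift-invariant space V(φ) from its samples on a discrete set. The Gaussian generator, the shift range, and the sampling grid are illustrative assumptions, not choices made in the dissertation.

    import numpy as np

    def phi(x, width=0.6):
        # Gaussian generator of the shift-invariant space (an illustrative choice,
        # not the reproducing kernel studied in the dissertation).
        return np.exp(-(x / width) ** 2)

    shifts = np.arange(-10, 11)                   # integer shifts k spanning V(phi)
    c_true = np.random.randn(shifts.size)         # coefficients of a test signal

    def f(x):
        # Signal in V(phi): f(x) = sum_k c_k phi(x - k)
        return phi(x[:, None] - shifts[None, :]) @ c_true

    gamma = np.linspace(-10.0, 10.0, 81)          # sampling set Gamma (oversampled)
    samples = f(gamma)                            # samples (f(gamma))_{gamma in Gamma}

    # Least-squares (L^2) recovery of the coefficients from the samples; the
    # dissertation replaces this step by a Galerkin method in a Banach space.
    A = phi(gamma[:, None] - shifts[None, :])
    c_rec, *_ = np.linalg.lstsq(A, samples, rcond=None)
    print("max coefficient error:", np.abs(c_rec - c_true).max())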

    A constructive theory of sampling for image synthesis using reproducing kernel bases

    Sampling a scene by tracing rays and reconstructing an image from such pointwise samples is fundamental to computer graphics. To improve the efficacy of these computations, we propose an alternative theory of sampling. In contrast to traditional formulations for image synthesis, which appeal to nonconstructive Dirac deltas, our theory employs constructive reproducing kernels for the correspondence between continuous functions and pointwise samples. Conceptually, this allows us to obtain a common mathematical formulation of almost all existing numerical techniques for image synthesis. Practically, it enables novel sampling-based numerical techniques designed for light transport that provide considerably improved performance per sample. We exemplify the practical benefits of our formulation with three applications: pointwise transport of color spectra, projection of the light energy density into spherical harmonics, and approximation of the shading equation from a photon map. Experimental results verify the utility of our sampling formulation, with lower numerical error rates and enhanced visual quality compared to existing techniques.
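
    To make the reproducing-kernel correspondence concrete, here is a minimal sketch of recovering a continuous function from pointwise samples with a Gaussian reproducing kernel. The kernel, its bandwidth, the toy signal, and the small regularization term are illustrative assumptions; none of the paper's light-transport applications (color spectra, spherical harmonics, photon maps) are reproduced.

    import numpy as np

    def k(x, y, sigma=0.15):
        # Gaussian reproducing kernel (an illustrative stand-in for the paper's bases).
        return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2.0 * sigma ** 2))

    x_s = np.linspace(0.0, 1.0, 20)               # pointwise sample positions
    y_s = np.sin(2.0 * np.pi * x_s) ** 2          # sampled values (toy signal)

    # Expand the function in the kernel basis, f(x) = sum_i c_i k(x, x_i),
    # where the coefficients solve the (mildly regularized) Gram system K c = y.
    gram = k(x_s, x_s) + 1e-10 * np.eye(x_s.size)
    c = np.linalg.solve(gram, y_s)

    x_e = np.linspace(0.0, 1.0, 200)              # dense evaluation grid
    f_rec = k(x_e, x_s) @ c                       # reconstructed function
    print("max reconstruction error:",
          np.abs(f_rec - np.sin(2.0 * np.pi * x_e) ** 2).max())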

    A Guide to Localized Frames and Applications to Galerkin-like Representations of Operators

    This chapter offers a detailed survey on intrinsically localized frames and the corresponding matrix representation of operators. We re-investigate the properties of localized frames and the associated Banach spaces in full detail. We investigate the representation of operators using localized frames in a Galerkin-type scheme. We show how the boundedness and the invertibility of matrices and operators are linked and give some sufficient and necessary conditions for the boundedness of operators between the associated Banach spaces.
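
    As a sketch in standard frame notation (not copied from the chapter), the Galerkin-type matrix representation of a bounded operator O with respect to a frame (ψ_λ) with canonical dual (ψ̃_λ) can be written as

    \[
      \bigl(\mathcal{M}(O)\bigr)_{\lambda,\mu} = \langle O\,\psi_\mu,\ \tilde\psi_\lambda\rangle,
      \qquad
      O f = \sum_{\lambda}\Bigl(\sum_{\mu}\bigl(\mathcal{M}(O)\bigr)_{\lambda,\mu}
            \langle f,\ \tilde\psi_\mu\rangle\Bigr)\,\psi_\lambda .
    \]

    In this picture O acts on the coefficient sequence (⟨f, ψ̃_μ⟩)_μ through the matrix M(O), and the chapter's localization assumptions are what tie the boundedness and invertibility of M(O) to those of O on the associated Banach spaces.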

    A Stein variational Newton method

    Stein variational gradient descent (SVGD) was recently proposed as a general-purpose nonparametric variational inference algorithm [Liu & Wang, NIPS 2016]: it minimizes the Kullback-Leibler divergence between the target distribution and its approximation by implementing a form of functional gradient descent on a reproducing kernel Hilbert space. In this paper, we accelerate and generalize the SVGD algorithm by including second-order information, thereby approximating a Newton-like iteration in function space. We also show how second-order information can lead to more effective choices of kernel. We observe significant computational gains over the original SVGD algorithm in multiple test cases.
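
    For reference, here is a minimal numpy sketch of the baseline SVGD update that the paper builds on; the Newton-like second-order step itself is not reproduced. The Gaussian target, the fixed RBF bandwidth, and the step size are illustrative assumptions.

    import numpy as np

    def grad_log_p(x):
        # Target: standard 2-D Gaussian, so grad log p(x) = -x.
        return -x

    def svgd_step(x, stepsize=0.1, h=1.0):
        # x: (n, d) particle positions.
        diff = x[:, None, :] - x[None, :, :]        # x_j - x_i, shape (n, n, d)
        sq = (diff ** 2).sum(-1)                    # squared pairwise distances
        kern = np.exp(-sq / h)                      # RBF kernel k(x_j, x_i)
        grad_k = -(2.0 / h) * diff * kern[..., None]  # grad_{x_j} k(x_j, x_i)
        # Functional gradient: phi(x_i) = mean_j [k(x_j, x_i) grad log p(x_j)
        #                                         + grad_{x_j} k(x_j, x_i)]
        phi = (kern[..., None] * grad_log_p(x)[:, None, :] + grad_k).mean(0)
        return x + stepsize * phi

    particles = np.random.randn(100, 2) * 3 + 5     # deliberately poor initialization
    for _ in range(500):
        particles = svgd_step(particles)
    print("particle mean (should approach 0):", particles.mean(0))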

    RBF multiscale collocation for second order elliptic boundary value problems

    In this paper, we discuss multiscale radial basis function collocation methods for solving elliptic partial differential equations on bounded domains. The approximate solution is constructed in a multi-level fashion, each level using compactly supported radial basis functions of smaller scale on an increasingly fine mesh. On each level, standard symmetric collocation is employed. A convergence theory is given, which builds on recent theoretical advances for multiscale approximation using compactly supported radial basis functions. We are able to show that the convergence is linear in the number of levels. We also discuss the condition numbers of the arising systems and the effect of simple diagonal preconditioners, now rigorously proving previous numerical observations.
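
    The following is a minimal sketch of the multiscale structure described above, using Wendland's compactly supported C^2 kernel with support radii shrinking from level to level. For brevity it corrects interpolation residuals of a known function rather than performing the paper's symmetric collocation of an elliptic boundary value problem, and the point counts and support radii are illustrative assumptions.

    import numpy as np

    def wendland_c2(r):
        # Wendland C^2 kernel: (1 - r)_+^4 (4 r + 1), compactly supported on [0, 1].
        return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

    def target(x):
        return np.sin(2.0 * np.pi * x)

    x_eval = np.linspace(0.0, 1.0, 400)
    residual = target(x_eval)                 # start from the full function
    approx = np.zeros_like(x_eval)

    for level in range(4):
        n = 8 * 2 ** level                    # finer centers on each level ...
        delta = 4.0 / n                       # ... with proportionally smaller support
        centers = np.linspace(0.0, 1.0, n)
        A = wendland_c2(np.abs(centers[:, None] - centers[None, :]) / delta)
        rhs = np.interp(centers, x_eval, residual)   # residual values at the centers
        c = np.linalg.solve(A, rhs)
        correction = wendland_c2(np.abs(x_eval[:, None] - centers[None, :]) / delta) @ c
        approx += correction                  # add this level's correction ...
        residual -= correction                # ... and pass the residual down

    print("max error after 4 levels:", np.abs(target(x_eval) - approx).max())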

    Model Reduction for Nonlinear Systems by Balanced Truncation of State and Gradient Covariance

    Data-driven reduced-order models often fail to make accurate forecasts of high-dimensional nonlinear dynamical systems that are sensitive along coordinates with low variance, because such coordinates are often truncated, e.g., by proper orthogonal decomposition, kernel principal component analysis, and autoencoders. Such systems are encountered frequently in shear-dominated fluid flows, where non-normality plays a significant role in the growth of disturbances. In order to address these issues, we employ ideas from active subspaces to find low-dimensional systems of coordinates for model reduction that balance adjoint-based information about the system's sensitivity with the variance of states along trajectories. The resulting method, which we refer to as covariance balancing reduction using adjoint snapshots (CoBRAS), is analogous to balanced truncation, with state and adjoint-based gradient covariance matrices replacing the system Gramians and obeying the same key transformation laws. Here, the extracted coordinates are associated with an oblique projection that can be used to construct Petrov-Galerkin reduced-order models. We provide an efficient snapshot-based computational method analogous to balanced proper orthogonal decomposition. This also leads to the observation that the reduced coordinates can be computed from inner products of state and gradient samples alone, allowing us to find rich nonlinear coordinates by replacing the inner product with a kernel function. In these coordinates, reduced-order models can be learned using regression. We demonstrate these techniques and compare them to a variety of other methods on a simple yet challenging three-dimensional system and on a nonlinear axisymmetric jet flow simulation with 10^5 state variables.
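
    As a rough sketch of the snapshot-based balancing computation described above, with random matrices standing in for actual state and adjoint-gradient snapshots and with scaling conventions that are assumptions rather than the paper's, the core step is an SVD of the product of the two snapshot matrices followed by an oblique Petrov-Galerkin projection.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, r = 200, 50, 5                 # state dim, number of snapshots, reduced dim

    X = rng.standard_normal((n, m))      # state snapshots (columns)
    Y = rng.standard_normal((n, m))      # adjoint-based gradient snapshots (columns)

    # Balance the two covariances via an SVD of the cross matrix Y^T X.
    U, s, Vt = np.linalg.svd(Y.T @ X, full_matrices=False)

    # Direct (trial) and adjoint (test) modes, truncated to rank r.
    Phi = X @ Vt[:r].T / np.sqrt(s[:r])  # trial basis, shape (n, r)
    Psi = Y @ U[:, :r] / np.sqrt(s[:r])  # test basis,  shape (n, r)

    # Biorthogonality Psi^T Phi = I yields the oblique Petrov-Galerkin projector;
    # a reduced model of dx/dt = f(x) would evolve z = Psi.T @ x via Psi.T f(Phi z).
    print("biorthogonality error:", np.abs(Psi.T @ Phi - np.eye(r)).max())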