2 research outputs found

    On the Continuity of Characteristic Functionals and Sparse Stochastic Modeling

    The characteristic functional is the infinite-dimensional generalization of the Fourier transform for measures on function spaces. It characterizes the statistical law of the associated stochastic process in the same way as a characteristic function specifies the probability distribution of its corresponding random variable. Our goal in this work is to lay the foundations of the innovation model, a (possibly) non-Gaussian probabilistic model for sparse signals. This is achieved by using the characteristic functional to specify sparse stochastic processes that are defined as linear transformations of general continuous-domain white noises (also called innovation processes). We prove the existence of a broad class of sparse processes by using the Minlos-Bochner theorem. This requires a careful study of the regularity properties, especially the boundedness in Lp-spaces, of the characteristic functional of the innovations. We are especially interested in the functionals that are only defined for p<1, since they appear to be associated with the sparsest kinds of processes. Finally, we apply our main existence theorem to two specific subclasses of processes with particular invariance properties.
    Comment: 24 pages
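    For reference, a minimal sketch of the objects the abstract describes, in the notation commonly used in this line of work (the symbols below are illustrative, not quoted from the paper): the characteristic functional of a generalized process s over a space of test functions, and its white-noise (innovation) form governed by a Lévy exponent f,

    ```latex
    % Characteristic functional of a generalized stochastic process s,
    % evaluated on a test function \varphi:
    \widehat{\mathscr{P}}_{s}(\varphi)
      \;=\; \mathbb{E}\!\left\{ e^{\,i\langle s,\varphi\rangle} \right\}.

    % For a continuous-domain white noise (innovation process) w, it takes
    % the exponential form with Lévy exponent f:
    \widehat{\mathscr{P}}_{w}(\varphi)
      \;=\; \exp\!\left( \int_{\mathbb{R}^{d}} f\bigl(\varphi(t)\bigr)\,\mathrm{d}t \right).
    ```

    The Minlos-Bochner theorem then guarantees that a functional that is continuous, positive-definite, and equal to 1 at the origin is the characteristic functional of a unique probability measure, which is why the continuity (boundedness) analysis in the abstract is the crux of the existence proof.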

    Reconstruction Of Biomedical Images And Sparse Stochastic Modeling

    We propose a novel statistical formulation of the image-reconstruction problem from noisy linear measurements. We derive an extended family of MAP estimators based on the theory of continuous-domain sparse stochastic processes. We highlight the crucial roles of the whitening operator and of the Lévy exponent of the innovations, which controls the sparsity of the model. While our family of estimators includes the traditional methods of Tikhonov and total-variation (TV) regularization as particular cases, it opens the door to a much broader class of potential functions (associated with infinitely divisible priors) that are inherently sparse and typically nonconvex. We also provide an algorithmic scheme, naturally suggested by our framework, that can handle arbitrary potential functions. Further, we consider the reconstruction of simulated MRI data and illustrate that the designed estimators can bring significant improvement in reconstruction performance.
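    To make the family of estimators concrete, here is a minimal sketch (not the paper's algorithm) of MAP denoising with interchangeable potentials: a quadratic potential recovers Tikhonov regularization, a smoothed absolute value approximates TV, and a log potential is an example of the nonconvex, inherently sparse kind the abstract mentions. The first-order finite difference stands in for the whitening operator, and plain gradient descent stands in for the algorithmic scheme; all parameter values are illustrative.

    ```python
    import numpy as np

    def map_reconstruct(y, potential_grad, lam=0.5, step=0.05, n_iter=1000):
        """Gradient-descent MAP estimate for the denoising model y = u + n.

        Minimizes 0.5*||y - u||^2 + lam * sum(phi(D u)), where D is the
        first-order finite difference (a toy stand-in for the whitening
        operator) and phi is a scalar potential with derivative
        `potential_grad`.
        """
        u = y.copy()
        for _ in range(n_iter):
            d = np.diff(u)                 # D u
            pg = potential_grad(d)         # phi'(D u)
            g = np.zeros_like(u)           # gradient of regularizer: D^T phi'(D u)
            g[:-1] -= pg
            g[1:] += pg
            u -= step * ((u - y) + lam * g)
        return u

    # Three potentials from the family discussed above (smoothed where
    # needed so that plain gradient descent applies):
    quad = lambda x: 2.0 * x                            # Tikhonov: phi(x) = x^2
    tv   = lambda x, eps=1e-2: x / np.sqrt(x**2 + eps)  # smoothed TV: phi(x) ~ |x|
    logp = lambda x: 2.0 * x / (1.0 + x**2)             # nonconvex: phi(x) = log(1 + x^2)

    rng = np.random.default_rng(0)
    clean = np.repeat([0.0, 1.0, -0.5, 2.0], 50)  # piecewise-constant: D u is sparse
    y = clean + 0.3 * rng.standard_normal(clean.size)

    for name, grad in [("Tikhonov", quad), ("TV", tv), ("log", logp)]:
        u = map_reconstruct(y, grad)
        print(f"{name}: MSE = {np.mean((u - clean) ** 2):.4f}")
    ```

    On a piecewise-constant signal like this one, the sparsity-promoting potentials (TV, log) preserve the jumps that the quadratic Tikhonov potential tends to blur, which is the behavior the abstract's broader family of priors is designed to capture.
    
    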