
    Bayesian inference for inverse problems

    Traditionally, the MaxEnt workshops start with a tutorial day. This paper summarizes my talk at the 2001 workshop at Johns Hopkins University. The main idea of this talk is to show how Bayesian inference naturally gives us all the tools we need to solve real inverse problems: starting with simple inversion, where we assume to know the forward model and all the input model parameters exactly, up to more realistic advanced problems of myopic or blind inversion, where we may be uncertain about the forward model and may have noisy data. Starting with an introduction to inverse problems through a few examples and an explanation of their ill-posed nature, I briefly present the main classical deterministic methods, such as data matching and classical regularization methods, to show their limitations. I then present the main classical probabilistic methods based on likelihood, information theory and maximum entropy, as well as the Bayesian inference framework for such problems. I show that the Bayesian framework not only generalizes all these methods, but also gives us natural tools, for example, for inferring the uncertainty of the computed solutions, for estimating the hyperparameters, or for handling myopic or blind inversion problems. Finally, through a deconvolution example, I present a few state-of-the-art methods based on Bayesian inference, particularly designed for some mass spectrometry data processing problems.
    Comment: Presented at MaxEnt01. To appear in Bayesian Inference and Maximum Entropy Methods, B. Fry (Ed.), AIP Proceedings. 20 pages, 13 PostScript figures
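    The simplest case described above, linear inversion with a known forward model, a Gaussian noise model and a Gaussian prior, admits a closed-form MAP estimate. A minimal sketch follows; the smoothing operator, dimensions and hyperparameter value below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hedged sketch: MAP estimation for a linear inverse problem y = H x + noise,
# with a Gaussian likelihood and a zero-mean Gaussian prior on x. The MAP
# solution is then the Tikhonov-regularized least-squares estimate:
#   x_map = (H^T H + lam I)^{-1} H^T y
rng = np.random.default_rng(0)
n = 50
i = np.arange(n)

# Toy forward model: a Gaussian smoothing (convolution-like) operator.
H = np.exp(-0.5 * (i[:, None] - i[None, :]) ** 2 / 4.0)

x_true = np.zeros(n)
x_true[20:30] = 1.0                      # unknown "source" to recover
y = H @ x_true + 0.01 * rng.standard_normal(n)  # noisy indirect observation

lam = 1e-2  # plays the role of the noise-to-prior variance ratio
x_map = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
```

    The hyperparameter `lam` is fixed by hand here; the Bayesian framework discussed in the paper additionally allows estimating such hyperparameters and quantifying the uncertainty of the solution, which this sketch omits.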

    Statistical Properties and Applications of Empirical Mode Decomposition

    Signal analysis is key to extracting information buried in noise. Signal decomposition is a data analysis tool for determining the underlying physical components of a processed data set. However, conventional signal decomposition approaches such as wavelet analysis, the Wigner-Ville distribution, and various short-time Fourier spectrograms are inadequate for processing real-world signals. Moreover, most of these techniques require a priori knowledge of the processed signal in order to select a proper decomposition basis, which makes them unsuitable for a wide range of practical applications. Empirical Mode Decomposition (EMD) is a non-parametric, adaptive, data-driven method that is capable of breaking down non-linear, non-stationary signals into a finite set of intrinsic components called Intrinsic Mode Functions (IMFs). In addition, EMD approximates a dyadic filter bank that isolates high-frequency components, e.g. noise, in the lower-index IMFs. Despite being widely used in different applications, EMD is an ad hoc solution: its adaptive performance comes at the expense of a rigorous theoretical foundation, so numerical analysis is usually adopted in the literature to interpret its behavior. This dissertation investigates statistical properties of EMD and utilizes the outcome to enhance the performance of signal de-noising and spectrum sensing systems. The novel contributions can be broadly summarized in three categories: a statistical analysis of the probability distributions of the IMFs, with the Generalized Gaussian distribution (GGD) suggested as a best-fit distribution; a de-noising scheme based on a null hypothesis on the IMFs that utilizes the unique filter behavior of EMD; and a novel noise estimation approach, based on the first IMF, that turns semi-blind spectrum sensing techniques into fully blind ones. These contributions are justified statistically and analytically and include comparisons with other state-of-the-art techniques.
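    The sifting loop at the heart of EMD can be sketched in a few lines. The version below is a deliberate simplification, not the dissertation's implementation: it uses linear-interpolation envelopes instead of the cubic splines of standard EMD, a fixed number of sifting iterations instead of a stopping criterion, and it extracts only the first IMF. The test signal and noise are illustrative assumptions:

```python
import numpy as np

def sift_first_imf(x, n_sifts=8):
    """Extract an approximation of the first IMF by repeated sifting.

    Simplified sketch: envelopes are built with linear interpolation
    (standard EMD uses cubic splines) and the number of sifts is fixed.
    """
    h = np.asarray(x, dtype=float).copy()
    t = np.arange(len(h))
    for _ in range(n_sifts):
        # Indices of strict interior local maxima and minima.
        mx = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        mn = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(mx) < 2 or len(mn) < 2:
            break  # too few extrema to form envelopes
        upper = np.interp(t, mx, h[mx])   # upper envelope
        lower = np.interp(t, mn, h[mn])   # lower envelope
        h = h - 0.5 * (upper + lower)     # subtract the local mean
    return h

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 512)
signal = np.sin(2 * np.pi * 5 * t)        # slow deterministic component
noise = 0.3 * rng.standard_normal(512)    # broadband noise
imf1 = sift_first_imf(signal + noise)
```

    Because the first IMF captures the fastest oscillations, it is expected to correlate more with the broadband noise than with the slow sinusoid; this is the intuition behind the first-IMF-based noise estimation the abstract describes.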

    Probabilistic Latent Variable Models as Nonnegative Factorizations

    This paper presents a family of probabilistic latent variable models that can be used for the analysis of nonnegative data. We show that there are strong ties between nonnegative matrix factorization and this family, and provide some straightforward extensions which can help in dealing with shift invariances, higher-order decompositions and sparsity constraints. We argue through these extensions that the use of this approach allows for rapid development of complex statistical models for analyzing nonnegative data.
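    One concrete instance of these ties, stated as a hedged toy rather than as the paper's construction: for the generalized Kullback-Leibler cost, the classical multiplicative NMF updates of Lee and Seung coincide with EM updates for a PLSA-style latent variable model. The matrix sizes and rank below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.random((20, 30)) + 0.1   # nonnegative data matrix
k = 4                            # number of latent components
W = rng.random((20, k)) + 0.1    # nonnegative factor (basis)
H = rng.random((k, 30)) + 0.1    # nonnegative factor (activations)

def gkl(V, WH):
    """Generalized Kullback-Leibler divergence D(V || WH)."""
    return np.sum(V * np.log(V / WH) - V + WH)

losses = []
for _ in range(100):
    # Multiplicative updates for the KL cost; each one is also an EM step
    # for the corresponding latent variable model.
    WH = W @ H
    W *= (V / WH) @ H.T / H.sum(axis=1)
    WH = W @ H
    H *= W.T @ (V / WH) / W.sum(axis=0)[:, None]
    losses.append(gkl(V, W @ H))
```

    Each update is guaranteed not to increase the KL cost, which is why the loss trace decreases monotonically.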

    Regularization of linear inverse problems with an unknown operator

    In this thesis, we study regularization methods for different kinds of linear inverse problems. The objective is to estimate an infinite-dimensional parameter (typically a function or a measure) from the noisy observation of its image through a linear operator. More specifically, we are interested in so-called discrete inverse problems, for which the operator takes values in a finite-dimensional space. For this kind of problem, the non-injectivity of the operator makes it impossible to identify the parameter from the observation alone. One aspect of regularization then consists in determining a criterion for selecting a solution among a set of possible values. We study in particular applications of the maximum entropy on the mean method, a Bayesian regularization method that defines such a selection criterion from prior information. We also treat stability issues in inverse problems under compactness assumptions on the operator, in a nonparametric regression problem with indirect observations.
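    A much-simplified relative of the maximum entropy on the mean method is plain maximum-entropy selection under a moment constraint: among all distributions consistent with the observed constraint, pick the one closest in relative entropy to a prior. The toy below is an illustrative assumption, not the thesis's construction; it selects a distribution on {0, ..., 9} with a prescribed mean via the exponential tilt p_i ∝ q_i exp(lam * x_i), solving for lam by bisection:

```python
import numpy as np

x = np.arange(10.0)       # support points
q = np.full(10, 0.1)      # uniform prior (mean 4.5)
m = 6.5                   # target mean imposed by the "observation"

def tilt_mean(lam):
    """Distribution and mean of the exponential tilt of the prior."""
    w = q * np.exp(lam * x)
    p = w / w.sum()
    return p, p @ x

# The tilted mean is increasing in lam, so bisection converges.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    p, mu = tilt_mean(mid)
    if mu < m:
        lo = mid
    else:
        hi = mid
```

    The resulting p is the unique distribution matching the mean constraint while minimizing the relative entropy to the prior q, which is the selection principle the abstract alludes to, here in its simplest one-constraint form.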