Bayesian inference for inverse problems
Traditionally, the MaxEnt workshops start with a tutorial day. This paper
summarizes my talk at the 2001 workshop at Johns Hopkins University. The main
idea of the talk is to show how Bayesian inference naturally gives us all the
tools we need to solve real inverse problems: from simple inversion, where we
assume to know the forward model and all the input model parameters exactly, up
to the more realistic and harder problems of myopic or blind inversion, where
we may be uncertain about the forward model and may have noisy data. Starting
with an introduction to inverse problems through a few examples and an
explanation of their ill-posed nature, I briefly present the main classical
deterministic methods, such as data matching and classical regularization
methods, and show their limitations. I then present the main classical
probabilistic methods based on likelihood, information theory and maximum
entropy, and the Bayesian inference framework for such problems. I show that
the Bayesian framework not only generalizes all these methods, but also gives
us natural tools, for example, for inferring the uncertainty of the computed
solutions, for estimating the hyperparameters, or for handling myopic or blind
inversion problems. Finally, through a deconvolution example, I present a few
state-of-the-art methods based on Bayesian inference designed specifically for
mass spectrometry data processing problems.
Comment: Presented at MaxEnt01. To appear in Bayesian Inference and Maximum
Entropy Methods, B. Fry (Ed.), AIP Proceedings. 20 pages, 13 Postscript
figures
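As an illustration of the deconvolution setting the abstract describes, the following sketch (a toy example of my own, not the paper's method) recovers a blurred 1-D signal by MAP estimation under a Gaussian prior, which reduces to Tikhonov-regularized least squares. The kernel width, noise level, and prior variance are all assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: blur a 1-D signal with a known Gaussian kernel,
# add white noise, then recover it by Bayesian MAP estimation.
n = 64
x_true = np.zeros(n)
x_true[20:30] = 1.0  # a simple boxcar peak standing in for a spectrum line

kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kernel /= kernel.sum()

# Build the forward (convolution) matrix H explicitly.
H = np.zeros((n, n))
for i in range(n):
    for j, k in enumerate(range(i - 5, i + 6)):
        if 0 <= k < n:
            H[i, k] += kernel[j]

sigma = 0.05  # noise standard deviation (assumed known here)
y = H @ x_true + sigma * rng.normal(size=n)

# MAP estimate under x ~ N(0, tau^2 I) and y | x ~ N(Hx, sigma^2 I):
#   x_map = argmin ||y - Hx||^2 / sigma^2 + ||x||^2 / tau^2,
# i.e. the solution of (H^T H + lam I) x = H^T y with lam = sigma^2/tau^2.
tau = 1.0
lam = sigma**2 / tau**2
x_map = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
```

The same linear-Gaussian machinery also yields the posterior covariance, which is what makes uncertainty quantification of the solution immediate in the Bayesian framework.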
Statistical Properties and Applications of Empirical Mode Decomposition
Signal analysis is key to extracting information buried in noise. Signal decomposition is a data analysis tool for determining the underlying physical components of a processed data set. However, conventional signal decomposition approaches such as wavelet analysis, the Wigner-Ville distribution, and various short-time Fourier spectrograms are inadequate for processing real-world signals. Moreover, most of these techniques require \emph{a priori} knowledge of the processed signal in order to select a proper decomposition basis, which makes them unsuitable for a wide range of practical applications. Empirical Mode Decomposition (EMD) is a non-parametric, adaptive, data-driven method that is capable of breaking down non-linear, non-stationary signals into a finite set of intrinsic components called Intrinsic Mode Functions (IMFs). In addition, EMD approximates a dyadic filter bank that isolates high-frequency components, e.g. noise, in the first (lowest-index) IMFs. Despite being widely used in different applications, EMD is an ad hoc solution: its adaptive performance comes at the expense of a rigorous theoretical foundation, so numerical analysis is usually adopted in the literature to interpret its behavior.
This dissertation investigates the statistical properties of EMD and uses the results to enhance the performance of signal de-noising and spectrum sensing systems. The novel contributions fall into three categories: a statistical analysis of the probability distributions of the IMFs, with the Generalized Gaussian distribution (GGD) proposed as a best-fit distribution; a de-noising scheme based on a null hypothesis on the IMFs that exploits the unique filter behavior of EMD; and a novel noise estimation approach, based on the first IMF, that turns semi-blind spectrum sensing techniques into fully blind ones. These contributions are justified statistically and analytically and compared with other state-of-the-art techniques.
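The sifting procedure at the core of EMD can be sketched in a few lines. The following is a minimal illustration of my own (not the dissertation's implementation): it extracts candidate IMFs by repeatedly subtracting the mean of the cubic-spline envelopes through the local extrema, with a fixed iteration count and no boundary-handling or stopping-criterion refinements.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iter=10):
    """One EMD sifting pass: repeatedly subtract the mean of the upper
    and lower cubic-spline envelopes to produce a candidate IMF."""
    h = x.copy()
    for _ in range(n_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        # CubicSpline needs a few knots; stop sifting when too few remain.
        if len(maxima) < 4 or len(minima) < 4:
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0
    return h

def emd(x, t, max_imfs=4):
    """Decompose x into IMFs plus a residue; by construction
    x == sum(imfs) + residue."""
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        imf = sift(residue, t)
        imfs.append(imf)
        residue = residue - imf
        # Stop once the residue is nearly monotonic (few extrema left).
        if len(argrelextrema(residue, np.greater)[0]) < 2:
            break
    return imfs, residue
```

On a two-tone signal such as `sin(2π·5t) + sin(2π·40t)`, the first IMF captures mostly the fast oscillation, illustrating the dyadic-filter behavior mentioned above; the exact split depends on the stopping criteria, which is precisely the kind of ad hoc element the dissertation analyzes statistically.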
Probabilistic Latent Variable Models as Nonnegative Factorizations
This paper presents a family of probabilistic latent variable models that can be used for the analysis of nonnegative data. We show that there are strong ties between nonnegative matrix factorization and this family, and provide some straightforward extensions which can help in dealing with shift invariance, higher-order decompositions and sparsity constraints. We argue through these extensions that this approach allows for rapid development of complex statistical models for analyzing nonnegative data.
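The tie between such latent variable models and NMF can be made concrete: the EM updates for a PLSA-style model coincide, up to normalization, with the multiplicative updates for NMF under the generalized KL divergence. A minimal sketch of my own (dimensions, initialization, and iteration count are arbitrary illustrations, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def plsa_nmf(V, k, n_iter=100):
    """EM for a latent-variable model P(f, t) = sum_z P(f|z) P(z) P(t|z),
    written as the multiplicative KL-NMF updates V ~ W H."""
    F, T = V.shape
    W = rng.random((F, k)) + 1e-3   # plays the role of P(f|z), unnormalized
    H = rng.random((k, T)) + 1e-3   # plays the role of P(z)P(t|z), unnormalized
    for _ in range(n_iter):
        R = V / (W @ H + 1e-12)                    # ratio term shared by both updates
        W *= (R @ H.T) / H.sum(axis=1)             # multiplicative KL update for W
        H *= (W.T @ R) / W.sum(axis=0)[:, None]    # multiplicative KL update for H
    return W, H
```

Because every update multiplies a nonnegative quantity by a nonnegative ratio, nonnegativity of the factors is preserved automatically, which is exactly the property that lets the probabilistic interpretation (columns of W as conditional distributions, after normalization) go through.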
Regularization of linear inverse problems with an unknown operator
We study regularization methods for several kinds of linear inverse problems. The objective is to estimate an infinite-dimensional parameter (typically a function or a measure) from a noisy observation of its image under a linear operator. We are interested more specifically in so-called discrete inverse problems, in which the operator takes values in a finite-dimensional space. For this kind of problem, the non-injectivity of the operator makes it impossible to identify the parameter from the observation alone. One aspect of regularization then consists in determining a criterion for selecting a solution among a set of possible values. We study in particular applications of the maximum entropy on the mean method, a Bayesian regularization method that allows a selection criterion to be defined from prior information.
We also treat stability issues in inverse problems under compactness assumptions on the operator, in a general nonparametric regression framework with indirect observations.
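The selection step described above can be illustrated on a toy discrete problem: among the infinitely many nonnegative vectors consistent with the data, pick the one maximizing Shannon entropy. This is a simplified stand-in for the maximum entropy on the mean method, not the thesis's construction; the operator, dimensions, and data below are all made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Underdetermined discrete inverse problem: 3 observations of a
# 10-dimensional nonnegative parameter -- infinitely many exact solutions.
A = rng.random((3, 10))
x_ref = rng.random(10)          # some feasible parameter
b = A @ x_ref                   # noiseless data, for simplicity

def neg_entropy(x):
    # Negative Shannon entropy; the epsilon keeps log finite as x -> 0.
    return np.sum(x * np.log(x + 1e-12))

# Select, among all x >= 0 with A x = b, the maximum-entropy solution.
res = minimize(
    neg_entropy,
    x0=np.full(10, 0.5),
    method="SLSQP",
    bounds=[(0.0, None)] * 10,
    constraints={"type": "eq", "fun": lambda x: A @ x - b},
)
x_maxent = res.x
```

The entropy functional here is one possible selection criterion; the maximum entropy on the mean method generalizes this idea by placing a prior on the parameter and maximizing entropy relative to it, which is how prior information enters the selection.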