
    Generalized SURE for Exponential Families: Applications to Regularization

    Stein's unbiased risk estimate (SURE) was proposed by Stein for the independent, identically distributed (iid) Gaussian model in order to derive estimates that dominate least-squares (LS). In recent years, the SURE criterion has been employed in a variety of denoising problems for choosing regularization parameters that minimize an estimate of the mean-squared error (MSE). However, its use has been limited to the iid case, which precludes many important applications. In this paper we begin by deriving a SURE counterpart for general, not necessarily iid, distributions from the exponential family. This enables extending the SURE design technique to a much broader class of problems. Based on this generalization, we suggest a new method for choosing regularization parameters in penalized LS estimators. We then demonstrate its superior performance over the conventional generalized cross-validation approach and the discrepancy method in the context of image deblurring and deconvolution. The SURE technique can also be used to design estimates without predefining their structure. However, allowing for too many free parameters impairs the performance of the resulting estimates. To address this inherent tradeoff, we propose a regularized SURE objective. Based on this design criterion, we derive a wavelet denoising strategy that is similar in spirit to the standard soft-threshold approach but can lead to improved MSE performance. Comment: to appear in the IEEE Transactions on Signal Processing.
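
    For the classical iid Gaussian case that this paper generalizes, SURE-based parameter selection is easy to illustrate: for soft-thresholding with threshold t, SURE(t) = n*sigma^2 - 2*sigma^2*#{i : |y_i| <= t} + sum_i min(|y_i|, t)^2 is an unbiased estimate of the MSE, and t is chosen to minimize it. A minimal Python sketch of this standard SureShrink-style selection (not the paper's generalized exponential-family criterion):

    import numpy as np

    def sure_soft_threshold(y, t, sigma):
        # SURE for soft-thresholding of y ~ N(theta, sigma^2 I):
        # n*sigma^2 - 2*sigma^2*#{|y_i| <= t} + sum min(|y_i|, t)^2
        n = y.size
        return (n * sigma**2
                - 2 * sigma**2 * np.sum(np.abs(y) <= t)
                + np.sum(np.minimum(np.abs(y), t) ** 2))

    rng = np.random.default_rng(0)
    theta = np.concatenate([rng.normal(0, 3, 50), np.zeros(950)])  # sparse mean vector
    sigma = 1.0
    y = theta + rng.normal(0, sigma, theta.size)

    ts = np.linspace(0, 5, 200)
    t_star = ts[np.argmin([sure_soft_threshold(y, t, sigma) for t in ts])]
    theta_hat = np.sign(y) * np.maximum(np.abs(y) - t_star, 0)
    print(f"SURE-optimal threshold {t_star:.3f}, "
          f"MSE {np.mean((theta_hat - theta) ** 2):.3f}")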

    Unsupervised bayesian convex deconvolution based on a field with an explicit partition function

    This paper proposes a non-Gaussian Markov field with a special feature: an explicit partition function. To the best of our knowledge, this is an original contribution. Moreover, the explicit expression of the partition function enables the development of an unsupervised edge-preserving convex deconvolution method. The method is fully Bayesian and produces an estimate in the sense of the posterior mean, computed numerically by a Markov chain Monte Carlo (MCMC) technique. The approach is particularly effective, and the computational practicability of the method is demonstrated on a simple simulated example.
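
    As an illustration of posterior-mean computation by MCMC in a convex deconvolution setting, the Python sketch below runs a random-walk Metropolis sampler on a toy 1-D problem. The edge-preserving log-cosh prior is a generic convex stand-in, not the paper's field with an explicit partition function, and all parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 1-D deconvolution: g = H f + noise, H a moving-average blur.
    n = 40
    f_true = np.zeros(n); f_true[10:20] = 1.0; f_true[25:30] = -0.5
    kernel = np.ones(5) / 5.0
    H = np.array([np.convolve(np.eye(n)[i], kernel, mode="same") for i in range(n)])
    sigma = 0.05
    g = H @ f_true + rng.normal(0, sigma, n)

    def log_post(f, lam=20.0):
        # Gaussian likelihood plus a convex edge-preserving prior on differences.
        return (-0.5 * np.sum((g - H @ f) ** 2) / sigma**2
                - lam * np.sum(np.log(np.cosh(np.diff(f)))))

    # Random-walk Metropolis; the posterior mean is approximated by the
    # average of the post-burn-in samples (short chain, for illustration only).
    f, lp, samples = np.zeros(n), log_post(np.zeros(n)), []
    for it in range(20000):
        prop = f + rng.normal(0, 0.02, n)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            f, lp = prop, lp_prop
        if it >= 5000:
            samples.append(f.copy())
    f_pm = np.mean(samples, axis=0)  # posterior-mean estimate
    print("reconstruction error:", np.linalg.norm(f_pm - f_true))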

    Approche variationnelle pour le calcul bayésien dans les problèmes inverses en imagerie

    In an unsupervised Bayesian approach to solving an inverse problem, one seeks to estimate jointly the unknown quantity f and the parameters θ. This is done using the joint posterior law p(f, θ|g). The expression of this law is often complex, and exploring it to compute the Bayesian estimators requires either the optimization of criteria that are often non-convex or the computation of expectations of non-Gaussian multivariate laws. In all these cases, approximations are usually needed. We have previously explored the possibilities of the Laplace approximation and of MCMC sampling methods. Here, we explore the approximation of p(f, θ|g) by a law that is separable in f and in θ. This makes it possible to propose iterative algorithms with a more affordable computational cost, especially if the approximating laws are chosen within exponential families. The main purpose of this paper is to provide the details of the different algorithms obtained for different choices of these families.

    Traction force microscopy with optimized regularization and automated Bayesian parameter selection for comparing cells

    Adherent cells exert traction forces onto their environment, which allows them to migrate, to maintain tissue integrity, and to form complex multicellular structures. This traction can be measured in a perturbation-free manner with traction force microscopy (TFM). In TFM, traction is usually calculated via the solution of a linear system, which is complicated by undersampled input data, acquisition noise, and large condition numbers for some methods. Therefore, standard TFM algorithms either employ data filtering or regularization. However, these approaches require a manual selection of filter or regularization parameters and consequently exhibit a substantial degree of subjectivity. This shortcoming is particularly serious when cells in different conditions are to be compared, because optimal noise suppression needs to be adapted for every situation, which invariably results in systematic errors. Here, we systematically test the performance of new methods from computer vision and Bayesian inference for solving the inverse problem in TFM. We compare two classical schemes, L1- and L2-regularization, with three previously untested schemes, namely Elastic Net regularization, Proximal Gradient Lasso, and Proximal Gradient Elastic Net. Overall, we find that Elastic Net regularization, which combines L1 and L2 regularization, outperforms all other methods with regard to the accuracy of traction reconstruction. Next, we develop two methods, Bayesian L2 regularization and Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization. Using artificial and experimental data, we show that these methods enable robust reconstruction of traction without requiring a difficult selection of regularization parameters specifically for each data set. Thus, Bayesian methods can mitigate the considerable uncertainty inherent in comparing cellular traction forces.
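
    For reference, a minimal Python sketch of the two baseline families compared above, L2 (Tikhonov/ridge) and Elastic Net, on a synthetic ill-conditioned system standing in for the TFM inversion. The forward matrix and parameter values are illustrative (not an actual Boussinesq Green's function), and the Bayesian selection of the regularization strength is not reproduced here:

    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(2)

    # Synthetic ill-conditioned forward model: displacements u = G t + noise.
    m, n = 80, 120
    G = rng.normal(size=(m, n)) @ np.diag(1.0 / np.arange(1, n + 1))
    t_true = np.zeros(n)
    t_true[rng.choice(n, 10, replace=False)] = rng.normal(0, 1, 10)  # sparse traction
    u = G @ t_true + 0.01 * rng.normal(size=m)

    # L2 (Tikhonov) reconstruction: solve (G^T G + lam I) t = G^T u.
    lam = 1e-3
    t_l2 = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ u)

    # Elastic Net combines L1 sparsity with L2 stability.
    enet = ElasticNet(alpha=1e-3, l1_ratio=0.5, fit_intercept=False, max_iter=50000)
    enet.fit(G, u)

    for name, t_hat in [("L2", t_l2), ("ElasticNet", enet.coef_)]:
        print(name, "error:", np.linalg.norm(t_hat - t_true))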

    Improved Convolutive and Under-Determined Blind Audio Source Separation with MRF Smoothing

    Convolutive and under-determined blind audio source separation from noisy recordings is a challenging problem. Several computational strategies have been proposed to address it. This study is concerned with several modifications to the expectation-maximization-based algorithm, which iteratively estimates the mixing and source parameters. That strategy models every entry of each source spectrogram as superimposed Gaussian components which are mutually and individually independent across frequency and time bins, an assumption that ignores the local structure of audio spectra. In our approach, we address this limitation by considering a locally smooth temporal and frequency structure in the source power spectrograms. Local smoothness is enforced by incorporating a Gibbs prior into the complete-data likelihood function, which models the interactions between neighboring spectrogram bins using a Markov random field (MRF). Simulations using audio files derived from the 2008 Stereo Audio Source Separation Evaluation Campaign demonstrate the high efficiency of the proposed improvement.
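
    A sketch of the smoothing idea: under a Gibbs/MRF prior that couples each spectrogram bin to its four spectro-temporal neighbors, the penalized update shrinks the raw power estimate toward the local neighbor average. The quadratic potential and fixed-point iteration in this Python sketch are simplifications for illustration; the paper's potential function may differ:

    import numpy as np

    def mrf_smooth_power(V, lam=0.5, iters=20):
        # Smooth a power spectrogram V (freq x time) toward its 4-neighborhood,
        # i.e. iterate the per-bin minimizer of (S - V)^2 + lam * (S - N)^2,
        # where N is the neighbor average (held fixed within each sweep).
        # np.roll wraps around at the edges, which is acceptable for a sketch.
        S = V.copy()
        for _ in range(iters):
            N = (np.roll(S, 1, 0) + np.roll(S, -1, 0)
                 + np.roll(S, 1, 1) + np.roll(S, -1, 1)) / 4.0
            S = (V + lam * N) / (1.0 + lam)
        return S

    # Usage inside EM: replace each raw source power estimate V_j with
    # mrf_smooth_power(V_j) before the parameter (M-step) update.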

    Approche variationnelle pour le calcul bayésien dans les problèmes inverses en imagerie

    In an unsupervised Bayesian estimation approach for inverse problems in imaging systems, one tries to estimate jointly the unknown image pixels f and the hyperparameters θ, given the observed data g and a model M linking these quantities. This is, in general, done through the joint posterior law p(f, θ|g; M). The expression of this joint law is often very complex, and its exploration through sampling, as well as the computation of point estimators such as the MAP and the posterior mean, requires either the optimization or the integration of multivariate probability laws. In any of these cases, we need to make approximations. The Laplace approximation and MCMC sampling are two approximation methods, analytical and numerical respectively, which have previously been used with success for this task. In this paper, we explore the possibility of approximating this joint law by one that is separable in f and in θ. This opens the possibility of developing iterative algorithms with a more reasonable computational cost, in particular if the approximating laws are chosen from the conjugate exponential families. The main objective of this paper is to give details of the different algorithms we obtain with different choices of these families. To illustrate this approach in more detail, we consider the case of image restoration by simple or myopic deconvolution with separable, simple Markovian, or hidden Markovian models. Comment: 31 pages, 2 figures; had been submitted to "Revue Traitement du Signal" but not accepted.
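
    A minimal instance of the separable (mean-field) approximation within conjugate exponential families, for a linear-Gaussian model with unknown noise precision; this Python sketch omits the Markovian and myopic-deconvolution cases treated in the paper:

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy model: g = H f + noise with unknown noise precision theta.
    m, n = 60, 30
    H = rng.normal(size=(m, n))
    f_true = rng.normal(size=n)
    theta_true = 25.0
    g = H @ f_true + rng.normal(0, theta_true ** -0.5, m)

    alpha = 1.0            # fixed Gaussian prior precision on f
    a0, b0 = 1e-3, 1e-3    # Gamma hyperprior on theta
    E_theta = 1.0

    # Mean-field iteration: q(f, theta) = q(f) q(theta); both factors stay in
    # conjugate exponential families, so each update is in closed form.
    for _ in range(50):
        # q(f) = N(mu, Sigma)
        Sigma = np.linalg.inv(E_theta * H.T @ H + alpha * np.eye(n))
        mu = E_theta * Sigma @ H.T @ g
        # q(theta) = Gamma(a, b), E||g - H f||^2 = ||g - H mu||^2 + tr(H Sigma H^T)
        resid = np.sum((g - H @ mu) ** 2) + np.trace(H @ Sigma @ H.T)
        a, b = a0 + m / 2.0, b0 + resid / 2.0
        E_theta = a / b

    print("E[theta] =", E_theta, "(true:", theta_true, ")")
    print("f error:", np.linalg.norm(mu - f_true))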

    Measurement Error Adjustment in the Offset Variable of a Poisson Model

    Motor vehicle accidents are the leading cause of death among teenagers in the US. The Graduated Driver Licensing (GDL) program is one effective policy for reducing the number of teenage car crashes. Our study focuses on the effect of the GDL program adopted by the state of Michigan in 1997. We use Poisson regression with spatially dependent random effects to model county-level teenage car crash counts, and we consider a measurement error model for the offset because the offset variable is mismeasured. The total county-level teenage population is widely used as a proxy for the teenage driver population when modeling the teenage driver fatality rate. In our case, data for the teenage driver population in Michigan are available only at the state level, not the county level. Thus, a measurement error issue arises in the offset variable of our Poisson model, and we propose including a measurement error model to account for the difference between the teenage population and the teenage driver population. To the best of our knowledge, there is no existing literature on adjusting for an offset variable measured with error, and limited research has addressed measurement errors in the context of spatial data. In this thesis, a Berkson measurement error model with spatial random effects has been applied to adjust the offset variable in a Bayesian framework, and the Bayesian MCMC sampling is implemented in rstan. To check whether the adjustment of the offset variable makes any difference to our model, we conducted a real data analysis. We found that the coefficient of T (time) becomes less significant after the adjustment, which leads to a new finding for the GDL: the reduction in the number of teenage drivers can help explain the partial effectiveness of this policy.
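
    A sketch of the Berkson offset adjustment in a Bayesian Poisson regression. The thesis implements this in rstan with spatial random effects; the Python/PyMC version below drops the spatial effects and uses synthetic data and an assumed error scale purely for illustration:

    import numpy as np
    import pymc as pm

    # Hypothetical county-level data: crash counts y, teenage population P
    # (the observed proxy for the driver population), post-GDL indicator T.
    rng = np.random.default_rng(4)
    n = 83                                 # counties
    P = rng.integers(500, 20000, n)        # teenage population (proxy offset)
    T = rng.integers(0, 2, n)              # post-policy indicator
    y = rng.poisson(0.01 * P * np.exp(-0.3 * T))

    with pm.Model():
        beta0 = pm.Normal("beta0", 0.0, 5.0)
        betaT = pm.Normal("betaT", 0.0, 5.0)
        sigma_u = 0.1                      # assumed Berkson error scale
        # Berkson structure: true log-offset = observed log-proxy + error.
        log_E = pm.Normal("log_E", mu=np.log(P), sigma=sigma_u, shape=n)
        rate = pm.math.exp(log_E + beta0 + betaT * T)
        pm.Poisson("y", mu=rate, observed=y)
        idata = pm.sample(1000, tune=1000, chains=2)

    print("posterior mean of betaT:", idata.posterior["betaT"].mean().item())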

    Advances in single frame image recovery

    This thesis tackles the problem of recovering a high-resolution image from a single compressed frame. A new image prior devised from the Pearson type VII density is integrated with a Markov random field model, which has desirable robustness properties. A fully automated hyperparameter estimation procedure for this approach is developed, which makes it advantageous in comparison with alternatives. Although this recovery algorithm is very simple to implement, it achieves statistically significant improvements over previous results in under-determined problem settings, and it is able to recover images that contain texture. This advancement opens up opportunities for several potential extensions, of which we pursue two: (i) Most previous work does not consider any specific extra information to recover the signal. This thesis therefore exploits the similarity between the signal of interest and a consecutive motionless frame. The additional similarity information is incorporated into a probabilistic image prior based on the Pearson type VII Markov random field model. Results on both synthetic and real Magnetic Resonance Imaging (MRI) data demonstrate the effectiveness of our method in both the compressed setting and classical super-resolution experiments. (ii) This thesis also presents a multi-task approach to signal recovery that shares higher-level hyperparameters which relate not to the actual content of the signals of interest but only to their statistical characteristics. Our approach leads to a very simple model and algorithm that can be used to recover multiple signals simultaneously.
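
    A sketch of the prior idea: the Pearson type VII density yields a log(1 + x^2/c)-type penalty on neighboring pixel differences, and a MAP estimate can be computed by gradient descent on the penalized least-squares objective. The 1-D setup, parameter values, and plain gradient descent in this Python sketch are illustrative simplifications, not the thesis's model or its automated hyperparameter estimation:

    import numpy as np

    rng = np.random.default_rng(5)

    # Under-determined recovery: y = A x + noise, with a Pearson type VII
    # (heavy-tailed) penalty on first differences as a robust MRF prior.
    n, m = 200, 80
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[60:120] = 1.0; x_true[150:170] = -0.7   # piecewise-flat signal
    y = A @ x_true + 0.01 * rng.normal(size=m)

    lam, c = 0.05, 0.01   # penalty weight and scale (illustrative values)

    def grad(x):
        # Objective: ||A x - y||^2 + lam * sum_i log(1 + d_i^2 / c), d = diff(x).
        d = np.diff(x)
        w = 2 * d / (c + d ** 2)          # derivative of log(1 + d^2/c) w.r.t. d
        g_pen = np.zeros_like(x)
        g_pen[:-1] -= w                   # map back through diff^T
        g_pen[1:] += w
        return 2 * A.T @ (A @ x - y) + lam * g_pen

    x = np.zeros(n)
    for _ in range(5000):
        x -= 0.05 * grad(x)               # fixed step chosen by hand

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))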