
    On the Expectation-Maximization Algorithm for Rice-Rayleigh Mixtures With Application to Noise Parameter Estimation in Magnitude MR Datasets

    Magnitude magnetic resonance (MR) images are noise-contaminated measurements of the true signal, and assessing the noise is important in many applications. A recently introduced approach models the magnitude MR datum at each voxel as a mixture of up to one Rayleigh and an a priori unspecified number of Rice components, all with a common noise parameter. The Expectation-Maximization (EM) algorithm was developed for parameter estimation, with the mixing-component membership of each voxel as the missing observation. This paper revisits the EM algorithm by introducing additional missing observations into the estimation problem so that the complete (observed and missing) dataset can be modeled in terms of a regular exponential family. Both the EM algorithm and variance estimation then become fairly straightforward, with no need for potentially unstable numerical optimization methods. Compared to local neighborhood- and wavelet-based noise-parameter estimation methods, the new EM-based approach is seen to perform well not only on simulated datasets but also on physical phantom and clinical imaging data.
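    The exponential-family structure the abstract refers to can be illustrated on the Rayleigh component alone, which admits a closed-form maximum-likelihood noise estimate. This is only a minimal sketch under assumed values (the paper's full EM over a Rayleigh/Rice mixture, which involves Bessel-function ratios, is not reproduced here):

```python
import math
import random

random.seed(0)
sigma_true = 5.0  # assumed "true" noise level for this simulation

# Background (air) voxels of a magnitude MR image carry no signal and are
# Rayleigh distributed; draw samples via the inverse-CDF transform.
background = [sigma_true * math.sqrt(-2.0 * math.log(1.0 - random.random()))
              for _ in range(20000)]

# Closed-form ML update for the noise parameter under a pure Rayleigh model:
# sigma^2 = sum(x_i^2) / (2N).  In the paper, an update of this
# exponential-family form sits inside the full Rayleigh/Rice mixture EM,
# weighted by the voxels' component memberships.
sigma_hat = math.sqrt(sum(x * x for x in background) / (2 * len(background)))
```

The simplicity of this update is the point of the exponential-family reformulation: no iterative numerical optimization is needed within the M-step.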

    Parameter Estimation for Superimposed Weighted Exponentials

    Modeling measured signals as superimposed exponentials in white Gaussian noise is a popular and effective approach. However, estimating the parameters of the assumed model is challenging, especially when the data record is short, the signal strength is low, or the parameters are closely spaced. In this dissertation, we first review the most effective parameter estimation scheme for the superimposed exponential model: maximum likelihood. We then provide a historical review of the linear prediction approach to parameter estimation for the same model. After identifying the improvements made to linear prediction and demonstrating their weaknesses, we introduce a completely tractable and statistically sound modification to linear prediction that we call iterative generalized least squares. It is shown that our algorithm minimizes the exact maximum likelihood cost function for the superimposed exponential problem and is therefore equivalent to the previously developed maximum likelihood approach. However, our algorithm is indeed linear prediction, and thus revives a methodology previously categorized as inferior to maximum likelihood. With our modification, the insight provided by linear prediction can be carried to actual applications. We demonstrate this by developing an effective algorithm for deep level transient spectroscopy analysis. The signal of deep level transient spectroscopy is not a straightforward superposition of exponentials. However, with our methodology, an estimator based on the exact maximum likelihood cost function for the actual signal is quickly derived. At the end of the dissertation, we verify that our estimator extends the current capabilities of deep level transient spectroscopy analysis.
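    The classical linear-prediction step that the dissertation builds on can be sketched as follows. This is plain Prony-style least squares on a noiseless two-component signal with assumed decay ratios and amplitudes, not the iterative generalized least squares refinement itself:

```python
import numpy as np

# Assumed two-component signal: y[n] = 2*(0.9)^n + 1*(0.6)^n, noiseless.
r1, r2 = 0.9, 0.6
n = np.arange(32)
y = 2.0 * r1**n + 1.0 * r2**n

# Linear prediction: every sample obeys y[k] = a1*y[k-1] + a2*y[k-2],
# so the prediction coefficients follow from ordinary least squares.
A = np.column_stack([y[1:-1], y[:-2]])
b = y[2:]
a1, a2 = np.linalg.lstsq(A, b, rcond=None)[0]

# The per-sample decay ratios are the roots of z^2 - a1*z - a2.
est = sorted(np.roots([1.0, -a1, -a2]).real, reverse=True)
```

In noise this plain least-squares fit is biased, which is the weakness the iterative generalized least squares modification addresses by reweighting the equations toward the exact maximum likelihood cost.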

    Parameter estimation of multicomponent transient signals using deconvolution and ARMA modelling techniques

    Parameter estimation of transient signals with real decaying exponential constants is a difficult but important problem that arises in many scientific disciplines. The frequency-domain method of analysis that involves the Gardner transformation and conventional inverse filtering often degrades the quality of the deconvolved data, leading to inaccurate results, especially for noisy data. An improved method based on the combination of the Gardner transformation, optimal compensation deconvolution, and signal modelling techniques is suggested in this paper. In this method of analysis the exponential signal is converted to a convolution model whose input is a train of weighted delta functions that contains the signal parameters to be determined. The resolution of the estimated decay rates is poor if the conventional fast Fourier transform (FFT) algorithm is used to analyse the resulting deconvolved data. This shortcoming can be alleviated by using an autoregressive moving average (ARMA) model whose AR parameters are determined by solving high-order Yule–Walker equations (HOYWE) via the singular value decomposition (SVD) algorithm. The effect of sampling conditions, noise level, number of components, and relative sizes of the signal parameters on the performance of this modified method of analysis is examined in this paper. Simulation results show that high-resolution estimates of decay constants can be obtained when the above signal processing techniques are used to analyse multiexponential signals with varied signal-to-noise ratio (SNR). This approach also provides a graphical procedure for detecting and validating the number of exponential signals present in the data. Some computer simulation results are presented to justify the need for this modified method of analysis.
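    The HOYWE/SVD step can be sketched as follows: choose a prediction order deliberately higher than the number of components and solve the overdetermined equations with a rank-truncated SVD. This minimal sketch uses a noiseless two-component transient with assumed decay ratios, whereas the paper applies the step to Gardner-deconvolved data:

```python
import numpy as np

# Assumed two-component multiexponential transient, noiseless for the sketch.
r1, r2 = 0.85, 0.5
n = np.arange(40)
y = 1.5 * r1**n + 1.0 * r2**n

# High-order Yule-Walker-style prediction equations: AR order p = 4 is
# deliberately larger than the 2 true components; a rank-2 truncated SVD
# gives the minimum-norm solution, which suppresses the noise subspace.
p, rank = 4, 2
A = np.column_stack([y[p - 1 - i : len(y) - 1 - i] for i in range(p)])
b = y[p:]
U, s, Vt = np.linalg.svd(A, full_matrices=False)
a = Vt[:rank].T @ ((U[:, :rank].T @ b) / s[:rank])

# The true decay ratios appear among the roots of the degree-p prediction
# polynomial; the remaining p - 2 roots are extraneous.
roots = np.roots(np.concatenate(([1.0], -a)))
```

The rank truncation is what makes the high-order formulation usable: with noisy data the small singular values carry mostly noise, and discarding them stabilizes the AR estimate.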

    Generalized SURE for Exponential Families: Applications to Regularization

    Stein's unbiased risk estimate (SURE) was proposed by Stein for the independent, identically distributed (iid) Gaussian model in order to derive estimates that dominate least-squares (LS). In recent years, the SURE criterion has been employed in a variety of denoising problems for choosing regularization parameters that minimize an estimate of the mean-squared error (MSE). However, its use has been limited to the iid case, which precludes many important applications. In this paper we begin by deriving a SURE counterpart for general, not necessarily iid, distributions from the exponential family. This enables extending the SURE design technique to a much broader class of problems. Based on this generalization, we suggest a new method for choosing regularization parameters in penalized LS estimators. We then demonstrate its superior performance over the conventional generalized cross-validation approach and the discrepancy method in the context of image deblurring and deconvolution. The SURE technique can also be used to design estimates without predefining their structure. However, allowing for too many free parameters impairs the performance of the resulting estimates. To address this inherent tradeoff we propose a regularized SURE objective. Based on this design criterion, we derive a wavelet denoising strategy that is similar in spirit to the standard soft-threshold approach but can lead to improved MSE performance. Comment: to appear in the IEEE Transactions on Signal Processing.
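    The baseline use of SURE that the paper generalizes can be sketched in the classical iid Gaussian setting: the risk of soft-thresholding is estimated from the data alone and the threshold minimizing that estimate is selected. The sparse signal, noise level, and threshold grid below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical iid Gaussian setting (the paper's contribution is extending
# SURE beyond this case): a sparse mean vector observed in unit noise.
N, sigma = 2000, 1.0
theta = np.zeros(N)
theta[:100] = 5.0                      # assumed sparse signal
x = theta + sigma * rng.standard_normal(N)

def soft(x, t):
    """Soft-threshold estimator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sure_soft(x, t, sigma):
    """Stein's unbiased estimate of the MSE of soft-thresholding at t."""
    n = len(x)
    return (n * sigma**2
            - 2.0 * sigma**2 * np.sum(np.abs(x) <= t)
            + np.sum(np.minimum(np.abs(x), t)**2))

# Pick the threshold that minimizes the risk estimate -- no access to
# theta is needed, which is the practical appeal of SURE.
ts = np.linspace(0.0, 4.0, 81)
risks = np.array([sure_soft(x, t, sigma) for t in ts])
t_best = ts[np.argmin(risks)]
```

The unbiasedness of `sure_soft` holds coordinate-wise under the Gaussian model; the paper's generalization derives the analogous identity for non-iid exponential-family observations.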