
    A multiple-try Metropolis-Hastings algorithm with tailored proposals

    We present a new multiple-try Metropolis-Hastings algorithm designed to be especially beneficial when a tailored proposal distribution is available. The algorithm is based on a given acyclic graph G, where one node of G, say k, contains the current state of the Markov chain and the remaining nodes contain proposed states generated by applying the tailored proposal distribution. The Metropolis-Hastings algorithm alternates between two types of updates. The first update type uses the tailored proposal distribution to generate new states in all nodes of G except node k. The second update type generates a new value for k, thereby changing the current state. We evaluate the effectiveness of the proposed scheme in an example with previously defined target and proposal distributions.
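
    The graph-based scheme above builds on the multiple-try Metropolis idea of weighting several trial points and accepting one with a generalised ratio. As a minimal illustration of that underlying mechanism (not the paper's graph construction), here is a standard multiple-try Metropolis step with a symmetric Gaussian proposal; the target, step size, and number of tries are all assumptions:

```python
import math
import random

def multiple_try_metropolis(log_target, x0, n_steps, n_tries=5, step=1.0, seed=0):
    """Generic multiple-try Metropolis with a symmetric Gaussian proposal.

    An illustrative stand-in for the graph-based tailored-proposal scheme;
    weights are w(y) = pi(y) because the proposal is symmetric.
    """
    rng = random.Random(seed)
    x = x0
    chain = [x]
    for _ in range(n_steps):
        # Draw several trial points around the current state.
        ys = [x + rng.gauss(0.0, step) for _ in range(n_tries)]
        wy = [math.exp(log_target(y)) for y in ys]
        total_wy = sum(wy)
        if total_wy == 0.0:
            chain.append(x)
            continue
        # Select one trial proportionally to its target weight.
        u, acc, j = rng.random() * total_wy, 0.0, 0
        for j, w in enumerate(wy):
            acc += w
            if u <= acc:
                break
        y = ys[j]
        # Reference points drawn around the selected trial, plus x itself.
        xs = [y + rng.gauss(0.0, step) for _ in range(n_tries - 1)] + [x]
        total_wx = sum(math.exp(log_target(z)) for z in xs)
        # Generalised acceptance ratio for multiple-try Metropolis.
        alpha = min(1.0, total_wy / total_wx) if total_wx > 0 else 1.0
        x = y if rng.random() < alpha else x
        chain.append(x)
    return chain

# Sample a standard normal target, starting away from the mode.
chain = multiple_try_metropolis(lambda z: -0.5 * z * z, x0=3.0, n_steps=2000)
```

    The paper's tailored proposals would replace the generic Gaussian draws, and its graph structure would organise how the trial and reference points are shared between updates.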

    Ensemble updating of binary state vectors by maximising the expected number of unchanged components

    In recent years, several ensemble-based filtering methods have been proposed and studied. The main challenge in such procedures is updating a prior ensemble to a posterior ensemble at every step of the filtering recursions. In the famous ensemble Kalman filter, the assumption of a linear-Gaussian state space model is introduced to overcome this issue, and the prior ensemble is updated with a linear shift closely related to the traditional Kalman filter equations. In the current article, we consider how the ideas underlying the ensemble Kalman filter can be applied when the components of the state vectors are binary variables. While the ensemble Kalman filter relies on Gaussian approximations of the forecast and filtering distributions, we instead use first-order Markov chains. To update the prior ensemble, we simulate samples from a distribution constructed so that the expected number of equal components in a prior and posterior state vector is maximised. We demonstrate the performance of our approach in a simulation example inspired by the movement of oil and water in a petroleum reservoir, where a more naïve updating approach is also applied for comparison. Here, we observe that the Frobenius norm of the difference between the estimated and the true marginal filtering probabilities is roughly halved with our method compared to the naïve approach, indicating that our method is superior. Finally, we discuss how our methodology can be generalised from the binary setting to more complicated situations.
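
    The core of the update is a distribution that keeps each binary component unchanged as often as possible while matching the posterior marginals. For a single component with prior probability p and posterior probability q of being 1, the maximal coupling of Bernoulli(p) and Bernoulli(q) achieves exactly this; the sketch below (with assumed values p = 0.7, q = 0.4) illustrates that one-component idea, not the paper's full first-order Markov-chain construction:

```python
import random

def coupled_update(x, p, q, rng):
    """Move a prior sample x ~ Bernoulli(p) to a posterior sample ~ Bernoulli(q).

    Maximal coupling of the two Bernoulli distributions: the component keeps
    its value with the largest probability compatible with the posterior
    marginal, namely P(unchanged) = min(p, q) + min(1 - p, 1 - q).
    """
    if x == 1:
        keep = min(p, q) / p if p > 0 else 0.0
        return 1 if rng.random() < keep else 0
    keep = min(1 - p, 1 - q) / (1 - p) if p < 1 else 0.0
    return 0 if rng.random() < keep else 1

# Illustrative marginals (assumed, not from the paper).
rng = random.Random(1)
p, q = 0.7, 0.4
xs = [1 if rng.random() < p else 0 for _ in range(50000)]
ys = [coupled_update(x, p, q, rng) for x in xs]
```

    The updated samples have the posterior marginal q while agreeing with the prior samples as often as any valid coupling allows.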

    Approximate forward–backward algorithm for a switching linear Gaussian model

    A hidden Markov model with two hidden layers is considered. The bottom layer is a Markov chain, and given this the variables in the second hidden layer are assumed conditionally independent and Gaussian distributed. The observation process is Gaussian with mean values that are linear functions of the second hidden layer. The forward–backward algorithm is not directly feasible for this model, as the recursions result in a mixture of Gaussian densities where the number of terms grows exponentially with the length of the Markov chain. By dropping the less important Gaussian terms, an approximate forward–backward algorithm is defined. This yields a computationally feasible algorithm that generates samples from an approximation to the conditional distribution of the unobserved layers given the data. The approximate algorithm is also used as a proposal distribution in a Metropolis–Hastings setting, which gives high acceptance rates and good convergence and mixing properties. The model considered is related to what are known as switching linear dynamical systems. The proposed algorithm can in principle also be used for these models, so its potential use is large. In simulation examples the algorithm is applied to the problem of seismic inversion. The simulations demonstrate the effectiveness and quality of the proposed approximate algorithm.
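
    The approximation hinges on discarding the less important Gaussian terms at each recursion step so the mixture size stays bounded instead of growing exponentially. A minimal sketch of such a pruning step follows; the weights, component labels, and the "keep the highest-weight terms" cutoff are assumptions for illustration, and the paper's criterion for importance may differ:

```python
def prune_mixture(weights, components, max_terms):
    """Keep the max_terms highest-weight mixture terms and renormalise.

    This mirrors the step that keeps the approximate forward recursion
    computationally feasible: after pruning, the retained weights again
    sum to one.
    """
    # Indices of the largest weights, in decreasing order of weight.
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    keep = order[:max_terms]
    kept_w = [weights[i] for i in keep]
    total = sum(kept_w)
    # Renormalise so the pruned mixture is still a probability mixture.
    return [w / total for w in kept_w], [components[i] for i in keep]

# Illustrative four-term mixture reduced to two terms.
new_w, new_c = prune_mixture([0.5, 0.3, 0.15, 0.05], ["g1", "g2", "g3", "g4"], max_terms=2)
```

    In the actual recursion each component would carry a Gaussian mean and covariance; only the bookkeeping of weights is shown here.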

    A Bayesian Model for Cross-Study Differential Gene Expression

    In this paper we define a hierarchical Bayesian model for microarray expression data collected from several studies and use it to identify genes that show differential expression between two conditions. Key features include shrinkage across both genes and studies, and flexible modeling that allows for interactions between platforms and the estimated effect, as well as concordant and discordant differential expression across studies. We evaluated the performance of our model in a comprehensive fashion, using both artificial data and a “split-study” validation approach that provides an agnostic assessment of the model's behavior not only under the null hypothesis, but also under a realistic alternative. The simulation results from the artificial data demonstrate the advantages of the Bayesian model. The 1 – AUC values for the Bayesian model are roughly half of the corresponding values for a direct combination of t- and SAM-statistics. Furthermore, the simulations provide guidelines for when the Bayesian model is most likely to be useful. Most notably, in small studies the Bayesian model generally outperforms other methods when evaluated by AUC, FDR, and MDR across a range of simulation parameters, and this difference diminishes for larger sample sizes in the individual studies. The split-study validation illustrates appropriate shrinkage of the Bayesian model in the absence of platform-, sample-, and annotation-differences that otherwise complicate experimental data analyses. Finally, we fit our model to four breast cancer studies employing different technologies (cDNA and Affymetrix) to estimate differential expression in estrogen receptor positive tumors versus negative ones. Software and data for reproducing our analysis are publicly available.
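
    The 1 – AUC comparison reported above can be computed for any gene score with the rank-based (Mann–Whitney) estimator of AUC: the probability that a randomly chosen truly differential gene scores above a randomly chosen non-differential one. The scores below are purely illustrative:

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of AUC.

    Counts pairwise wins of positive (truly differential) scores over
    negative (non-differential) scores, with ties counting half.
    """
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical scores for three differential and three null genes.
score = auc([3.1, 4.0, 5.2], [1.0, 2.5, 3.1])
```

    Reporting 1 − AUC, as the abstract does, turns this into an error-style metric where smaller is better, which makes "roughly half" directly interpretable.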