
    Further results on independent Metropolis-Hastings-Klein sampling

    Sampling from a lattice Gaussian distribution is emerging as an important problem in coding and cryptography. This paper gives a further analysis of the independent Metropolis-Hastings-Klein (MHK) algorithm we presented at ISIT 2015. We derive the exact spectral gap of the induced Markov chain, which dictates the convergence rate of the independent MHK algorithm. Then, we apply the independent MHK algorithm to lattice decoding and obtain the decoding complexity for solving the closest vector problem (CVP) as Õ(e^{‖Bx−c‖² / min_i ‖b̂_i‖²}). Finally, the tradeoff between decoding radius and complexity is also established.
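
    As a rough illustration of the independence-sampler mechanics underlying the MHK algorithm, the sketch below targets a one-dimensional discrete Gaussian over the integers. A rounded continuous Gaussian stands in for Klein's algorithm as the proposal, so only the accept/reject structure is shown, not the paper's lattice sampler; all parameter values are illustrative.

```python
import numpy as np

# Minimal sketch of an independent Metropolis-Hastings sampler targeting a
# one-dimensional discrete Gaussian over the integers, with unnormalized
# weight exp(-(x - c)^2 / (2 sigma^2)). The actual MHK algorithm uses
# Klein's algorithm over a lattice basis as its proposal; a rounded
# continuous Gaussian stands in for it here.

def target_w(x, sigma, c):
    return np.exp(-(x - c) ** 2 / (2 * sigma ** 2))

def proposal_w(x, sigma_q, c):
    # Approximate weight of the rounded-Gaussian proposal; the exact pmf
    # would integrate the Gaussian over the unit-width rounding bin.
    return np.exp(-(x - c) ** 2 / (2 * sigma_q ** 2))

def independent_mh(n_iters, sigma=2.0, c=0.3, sigma_q=3.0, seed=0):
    rng = np.random.default_rng(seed)
    draw = lambda: int(np.rint(rng.normal(c, sigma_q)))
    x = draw()
    chain = []
    for _ in range(n_iters):
        y = draw()
        # Independence-sampler ratio: pi(y) q(x) / (pi(x) q(y)).
        ratio = (target_w(y, sigma, c) * proposal_w(x, sigma_q, c)) / \
                (target_w(x, sigma, c) * proposal_w(y, sigma_q, c))
        if rng.random() < min(1.0, ratio):
            x = y
        chain.append(x)
    return np.array(chain)
```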

    Lattice Gaussian Sampling by Markov Chain Monte Carlo: Bounded Distance Decoding and Trapdoor Sampling

    Sampling from the lattice Gaussian distribution plays an important role in various research fields. In this paper, the Markov chain Monte Carlo (MCMC)-based sampling technique is advanced on several fronts. Firstly, the spectral gap of the independent Metropolis-Hastings-Klein (MHK) algorithm is derived; the analysis is then extended to Peikert's algorithm and rejection sampling, and we show that the independent MHK algorithm exhibits faster convergence. Then, the performance of bounded distance decoding using MCMC is analyzed, revealing a flexible trade-off between the decoding radius and complexity. MCMC is further applied to trapdoor sampling, again offering a trade-off between security and complexity. Finally, the independent multiple-try Metropolis-Klein (MTMK) algorithm is proposed to enhance the convergence rate. The proposed algorithms allow parallel implementation, which is beneficial for practical applications.
    Comment: submitted to IEEE Transactions on Information Theory
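
    The multiple-try idea is what makes the scheme parallel-friendly: several candidates are drawn, and can be scored concurrently, at each step. The sketch below implements a generic multiple-try Metropolis step with an independent proposal on the same toy one-dimensional discrete Gaussian; it is not the MTMK algorithm itself, whose proposal is Klein's algorithm.

```python
import numpy as np

# Generic multiple-try Metropolis step with an independent proposal
# (Liu, Liang & Wong style): draw k candidates, pick one in proportion to
# its importance weight, then accept or reject. The toy target is a 1-D
# discrete Gaussian; the rounded-Gaussian proposal weight is approximate,
# as in the previous sketch.

def mtm_independent(n_iters, k=4, sigma=2.0, c=0.3, sigma_q=3.0, seed=0):
    rng = np.random.default_rng(seed)

    def pi(x):   # unnormalized target weight
        return np.exp(-(x - c) ** 2 / (2 * sigma ** 2))

    def q(x):    # approximate proposal weight
        return np.exp(-(x - c) ** 2 / (2 * sigma_q ** 2))

    def draw():
        return int(np.rint(rng.normal(c, sigma_q)))

    x = draw()
    chain = []
    for _ in range(n_iters):
        ys = np.array([draw() for _ in range(k)])   # k candidates (parallelizable)
        w = np.array([pi(y) / q(y) for y in ys])    # importance weights
        j = rng.choice(k, p=w / w.sum())            # select one candidate
        # Acceptance probability for the independent-proposal MTM variant.
        a = w.sum() / (w.sum() - w[j] + pi(x) / q(x))
        if rng.random() < min(1.0, a):
            x = int(ys[j])
        chain.append(x)
    return np.array(chain)
```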

    Adaptive hybrid Metropolis-Hastings samplers for DSGE models

    Bayesian inference for DSGE models is typically carried out by single-block random walk Metropolis, involving very high computing costs. This paper combines two features, adaptive independent Metropolis-Hastings and parallelisation, to achieve large computational gains in DSGE model estimation. The history of the draws is used to continuously improve a t-copula proposal distribution, and an adaptive random walk step is inserted at predetermined intervals to escape difficult points. In linear estimation applications to a medium-scale (23 parameters) and a large-scale (51 parameters) DSGE model, the computing time per independent draw is reduced by 85% and by 65-75%, respectively. In a stylised nonlinear estimation example (13 parameters) the reduction is 80%. The sampler is also better suited to parallelisation than random walk Metropolis or blocking strategies, so that the effective computational gains, i.e. the reduction in wall-clock time per independent equivalent draw, can potentially be much larger.
    Keywords: Markov chain Monte Carlo (MCMC); adaptive Metropolis-Hastings; parallel algorithm; DSGE model; copula
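
    A minimal sketch of the adapt-then-propose loop is given below, with a multivariate normal fitted to the draw history standing in for the paper's t-copula proposal; `log_post`, the refit schedule, and all tuning constants are illustrative placeholders, not the paper's settings.

```python
import numpy as np

# Adaptive hybrid Metropolis-Hastings sketch: independent proposals from a
# Gaussian fitted to the chain's history, with a random-walk step inserted
# at fixed intervals to escape difficult points.

def mvn_logpdf(z, mean, cov):
    d = len(mean)
    L = np.linalg.cholesky(cov)
    u = np.linalg.solve(L, z - mean)
    return -0.5 * (d * np.log(2 * np.pi)
                   + 2 * np.sum(np.log(np.diag(L))) + u @ u)

def adaptive_hybrid_mh(log_post, x0, n_iters, rw_every=50, rw_scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    d, lp_x = len(x), log_post(x)
    history, mean, cov = [x.copy()], x.copy(), np.eye(d)
    for t in range(1, n_iters + 1):
        if t % rw_every == 0:
            # Occasional random-walk move (symmetric, so no proposal terms).
            y = x + rw_scale * rng.standard_normal(d)
            lp_y = log_post(y)
            log_a = lp_y - lp_x
        else:
            # Independent proposal from the current fitted Gaussian.
            y = rng.multivariate_normal(mean, cov)
            lp_y = log_post(y)
            log_a = (lp_y - lp_x
                     + mvn_logpdf(x, mean, cov) - mvn_logpdf(y, mean, cov))
        if np.log(rng.random()) < log_a:
            x, lp_x = y, lp_y
        history.append(x.copy())
        if t % 100 == 0 and len(history) > 2 * d:
            # Refit the proposal from the accumulated draws.
            H = np.array(history)
            mean = H.mean(axis=0)
            cov = np.cov(H, rowvar=False) + 1e-6 * np.eye(d)
    return np.array(history)
```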

    Sliced lattice Gaussian sampling: convergence improvement and decoding optimization

    Sampling from the lattice Gaussian distribution has emerged as a key problem in coding and decoding, and Markov chain Monte Carlo (MCMC) methods from statistics offer an effective way to solve it. In this paper, the sliced lattice Gaussian sampling algorithm is proposed to further improve the convergence of Markov chains targeting the lattice Gaussian distribution. We demonstrate that the resulting Markov chain is uniformly ergodic, namely, it converges exponentially fast to the stationary distribution. We also investigate the convergence rate of the underlying Markov chain and show that the proposed sliced sampling algorithm converges faster than the independent Metropolis-Hastings-Klein (IMHK) sampling algorithm. In addition, the decoding performance of the proposed sampling algorithm is analyzed, and the optimization with respect to the standard deviation σ>0 of the target lattice Gaussian distribution is given. A mechanism based on distance judgement and dynamic updating is then proposed for choosing σ to further improve decoding performance. Finally, simulation results for multiple-input multiple-output (MIMO) detection are presented to confirm the performance gains from the convergence enhancement and the parameter optimization.
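
    For readers unfamiliar with slice sampling, the following is a generic univariate slice sampler with stepping-out and shrinkage (after Neal, 2003); the paper's algorithm is a lattice-specific variant targeting the discrete Gaussian, which this sketch does not reproduce.

```python
import numpy as np

# Generic univariate slice sampler: draw a vertical level under the density,
# find a horizontal bracket containing the current point by stepping out,
# then sample from the bracket with shrinkage.

def slice_sample(logf, x0, n_iters, w=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, out = float(x0), []
    for _ in range(n_iters):
        log_u = logf(x) + np.log(rng.random())   # vertical level of the slice
        left = x - w * rng.random()              # randomly placed initial bracket
        right = left + w
        while logf(left) > log_u:                # step out until outside the slice
            left -= w
        while logf(right) > log_u:
            right += w
        while True:                              # sample with shrinkage
            y = rng.uniform(left, right)
            if logf(y) > log_u:
                x = y
                break
            if y < x:
                left = y
            else:
                right = y
        out.append(x)
    return np.array(out)

# Example: sample from a standard Gaussian.
samples = slice_sample(lambda z: -0.5 * z * z, 0.0, 1000)
```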

    A general approach to Bayesian portfolio optimization

    We develop a general Bayesian approach to portfolio optimization that accounts for estimation risk and the stylized facts of empirical finance. The posterior distribution of the unknown model parameters is approximated using a parallel tempering algorithm, and the portfolio optimization uses the first two moments of the predictive discrete asset return distribution. For illustration, we apply our method to empirical stock market data, where daily asset log-returns are assumed to follow an orthogonal MGARCH process with t-distributed perturbations. Our results are compared with portfolios suggested by popular optimization strategies.
    Keywords: Bayesian portfolio optimization; Gordin's condition; Markov chain Monte Carlo; stylized facts
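
    As a sketch of the posterior-approximation machinery, here is a bare-bones parallel tempering loop: several random-walk chains run at different temperatures and periodically attempt state swaps. The bimodal toy target and all tuning constants are placeholders, not the MGARCH posterior of the paper.

```python
import numpy as np

# Parallel tempering sketch: chain i targets pi(x)^beta_i; hotter chains
# (smaller beta) move more freely, and neighbour swaps let their states
# percolate down to the cold chain (beta = 1), which is the one we keep.

def parallel_tempering(log_post, x0, n_iters, betas=(1.0, 0.5, 0.25),
                       step=0.5, swap_every=10, seed=0):
    rng = np.random.default_rng(seed)
    m = len(betas)
    xs = [np.asarray(x0, float).copy() for _ in range(m)]
    lps = [log_post(x) for x in xs]
    cold = []
    for t in range(n_iters):
        for i in range(m):                       # within-chain random-walk updates
            y = xs[i] + step / betas[i] ** 0.5 * rng.standard_normal(xs[i].shape)
            lp_y = log_post(y)
            if np.log(rng.random()) < betas[i] * (lp_y - lps[i]):
                xs[i], lps[i] = y, lp_y
        if t % swap_every == 0:                  # attempt a neighbour swap
            i = rng.integers(m - 1)
            log_a = (betas[i] - betas[i + 1]) * (lps[i + 1] - lps[i])
            if np.log(rng.random()) < log_a:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
                lps[i], lps[i + 1] = lps[i + 1], lps[i]
        cold.append(xs[0].copy())                # record the beta = 1 chain
    return np.array(cold)

# Example: a bimodal target where a single cold chain mixes poorly.
logp = lambda th: np.logaddexp(-0.5 * np.sum((th - 3) ** 2),
                               -0.5 * np.sum((th + 3) ** 2))
draws = parallel_tempering(logp, x0=np.zeros(2), n_iters=5000)
```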

    Metropolis-Hastings prefetching algorithms

    Prefetching is a simple and general method for single-chain parallelisation of the Metropolis-Hastings algorithm, based on the idea of evaluating the posterior in parallel and ahead of time. In this paper, improved Metropolis-Hastings prefetching algorithms are presented and evaluated. It is shown how to use available information to make better predictions of the future states of the chain and thereby increase the efficiency of prefetching considerably. The optimal acceptance rate for the prefetching random walk Metropolis-Hastings algorithm is obtained for a special case, and it is shown to decrease with the number of processors employed. The performance of the algorithms is illustrated using a well-known macroeconomic model. Bayesian estimation of DSGE models, linearly or nonlinearly approximated, is identified as a potential area of application for prefetching methods. The generality of the proposed method, however, suggests that it could be applied in many other contexts as well.
    Keywords: prefetching; Metropolis-Hastings; parallel computing; DSGE models; optimal acceptance rate
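
    The sketch below illustrates one level of prefetching for random-walk Metropolis: the posterior is evaluated concurrently at the first proposal and at the two proposals that could follow it, one for each accept/reject branch, so each round advances the chain two steps from three evaluations. A real implementation would use processes or MPI for expensive posteriors and, as the paper discusses, predict the likelier branch; both are omitted here, and `log_post` is a placeholder.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def log_post(theta):
    # Placeholder posterior; in practice this is the expensive evaluation
    # that makes speculative parallel computation worthwhile.
    return -0.5 * float(np.sum(theta ** 2))

def prefetching_rw_mh(x0, n_rounds, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp_x = log_post(x)
    chain = []
    with ThreadPoolExecutor(max_workers=3) as pool:
        for _ in range(n_rounds):
            y1 = x + step * rng.standard_normal(x.shape)
            y2a = y1 + step * rng.standard_normal(x.shape)  # branch: y1 accepted
            y2r = x + step * rng.standard_normal(x.shape)   # branch: y1 rejected
            # All three evaluations run concurrently (the prefetch).
            lp1, lp2a, lp2r = pool.map(log_post, [y1, y2a, y2r])
            # Step 1: ordinary accept/reject for y1.
            if np.log(rng.random()) < lp1 - lp_x:
                x, lp_x, y2, lp2 = y1, lp1, y2a, lp2a
            else:
                y2, lp2 = y2r, lp2r
            chain.append(x.copy())
            # Step 2: reuse the prefetched evaluation for the realised branch.
            if np.log(rng.random()) < lp2 - lp_x:
                x, lp_x = y2, lp2
            chain.append(x.copy())
    return np.array(chain)
```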

    Training Restricted Boltzmann Machines on Word Observations

    The restricted Boltzmann machine (RBM) is a flexible tool for modeling complex data; however, there have been significant computational difficulties in using RBMs to model high-dimensional multinomial observations. In natural language processing applications, words are naturally modeled by K-ary discrete distributions, where K is determined by the vocabulary size and can easily be in the hundreds of thousands. The conventional approach to training RBMs on word observations is limited because it requires sampling the states of K-way softmax visible units during block Gibbs updates, an operation that takes time linear in K. In this work, we address this issue by employing a more general class of Markov chain Monte Carlo operators on the visible units, yielding updates with computational complexity independent of K. We demonstrate the success of our approach by training RBMs on hundreds of millions of word n-grams using larger vocabularies than previously feasible, and by using the learned features to improve performance on chunking and sentiment classification tasks, achieving state-of-the-art results on the latter.
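
    The sketch below conveys the flavour of such an update for a single K-ary softmax unit: a word is proposed from a fixed unigram distribution and accepted or rejected with a Metropolis-Hastings ratio that needs only the unnormalized scores of the current and proposed words. It is a simplified stand-in, not the paper's exact operator: drawing from the unigram in O(1) would use an alias table in practice, and in a real RBM the score of a single word would be computed on demand rather than stored for all K.

```python
import numpy as np

# Metropolis-Hastings update for a categorical (softmax) unit with a fixed
# unigram proposal. Only scores[j] and scores[k] are touched per step, so
# the accept/reject itself costs O(1) in the vocabulary size K. Caveats:
# rng.choice below is O(K) (an alias table fixes this in practice), and the
# full `scores` array is precomputed here purely for brevity.

def mh_softmax_update(scores, current, unigram, rng, n_steps=5):
    """scores[k]: unnormalized log-probability of word k under the model;
    unigram[k]: fixed proposal probability of word k."""
    k = current
    for _ in range(n_steps):
        j = rng.choice(len(unigram), p=unigram)  # proposal independent of k
        log_a = (scores[j] - scores[k]) + np.log(unigram[k]) - np.log(unigram[j])
        if np.log(rng.random()) < log_a:
            k = j
    return int(k)

# Example with a flat unigram over a 10,000-word vocabulary.
rng = np.random.default_rng(0)
K = 10_000
word = mh_softmax_update(rng.standard_normal(K), 3, np.full(K, 1.0 / K), rng)
```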