
By Prof. Alistair Sinclair (lecturer); scribes: An and Anindya De

Abstract

Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications. They may be distributed outside this class only with the permission of the Instructor.

2.1 Markov Chains

We begin by reviewing the basic goal of the Markov Chain Monte Carlo paradigm. Assume a finite state space Ω and a weight function w: Ω → ℝ⁺. Our goal is to design a sampling process that samples each element x ∈ Ω with probability π(x) = w(x)/Z, where Z = Σ_{x∈Ω} w(x) is the normalization factor. Often we do not know the normalization factor Z a priori, and in some problems the real goal is to estimate Z. With this motivation, we now define a Markov chain.

Definition 2.1 A Markov chain on Ω is a stochastic process {X₀, X₁, ..., Xₜ, ...} with each Xᵢ ∈ Ω such that Pr[Xₜ₊₁ = y | X₀ = x₀, ..., Xₜ = x] = Pr[Xₜ₊₁ = y | Xₜ = x]; i.e., the distribution of the next state depends only on the current state.
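As a minimal sketch of the MCMC goal described above — sampling each x with probability w(x)/Z without ever computing Z — the following Python snippet runs a simple Metropolis-style chain with a uniform proposal over the state space. The Metropolis acceptance rule and the uniform proposal are illustrative choices, not details taken from these notes:

```python
import random
from collections import Counter

def metropolis_sample(states, w, steps=100_000, seed=0):
    """Simulate a Metropolis chain whose stationary distribution
    is pi(x) = w(x)/Z on a finite state space.

    Proposal: pick a uniformly random state y; accept the move
    with probability min(1, w(y)/w(x)).  Note that only ratios
    of weights appear, so the normalization Z is never needed.
    Returns empirical visit frequencies, which approximate pi.
    """
    rng = random.Random(seed)
    x = states[0]
    counts = Counter()
    for _ in range(steps):
        y = rng.choice(states)          # uniform proposal
        if rng.random() < min(1.0, w(y) / w(x)):
            x = y                       # accept; otherwise stay at x
        counts[x] += 1
    return {s: counts[s] / steps for s in states}

# Example: weights w(a)=1, w(b)=2, w(c)=3, so Z=6 and
# pi = (1/6, 2/6, 3/6).  The empirical frequencies should be close.
freq = metropolis_sample(['a', 'b', 'c'], {'a': 1, 'b': 2, 'c': 3}.get,
                         steps=200_000, seed=1)
```

For a three-state chain with a uniform proposal the chain mixes rapidly, so even modest run lengths give frequencies close to π; in general the required number of steps is governed by the chain's mixing time.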

Year: 2011
OAI identifier: oai:CiteSeerX.psu:10.1.1.187.9171
Provided by: CiteSeerX

