
    Gradient-free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families

    We propose Kernel Hamiltonian Monte Carlo (KMC), a gradient-free adaptive MCMC algorithm based on Hamiltonian Monte Carlo (HMC). On target densities where classical HMC is not an option due to intractable gradients, KMC adaptively learns the target's gradient structure by fitting an exponential family model in a Reproducing Kernel Hilbert Space. Computational costs are reduced by two novel efficient approximations to this gradient. While asymptotically exact, KMC mimics HMC in terms of sampling efficiency and offers substantial mixing improvements over state-of-the-art gradient-free samplers. We support our claims with experimental studies on both toy and real-world applications, including Approximate Bayesian Computation and exact-approximate MCMC. Comment: 20 pages, 7 figures
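    The abstract's core idea is to replace HMC's exact gradient with one learned from the chain's history while keeping an exact Metropolis-Hastings correction against the true target. The sketch below illustrates that idea with a simple Gaussian kernel density surrogate standing in for the paper's kernel exponential family fit; kde_score, the bandwidth, and the leapfrog settings are illustrative assumptions, not KMC's actual estimator.

```python
import numpy as np

def kde_score(x, history, bandwidth=0.5):
    """Surrogate gradient of log pi at x, estimated from past chain samples
    via a Gaussian kernel density fit (an illustrative stand-in for the
    paper's kernel exponential family estimator)."""
    diffs = history - x                                    # (n, d) array of x_i - x
    w = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * bandwidth ** 2))
    w /= w.sum() + 1e-300
    # grad_x log sum_i N(x; x_i, h^2 I) = sum_i w_i (x_i - x) / h^2
    return (w[:, None] * diffs).sum(axis=0) / bandwidth ** 2

def surrogate_hmc_step(x, log_target, history, rng, eps=0.1, n_leap=20):
    """One HMC-style proposal driven by the surrogate gradient; the
    Metropolis-Hastings correction uses the exact, gradient-free log target,
    so the chain stays exact for the true posterior."""
    p0 = rng.standard_normal(x.shape)
    q, p = x.copy(), p0 + 0.5 * eps * kde_score(x, history)   # half momentum step
    for i in range(n_leap):
        q = q + eps * p                                        # full position step
        if i < n_leap - 1:
            p = p + eps * kde_score(q, history)
    p = p + 0.5 * eps * kde_score(q, history)                  # final half step
    log_alpha = (log_target(q) - 0.5 * p @ p) - (log_target(x) - 0.5 * p0 @ p0)
    return q if np.log(rng.uniform()) < log_alpha else x
```

    Because the leapfrog integrator driven by the surrogate gradient is still deterministic, reversible, and volume-preserving, the acceptance step with the exact target keeps the sampler correct even when the surrogate is poor; the surrogate only affects efficiency.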

    Reversible Jump Metropolis Light Transport using Inverse Mappings

    We study Markov Chain Monte Carlo (MCMC) methods operating in primary sample space and their interactions with multiple sampling techniques. We observe that incorporating the sampling technique into the state of the Markov Chain, as done in Multiplexed Metropolis Light Transport (MMLT), impedes the ability of the chain to properly explore the path space, as transitions between sampling techniques lead to disruptive alterations of path samples. To address this issue, we reformulate Multiplexed MLT in the Reversible Jump MCMC framework (RJMCMC) and introduce inverse sampling techniques that turn light paths into the random numbers that would produce them. This allows us to formulate a novel perturbation that can locally transition between sampling techniques without changing the geometry of the path, and we derive the correct acceptance probability using RJMCMC. We investigate how to generalize this concept to non-invertible sampling techniques commonly found in practice, and introduce probabilistic inverses that extend our perturbation to cover most sampling methods found in light transport simulations. Our theory reconciles the inverses with RJMCMC, yielding an unbiased algorithm, which we call Reversible Jump MLT (RJMLT). We verify the correctness of our implementation in canonical and practical scenarios and demonstrate improved temporal coherence, a decrease in structured artifacts, and faster convergence on a wide variety of scenes.
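    As a rough illustration of the inverse-mapping idea in primary sample space (a one-dimensional toy with two made-up sampling techniques, not the rendering algorithm or the paper's perturbation), a technique swap can keep the "path" fixed by mapping it back through the other technique's inverse CDF, and the Jacobian of that deterministic mapping enters the reversible-jump acceptance ratio.

```python
import numpy as np

def f(x):                        # unnormalised "path contribution" (toy choice)
    return np.exp(-0.5 * (x - 1.0) ** 2) * (x > 0)

# Two hypothetical sampling techniques, each mapping a uniform primary sample u
# to a point x via an inverse CDF, with the corresponding pdf and inverse map.
techniques = {
    1: dict(to_x=lambda u: -np.log1p(-u),          # Exp(1) via inverse CDF
            pdf=lambda x: np.exp(-x),
            to_u=lambda x: 1.0 - np.exp(-x)),
    2: dict(to_x=lambda u: u / (1.0 - u),          # heavier-tailed mapping
            pdf=lambda x: 1.0 / (1.0 + x) ** 2,
            to_u=lambda x: x / (1.0 + x)),
}

def technique_swap(t, u, rng):
    """Propose switching technique t -> s while keeping the path geometry fixed."""
    s = 2 if t == 1 else 1
    x = techniques[t]["to_x"](u)
    u_new = techniques[s]["to_u"](x)               # inverse mapping of technique s
    # Target in primary sample space: C_t(u) = f(x) / p_t(x); the Jacobian of the
    # deterministic map u -> u_new at fixed x is p_s(x) / p_t(x).
    c_old = f(x) / techniques[t]["pdf"](x)
    c_new = f(x) / techniques[s]["pdf"](x)
    jac = techniques[s]["pdf"](x) / techniques[t]["pdf"](x)
    accept = min(1.0, (c_new / c_old) * jac)
    return (s, u_new) if rng.uniform() < accept else (t, u)
```

    In this bijective one-dimensional toy the acceptance ratio collapses to one; the probabilistic inverses described in the abstract are what extend the approach to the non-invertible sampling techniques encountered in real light transport code.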

    On the Solution of Markov-switching Rational Expectations Models

    This paper describes a method for solving a class of forward-looking Markov-switching Rational Expectations models under noisy measurement, by specifying the unobservable expectations component as a general measurable function of the observable states of the system, to be determined optimally via stochastic control and filtering theory. Solution existence is proved by setting this function to the regime-dependent feedback control minimizing the mean-square deviation of the equilibrium path from the corresponding perfect-foresight autoregressive Markov jump state motion. As the exact expression of the conditional (rational) expectations term is derived in both finite and infinite horizon model formulations, no (asymptotic) stationarity assumptions are needed to solve the system forward, since only knowledge of initial values is required. A simple sufficient condition for the mean-square stability of the obtained rational expectations equilibrium is also provided.
    Keywords: Rational Expectations, Markov-switching dynamic systems, Dynamic programming, Time-varying Kalman filter
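    For concreteness, a generic member of this model class might be written as follows; the notation and timing conventions are illustrative assumptions of ours, not necessarily the paper's exact specification.

```latex
% Forward-looking Markov-switching RE model with regime s_t, state x_t,
% noisy measurement y_t, and the expectations term closed by a
% regime-dependent feedback on the filtered state (illustrative notation):
\begin{aligned}
  x_t &= A(s_t)\,\mathbb{E}_t\!\left[x_{t+1}\right] + B(s_t)\,x_{t-1} + C(s_t)\,\varepsilon_t, \\
  y_t &= D\,x_t + v_t, \qquad s_t \ \text{a finite-state Markov chain}, \\
  \mathbb{E}_t\!\left[x_{t+1}\right] &= -F(s_t)\,\hat{x}_{t\mid t},
\end{aligned}
```

    Here $\hat{x}_{t\mid t}$ would be the time-varying Kalman filter estimate of the state and $F(s_t)$ the regime-dependent feedback gain chosen, as the abstract describes, to minimize the mean-square deviation of the equilibrium path from the perfect-foresight Markov-jump autoregression.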

    MIMO-aided near-capacity turbo transceivers: taxonomy and performance versus complexity

    In this treatise, we first review the associated Multiple-Input Multiple-Output (MIMO) system theory and survey the family of hard-decision and soft-decision detection algorithms in the context of Spatial Division Multiplexing (SDM) systems. Our discussions culminate in the introduction of a range of powerful novel MIMO detectors, such as the Markov Chain assisted Minimum Bit-Error Rate (MC-MBER) detectors, which are capable of reliably operating in the challenging high-importance rank-deficient scenarios, where there are more transmitters than receivers and hence the resultant channel matrix becomes non-invertible. As a result, conventional detectors would exhibit a high residual error floor. We then invoke the Soft-Input Soft-Output (SISO) MIMO detectors for creating turbo-detected two- or three-stage concatenated SDM schemes and investigate their attainable performance in the light of their computational complexity. Finally, we introduce the powerful design tools of EXtrinsic Information Transfer (EXIT) charts and characterize the achievable performance of the diverse near-capacity SISO detectors with the aid of EXIT charts.
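    As a quick numerical illustration of the rank-deficient difficulty mentioned above (an assumed BPSK, flat Rayleigh fading toy with a standard linear MMSE detector, not the MC-MBER or turbo schemes of the treatise), a system with more transmitters than receivers leaves a linear detector with a residual error floor even at high SNR.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, snr_db, n_frames = 4, 2, 20, 2000   # more transmitters than receivers
noise_var = 10 ** (-snr_db / 10)

errors = 0
for _ in range(n_frames):
    s = rng.choice([-1.0, 1.0], size=n_tx)                      # BPSK symbols
    H = (rng.standard_normal((n_rx, n_tx)) +
         1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)   # Rayleigh channel
    n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx) +
                                  1j * rng.standard_normal(n_rx))
    y = H @ s + n
    # Linear MMSE filter: with n_tx > n_rx the channel matrix has a non-trivial
    # null space, so even this regularised inverse cannot separate all streams.
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx), H.conj().T)
    s_hat = np.sign((W @ y).real)
    errors += np.sum(s_hat != s)

print("residual BER:", errors / (n_frames * n_tx))
```

    The residual bit error rate stays far from zero no matter how high the SNR is pushed, which is the floor that the nonlinear and Markov Chain assisted detectors discussed in the treatise are designed to avoid.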

    Group Importance Sampling for Particle Filtering and MCMC

    Bayesian methods and their implementations by means of sophisticated Monte Carlo techniques have become very popular in signal processing in recent years. Importance Sampling (IS) is a well-known Monte Carlo technique that approximates integrals involving a posterior distribution by means of weighted samples. In this work, we study the assignment of a single weighted sample which compresses the information contained in a population of weighted samples. Part of the theory that we present as Group Importance Sampling (GIS) has been employed implicitly in different works in the literature. The provided analysis yields several theoretical and practical consequences. For instance, we discuss the application of GIS within the Sequential Importance Resampling framework and show that Independent Multiple Try Metropolis schemes can be interpreted as a standard Metropolis-Hastings algorithm, following the GIS approach. We also introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS. The first one, the Group Metropolis Sampling method, produces a Markov chain of sets of weighted samples; all these sets are then employed for obtaining a unique global estimator. The second one is the Distributed Particle Metropolis-Hastings technique, where different parallel particle filters are jointly used to drive an MCMC algorithm. Different resampled trajectories are compared and then tested with a proper acceptance probability. The novel schemes are tested in different numerical experiments, such as learning the hyperparameters of Gaussian Processes, two localization problems in a wireless sensor network (with synthetic and real data) and the tracking of vegetation parameters given satellite observations, where they are compared with several benchmark Monte Carlo techniques. Three illustrative Matlab demos are also provided. Comment: To appear in Digital Signal Processing. Related Matlab demos are provided at https://github.com/lukafree/GIS.gi
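    A minimal sketch of the single-sample compression idea as we read it from the abstract: resample one representative per group and attach to it the group's aggregate importance weight. Names such as compress_group and the exact weighting are our illustrative assumptions; the paper's GIS construction may differ in detail.

```python
import numpy as np

def compress_group(samples, log_weights, rng):
    """Summarise a population of weighted samples by one weighted particle:
    resample a representative within the group and carry the group's total
    (unnormalised) importance weight along with it."""
    logw = np.asarray(log_weights, dtype=float)
    log_total = np.logaddexp.reduce(logw)            # log of summed raw weights
    probs = np.exp(logw - log_total)                 # normalised within the group
    idx = rng.choice(len(samples), p=probs)          # multinomial resampling
    return samples[idx], log_total

# Example: estimate E[x] under a standard normal target with a wide Gaussian
# proposal, run in independent groups whose compressed summaries are combined.
rng = np.random.default_rng(0)
target_logpdf = lambda x: -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)
scale = 3.0                                          # proposal std deviation

reps, log_wsums = [], []
for _ in range(50):                                  # 50 groups of 200 samples
    xs = scale * rng.standard_normal(200)
    logw = target_logpdf(xs) - (-0.5 * (xs / scale) ** 2
                                - np.log(scale * np.sqrt(2 * np.pi)))
    x_rep, log_wsum = compress_group(xs, logw, rng)
    reps.append(x_rep); log_wsums.append(log_wsum)

w = np.exp(np.array(log_wsums) - max(log_wsums))
print("estimate of E[x]:", np.sum(w * np.array(reps)) / np.sum(w))
```

    Averaging the representatives with their group weights recovers a consistent estimator of the target expectation (at the cost of some extra resampling variance), which is the sense in which a single weighted sample can stand in for a whole population.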