
    A new rejection sampling method without using hat function

    This paper proposes a new exact simulation method, which simulates a realisation from a proposal density and then uses exact simulation of a Langevin diffusion to check whether the proposal should be accepted or rejected. Compared with the existing coupling-from-the-past method, the new method does not require constructing fast-coalescing Markov chains. Compared with existing rejection sampling methods, the new method does not require the proposal density to bound the target density. The new method is much more efficient than existing methods for certain problems. An application to exact simulation of the posterior of finite mixture models is presented.
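    The contribution is easiest to see against the classical rejection sampler it relaxes, which needs a hat function: a constant M with p(x) <= M*q(x) for all x. A minimal sketch of that classical scheme (not the paper's method), using a hypothetical unnormalized target exp(-x^4) and a standard normal proposal, for which M = 3 is a valid bound:

```python
import math
import random

def target(x):
    # Hypothetical unnormalized target density (illustration only)
    return math.exp(-x ** 4)

def proposal_pdf(x):
    # Standard normal proposal density
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def rejection_sample(M, n):
    """Classical rejection sampling: requires target(x) <= M * proposal_pdf(x)
    for all x -- the 'hat function' condition the paper removes."""
    out = []
    while len(out) < n:
        x = random.gauss(0.0, 1.0)        # draw from the proposal
        u = random.random()
        if u < target(x) / (M * proposal_pdf(x)):
            out.append(x)                 # accept with the correct probability
    return out

random.seed(0)
samples = rejection_sample(3.0, 200)
```

For this pair, sup_x target(x)/proposal_pdf(x) = sqrt(2*pi)*exp(1/16) ≈ 2.67, so M = 3 satisfies the bound; the paper's point is that its Langevin-based acceptance check works even when no such M exists.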

    Experience-Based Planning with Sparse Roadmap Spanners

    We present an experience-based planning framework called Thunder that learns to reduce the computation time required to solve high-dimensional planning problems in varying environments. The approach is especially suited for large configuration spaces that include many invariant constraints, such as those found in whole-body humanoid motion planning. Experiences are generated using probabilistic sampling and stored in a sparse roadmap spanner (SPARS), which provides asymptotically near-optimal coverage of the configuration space, making storing, retrieving, and repairing past experiences very efficient with respect to memory and time. The Thunder framework improves upon past experience-based planners by storing experiences in a graph rather than in individual paths, eliminating redundant information, providing more opportunities for path reuse, and providing a theoretical limit on the size of the experience graph. These properties also lead to improved handling of dynamically changing environments, reasoning about optimal paths, and reduced query resolution time. The approach is demonstrated on a 30-degree-of-freedom humanoid robot and compared with the Lightning framework, an experience-based planner that stores past experiences as individual paths. In environments with variable obstacles and stability constraints, experiments show that Thunder is on average an order of magnitude faster than Lightning and than planning from scratch. Thunder also uses 98.8% less memory to store its experiences after 10,000 trials when compared to Lightning. Our framework is implemented and freely available in the Open Motion Planning Library.
    Comment: Submitted to ICRA 201

    Weak-form market efficiency and calendar anomalies for Eastern Europe equity markets

    In this paper we test the weak form of the efficient market hypothesis for Central and Eastern Europe (CEE) equity markets for the period 1999-2009. To test weak-form efficiency in these markets, this study uses autocorrelation analysis, the runs test, and the variance ratio test. We find that the stock markets of Central and Eastern Europe do not follow a random walk process. This is an important finding for the CEE markets, as an informed investor can identify mispriced assets by studying past prices in these markets. We also test for the presence of daily anomalies in the same group of stock markets using a basic model and a more advanced Generalized Autoregressive Conditional Heteroskedasticity in Mean (GARCH-M) model. Results indicate that the day-of-the-week effect is not evident in most markets, though it does appear in a few. Overall, the results indicate that some of these markets are not weak-form efficient and that an informed investor can make abnormal profits by studying the past prices of assets in these markets.
    Keywords: emerging stock markets, day-of-the-week effect, market efficiency, variance ratio test, GARCH-M
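    The variance ratio test mentioned above rests on a simple property: under a random walk, the variance of q-period returns equals q times the variance of 1-period returns, so the ratio should be close to 1. A minimal sketch (a basic overlapping-sums estimator, not the paper's exact Lo-MacKinlay test statistic):

```python
import random
import statistics

def variance_ratio(returns, q):
    """Variance of overlapping q-period returns divided by q times the
    variance of 1-period returns. Close to 1 under a random walk;
    persistent deviation from 1 is evidence against weak-form efficiency."""
    q_returns = [sum(returns[i:i + q]) for i in range(len(returns) - q + 1)]
    return statistics.variance(q_returns) / (q * statistics.variance(returns))

# Simulated i.i.d. returns behave like a random walk's increments,
# so the ratio should land near 1.
random.seed(0)
rw_returns = [random.gauss(0.0, 1.0) for _ in range(5000)]
vr = variance_ratio(rw_returns, 4)
```

Applied to real index returns, a ratio significantly above 1 suggests positive autocorrelation (momentum) and one below 1 suggests mean reversion — either is the kind of departure from the random walk the abstract reports for the CEE markets.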

    A Practitioner's Guide to Bayesian Estimation of Discrete Choice Dynamic Programming Models

    This paper provides a step-by-step guide to estimating discrete choice dynamic programming (DDP) models using the Bayesian Dynamic Programming algorithm developed by Imai, Jain and Ching (2008) (IJC). The IJC method combines the DDP solution algorithm with the Bayesian Markov Chain Monte Carlo algorithm into a single algorithm, which solves the DDP model and estimates its structural parameters simultaneously. The main computational advantage of this estimation algorithm is its efficient use of information obtained in past iterations. In the conventional Nested Fixed Point algorithm, most of the information obtained in past iterations remains unused in the current iteration. In contrast, the Bayesian Dynamic Programming algorithm extensively uses the computational results obtained in past iterations to help solve the DDP model at the current iterated parameter values. Consequently, it significantly alleviates the computational burden of estimating a DDP model. We carefully discuss how to implement the algorithm in practice, and use a simple dynamic store choice model to illustrate how to apply this algorithm to obtain parameter estimates.
    Keywords: Bayesian Dynamic Programming, Discrete Choice Dynamic Programming, Markov Chain Monte Carlo
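    The "efficient use of past iterations" can be sketched concretely: instead of re-solving the DP fixed point at every MCMC draw, an IJC-style algorithm approximates the value function at the current parameter draw by a kernel-weighted average of values already computed at nearby past draws. A minimal one-dimensional illustration (the true value function theta**2 here is hypothetical, standing in for a solved DP):

```python
import math

def kernel_weight(theta, theta_past, h=0.1):
    # Gaussian kernel: past draws near the current theta get more weight
    return math.exp(-((theta - theta_past) ** 2) / (2.0 * h * h))

def approx_value(theta, history, h=0.1):
    """IJC-style emulation step: approximate the value function at the
    current parameter draw as a kernel-weighted average of values stored
    from past iterations, avoiding a full fixed-point solve."""
    num = sum(kernel_weight(theta, t, h) * v for t, v in history)
    den = sum(kernel_weight(theta, t, h) for t, v in history)
    return num / den

# Hypothetical solved values from past iterations: pretend V(theta) = theta**2
history = [(t / 10.0, (t / 10.0) ** 2) for t in range(11)]

v_hat = approx_value(0.55, history)   # should be close to 0.55**2 = 0.3025
```

In the actual algorithm the stored values are themselves one-step Bellman updates computed along the chain, so the approximation and the MCMC sampler improve together; this sketch only shows why reusing the history cuts the per-iteration cost.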

    SMART GROWTH AND SUSTAINABLE TRANSPORTATION: CAN WE GET THERE FROM HERE?

    This article focuses on the development of smart growth, which includes sustainable transportation policies and more efficient uses of land. Smart growth was developed to combat some of the negative consequences of the transportation and land use laws of the past fifty years, which created dependence on motor vehicles. The article examines these policies and their inconsistency with smart growth, considers steps to be taken toward a more efficient transportation system and the difficulties in bringing about these changes, and uses Atlanta as a case study in the opportunities and challenges for smart growth and sustainable transportation policies.

    An Efficient Bayesian Inference Framework for Coalescent-Based Nonparametric Phylodynamics

    Phylodynamics focuses on the problem of reconstructing past population size dynamics from current genetic samples taken from the population of interest. This technique has been extensively used in many areas of biology, but is particularly useful for studying the spread of quickly evolving infectious disease agents, e.g., influenza virus. Phylodynamic inference uses a coalescent model that defines a probability density for the genealogy of randomly sampled individuals from the population. When we assume that such a genealogy is known, the coalescent model, equipped with a Gaussian process prior on the population size trajectory, allows for nonparametric Bayesian estimation of population size dynamics. While this approach is quite powerful, large data sets collected during infectious disease surveillance challenge the state of the art of Bayesian phylodynamics and demand a computationally more efficient inference framework. To satisfy this demand, we provide a computationally efficient Bayesian inference framework based on Hamiltonian Monte Carlo for coalescent process models. Moreover, we show that by splitting the Hamiltonian function we can further improve the efficiency of this approach. Using several simulated and real datasets, we show that our method provides accurate estimates of population size dynamics and is substantially faster than alternative methods based on the elliptical slice sampler and the Metropolis-adjusted Langevin algorithm.
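    The Hamiltonian Monte Carlo machinery the abstract builds on can be shown on a toy target. This is a generic leapfrog-based HMC sketch for a standard normal (not the coalescent model, and without the Hamiltonian-splitting refinement the paper proposes):

```python
import math
import random

def leapfrog(theta, p, grad_U, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for potential U."""
    p = p - 0.5 * eps * grad_U(theta)          # half step for momentum
    for _ in range(n_steps - 1):
        theta = theta + eps * p                # full step for position
        p = p - eps * grad_U(theta)            # full step for momentum
    theta = theta + eps * p
    p = p - 0.5 * eps * grad_U(theta)          # final half step
    return theta, p

def hmc(U, grad_U, theta0, eps=0.1, n_steps=20, n_samples=2000):
    """HMC: propose via simulated Hamiltonian dynamics, then accept or
    reject with a Metropolis correction for integration error."""
    theta, samples = theta0, []
    for _ in range(n_samples):
        p0 = random.gauss(0.0, 1.0)            # resample momentum
        theta_prop, p_prop = leapfrog(theta, p0, grad_U, eps, n_steps)
        h_old = U(theta) + 0.5 * p0 * p0       # Hamiltonian = potential + kinetic
        h_new = U(theta_prop) + 0.5 * p_prop * p_prop
        log_alpha = h_old - h_new
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            theta = theta_prop
        samples.append(theta)
    return samples

# Standard normal target: U(theta) = theta**2 / 2, so grad U(theta) = theta.
random.seed(1)
draws = hmc(lambda t: 0.5 * t * t, lambda t: t, 0.0)
mean_est = sum(draws) / len(draws)
var_est = sum((x - mean_est) ** 2 for x in draws) / len(draws)
```

In the paper's setting, U would be the negative log posterior of the population trajectory under the coalescent likelihood and Gaussian process prior; the gradient-guided trajectories are what make HMC scale to large surveillance datasets better than the elliptical slice sampler baseline.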