The Mixing Time of the Dikin Walk in a Polytope - A Simple Proof
We study the mixing time of the Dikin walk in a polytope - a random walk
based on the log-barrier from the interior point method literature. This walk,
and a close variant, were studied by Narayanan (2016) and Kannan-Narayanan
(2012). Bounds on its mixing time are important for algorithms for sampling and
optimization over polytopes. Here, we provide a simple proof of their result
that this random walk mixes in time O(mn) for an n-dimensional polytope
described using m inequalities. Comment: 5 pages, published in Operations Research Letters
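The Dikin walk described above can be sketched in a few lines: at the current interior point the log-barrier Hessian defines an ellipsoid, a Gaussian proposal is drawn inside it, and a Metropolis filter corrects for the state-dependent proposal. This is a generic illustration only; the step-size constant r and the filter's normalization are assumptions here, not details taken from the paper.

```python
import numpy as np

def dikin_step(x, A, b, r=0.5, rng=None):
    """One step of a Dikin walk on the polytope {z : A z <= b}.
    A sketch, not the paper's exact implementation: H(z) is the
    Hessian of the log-barrier, and the proposal is Gaussian in
    the Dikin ellipsoid at x."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size

    def hessian(z):
        s = b - A @ z                       # slacks, positive in the interior
        return A.T @ np.diag(1.0 / s**2) @ A

    H = hessian(x)
    # Propose y ~ N(x, (r^2 / n) * H(x)^{-1})
    L = np.linalg.cholesky(np.linalg.inv(H))
    y = x + (r / np.sqrt(n)) * (L @ rng.standard_normal(n))
    if np.any(A @ y >= b):                  # left the polytope: reject
        return x
    Hy = hessian(y)
    # Metropolis filter correcting for the state-dependent proposal density
    log_ratio = 0.5 * (np.linalg.slogdet(Hy)[1] - np.linalg.slogdet(H)[1]) \
        + (n / (2 * r**2)) * ((y - x) @ H @ (y - x) - (x - y) @ Hy @ (x - y))
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        return y
    return x
```

Because the proposal covariance adapts to the local barrier geometry, steps shrink automatically near the boundary, which is what keeps every proposal (and hence every accepted state) strictly inside the polytope.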
Fast MCMC sampling algorithms on polytopes
We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and
the John walk, for generating samples from the uniform distribution over a
polytope. Both random walks are sampling algorithms derived from interior point
methods. The former is based on the volumetric-logarithmic barrier introduced by
Vaidya, whereas the latter uses John's ellipsoids. We show that the Vaidya walk
mixes in significantly fewer steps than the logarithmic-barrier based Dikin
walk studied in past work. For a polytope in R^d defined by n
linear constraints, we show that the mixing time from a warm start is bounded
as O(n^(1/2) d^(3/2)), compared to the O(nd) mixing time
bound for the Dikin walk. The cost of each step of the Vaidya walk is of the
same order as the Dikin walk, and at most twice as large in terms of constant
pre-factors. For the John walk, we prove an O(d^(5/2) log^4(n/d))
bound on its mixing time and conjecture
that an improved variant of it could achieve a mixing time of
O(d^2 polylog(n/d)). Additionally, we propose variants
of the Vaidya and John walks that mix in polynomial time from a deterministic
starting point. The speed-up of the Vaidya walk over the Dikin walk is
illustrated in numerical examples. Comment: 86 pages, 9 figures, First two authors contributed equally
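The Vaidya walk's speed-up comes from replacing the log-barrier Hessian with a metric that weights each constraint by its leverage score plus a flat d/n term. The sketch below forms that local metric; the exact normalization follows standard volumetric-barrier presentations and is an assumption here, not necessarily the paper's.

```python
import numpy as np

def vaidya_metric(x, A, b):
    """Local metric of the Vaidya walk at interior point x of
    {z : A z <= b} (a hedged sketch, not the paper's exact
    normalization).  Each slack-rescaled constraint row is weighted
    by its leverage score sigma_i plus the uniform term d/n."""
    n, d = A.shape                          # n constraints in R^d
    s = b - A @ x                           # positive slacks
    Ax = A / s[:, None]                     # rows a_i / s_i
    # Leverage scores of the rescaled constraint matrix Ax
    G_inv = np.linalg.inv(Ax.T @ Ax)
    sigma = np.einsum('ij,jk,ik->i', Ax, G_inv, Ax)
    beta = d / n
    return Ax.T @ ((sigma + beta)[:, None] * Ax)
```

The leverage scores always sum to d, so only about d of the n constraints carry substantial weight at any point; that is the geometric source of the improved dependence on n in the mixing-time bound.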
Dynamic Neuromechanical Sets for Locomotion
Most biological systems employ multiple redundant actuators, which poses a complicated problem of control and analysis. Unless assumptions are made about how the brain and body work together, and about how the body prioritizes tasks, it is not possible to find the actuator controls. The purpose of this research is to develop computational tools for the analysis of arbitrary musculoskeletal models that employ redundant actuators. Instead of relying primarily on the optimization frameworks, numerical methods, or task-prioritization schemes typically used in biomechanics to find a single solution for the actuator controls, tools for feasible-set analysis are developed to find the bounds of all possible actuator controls. Previously in the literature, feasible-set analysis has been used to analyze models assuming static poses. Here, tools are developed that explore the feasible sets of actuator controls over the course of a dynamic task. The cost-function-agnostic methods of analysis developed in this work run in parallel and in concert with other methods of analysis such as principal component analysis, muscle synergy theory, and task prioritization. Researchers and healthcare professionals can gain greater insight into decision making during behavioral tasks by layering these other tools on top of feasible-set analysis.
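The feasible-set idea above can be illustrated with a toy linear program: given a task constraint R f = tau (actuator outputs must produce the required joint torques) and actuator limits, the bounds of each actuator's feasible values are obtained by minimizing and then maximizing that coordinate over the constraint set. The names R, tau, and f_max are hypothetical placeholders, and this sketch is an illustration of the general technique, not the authors' tool.

```python
import numpy as np
from scipy.optimize import linprog

def feasible_bounds(R, tau, f_max):
    """Per-actuator bounds of the feasible set
    {f : R f = tau, 0 <= f <= f_max}, found by minimizing and
    maximizing each coordinate with a linear program.  R maps
    actuator outputs to joint torques; both it and tau are
    placeholders for a real musculoskeletal model."""
    n = R.shape[1]
    box = list(zip(np.zeros(n), f_max))
    bounds = []
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        lo = linprog(c, A_eq=R, b_eq=tau, bounds=box)   # min f_i
        hi = linprog(-c, A_eq=R, b_eq=tau, bounds=box)  # max f_i
        bounds.append((lo.x[i], hi.x[i]))
    return bounds
```

For a dynamic task, the same pair of programs would be solved at every time step along the trajectory, yielding upper and lower envelopes for each actuator rather than a single "optimal" control signal.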
Fast MCMC algorithms, Stability and DeepTune
Drawing samples from a known distribution is a core computational challenge common to many disciplines, with applications in statistics, probability, operations research, and other areas involving stochastic models. In statistics, sampling methods are useful for both estimation and inference, including problems such as estimating expectations of desired quantities, computing probabilities of rare events, gauging volumes of particular sets, exploring posterior distributions, and obtaining credible intervals.

Facing massive high-dimensional data, both computational efficiency and good statistical guarantees are increasingly important in modern statistical and machine learning applications. In this thesis, centered around sampling algorithms, we consider fundamental questions about their computational and statistical guarantees: How do we design a fast sampling algorithm, and how long should it be run? What are the statistical learning guarantees of these algorithms? Are there trade-offs between computation and learning?

To answer these questions, we first establish non-asymptotic convergence guarantees for popular MCMC sampling algorithms in the Bayesian literature: the Metropolized random walk, the Metropolis-adjusted Langevin algorithm, and Hamiltonian Monte Carlo. To address a number of technical challenges that arise en route, we develop results based on the conductance profile in order to prove quantitative convergence guarantees for general continuous-state-space Markov chains. Second, to confront a large class of constrained sampling problems, we introduce two new algorithms, the Vaidya and John walks, to sample from polytope-constrained distributions with convergence guarantees. Third, we prove fundamental trade-off results between the statistical learning performance and the convergence rate of any iterative learning algorithm, including sampling algorithms. The trade-off results allow us to show that a too-stable algorithm cannot converge too fast, and vice versa.
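One of the samplers analyzed in the thesis, the Metropolis-adjusted Langevin algorithm, can be sketched in its generic textbook form: a gradient-informed Langevin proposal followed by a Metropolis accept/reject correction. This is a minimal illustration of the algorithm itself, not the thesis's specific analysis or implementation.

```python
import numpy as np

def mala_step(x, log_p, grad_log_p, step, rng):
    """One Metropolis-adjusted Langevin (MALA) step targeting the
    density proportional to exp(log_p).  The Langevin proposal
    drifts along the gradient of log_p; the Metropolis correction
    makes the target exactly invariant."""
    noise = rng.standard_normal(x.size)
    y = x + step * grad_log_p(x) + np.sqrt(2 * step) * noise

    def log_q(to, frm):                     # log proposal density q(to | frm)
        diff = to - frm - step * grad_log_p(frm)
        return -np.dot(diff, diff) / (4 * step)

    log_alpha = log_p(y) - log_p(x) + log_q(x, y) - log_q(y, x)
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return y
    return x
```

Dropping the accept/reject test recovers the unadjusted Langevin algorithm, which is biased at any fixed step size; the Metropolis filter is what the non-asymptotic guarantees in the thesis are stated for.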
Finally, to help neuroscientists analyze their massive amounts of brain data, we develop DeepTune, a stability-driven visualization and interpretation framework, based on optimization and sampling, for neural-network-based models of neurons in the visual cortex.