63 research outputs found

    A formulation of the relaxation phenomenon for lane changing dynamics in an arbitrary car following model

    Full text link
    Lane changing dynamics are an important part of traffic microsimulation and are vital for modeling weaving sections and merge bottlenecks. However, much more emphasis is often placed on car following and gap acceptance models, whereas lane changing dynamics such as tactical, cooperation, and relaxation models receive comparatively little attention. This paper develops a general relaxation model which can be applied to an arbitrary parametric or nonparametric microsimulation model. The relaxation model modifies car following dynamics after a lane change, when vehicles can be far from equilibrium. Relaxation prevents car following models from reacting too strongly to the changes in space headway caused by lane changing, leading to more accurate and realistic simulated trajectories. We also show that relaxation is necessary for correctly simulating traffic breakdown with realistic values of capacity drop.
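    The relaxation mechanism is straightforward to sketch in code. The following is a minimal illustration, not the paper's own formulation: it assumes relaxation is applied by decaying a perceived-headway correction over a relaxation time tau, and uses the Intelligent Driver Model as a stand-in for the arbitrary car following model. All names (RelaxedFollower, on_lane_change) and parameter values are illustrative.

    ```python
    import numpy as np

    def idm_accel(s, v, v_lead, v0=33.3, T=1.5, a=1.3, b=2.0, s0=2.0):
        """Intelligent Driver Model, used here as a stand-in for an
        arbitrary car following model mapping (headway, own speed,
        leader speed) to an acceleration."""
        s_star = s0 + max(0.0, v * T + v * (v - v_lead) / (2 * np.sqrt(a * b)))
        return a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

    class RelaxedFollower:
        """Wraps a car following model and, for `tau` seconds after a
        lane change, feeds it a relaxed (perceived) headway instead of
        the true one, so it does not overreact to the headway jump."""

        def __init__(self, cf_model, tau=15.0):
            self.cf_model = cf_model
            self.tau = tau        # relaxation time (illustrative value)
            self.gamma = 0.0      # headway correction at the lane change
            self.t_lc = -np.inf   # time of the most recent lane change

        def on_lane_change(self, t, s_old, s_new):
            # Start relaxation: perceive the pre-change headway at first,
            # decaying linearly toward the true post-change headway.
            self.gamma = s_old - s_new
            self.t_lc = t

        def accel(self, t, s, v, v_lead):
            frac = max(0.0, 1.0 - (t - self.t_lc) / self.tau)
            s_perceived = s + frac * self.gamma
            return self.cf_model(s_perceived, v, v_lead)

    # Usage: a vehicle changes lanes at t=100s into a much smaller gap.
    follower = RelaxedFollower(idm_accel)
    follower.on_lane_change(t=100.0, s_old=40.0, s_new=15.0)
    print(follower.accel(t=101.0, s=15.2, v=25.0, v_lead=24.0))
    ```

    The design point in this sketch is that the car following model itself is untouched: relaxation only alters the headway the model perceives, which is what would let such a wrapper apply to an arbitrary parametric or nonparametric model.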

    Variance Reduction for Score Functions Using Optimal Baselines

    Full text link
    Many problems involve models that learn probability distributions or otherwise incorporate randomness. In such problems, because computing the true expected gradient may be intractable, a gradient estimator is used to update the model parameters. When the model parameters directly affect a probability distribution, the gradient estimator will involve score function terms. This paper studies baselines, a variance reduction technique for score functions. Motivated primarily by reinforcement learning, we derive for the first time an expression for the optimal state-dependent baseline, the baseline which results in a gradient estimator with minimum variance. Although we show that there exist examples where the optimal baseline may be arbitrarily better than a value function baseline, we find that the value function baseline usually performs similarly to an optimal baseline in terms of variance reduction. Moreover, the value function can also be used for bootstrapping estimators of the return, leading to additional variance reduction. Our results give new insight into, and justification for, why value function baselines and the generalized advantage estimator (GAE) work well in practice.
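    A toy experiment makes the variance effect concrete. The sketch below is not the paper's state-dependent result; it demonstrates the classic optimal constant baseline, b* = E[R g^2] / E[g^2] with g the score, on a one-step Gaussian problem. The policy, reward, and sample size are all illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    theta, sigma = 1.0, 1.0                # mean and std of a Gaussian "policy"
    reward = lambda a: -(a - 3.0) ** 2     # toy reward; any R(a) would do

    n = 100_000
    a = rng.normal(theta, sigma, n)        # sample actions
    score = (a - theta) / sigma**2         # d/dtheta log N(a; theta, sigma^2)
    R = reward(a)

    # Score-function (REINFORCE) gradient samples with constant baseline b:
    #   g = (R - b) * score.  E[g] equals the true gradient for any constant
    #   b, because E[score] = 0, so the baseline only changes the variance.
    b_value = R.mean()                                   # value-function-style baseline
    b_opt = (R * score**2).mean() / (score**2).mean()    # optimal constant baseline

    for name, b in [("none", 0.0), ("value", b_value), ("optimal", b_opt)]:
        g = (R - b) * score
        print(f"baseline={name:8s} est={g.mean():+.3f} var={g.var():.2f}")
    ```

    All three estimators have the same expectation (here the true gradient is -2(theta - 3) = 4), so the printed variances isolate the effect of the baseline choice.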

    Non-tuberculous mycobacterial pulmonary disease (NTM-PD): Epidemiology, diagnosis and multidisciplinary management

    Get PDF
    Non-tuberculous mycobacteria (NTM) are ubiquitous environmental organisms that can cause significant disease in both immunocompromised and immunocompetent individuals. The incidence of NTM pulmonary disease (NTM-PD) is rising globally. Diagnostic challenges persist and treatment efficacy is variable. This article provides an overview of NTM-PD for clinicians. We discuss how common it is, who is at risk, how it is diagnosed and the multidisciplinary approach to its clinical management. [Abstract copyright: Copyright © 2024 The Authors. Published by Elsevier Ltd. All rights reserved.]

    Gradient Estimation and Variance Reduction in Stochastic and Deterministic Models

    No full text
    237 pages
    In the current age, computers, computation, and data have an increasingly important role to play in scientific research and discovery. This is reflected in part by the rise of machine learning and artificial intelligence, which have become great areas of interest not just for computer science but also for many other fields of study. More generally, there have been trends moving towards the use of bigger, more complex, and higher capacity models. Stochastic models, and stochastic variants of existing deterministic models, have likewise become important research directions in various fields. For all of these types of models, gradient-based optimization remains the dominant paradigm for model fitting, control, and more. This dissertation considers unconstrained, nonlinear optimization problems, with a focus on the gradient itself, the key quantity that enables the solution of such problems. In chapter 1, we introduce the notion of reverse differentiation, a term describing the body of techniques that enable the efficient computation of gradients. We cover relevant techniques in both the deterministic and stochastic cases. We present a new framework for calculating the gradient of problems which involve both deterministic and stochastic elements. The resulting gradient estimator can be applied in virtually any situation, including many where automatic differentiation alone fails because it does not produce the gradient terms arising from score functions. In chapter 2, we analyze the properties of the gradient estimator, with a focus on those properties which are typically assumed in convergence proofs of optimization algorithms. That chapter attempts to bridge some of the gap between what is assumed in a mathematical optimization proof and what must be proved to obtain a convergence result for a specific model/problem formulation. Chapter 3 gives various examples of applying our new gradient estimator. We further explore the idea of working with piecewise continuous models, that is, models with distinct branches and if statements that determine which branch to use. We also discuss model elements that cause problems in gradient-based optimization, and how to reformulate a model to avoid such issues. Lastly, chapter 4 presents a new optimal baseline for use in the variance reduction of gradient estimators involving score functions. We foresee that methodology becoming a key part of gradient estimation, as the presence of score functions is a key feature of our gradient estimator. In somewhat of a departure from the previous chapters, chapters 5 and 6 present two studies in transportation, one of the core emerging application areas that motivated this dissertation.
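    To illustrate the score-function terms that automatic differentiation alone misses (the situation described for the chapter 1 estimator), consider a toy objective with a parameter-dependent random branch. The sketch below is illustrative only, assuming a Bernoulli branch whose probability depends on the parameter; it is not code from the dissertation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # Toy objective with a random branch:
    #   L(theta) = E_{b ~ Bernoulli(p)}[ f_b(theta) ],  p = sigmoid(theta).
    # Differentiating through a sampled branch misses how p itself depends
    # on theta; the score-function term below supplies the missing piece.
    f  = {0: lambda th: th**2,    1: lambda th: np.sin(th)}
    df = {0: lambda th: 2.0 * th, 1: lambda th: np.cos(th)}

    def grad_estimate(theta, n=200_000):
        p = sigmoid(theta)
        b = (rng.random(n) < p).astype(int)
        fx = np.where(b == 1, f[1](theta), f[0](theta))
        # Pathwise part: derivative through the realized branch (what
        # plain automatic differentiation would return).
        pathwise = np.where(b == 1, df[1](theta), df[0](theta))
        # Score-function part: d/dtheta log P(b | theta) = b - p for a
        # Bernoulli(sigmoid(theta)) variable.
        score = (b - p) * fx
        return (pathwise + score).mean()

    theta = 0.7
    p = sigmoid(theta)
    exact = (p * np.cos(theta) + (1 - p) * 2.0 * theta
             + p * (1 - p) * (np.sin(theta) - theta**2))
    print(f"estimate={grad_estimate(theta):.4f}  exact={exact:.4f}")
    ```

    The pathwise part is what backpropagating through a sampled branch would produce; the score term accounts for how the branch probabilities themselves move with theta. Subtracting a baseline from fx in the score term is exactly where the chapter 4 variance reduction would enter.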

    Kelly, John Maurice

    No full text

    Heuston, Robert Francis Vere

    No full text