
    Nonlinear State-Space Models for Microeconometric Panel Data

    In applied microeconometric panel data analyses, time-constant random effects and first-order Markov chains are the most prevalent structures used to account for intertemporal correlations in limited dependent variable models. An example from health economics shows that the addition of a simple autoregressive error term leads to a more plausible and parsimonious model which also captures the dynamic features better. The computational problems encountered in the estimation of such models - and of a broader class formulated in the framework of nonlinear state-space models - hamper their widespread use. This paper discusses the application of different nonlinear filtering approaches developed in the time-series literature to these models and suggests that a straightforward algorithm based on sequential Gaussian quadrature can be expected to perform well in this setting. This conjecture is impressively confirmed by an extensive analysis of the example application.
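    To make the filtering idea concrete, the following is a minimal Python sketch of a sequential Gauss-Hermite quadrature filter of the kind the abstract alludes to, assuming a binary probit measurement equation and an AR(1) latent error; the function name, the model, and all parameter choices are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.stats import norm

    def panel_probit_ar1_loglik(y, X, beta, rho, sigma_eta, n_nodes=20):
        # Hypothetical illustration: log-likelihood for one unit of a binary
        # panel probit with an AR(1) latent error, evaluated by a forward
        # recursion on a fixed Gauss-Hermite node grid.
        # y: (T,) binary outcomes; X: (T, K) covariates.
        sigma_alpha = sigma_eta / np.sqrt(1.0 - rho**2)   # stationary std of the state
        nodes, weights = hermgauss(n_nodes)
        alpha = np.sqrt(2.0) * sigma_alpha * nodes        # nodes for the stationary law
        w = weights / np.sqrt(np.pi)                      # quadrature weights summing to one
        # Transition kernel between node values, row-normalised
        trans = norm.pdf(alpha[None, :], loc=rho * alpha[:, None], scale=sigma_eta)
        trans /= trans.sum(axis=1, keepdims=True)
        p = w.copy()                                      # filtered state probabilities
        loglik = 0.0
        for t in range(len(y)):
            mu = X[t] @ beta + alpha
            lik_t = norm.cdf(mu) if y[t] == 1 else norm.cdf(-mu)
            joint = p * lik_t
            c = joint.sum()
            loglik += np.log(c)
            p = (joint / c) @ trans                       # predict next period's state
        return loglik

    Summed over units, the same recursion gives a sample log-likelihood that can be handed to a numerical optimiser; the number of nodes trades accuracy for speed.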

    Second-Order Inference for the Mean of a Variable Missing at Random

    We present a second-order estimator of the mean of a variable subject to missingness, under the missing at random assumption. The estimator improves upon existing methods by using an approximate second-order expansion of the parameter functional, in addition to the first-order expansion employed by standard doubly robust methods. This results in weaker assumptions about the convergence rates necessary to establish consistency, local efficiency, and asymptotic linearity. The general estimation strategy is developed under the targeted minimum loss-based estimation (TMLE) framework. We present a simulation comparing the sensitivity of the first- and second-order estimators to the convergence rate of the initial estimators of the outcome regression and missingness score. In our simulation, the second-order TMLE improved the coverage probability of a confidence interval by up to 85%. In addition, we present a first-order estimator inspired by a second-order expansion of the parameter functional. This estimator only requires one-dimensional smoothing, whereas implementation of the second-order TMLE generally requires kernel smoothing on the covariate space. The proposed first-order estimator is expected to have improved finite sample performance compared to existing first-order estimators. In our simulations, it improved the coverage probability by up to 90%. We provide an illustration of our methods using a publicly available dataset to determine the effect of an anticoagulant on health outcomes of patients undergoing percutaneous coronary intervention. We provide R code implementing the proposed estimator.
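    For context on the first-order baseline that the proposed estimators improve upon, the following is a minimal Python sketch of a standard augmented IPW (doubly robust) estimate of the mean under missingness at random; the function and argument names are illustrative assumptions, and the paper's second-order correction and TMLE targeting step are not reproduced here.

    import numpy as np

    def aipw_mean(y, observed, prop_score, outcome_pred):
        # First-order doubly robust (AIPW) estimate of E[Y] under MAR.
        # y: outcomes (may be NaN where missing); observed: 0/1 indicator;
        # prop_score: estimated P(observed = 1 | covariates);
        # outcome_pred: estimated E[Y | covariates] for every unit.
        y_filled = np.where(observed == 1, y, 0.0)        # avoid NaN arithmetic
        correction = observed / prop_score * (y_filled - outcome_pred)
        return np.mean(outcome_pred + correction)

    This baseline is consistent if either the missingness score or the outcome regression is estimated consistently; the second-order expansion described in the abstract is aimed at weakening the rate conditions such first-order estimators need.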

    Single-shot quantum memory advantage in the simulation of stochastic processes

    Stochastic processes underlie a vast range of natural and social phenomena. Some processes, such as atomic decay, feature intrinsic randomness, whereas other complex processes, e.g., traffic congestion, are effectively probabilistic because we cannot track all relevant variables. To simulate a stochastic system's future behaviour, information about its past must be stored, and thus memory is a key resource. Quantum information processing promises a memory advantage for stochastic simulation that has been validated in recent proof-of-concept experiments. Yet, in all past works, the memory saving would only become accessible in the limit of a large number of parallel simulations, because the memory registers of individual quantum simulators had the same dimensionality as their classical counterparts. Here, we report the first experimental demonstration that a quantum stochastic simulator can encode the relevant information in fewer dimensions than any classical simulator, thereby achieving a quantum memory advantage even for an individual simulator. Our photonic experiment thus establishes the potential of a new, practical resource saving in the simulation of complex systems.

    Reset-free Trial-and-Error Learning for Robot Damage Recovery

    The high probability of hardware failures prevents many advanced robots (e.g., legged robots) from being confidently deployed in real-world situations (e.g., post-disaster rescue). Instead of attempting to diagnose the failures, robots could adapt by trial-and-error in order to complete their tasks. In this situation, damage recovery can be seen as a Reinforcement Learning (RL) problem. However, the best RL algorithms for robotics require the robot and the environment to be reset to an initial state after each episode, that is, the robot is not learning autonomously. In addition, most of the RL methods for robotics do not scale well with complex robots (e.g., walking robots) and either cannot be used at all or take too long to converge to a solution (e.g., hours of learning). In this paper, we introduce a novel learning algorithm called "Reset-free Trial-and-Error" (RTE) that (1) breaks the complexity by pre-generating hundreds of possible behaviors with a dynamics simulator of the intact robot, and (2) allows complex robots to quickly recover from damage while completing their tasks and taking the environment into account. We evaluate our algorithm on a simulated wheeled robot, a simulated six-legged robot, and a real six-legged walking robot that are damaged in several ways (e.g., a missing leg, a shortened leg, a faulty motor) and whose objective is to reach a sequence of targets in an arena. Our experiments show that the robots can recover most of their locomotion abilities in an environment with obstacles, and without any human intervention.
    Comment: 18 pages, 16 figures, 3 tables, 6 pseudocodes/algorithms, video at https://youtu.be/IqtyHFrb3BU, code at https://github.com/resibots/chatzilygeroudis_2018_rt
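    As a rough illustration of the two-step structure described in the abstract (an offline behaviour repertoire, then online trial-and-error without resets), the following Python sketch selects behaviours from a pre-generated repertoire and corrects their predicted outcomes with the errors observed on the damaged robot; the repertoire format, the execute callback, and the simple mean-bias correction are illustrative assumptions standing in for the paper's repertoire generation and probabilistic model.

    import numpy as np

    def reset_free_trial_and_error(repertoire, execute, targets, n_trials=50, tol=0.1):
        # repertoire: list of (behaviour_params, predicted_2d_displacement) pairs
        # execute(params) -> actual 2-D displacement measured on the real robot
        observed = []                     # (predicted, actual) pairs seen so far
        pose = np.zeros(2)
        for target in targets:
            for _ in range(n_trials):
                # Correct predictions with the mean error observed so far
                bias = (np.mean([a - p for p, a in observed], axis=0)
                        if observed else np.zeros(2))
                # Pick the behaviour expected to bring the robot closest to the target
                params, pred = min(repertoire,
                                   key=lambda bp: np.linalg.norm(pose + bp[1] + bias - target))
                actual = execute(params)  # run on the robot, no reset between trials
                observed.append((np.asarray(pred), np.asarray(actual)))
                pose = pose + np.asarray(actual)
                if np.linalg.norm(pose - target) < tol:
                    break
        return pose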