
    Workplace smoking ban effects in an heterogeneous smoking population

    Many public policies, and especially health policies, aim to modify individual behavior. This is particularly true of anti-smoking policies. However, health behavior is highly heterogeneous, and so are individual responses to public policies such as taxes or restrictions on use. We investigate the effect of a workplace smoking ban introduced in France in 2007. Because the reform was national in scope, it offers a good case for studying the effect of workplace smoking bans. Using original data on patients who consult tobacco cessation services, we show that the ban caused an increase in the demand for such services and in the number of successful attempts to quit smoking. However, using survey data, we show that the ban had no measurable effect on overall prevalence in the general population. Models of quasi-rational smoking behavior may offer an explanation for these two apparently contradictory findings.
    Keywords: workplace smoking ban; tobacco control; smoking cessation; impact evaluation

    Multi-Agent Reinforcement Learning as a Computational Tool for Language Evolution Research: Historical Context and Future Challenges

    Computational models of emergent communication in agent populations are currently gaining interest in the machine learning community due to recent advances in Multi-Agent Reinforcement Learning (MARL). Current contributions, however, remain relatively disconnected from the earlier theoretical and computational literature that aims to understand how language might have emerged from a prelinguistic substrate. The goal of this paper is to position recent MARL contributions within the historical context of language evolution research, and to extract from this theoretical and computational background a few challenges for future research.

    Diffusive model of current-in-plane-tunneling in double magnetic tunnel junctions

    We propose a model that describes current-in-plane tunneling transport in double-barrier magnetic tunnel junctions in the diffusive regime. Our study shows that specific features appear in double junctions, which we describe by introducing two characteristic length scales. The model may be used to measure the magnetoresistance and the resistance-area product of both barriers in unpatterned stacks of double-barrier magnetic tunnel junctions. (Comment: 4 pages, 3 figures.)

    TeachMyAgent: a Benchmark for Automatic Curriculum Learning in Deep RL

    Training autonomous agents able to generalize to multiple tasks is a key target of Deep Reinforcement Learning (DRL) research. In parallel to improving DRL algorithms themselves, Automatic Curriculum Learning (ACL) studies how teacher algorithms can train DRL agents more efficiently by adapting task selection to their evolving abilities. While multiple standard benchmarks exist for comparing DRL agents, there is currently no such benchmark for ACL algorithms. Thus, comparing existing approaches is difficult, as too many experimental parameters differ from paper to paper. In this work, we identify several key challenges faced by ACL algorithms. Based on these, we present TeachMyAgent (TA), a benchmark of current ACL algorithms leveraging procedural task generation. It includes 1) challenge-specific unit-tests using variants of a procedural Box2D bipedal walker environment, and 2) a new procedural Parkour environment combining most ACL challenges, making it ideal for global performance assessment. We then use TeachMyAgent to conduct a comparative study of representative existing approaches, showcasing the competitiveness of some ACL algorithms that do not use expert knowledge. We also show that the Parkour environment remains an open problem. We open-source our environments, all studied ACL algorithms (collected from open-source code or re-implemented), and DRL students in a Python package available at https://github.com/flowersteam/TeachMyAgent

    SBMLtoODEjax: Efficient Simulation and Optimization of Biological Network Models in JAX

    Advances in bioengineering and biomedicine demand a deep understanding of the dynamic behavior of biological systems, ranging from protein pathways to complex cellular processes. Biological networks like gene regulatory networks and protein pathways are key drivers of embryogenesis and physiological processes. Comprehending their diverse behaviors is essential for tackling diseases, including cancer, as well as for engineering novel biological constructs. Despite the availability of extensive mathematical models represented in Systems Biology Markup Language (SBML), researchers face significant challenges in exploring the full spectrum of behaviors and optimizing interventions to efficiently shape those behaviors. Existing tools designed for simulation of biological network models are tailored neither to facilitate interventions on network dynamics nor to support automated discovery. Leveraging recent developments in machine learning (ML), this paper introduces SBMLtoODEjax, a lightweight library designed to seamlessly integrate SBML models with ML-supported pipelines, powered by JAX. SBMLtoODEjax facilitates the reuse and customization of SBML-based models, harnessing JAX's capabilities for efficient parallel simulations and optimization, with the aim of accelerating research in biological network analysis.
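    The library's own API is not shown in the abstract, so the following is a minimal NumPy sketch of the kind of computation such tools reduce an SBML model to: an ODE right-hand side plus a fixed-step integrator. The two-gene mutual-repression network and all parameter values are invented for illustration.

```python
import numpy as np

def mutual_repression(y, params):
    """Toy two-gene network: each protein represses the other's
    production (Hill kinetics, n=2) and decays linearly."""
    a, n, d = params  # max production rate, Hill coefficient, decay rate
    y1, y2 = y
    dy1 = a / (1.0 + y2**n) - d * y1
    dy2 = a / (1.0 + y1**n) - d * y2
    return np.array([dy1, dy2])

def simulate(f, y0, params, dt=0.01, steps=5000):
    """Fixed-step Euler integration of dy/dt = f(y, params)."""
    y = np.array(y0, dtype=float)
    traj = [y.copy()]
    for _ in range(steps):
        y = y + dt * f(y, params)
        traj.append(y.copy())
    return np.array(traj)

# With a = 4 the symmetric state is unstable, so the network is bistable:
# the initially dominant gene stays high and keeps the other repressed.
traj = simulate(mutual_repression, y0=[2.0, 0.1], params=(4.0, 2.0, 1.0))
print(traj[-1])
```

    A JAX version of the same loop would replace the Python `for` with `jax.lax.scan` and `vmap` over initial conditions, which is the parallelism the library advertises.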

    Explauto: an open-source Python library to study autonomous exploration in developmental robotics

    We present an open-source Python library, called Explauto, providing a unified API to design and compare exploration strategies driving various sensorimotor learning algorithms in various simulated or robotic systems. Explauto aims to be collaborative and pedagogical, providing a platform where developmental roboticists can publish and compare their algorithmic contributions related to autonomous exploration and learning, as well as a platform for teaching and scientific dissemination. The library is available at this address: https://github.com/flowersteam/explaut

    The role of intrinsic motivations in learning sensorimotor vocal mappings: a developmental robotics study

    Learning complex mappings between modalities (typically articulatory, somatosensory, and auditory) is a central issue in computationally modeling speech acquisition. These mappings are generally non-linear and redundant, involving high-dimensional sensorimotor spaces. Classical approaches consider two separate phases: a relatively pre-determined exploration phase, analogous to infant babbling, followed by an exploitation phase involving higher-level communicative motivations. In this paper, we treat the problem as one of developmental robotics, in which an agent actively learns the sensorimotor mappings of an articulatory vocal model. More specifically, we show how intrinsic motivations can allow efficient exploration strategies to emerge, driving the way a learning agent interacts with its environment to collect an adequate learning set.

    Learning how to reach various goals by autonomous interaction with the environment: unification and comparison of exploration strategies

    In the field of developmental robotics, we are particularly interested in exploration strategies that can drive an agent to learn how to reach a wide variety of goals. In this paper, we unify and compare such strategies, recently shown to be efficient for learning complex, non-linear, redundant sensorimotor mappings. They combine two main principles. The first concerns the space in which the learning agent chooses points to explore (motor space vs. goal space). Previous work has shown that redundant inverse models can be learned more efficiently if exploration is driven by goal babbling, which triggers reaching, rather than by direct motor babbling. Goal babbling is especially efficient for learning highly redundant mappings (e.g., the inverse kinematics of an arm). At each time step, the agent chooses a goal in a goal space (e.g., uniformly), uses its current inverse model to infer a motor command to reach that goal, observes the corresponding consequence, and updates its inverse model according to this new experience. This exploration strategy allows the agent to cover the goal space more efficiently, avoiding wasted time in redundant parts of the sensorimotor space (e.g., executing many motor commands that all reach the same goal). The second principle comes from the field of active learning, where exploration strategies are conceived as an optimization process. Samples in the input space (i.e., the motor space) are collected in order to minimize a given property of the learning process, e.g., the uncertainty or the prediction error of the model. This allows the agent to focus on parts of the sensorimotor space in which exploration is expected to improve the quality of the model.
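    The goal-babbling loop described above (choose goal, invert, execute, update) can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the "arm" is a 2-joint forward model with a 1-D goal space, and the inverse model is a nearest-neighbor lookup over past (command, outcome) pairs with local exploration noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m):
    """Forward model of a redundant 2-joint arm: joint angles -> hand x-position.
    Redundant because many angle pairs reach the same x."""
    return np.array([np.cos(m[0]) + np.cos(m[0] + m[1])])

# Memory of (motor command, observed outcome) pairs, seeded with one random trial.
M = [rng.uniform(-np.pi, np.pi, 2)]
S = [forward(M[0])]

for _ in range(300):
    goal = rng.uniform(-2, 2, 1)               # 1. choose a goal in goal space
    i = np.argmin([abs(s - goal) for s in S])  # 2. nearest-neighbor inverse model
    m = M[i] + rng.normal(0, 0.1, 2)           # 3. perturb the best-known command
    s = forward(m)                             # 4. execute and observe the outcome
    M.append(m); S.append(s)                   # 5. update the inverse model

reached = np.array(S).ravel()
print(reached.min(), reached.max())  # goal space is [-2, 2]
```

    Because goals, not motor commands, are sampled uniformly, exploration keeps pushing the frontier of reachable outcomes instead of re-sampling redundant commands that land on already-covered hand positions.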

    Curiosity-driven phonetic learning

    This article studies how developmental phonetic learning can be guided by pure curiosity-driven exploration, also called intrinsically motivated exploration. Phonetic learning here refers to learning how to control a vocal tract to reach acoustic goals. We compare three exploration strategies for learning the auditory-motor inverse model: random motor exploration, random goal selection with reaching, and curiosity-driven active goal selection with reaching. Using a realistic vocal tract model, we show how intrinsically motivated learning driven by competence progress can automatically generate developmental structure in both the articulatory and auditory modalities, displaying patterns in line with some experimental data from infants.
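    "Competence progress" means choosing goals where recent improvement, not raw competence, is highest, so the learner avoids both mastered and unlearnable regions. The sketch below is not the paper's system: the goal space is reduced to five bins, and competence is simulated by an invented saturating curve whose learning rate differs per bin (two bins are unlearnable).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: five goal regions; competence in a region improves with practice
# at a region-specific rate, so progress (not competence) should guide choice.
n_bins = 5
practice = np.zeros(n_bins)
learn_rate = np.array([0.0, 0.2, 0.05, 0.2, 0.0])  # bins 0 and 4 are unlearnable

def competence(b):
    """Simulated competence: saturating in practice, plus evaluation noise."""
    return 1.0 - np.exp(-learn_rate[b] * practice[b]) + rng.normal(0, 0.01)

history = {b: [] for b in range(n_bins)}

for t in range(400):
    def interest(b):
        # Interest = |recent competence progress|, an empirical derivative.
        h = history[b]
        return abs(np.mean(h[-5:]) - np.mean(h[-10:-5])) if len(h) >= 10 else 1.0
    # Epsilon-greedy goal-region selection on interest.
    b = rng.integers(n_bins) if rng.random() < 0.2 else \
        int(np.argmax([interest(b) for b in range(n_bins)]))
    practice[b] += 1
    history[b].append(competence(b))

print(practice)  # learnable bins should attract most of the practice
```

    The same mechanism at full scale (continuous goal regions, a real reaching error instead of a simulated curve) is what produces the developmental orderings reported in this line of work.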