44 research outputs found

    Improving exploration in reinforcement learning with temporally correlated stochasticity

    Reinforcement learning is a useful approach to solving machine learning problems by self-exploration when training samples are not provided. However, researchers usually ignore the importance of the choice of exploration noise. In this paper, I show that temporally self-correlated exploration stochasticity, generated by an Ornstein-Uhlenbeck process, can significantly enhance the performance of reinforcement learning tasks by improving exploration.
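    The abstract does not include an implementation; as a minimal sketch of the idea (the parameter values theta, sigma, and dt below are illustrative assumptions, not taken from the paper), temporally correlated exploration noise can be generated by discretizing an Ornstein-Uhlenbeck process and adding its samples to the policy's actions:

```python
import numpy as np


class OrnsteinUhlenbeckNoise:
    """Temporally self-correlated noise for action-space exploration.

    Euler discretization of the OU process:
        x_{t+1} = x_t + theta * (mu - x_t) * dt + sigma * sqrt(dt) * N(0, I)
    Successive samples are correlated in time, unlike i.i.d. Gaussian noise.
    """

    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=None):
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.dt = dt
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Restart the process at its long-run mean at the start of each episode.
        self.x = self.mu.copy()

    def sample(self):
        self.x = (self.x
                  + self.theta * (self.mu - self.x) * self.dt
                  + self.sigma * np.sqrt(self.dt)
                  * self.rng.standard_normal(self.x.shape))
        return self.x


if __name__ == "__main__":
    noise = OrnsteinUhlenbeckNoise(size=2, seed=0)
    # Exploratory action = deterministic policy output + correlated noise.
    for _ in range(5):
        print(noise.sample())
```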

    Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks

    Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown distinct advantages, e.g., solving memory-dependent tasks and meta-learning. However, little effort has been spent on improving RNN architectures and on understanding the underlying neural mechanisms for performance gain. In this paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical results show that the network can autonomously learn to abstract sub-goals and can self-develop an action hierarchy using internal dynamics in a challenging continuous control task. Furthermore, we show that the self-developed compositionality of the network enables faster re-learning when adapting to a new task that is a re-composition of previously learned sub-goals than when learning from scratch. We also found that improved performance can be achieved when neural activities are subject to stochastic rather than deterministic dynamics.
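    As a hedged illustration of what a multiple-timescale, stochastic RNN cell can look like (a generic sketch, not the authors' architecture; the unit counts, time constants, and additive-noise model below are assumptions):

```python
import numpy as np


class MultipleTimescaleRNN:
    """Two-timescale stochastic RNN cell (illustrative sketch).

    'Fast' units (small tau) react quickly to inputs; 'slow' units (large tau)
    integrate over longer horizons, the property associated with sub-goal
    abstraction. Hidden states receive Gaussian noise, making dynamics stochastic.
    """

    def __init__(self, input_dim, fast_units=64, slow_units=16,
                 tau_fast=2.0, tau_slow=20.0, noise_std=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n = fast_units + slow_units
        self.tau = np.concatenate([np.full(fast_units, tau_fast),
                                   np.full(slow_units, tau_slow)])
        self.W_rec = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
        self.W_in = rng.normal(0.0, 1.0 / np.sqrt(input_dim), (n, input_dim))
        self.noise_std = noise_std
        self.rng = rng
        self.u = np.zeros(n)  # internal (pre-activation) state

    def step(self, x):
        # Leaky integration: slow units change a little per step, fast units a lot.
        pre = self.W_rec @ np.tanh(self.u) + self.W_in @ x
        self.u = (1.0 - 1.0 / self.tau) * self.u + (1.0 / self.tau) * pre
        # Stochastic hidden activity: additive Gaussian noise on the state.
        self.u = self.u + self.noise_std * self.rng.standard_normal(self.u.shape)
        return np.tanh(self.u)


if __name__ == "__main__":
    rnn = MultipleTimescaleRNN(input_dim=4)
    h = rnn.step(np.zeros(4))
    print(h.shape)  # (80,)
```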

    Variational Recurrent Models for Solving Partially Observable Control Tasks

    In partially observable (PO) environments, deep reinforcement learning (RL) agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy. In this study, we propose an RL algorithm for solving PO tasks. Our method comprises two parts: a variational recurrent model (VRM) for modeling the environment, and an RL controller that has access to both the environment and the VRM. The proposed algorithm was tested in two types of PO robotic control tasks: those in which either coordinates or velocities were not observable, and those that require long-term memorization. Our experiments show that the proposed algorithm achieved better data efficiency and/or learned a more optimal policy than alternative approaches in tasks in which unobserved states cannot be inferred from raw observations in a simple manner. (Published as a conference paper at the Eighth International Conference on Learning Representations, ICLR 2020.)
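    A minimal sketch of how such an agent can be wired together (the object and method names below, e.g. vrm.infer and policy, are placeholders assumed for illustration, not the paper's API):

```python
import numpy as np


def controller_input(observation, vrm_state, vrm_latent):
    """Build the RL controller's input from the raw observation plus the
    variational recurrent model's internal state, so the policy can exploit
    information about unobserved quantities inferred by the model."""
    return np.concatenate([observation, vrm_state, vrm_latent])


# Schematic interaction loop (env, vrm, policy, and prev_action are hypothetical):
#
#   obs = env.reset()
#   vrm.reset()
#   prev_action = np.zeros(action_dim)
#   done = False
#   while not done:
#       state, latent = vrm.infer(obs, prev_action)   # recurrent state + latent sample
#       action = policy(controller_input(obs, state, latent))
#       obs, reward, done, info = env.step(action)
#       vrm.store(obs, action, reward)                # the model itself is trained
#       prev_action = action                          # separately on replayed sequences
```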

    Habits and goals in synergy: a variational Bayesian framework for behavior

    How to behave efficiently and flexibly is a central problem for understanding biological agents and creating intelligent embodied AI. It is well known that behavior can be classified into two types: reward-maximizing habitual behavior, which is fast but inflexible, and goal-directed behavior, which is flexible but slow. Conventionally, habitual and goal-directed behaviors are considered to be handled by two distinct systems in the brain. Here, we propose to bridge the gap between the two behaviors, drawing on the principles of variational Bayesian theory. We incorporate both behaviors in one framework by introducing a Bayesian latent variable called "intention". Habitual behavior is generated using the prior distribution of intention, which is goal-less, while goal-directed behavior is generated by the posterior distribution of intention, which is conditioned on the goal. Building on this idea, we present a novel Bayesian framework for modeling behaviors. Our proposed framework enables skill sharing between the two kinds of behaviors and, by leveraging the idea of predictive coding, allows an agent to seamlessly generalize from habitual to goal-directed behavior without requiring additional training. The proposed framework suggests a fresh perspective for cognitive science and embodied AI, highlighting the potential for greater integration between habitual and goal-directed behaviors.
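    A compact way to read the abstract's central idea (the notation below is introduced here for illustration and is not the paper's):

```latex
% Intention z is a Bayesian latent variable shared by both modes of behavior.
% Habitual behavior: act under the goal-less prior over intention.
a_t \sim \pi(a_t \mid s_t, z), \qquad z \sim p(z)

% Goal-directed behavior: the same policy, but the intention is inferred
% from the goal g through an (approximate) posterior.
a_t \sim \pi(a_t \mid s_t, z), \qquad z \sim q(z \mid g)
```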

    Saikosaponin A Alleviates Symptoms of Attention Deficit Hyperactivity Disorder through Downregulation of DAT and Enhancing BDNF Expression in Spontaneous Hypertensive Rats

    Disturbed dopamine availability and brain-derived neurotrophic factor (BDNF) expression are thought to be associated, in part, with attention deficit hyperactivity disorder (ADHD). In this study, we investigated the therapeutic effect of saikosaponin A (SSa), isolated from Bupleurum chinense DC., in the spontaneously hypertensive rat (SHR) model of ADHD. Methylphenidate (MPH) and SSa were orally administered for 3 weeks. Behavior was assessed by the open-field test and the Morris water maze test. Dopamine (DA) and BDNF were determined in specific brain regions. The mRNA or protein expression of tyrosine hydroxylase (TH), dopamine transporter (DAT), and vesicular monoamine transporter (VMAT) was also studied. Both MPH and SSa reduced hyperactivity and improved the spatial learning and memory deficit in SHRs. An increased DA concentration in the prefrontal cortex (PFC) and striatum was also observed after treatment with SSa. The increased DA concentration may be partially attributed to the decreased mRNA and protein expression of DAT in the PFC, while SSa exhibited no significant effects on the mRNA expression of TH and VMAT in the PFC of SHRs. In addition, BDNF expression in SHRs was also increased after treatment with SSa or MPH. The results suggest that SSa may be a potential drug for treating ADHD.

    Structure and seasonal changes in atmospheric boundary layer on coast of the east Antarctic continent

    The temperature, humidity, and vertical distribution of ozone in the Antarctic atmospheric boundary layer (ABL) and their seasonal changes are analyzed using high-resolution profile data obtained at Zhongshan Station during the International Polar Year 2008 to 2009, to further the understanding of the structure and processes of the ABL. The results show that the frequency of the convective boundary layer in the warm season accounts for 84% of its annual occurrence frequency, while the frequency of the stable boundary layer in the cold season accounts for 71% of its annual occurrence frequency; a neutral boundary layer appears rarely. The average altitude of the convective boundary layer determined by the parcel method is 600 m, which is 200 to 300 m higher than that over inland Antarctica. The average altitude of the top of the boundary layer determined from the potential temperature gradient and humidity gradient is 1 200 m in the warm season and 1 500 m in the cold season. The vertical structures of ozone and specific humidity in the ABL exhibit obvious seasonal changes. Below 2 000 m, specific humidity is very high with a large vertical gradient in the warm season and very low with a small gradient in the cold season. Atmospheric ozone in the ABL is consumed by photochemical processes in the warm season, which results in only slight variation with altitude. A secondary ozone maximum is located in the boundary layer, indicating that ozone transferred from the stratosphere to the troposphere reaches the lower boundary layer during October and November in Antarctica.

    Littlest Higgs model and associated ZH production at high energy $e^{+}e^{-}$ collider

    In the context of the littlest Higgs (LH) model, we consider the Higgs-strahlung process $e^{+}e^{-}\to ZH$. We find that the correction effects on this process mainly come from the heavy photon $B'$. If we take the mixing angle parameter $c$ in the range of 0.75 to 1, the contribution of the heavy gauge boson $W_{3}'$ is larger than 6%. In most of the parameter space, the deviation of the total production cross section $\sigma^{tot}$ from its SM value is larger than 5%, which may be detected in future high energy $e^{+}e^{-}$ collider (LC) experiments. Future LC experiments could test the LH model by measuring the cross section of the process $e^{+}e^{-}\to ZH$. (13 pages, 3 figures.)
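    For clarity, the quoted 5% figure refers to the relative deviation of the total cross section from its Standard Model value (the symbol $\delta$ and the subscripts below are introduced here for illustration, not taken from the paper):

```latex
\left|\delta\right| \;=\;
\frac{\left|\sigma^{tot}_{LH} - \sigma^{tot}_{SM}\right|}{\sigma^{tot}_{SM}}
\;>\; 5\%
```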