2,691 research outputs found

    Exploiting Symmetry and Heuristic Demonstrations in Off-policy Reinforcement Learning for Robotic Manipulation

    Reinforcement learning demonstrates significant potential for automatically building control policies in numerous domains, but it shows low efficiency when applied to robot manipulation tasks due to the curse of dimensionality. To facilitate the learning of such tasks, prior knowledge or heuristics that incorporate inherent simplification can effectively improve learning performance. This paper defines and incorporates the natural symmetry present in physical robotic environments. Sample-efficient policies are then trained by exploiting expert demonstrations in symmetrical environments through a combination of reinforcement learning and behavior cloning, which gives the off-policy learning process a diverse yet compact initialization. Furthermore, the paper presents a rigorous framework for this recent concept and explores its scope for robot manipulation tasks. The proposed method is validated on two point-to-point reaching tasks of an industrial arm, with and without an obstacle, in a simulation study. A PID controller, which tracks linear joint-space trajectories with hard-coded temporal logic to produce interim midpoints, is used to generate the demonstrations. The results show the effect of the number of demonstrations and quantify the contribution of behavior cloning, exemplifying the possible improvement of model-free reinforcement learning in common manipulation tasks. A comparison with a traditional off-policy reinforcement learning algorithm indicates the method's advantage in learning performance and its potential value for applications.
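    As a rough illustration of the kind of update the abstract describes, the sketch below combines a DDPG-style off-policy actor objective with a behavior-cloning term on demonstration data. The network sizes, the `bc_weight` factor, and the DDPG-style critic objective are assumptions for illustration, not details taken from the paper.

```python
# A minimal sketch (not the paper's exact algorithm) of an off-policy actor update
# that blends a deterministic policy-gradient term with a behavior-cloning term
# on demonstration data. Dimensions, architectures, and bc_weight are assumed.
import torch
import torch.nn as nn

obs_dim, act_dim = 12, 6          # assumed dimensions for a 6-DoF arm
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

def actor_update(rl_obs, demo_obs, demo_act, bc_weight=0.5):
    """One actor step: maximize Q on replay states, imitate the expert on demo states."""
    rl_loss = -critic(torch.cat([rl_obs, actor(rl_obs)], dim=-1)).mean()
    bc_loss = ((actor(demo_obs) - demo_act) ** 2).mean()   # behavior cloning (MSE)
    loss = rl_loss + bc_weight * bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example call with random stand-ins for a replay batch and a demonstration batch.
actor_update(torch.randn(32, obs_dim), torch.randn(16, obs_dim), torch.rand(16, act_dim) * 2 - 1)
```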

    Detection of entangled states supported by reinforcement learning

    Discrimination of entangled states is an important element of quantum-enhanced metrology. It typically requires low-noise detection technology. Such a challenge can be circumvented by introducing a nonlinear readout process. Traditionally, this is realized by reversing the very dynamics that generates the entangled state, which requires full control over the system evolution. In this work, we present nonlinear readout of highly entangled states by employing reinforcement learning (RL) to manipulate the spin-mixing dynamics in a spin-1 atomic condensate. The protocol found by RL drives the system towards an unstable fixed point, whereby the phase perturbation to be sensed is amplified by the subsequent spin-mixing dynamics. Working with a condensate of 10900 ^{87}Rb atoms, we achieve a metrological gain of 6.97 dB beyond the classical precision limit. Our work opens up new possibilities for unlocking the full potential of entanglement-enabled quantum enhancement in experiments.
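    For context on the quoted figure, the snippet below converts a metrological gain in dB into the corresponding reduction in phase uncertainty relative to the standard quantum limit 1/sqrt(N). The "measured" uncertainty is a hypothetical stand-in chosen to reproduce 6.97 dB, not the experimental value.

```python
# Back-of-the-envelope conversion between a metrological gain in dB and the
# phase uncertainty relative to the standard quantum limit (SQL) for N atoms.
import math

N = 10900                            # atom number quoted in the abstract
delta_phi_sql = 1.0 / math.sqrt(N)   # SQL phase uncertainty scales as 1/sqrt(N)

def gain_dB(delta_phi_measured):
    """Gain in dB = 20*log10(uncertainty ratio) = 10*log10(variance ratio)."""
    return 20.0 * math.log10(delta_phi_sql / delta_phi_measured)

# Hypothetical measured uncertainty that would correspond to the quoted 6.97 dB:
delta_phi_measured = delta_phi_sql / 10 ** (6.97 / 20)
print(f"SQL: {delta_phi_sql:.5f} rad, measured: {delta_phi_measured:.5f} rad")
print(f"gain: {gain_dB(delta_phi_measured):.2f} dB")
```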

    Analytic studies of local-severe-storm observables by satellites

    Attention is concentrated on the exceptionally violent whirlwind, often characterized by a fairly vertical axis of rotation. For a cylindrical polar coordinate system with its axis coincident with the axis of rotation, the secondary flow involves the radial and axial velocity components. The thesis advanced is, first, that a violent whirlwind is characterized by swirl speeds relative to the axis of rotation on the order of 90 m/s, with 100 m/s being close to an upper bound. This estimate is based on interpretation of funnel-cloud shape (which also suggests properties of the radial profile of swirl, as well as its maximum magnitude); an error assessment of the funnel-cloud interpretation procedure is developed. Second, computation of the ground-level pressure deficits achievable from typical tornado-spawning ambients by idealized thermohydrostatic processes suggests that a two-cell structure is required to sustain such large speeds.
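    As a rough, independent consistency check (not taken from the thesis), cyclostrophic balance for a Rankine vortex implies a center-to-ambient pressure deficit of rho * v_max^2; the sketch below evaluates it for the quoted swirl speeds with an assumed air density.

```python
# Illustrative estimate: pressure deficit implied by cyclostrophic balance,
# dp/dr = rho * v^2 / r, for a Rankine vortex with peak swirl v_max.
rho = 1.2  # kg/m^3, assumed near-surface air density

def rankine_pressure_deficit(v_max):
    """Center-to-ambient pressure drop (Pa) for a Rankine vortex in cyclostrophic balance."""
    # Integrating dp/dr = rho*v^2/r with v = v_max*r/R inside the core and
    # v = v_max*R/r outside gives rho*v_max^2/2 from each region, rho*v_max^2 in total.
    return rho * v_max ** 2

for v in (90.0, 100.0):  # swirl speeds discussed in the abstract, m/s
    print(f"v_max = {v:5.1f} m/s  ->  delta_p ~ {rankine_pressure_deficit(v) / 100:.0f} hPa")
```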

    Roadmap on Machine learning in electronic structure

    In recent years, we have been witnessing a paradigm shift in computational materials science. In fact, traditional methods, mostly developed in the second half of the XXth century, are being complemented, extended, and sometimes even completely replaced by faster, simpler, and often more accurate approaches. The new approaches, which we collectively label machine learning, have their origins in the fields of informatics and artificial intelligence, but are making rapid inroads in all other branches of science. With this in mind, this Roadmap article, consisting of multiple contributions from experts across the field, discusses the use of machine learning in materials science and shares perspectives on current and future challenges in problems as diverse as the prediction of materials properties, the construction of force fields, the development of exchange-correlation functionals for density-functional theory, the solution of the many-body problem, and more. In spite of the already numerous and exciting success stories, we are just at the beginning of a long path that will reshape materials science for the many challenges of the XXIst century.
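    As a minimal illustration of one theme the roadmap covers (surrogate models that map structural descriptors to a materials property), the sketch below fits a Gaussian kernel ridge regression on synthetic data; the descriptors, target values, and hyperparameters are placeholders, not anything from the article.

```python
# A minimal sketch of a descriptor-to-property surrogate model via kernel ridge
# regression. Descriptors, targets, and hyperparameters below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.random((200, 8))            # stand-in descriptors (e.g. composition features)
y_train = np.sin(X_train.sum(axis=1))     # stand-in property values
X_test = rng.random((5, 8))

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: alpha = (K + lambda*I)^-1 y, prediction = K_test @ alpha
lam = 1e-3
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
y_pred = rbf_kernel(X_test, X_train) @ alpha
print(y_pred)
```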

    Deep Reinforcement Learning: A Brief Survey

    Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and the asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.
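    As a concrete reference point for one of the central algorithms the survey covers, the sketch below shows a single DQN update: a Q-network regressed onto bootstrapped targets produced by a slowly updated target network over a sampled batch of transitions. Dimensions and hyperparameters are illustrative, not values from the survey.

```python
# A minimal sketch of one DQN gradient step on the temporal-difference error.
import copy
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 2, 0.99
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = copy.deepcopy(q_net)   # frozen copy, refreshed periodically in practice
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(obs, act, rew, next_obs, done):
    """One gradient step on a sampled batch of replay transitions."""
    with torch.no_grad():
        target = rew + gamma * (1 - done) * target_net(next_obs).max(dim=1).values
    q_sa = q_net(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example call with a random batch of 32 transitions.
dqn_update(torch.randn(32, obs_dim), torch.randint(0, n_actions, (32,)),
           torch.randn(32), torch.randn(32, obs_dim), torch.zeros(32))
```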

    Enhanced backscatter of optical beams reflected in turbulent air

    Optical beams propagating through air acquire phase distortions from turbulent fluctuations in the refractive index. While these distortions are usually deleterious to propagation, beams reflected in a turbulent medium can undergo a local recovery of spatial coherence and an intensity enhancement referred to as enhanced backscatter (EBS). Using a combination of lab-scale experiments and simulations, we investigate the EBS of optical beams reflected from corner cubes and rough surfaces, and identify the regimes in which EBS is most distinctly observed.
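    For background scale (not the paper's analysis), the snippet below evaluates two standard turbulence-optics quantities for a horizontal path: the plane-wave Rytov variance and the Fried parameter. The Cn^2 value, wavelength, and path length are assumptions chosen only to illustrate the formulas.

```python
# Standard plane-wave estimates of turbulence strength along a horizontal path:
# Rytov variance sigma_R^2 = 1.23*Cn2*k^(7/6)*L^(11/6) and Fried parameter
# r0 = (0.423*k^2*Cn2*L)^(-3/5). All input values below are assumed.
import math

wavelength = 1.55e-6   # m, assumed
Cn2 = 1e-14            # m^(-2/3), assumed moderate near-ground turbulence
L = 1.0e3              # m, assumed one-way path length
k = 2 * math.pi / wavelength

rytov_var = 1.23 * Cn2 * k ** (7 / 6) * L ** (11 / 6)
r0 = (0.423 * k ** 2 * Cn2 * L) ** (-3 / 5)

print(f"Rytov variance ~ {rytov_var:.2f} (values >~1 indicate strong scintillation)")
print(f"Fried parameter r0 ~ {100 * r0:.1f} cm")
```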