METRA: Scalable Unsupervised RL with Metric-Aware Abstraction
Unsupervised pre-training strategies have proven to be highly effective in
natural language processing and computer vision. Likewise, unsupervised
reinforcement learning (RL) holds the promise of discovering a variety of
potentially useful behaviors that can accelerate the learning of a wide array
of downstream tasks. Previous unsupervised RL approaches have mainly focused on
pure exploration and mutual information skill learning. However, despite these
attempts, making unsupervised RL truly scalable remains a major
open challenge: pure exploration approaches might struggle in complex
environments with large state spaces, where covering every possible transition
is infeasible, and mutual information skill learning approaches might
completely fail to explore the environment due to the lack of incentives. To
make unsupervised RL scalable to complex, high-dimensional environments, we
propose a novel unsupervised RL objective, which we call Metric-Aware
Abstraction (METRA). Our main idea is, instead of directly covering the entire
state space, to only cover a compact latent space that is metrically
connected to the state space by temporal distances. By learning to move in
every direction in the latent space, METRA obtains a tractable set of diverse
behaviors that approximately cover the state space, making it scalable to
high-dimensional environments. Through our experiments in five locomotion and
manipulation environments, we demonstrate that METRA can discover a variety of
useful behaviors even in complex, pixel-based environments, and is the first
unsupervised RL method to discover diverse locomotion behaviors in
pixel-based Quadruped and Humanoid. Our code and videos are available at
https://seohong.me/projects/metra
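The core quantity in the abstract, moving in a chosen direction of a temporally connected latent space, can be illustrated with a minimal sketch. The encoder `phi` and the function name below are stand-ins of ours, not from the paper, and the temporal-distance (Lipschitz) constraint that METRA places on the encoder is omitted:

```python
import numpy as np

def metra_reward(phi_s, phi_s_next, z):
    """Reward the latent-space displacement projected onto the skill
    direction z. In METRA this quantity is maximized subject to a
    temporal-distance constraint on the encoder phi (omitted here)."""
    return float(np.dot(phi_s_next - phi_s, z))

# Toy example: a 2-D latent space and a unit-norm skill direction.
phi_s = np.array([0.0, 0.0])       # encoding of the current state
phi_s_next = np.array([1.0, 0.0])  # encoding of the next state
z = np.array([1.0, 0.0])           # skill: "move right" in latent space
r = metra_reward(phi_s, phi_s_next, z)  # -> 1.0
```

A policy trained on this reward for many sampled directions `z` is pushed to make progress along every axis of the latent space, which is what yields the diverse behaviors described above.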
Gesture Recognition in Robotic Surgery: a Review
OBJECTIVE: Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state of the art in methods for the automatic recognition of fine-grained gestures in robotic surgery, focusing on recent data-driven approaches, and outlines open questions and future research directions.
METHODS: An article search was performed on 5 bibliographic databases with combinations of the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surgeme, action, trajectory, segmentation, recognition, parsing. Selected articles were classified by the level of supervision required for training and divided into groups representing the major frameworks for time-series analysis and data modelling.
RESULTS: A total of 52 articles were reviewed. The field is expanding rapidly, with the majority of articles published in the last 4 years. Deep-learning-based temporal models with discriminative feature extraction and multi-modal data integration have demonstrated promising results on small surgical datasets. Currently, unsupervised methods perform significantly worse than supervised approaches.
CONCLUSION: The development of large, diverse, open-source datasets of annotated demonstrations is essential for the development and validation of robust solutions for surgical gesture recognition. While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the need for data and labels, they have not yet been shown to achieve comparable performance. Important future research directions include the detection and forecasting of gesture-specific errors and anomalies.
SIGNIFICANCE: This paper provides a comprehensive and structured analysis of surgical gesture recognition methods, summarizing the status of this rapidly evolving field.
Learn Goal-Conditioned Policy with Intrinsic Motivation for Deep Reinforcement Learning
It is important for an agent to learn a widely applicable, general-purpose
policy that can achieve diverse goals, including images and text
descriptions. Considering such perceptually-specific goals, the frontier of
deep reinforcement learning research is to learn a goal-conditioned policy
without hand-crafted rewards. To learn this kind of policy, recent works
usually take as the reward the non-parametric distance to a given goal in an
explicit embedding space. From a different viewpoint, we propose a novel
unsupervised learning approach named goal-conditioned policy with intrinsic
motivation (GPIM), which jointly learns both an abstract-level policy and a
goal-conditioned policy. The abstract-level policy is conditioned on a latent
variable to optimize a discriminator and discovers diverse states that are
further rendered into perceptually-specific goals for the goal-conditioned
policy. The learned discriminator serves as an intrinsic reward function for
the goal-conditioned policy to imitate the trajectory induced by the
abstract-level policy. Experiments on various robotic tasks demonstrate the
effectiveness and efficiency of our proposed GPIM method, which substantially
outperforms prior techniques.
Comment: Accepted by AAAI-2
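The abstract's "learned discriminator serves as an intrinsic reward" is a pattern shared by many skill-discovery methods: reward the agent with the discriminator's log-probability of the latent code it was pursuing, log q(z | s). The sketch below shows that common form under our own names; GPIM's exact reward shaping may differ:

```python
import numpy as np

def discriminator_reward(logits, z_index):
    """Intrinsic reward log q(z | s): the discriminator's log-probability
    of the latent code z_index given the visited state, computed from its
    raw output logits via a log-softmax."""
    log_probs = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return float(log_probs[z_index])

# A state the discriminator attributes to code 0 gives a higher reward
# when the agent was indeed pursuing code 0 than when it was pursuing
# code 1, so the policy is driven to make its codes distinguishable.
logits = np.array([2.0, 0.0, 0.0])
r_match = discriminator_reward(logits, 0)
r_mismatch = discriminator_reward(logits, 1)
```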
Explore, discover and learn: unsupervised discovery of state-covering skills
Acquiring abilities in the absence of a task-oriented reward function is at the frontier of reinforcement learning research. This problem has been studied through the lens of empowerment, which draws a connection between option discovery and information theory. Information-theoretic skill discovery methods have garnered much interest from the community, but little research has been conducted into understanding their limitations. Through theoretical analysis and empirical evidence, we show that existing algorithms suffer from a common limitation: they discover options that provide poor coverage of the state space. In light of this, we propose 'Explore, Discover and Learn' (EDL), an alternative approach to information-theoretic skill discovery. Crucially, EDL optimizes the same information-theoretic objective derived from the empowerment literature, but addresses the optimization problem using different machinery. We perform an extensive evaluation of skill discovery methods in controlled environments and show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned.
This work was partially supported by the Spanish Ministry of Science and Innovation and the European Regional Development Fund under contracts TEC2016-75976-R and TIN2015-65316-P, by the BSC-CNS Severo Ochoa program SEV-2015-0493, and by Generalitat de Catalunya under contracts 2017-SGR-1414 and 2017-DI-011. Víctor Campos was supported by Obra Social “la Caixa” through the La Caixa-Severo Ochoa International Doctoral Fellowship program.
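The coverage limitation the abstract describes can be seen directly in the mutual-information objective I(S; Z) that empowerment-based methods optimize: MI measures how distinguishable skills are, not how much of the state space they visit. A minimal numeric illustration (the joint table and function name are ours, not from the paper):

```python
import numpy as np

def mutual_information(p_sz):
    """I(S;Z) = sum_{s,z} p(s,z) * log[ p(s,z) / (p(s) p(z)) ]
    for a discrete joint distribution given as a |S| x |Z| table."""
    p_s = p_sz.sum(axis=1, keepdims=True)   # marginal over states
    p_z = p_sz.sum(axis=0, keepdims=True)   # marginal over skills
    denom = p_s * p_z
    mask = p_sz > 0                         # skip zero-probability cells
    return float(np.sum(p_sz[mask] * np.log(p_sz[mask] / denom[mask])))

# Two skills, three states. Each skill deterministically occupies one
# state, so I(S;Z) = log 2, the maximum for two skills, even though
# state 2 is never visited: MI rewards distinguishability, not coverage.
narrow = np.array([[0.5, 0.0],
                   [0.0, 0.5],
                   [0.0, 0.0]])
mi = mutual_information(narrow)  # -> log(2) ~= 0.693
```

EDL's contribution, per the abstract, is not a different objective but different optimization machinery that avoids this degenerate solution.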