1,676 research outputs found

    Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning

    Get PDF
    Using touch devices to navigate virtual 3D environments, such as computer-aided design (CAD) models or geographical information systems (GIS), is inherently difficult for humans, as 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed, handcrafted interaction protocol that the user must learn. We propose to automatically learn a new interaction protocol that maps 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast number of interactions they often require, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting the 2D finger trajectories from the first agent and translating them into 3D operations. We constrain the learned 2D trajectories to be similar to a training set of collected human gestures by performing state representation learning prior to reinforcement learning: the gestures are projected into a latent space learned by a variational autoencoder (VAE). Comment: 17 pages, 8 figures. Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2019 (ECML PKDD 2019).
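
    The pipeline described above has two stages: a VAE first learns a latent space of human 2D gestures (the state representation learning step), and the two RL agents then operate in that space. Below is a minimal, hypothetical sketch of such a gesture VAE in PyTorch; the trajectory length, layer sizes, and latent dimension are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of the gesture VAE described in the abstract: 2D finger
# trajectories (flattened to fixed-length vectors) are projected into a
# low-dimensional latent space. All names and dimensions are assumptions.
import torch
import torch.nn as nn

class GestureVAE(nn.Module):
    def __init__(self, traj_dim=2 * 50, latent_dim=8):  # 50 (x, y) points, assumed
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(traj_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, traj_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction keeps decoded gestures close to the collected human data;
    # the KL term regularizes the latent space the RL agents later act in.
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```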

    A robot swarm assisting a human fire-fighter

    Get PDF
    Emergencies in industrial warehouses are a major concern for fire-fighters. The large dimensions of such warehouses, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist fire-fighters in searching a large warehouse. In this paper we discuss the technology developed for a swarm of robots assisting fire-fighters. We explain the swarming algorithms that provide the functionality by which the robots react to and follow humans while requiring no communication. Next we discuss the wireless communication system, a so-called mobile ad-hoc network. The communication network also provides the means to locate the robots and the humans, so the robot swarm is able to provide guidance information to the humans. Together with the fire-fighters we explored how the robot swarm should feed information back to the human fire-fighter, and we have designed and experimented with interfaces for presenting swarm-based information to human beings.
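
    The abstract's key technical claim is that the robots react to and follow humans with no communication required, using only local sensing. The sketch below illustrates one generic communication-free follow behavior of this kind: attraction toward the sensed human combined with short-range separation from sensed neighbors. It is an illustrative stand-in, not the GUARDIANS algorithm itself; all gains and thresholds are assumptions.

```python
# Illustrative communication-free follow behavior (not the GUARDIANS
# algorithm): each robot steers toward the human it senses while keeping a
# minimum separation from locally sensed neighbors. Parameters are assumed.
import numpy as np

def follow_step(robot_pos, neighbor_pos, human_pos,
                k_attract=0.5, k_repel=1.0, min_sep=1.0, max_speed=0.3):
    """One velocity update for a single robot, from local sensing only."""
    v = k_attract * (human_pos - robot_pos)          # steer toward the human
    for n in neighbor_pos:                           # short-range separation
        d = robot_pos - n
        dist = np.linalg.norm(d)
        if 1e-9 < dist < min_sep:
            v += k_repel * (d / dist) * (min_sep - dist)
    speed = np.linalg.norm(v)
    if speed > max_speed:                            # respect the robot's limits
        v *= max_speed / speed
    return v
```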

    Internet of Robotic Things Intelligent Connectivity and Platforms

    Get PDF
    The Internet of Things (IoT) and Industrial IoT (IIoT) have developed rapidly in the past few years, as both the Internet and “things” have evolved significantly. “Things” now range from simple Radio Frequency Identification (RFID) devices to smart wireless sensors, intelligent wireless sensors and actuators, robotic things, and autonomous vehicles operating in consumer, business, and industrial environments. The emergence of “intelligent things” (static or mobile) in collaborative autonomous fleets requires new architectures, connectivity paradigms, trustworthiness frameworks, and platforms for the integration of applications across different business and industrial domains. These new applications accelerate the development of autonomous system design paradigms and the proliferation of the Internet of Robotic Things (IoRT). In IoRT, collaborative robotic things can communicate with other things, learn autonomously, interact safely with the environment, humans and other things, and gain qualities like self-maintenance, self-awareness, self-healing, and fail-operational behavior. IoRT applications can make use of the individual, collaborative, and collective intelligence of robotic things, as well as information from the infrastructure and operating context to plan, implement and accomplish tasks under different environmental conditions and uncertainties. The continuous, real-time interaction with the environment makes perception, location, communication, cognition, computation, connectivity, propulsion, and integration of federated IoRT and digital platforms important components of new-generation IoRT applications. This paper reviews the taxonomy of the IoRT, emphasizing the IoRT intelligent connectivity, architectures, interoperability, and trustworthiness framework, and surveys the technologies that enable the application of the IoRT across different domains to perform missions more efficiently, productively, and completely. The aim is to provide a novel perspective on the IoRT that involves communication among robotic things and humans and highlights the convergence of several technologies and interactions between different taxonomies used in the literature.

    MARLUI: Multi-Agent Reinforcement Learning for Adaptive UIs

    Full text link
    Adaptive user interfaces (UIs) automatically change an interface to better support users' tasks. Recently, machine learning techniques have enabled the transition to more powerful and complex adaptive UIs. However, a core challenge for adaptive UIs is their reliance on high-quality user data that has to be collected offline for each task. We formulate UI adaptation as a multi-agent reinforcement learning problem to overcome this challenge. In our formulation, a user agent mimics a real user and learns to interact with a UI. Simultaneously, an interface agent learns UI adaptations to maximize the user agent's performance. The interface agent learns the task structure from the user agent's behavior and, based on that, can support the user agent in completing its task. Our method produces adaptation policies that are learned in simulation only and therefore needs no real user data. Our experiments show that the learned policies generalize to real users and achieve performance on par with data-driven supervised learning baselines.
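
    To make the two-agent formulation concrete, the toy sketch below pairs an interface agent (a bandit-style learner deciding which item to promote in a list UI) with a simulated user that scans the adapted layout for its target. In the paper both agents learn; here the user is scripted for brevity, and the toy UI, reward, and update rule are all assumptions for illustration, not the paper's setup.

```python
# Toy, self-contained sketch of the two-agent loop: the interface agent
# learns the task structure (which item users usually want) purely from the
# simulated user's behavior. Everything here is an illustrative assumption.
import random

N_ITEMS = 5
TARGET_WEIGHTS = [1, 1, 6, 1, 1]   # assumed: users mostly want item 2

def episode(q, epsilon=0.1, alpha=0.1):
    target = random.choices(range(N_ITEMS), weights=TARGET_WEIGHTS)[0]
    # Interface agent: epsilon-greedy choice of which item to promote.
    if random.random() < epsilon:
        promoted = random.randrange(N_ITEMS)
    else:
        promoted = max(range(N_ITEMS), key=q.__getitem__)
    # Simulated user (scripted stand-in for the learned user agent):
    # scans the adapted layout top-down until it finds its target.
    layout = [promoted] + [i for i in range(N_ITEMS) if i != promoted]
    reward = -(layout.index(target) + 1)            # fewer scan steps is better
    q[promoted] += alpha * (reward - q[promoted])   # bandit-style update
    return reward

q = [0.0] * N_ITEMS
for _ in range(5000):
    episode(q)
print("learned to promote item", max(range(N_ITEMS), key=q.__getitem__))
```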

    DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning

    Full text link
    We present DRLViz, a visual analytics interface for interpreting the internal memory of an agent (e.g. a robot) trained using deep reinforcement learning. This memory is composed of large temporal vectors that are updated as the agent moves in an environment, and it is not trivial to understand due to the number of dimensions, dependencies on past vectors, spatial/temporal correlations, and co-correlations between dimensions. It is often referred to as a black box, as only the inputs (images) and outputs (actions) are intelligible to humans. DRLViz assists experts in interpreting decisions through memory reduction interactions, and in investigating the role of parts of the memory when errors have been made (e.g. a wrong direction). We report on DRLViz applied in the context of a video game simulator (ViZDoom) for a navigation scenario with item-gathering tasks. We also report on experts' evaluation of DRLViz, its applicability to other scenarios and navigation problems beyond simulation games, and its contribution to black-box model interpretability and explainability in the field of visual analytics.
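
    DRLViz's memory reduction interactions rest on projecting the high-dimensional temporal memory vectors into something plottable. Below is a minimal sketch of that idea, assuming the hidden states of one episode have been recorded as a T x D array, and using scikit-learn's PCA as a stand-in for whichever reduction the tool actually applies.

```python
# Sketch of memory reduction for visualization: stack the recurrent hidden
# states recorded over an episode (T steps x D dimensions) and project them
# to 2D. PCA is an assumed stand-in, not necessarily DRLViz's method.
import numpy as np
from sklearn.decomposition import PCA

def reduce_memory(hidden_states, n_components=2):
    """Project recorded hidden states (T, D) to (T, n_components) for plotting."""
    memory = np.asarray(hidden_states)
    return PCA(n_components=n_components).fit_transform(memory)

# Example with dummy data standing in for an episode's recorded LSTM states:
coords = reduce_memory(np.random.randn(300, 128))  # (300, 2) memory trajectory
```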