I Can See Your Aim: Estimating User Attention From Gaze For Handheld Robot Collaboration
This paper explores the estimation of user attention in the setting of a cooperative handheld robot: a robot designed to behave as a handheld tool but that has levels of task knowledge. We use a tool-mounted gaze tracking system which, after modelling via a pilot study, serves as a proxy for estimating the attention of the user. This information is then used for cooperation with users in a task of selecting and engaging with objects on a dynamic screen. Via a video game setup, we test various degrees of robot autonomy, from fully autonomous, where the robot knows what it has to do and acts, to no autonomy, where the user is in full control of the task. We report performance and subjective metrics, showing how the attention model benefits the interaction and users' preferences.
Comment: this is a corrected version of the one that was published at IROS 201
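The abstract's core idea of using tracked gaze as a proxy for attention can be sketched as a nearest-object lookup on screen coordinates. This is an illustrative assumption, not the paper's actual model: the function name, object layout, and the 60-pixel tolerance are all hypothetical.

```python
import math

def attended_object(gaze_xy, objects, max_dist=60.0):
    """Return the id of the on-screen object closest to the gaze point,
    or None if nothing lies within max_dist (a hypothetical tolerance)."""
    best, best_d = None, max_dist
    for obj_id, (x, y) in objects.items():
        d = math.hypot(gaze_xy[0] - x, gaze_xy[1] - y)
        if d < best_d:
            best, best_d = obj_id, d
    return best

# Two illustrative on-screen targets; gaze near target "A".
targets = {"A": (100.0, 80.0), "B": (400.0, 300.0)}
print(attended_object((110.0, 90.0), targets))  # -> A
```

In a cooperative setting, the robot could then restrict its autonomous actions to whichever object this proxy reports as attended.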
Human Swarm Interaction: An Experimental Study of Two Types of Interaction with Foraging Swarms
In this paper we present the first study of human-swarm interaction comparing two fundamental types of interaction, coined intermittent and environmental. These types are exemplified by two control methods, selection and beacon control, made available to a human operator to control a foraging swarm of robots. Selection and beacon control differ with respect to their temporal and spatial influence on the swarm and enable an operator to generate different strategies from the basic behaviors of the swarm. Selection control requires an active selection of groups of robots, while beacon control exerts an influence on nearby robots within a set range. Both control methods are implemented in a testbed in which operators solve an information foraging problem by utilizing a set of swarm behaviors. The robotic swarm has only local communication and sensing capabilities, and the number of robots in the swarm ranges from 50 to 200. Operator performance for each control method is compared in a series of missions across environments ranging from obstacle-free to cluttered and structured. In addition, performance is compared to simple and advanced autonomous swarms. Thirty-two participants were recruited for the study, and autonomous swarm algorithms were tested in repeated simulations. Our results showed that selection control scales better to larger swarms and generally outperforms beacon control. Operators utilized different swarm behaviors with different frequency across control methods, suggesting an adaptation to different strategies induced by the choice of control method. Simple autonomous swarms outperformed human operators in open environments, but operators adapted better to complex environments with obstacles. Human-controlled swarms fell short of task-specific benchmarks under all conditions.
Our results reinforce the importance of understanding and choosing appropriate types of human-swarm interaction when designing swarm systems, in addition to choosing appropriate swarm behaviors.
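The distinction between the two control methods described above can be sketched in a few lines: beacon control affects every robot within a set range of a placed beacon, while selection control affects only an explicitly chosen group. All names, the positions, and the 50-unit radius are illustrative assumptions, not the study's testbed API.

```python
import math

def beacon_influenced(robots, beacon_xy, radius=50.0):
    """Beacon control: robots within `radius` of the beacon are affected."""
    return [rid for rid, (x, y) in robots.items()
            if math.hypot(x - beacon_xy[0], y - beacon_xy[1]) <= radius]

def selection_influenced(robots, selected_ids):
    """Selection control: only robots the operator actively selected are affected."""
    return [rid for rid in selected_ids if rid in robots]

# Ten robots spaced 10 units apart along a line (illustrative layout).
swarm = {i: (float(i * 10), 0.0) for i in range(10)}
print(beacon_influenced(swarm, (0.0, 0.0)))    # robots 0..5 lie within range 50
print(selection_influenced(swarm, [2, 7, 99]))  # only the valid selections 2 and 7
```

The contrast makes the paper's "temporal and spatial influence" point concrete: a beacon's effect follows its position, whereas a selection must be actively renewed as the swarm moves.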
Making sense of words: a robotic model for language abstraction
Building robots capable of acting independently in unstructured environments is still a challenging task for roboticists. The capability to comprehend and produce language in a 'human-like' manner represents a powerful tool for the autonomous interaction of robots with human beings, for better understanding situations and exchanging information during the execution of tasks that require cooperation. In this work, we present a robotic model for grounding abstract action words (i.e. USE, MAKE) through the hierarchical organization of terms directly linked to the perceptual and motor skills of a humanoid robot. Experimental results have shown that the robot, in response to linguistic commands, is capable of performing the appropriate behaviors on objects. Results obtained in cases of inconsistency between the perceptual and linguistic inputs have shown that the robot executes the actions elicited by the seen object.
Internet of Robotic Things: Converging Sensing/Actuating, Hypoconnectivity, Artificial Intelligence and IoT Platforms
The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are advancing by using IoT technology. The IoT influence presents new development and deployment challenges in different areas such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT poses new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, along with their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating the intelligent "devices", collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new "cognitive devices" become active participants in IoT applications. This chapter aims to provide an overview of the IoRT concept, technologies, architectures and applications, and to offer comprehensive coverage of future challenges, developments and applications.
Home alone: autonomous extension and correction of spatial representations
In this paper we present an account of the problems faced by a mobile robot given an incomplete tour of an unknown environment, and introduce a collection of techniques which can generate successful behaviour even in the presence of such problems. Underlying our approach is the principle that an autonomous system must be motivated to act to gather new knowledge, and to validate and correct existing knowledge. This principle is embodied in Dora, a mobile robot which features the aforementioned techniques: shared representations, non-monotonic reasoning, and goal generation and management. To demonstrate how well this collection of techniques works in real-world situations, we present a comprehensive analysis of the Dora system's performance over multiple tours in an indoor environment. In this analysis Dora successfully completed 18 of 21 attempted runs, with all but 3 of these successes requiring one or more of the integrated techniques to recover from problems.
An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications
We propose a multi-step evaluation schema designed to help procurement agencies and others to examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.