A Study on Multirobot Quantile Estimation in Natural Environments
Quantiles of a natural phenomenon can provide scientists with an important
understanding of different spreads of concentrations. When there are several
available robots, it may be advantageous to pool resources in a collaborative
way to improve performance. A multirobot team can be difficult to practically
bring together and coordinate. To this end, we present a study across several
axes of the impact of using multiple robots to estimate quantiles of a
distribution of interest using an informative path planning formulation. We
measure quantile estimation accuracy with increasing team size to understand
what benefits result from a multirobot approach in a drone exploration task of
analyzing the algae concentration in lakes. We additionally perform an analysis
on several parameters, including the spread of robot initial positions, the
planning budget, and inter-robot communication, and find that while using more
robots generally results in lower estimation error, this benefit is realized only
under certain conditions. We present our findings in the context of real field
robotic applications and discuss the implications of the results and
interesting directions for future work.
Comment: 7 pages, 2 tables, 7 figures
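As a minimal illustration of the kind of estimate such a team produces, the sketch below pools hypothetical algae-concentration readings from several robots and computes sample quantiles with NumPy. The function name and data are invented for illustration; the paper's informative path planning formulation is not reproduced here.

```python
import numpy as np

def pooled_quantile_estimate(robot_samples, quantiles):
    """Pool concentration measurements collected by several robots and
    estimate the requested quantiles of the underlying distribution."""
    pooled = np.concatenate(robot_samples)
    return np.quantile(pooled, quantiles)

# Hypothetical algae-concentration readings from a three-robot team
samples = [
    np.array([0.2, 0.4, 0.35]),
    np.array([0.5, 0.45]),
    np.array([0.1, 0.6, 0.55, 0.3]),
]
est = pooled_quantile_estimate(samples, [0.5, 0.9])  # median and 90th percentile
```

With more robots contributing samples, the pooled estimate tightens; the paper's point is that this benefit depends on conditions such as initial positions, budget, and communication.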
The challenge of preparing teams for the European robotics league: Emergency
© 2017, Society for Imaging Science and Technology. ERL Emergency is an outdoor multi-domain robotic competition inspired by the 2011 Fukushima accident. The ERL Emergency Challenge requires teams of land, underwater and flying robots to work together to survey the scene, collect environmental data, and identify critical hazards. To prepare teams for this multidisciplinary task, a series of summer schools and workshops have been arranged. In this paper, the challenges and hands-on results of bringing students and researchers to collaborate successfully in unknown environments and in new research areas are explained. As a case study, results from the euRathlon/SHERPA workshop 2015 in Oulu are given.
Sphericall: A Human/Artificial Intelligence interaction experience
Multi-agent systems are now widespread in scientific work and in industrial applications, yet few applications deal with Human/Multi-agent system interaction. Multi-agent systems are characterized by individual entities, called agents, that interact with each other and with their environment. Multi-agent systems are generally classified as complex systems, since the global emerging phenomenon cannot be predicted even if every component is well known. The systems developed in this paper are called reactive because they behave using simple interaction models. In the reactive approach, the issue of Human/system interaction is hard to cope with and is scarcely covered in the literature. This paper presents Sphericall, an application aimed at studying Human/Complex System interactions, based on two physics-inspired multi-agent systems interacting together. The Sphericall device is composed of a tactile screen and a spherical world where agents evolve. This paper presents both the technical background of the Sphericall project and feedback from the demonstration performed during the OFFF Festival at La Villette (Paris).
Swarm Metaverse for Multi-Level Autonomy Using Digital Twins
Robot swarms are becoming popular in domains that require spatial coordination. Effective human control over swarm members is pivotal for ensuring swarm behaviours align with the dynamic needs of the system. Several techniques have been proposed for scalable human–swarm interaction. However, these techniques were mostly developed in simple simulation environments, without guidance on how to scale them up to the real world. This paper addresses this research gap by proposing a metaverse for scalable control of robot swarms and an adaptive framework for different levels of autonomy. In the metaverse, the physical/real world of a swarm symbiotically blends with a virtual world formed from digital twins representing each swarm member and logical control agents. The proposed metaverse drastically decreases swarm control complexity because the human relies on only a few virtual agents, with each agent dynamically actuating on a sub-swarm. The utility of the metaverse is demonstrated by a case study in which humans controlled a swarm of uncrewed ground vehicles (UGVs) using gestural communication, via a single virtual uncrewed aerial vehicle (UAV). The results show that humans could successfully control the swarm under two different levels of autonomy, while task performance increases as autonomy increases.
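The control-complexity reduction described above can be sketched in a few lines: the human issues one command to a virtual agent, which fans it out to every robot in its sub-swarm. The class and attribute names below are hypothetical; the paper's metaverse, digital twins, and gesture interface are not reproduced.

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    """Physical swarm member mirrored by a digital twin."""
    name: str
    waypoint: tuple = None

@dataclass
class VirtualAgent:
    """Logical control agent that actuates on a whole sub-swarm."""
    sub_swarm: list = field(default_factory=list)

    def command(self, waypoint):
        # One human-issued command fans out to every member robot,
        # so the operator interacts with one agent, not N robots.
        for robot in self.sub_swarm:
            robot.waypoint = waypoint

agent = VirtualAgent([Robot("ugv1"), Robot("ugv2"), Robot("ugv3")])
agent.command((10.0, 5.0))
```

The design point is that the human's interaction cost scales with the number of virtual agents, not with the swarm size.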
Control of robot swarms through natural language dialogue: A case study on monitoring fires
There are numerous environmental and non-environmental disasters happening
throughout the world, representing a great danger to ordinary people, community
helpers, and to the fauna and flora. Developing a program capable of controlling
swarms of robots, using natural language processing (NLP) and, further on, a
speech-to-text system, will enable a more mobile solution, with no need for a
keyboard and mouse or a mobile device to operate the robots. Using a well-developed
NLP system will allow the program to understand natural-language-based
interactions, making this system usable in different contexts. In
firefighting, the use of robots, more specifically drones, enables new ways to obtain
reliable information that before was based on guesses or on the knowledge of someone
with long-time field experience. Using a swarm of robots to monitor a fire
offers innumerable advantages, from the creation of a dynamic fire map and climate
information inside the fire, to finding lost firefighters in the field through the
generated map. This work uses firefighting as a case study, but other situations can be
considered, such as searching for someone at sea or searching for toxins in an open
environmental area.
Supervisory Autonomous Control of Homogeneous Teams of Unmanned Ground Vehicles, with Application to the Multi-Autonomous Ground-Robotic International Challenge
There are many different proposed methods for supervisory control of semi-autonomous robots. There have also been numerous software simulations to determine how many robots can be successfully supervised by a single operator, a problem known as fan-out, but only a few studies have been conducted using actual robots. As evidenced by the MAGIC 2010 competition, there is increasing interest in amplifying human capacity by allowing one or a few operators to supervise a team of robotic agents. This interest motivates a more in-depth evaluation of how many autonomous/semi-autonomous robots an operator can successfully supervise. The MAGIC competition allowed two human operators to supervise a team of robots in a complex search-and-mapping operation, and it provided the best opportunity to date to study through practice the actual fan-out with multiple semi-autonomous robots. The current research provides a step forward in determining fan-out by offering an initial framework for testing multi-robot teams under supervisory control. One conclusion of this research is that the proposed framework is not complex or complete enough to provide conclusive data for determining fan-out. Initial testing using operators with limited training suggests that there is no obvious pattern to operator interaction time with robots based on the number of robots and the complexity of the tasks. The initial hypothesis, that for a given task and robot there exists an optimal robot-to-operator efficiency ratio, could not be confirmed. Rather, the data suggest that the ability of the operator is a dominant factor in studies involving operators with limited training supervising small teams of robots. It is possible that, with more extensive training, operator times would become more closely related to the number of agents and the complexity of the tasks.
The work described in this thesis provides an experimental framework and a preliminary data set for other researchers to critique and build upon. As the demand for agent-to-operator ratios greater than one increases, the need to expand research in this area will continue to grow.
Human-Robot Team Performance Compared to Full Robot Autonomy in 16 Real-World Search and Rescue Missions: Adaptation of the DARPA Subterranean Challenge
Human operators in human-robot teams are commonly perceived to be critical
for mission success. To explore the direct and perceived impact of operator
input on task success and team performance, 16 real-world missions (10 hrs)
were conducted based on the DARPA Subterranean Challenge. These missions were
to deploy a heterogeneous team of robots for a search task to locate and
identify artifacts such as climbing rope, drills and mannequins representing
human survivors. Two conditions were evaluated: human operators that could
control the robot team with state-of-the-art autonomy (Human-Robot Team)
compared to autonomous missions without human operator input (Robot-Autonomy).
Human-Robot Teams were often in directed autonomy mode (70% of mission time),
found more items, traversed more distance, covered more unique ground, and had
a higher time between safety-related events. Human-Robot Teams were faster at
finding the first artifact, but slower to respond to information from the robot
team. In routine conditions, scores were comparable for artifacts, distance,
and coverage. Reasons for intervention included creating waypoints to
prioritise high-yield areas, and to navigate through error-prone spaces. After
observing robot autonomy, operators reported increases in robot competency and
trust, but that robot behaviour was not always transparent and understandable,
even after high mission performance.
Comment: Submitted to Transactions on Human-Robot Interaction
Advancing Robot Autonomy for Long-Horizon Tasks
Autonomous robots have real-world applications in diverse fields, such as
mobile manipulation and environmental exploration, and many such tasks benefit
from a hands-off approach in terms of human user involvement over a long task
horizon. However, the level of autonomy achievable by a deployment is limited
in part by the problem definition or task specification required by the system.
Task specifications often require technical, low-level information that is
unintuitive to describe and may result in generic solutions, burdening the user
technically both before and after task completion. In this thesis, we aim to
advance task specification abstraction toward the goal of increasing robot
autonomy in real-world scenarios. We do so by tackling problems that address
several different angles of this goal. First, we develop a way for the
automatic discovery of optimal transition points between subtasks in the
context of constrained mobile manipulation, removing the need for the human to
hand-specify these in the task specification. We further propose a way to
automatically describe constraints on robot motion by using demonstrated data
as opposed to manually-defined constraints. Then, within the context of
environmental exploration, we propose a flexible task specification framework
requiring from the user just a set of quantiles of interest, which allows the
robot to directly suggest locations in the environment for the user to study.
We next systematically study the effect of including a robot team in the task
specification and show that multirobot teams have the ability to improve
performance under certain specification conditions, including enabling
inter-robot communication. Finally, we propose methods for a communication
protocol that autonomously selects useful but limited information to share with
the other robots.
Comment: PhD dissertation, 160 pages
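The quantile-based specification idea above can be sketched as follows: given only the user's quantiles of interest, pick from the robot's measurements the locations whose values best match the estimated quantile values. All names and data here are hypothetical illustrations; the thesis's actual informative path planner is not reproduced.

```python
import numpy as np

def suggest_locations(locations, values, user_quantiles):
    """For each user-requested quantile, return the measured location
    whose value is closest to the estimated quantile value.
    A toy sketch of quantile-based task specification."""
    values = np.asarray(values)
    targets = np.quantile(values, user_quantiles)
    picks = [locations[int(np.argmin(np.abs(values - t)))] for t in targets]
    return targets, picks

# Hypothetical measurement locations and concentration values
locs = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 2)]
vals = [0.1, 0.9, 0.5, 0.3, 0.7]
targets, picks = suggest_locations(locs, vals, [0.5])
```

The user never specifies low-level planner parameters; the quantile set alone determines which locations the system proposes for study.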