A Hierarchical Architecture for Flexible Human-Robot Collaboration
This thesis is devoted to designing a software architecture for Human-Robot Collaboration (HRC) that enhances robots' abilities to work alongside humans. We propose FlexHRC, a hierarchical and flexible human-robot cooperation architecture specifically designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability tasks. Along with FlexHRC, we introduce novel techniques at three interleaved levels, namely perception, representation, and action, each aimed at addressing specific traits of human-robot cooperation tasks.
The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots could bring to the whole production process. In this context, a yet unreached enabling technology is the design of robots able to deal, at all levels, with humans' intrinsic variability, which is not only necessary for a comfortable working experience for humans but also a precious capability for efficiently dealing with unexpected events. Moreover, flexible assembly of semi-finished products is one of the expected features of next-generation shop-floor lines. Currently, such flexibility is placed on the shoulders of human operators, who are responsible for product variability and are therefore subject to potentially high stress levels and cognitive load when dealing with complex operations. At the same time, operations on the shop floor are still very structured and well-defined. Collaborative robots have been designed to allow a transition of this burden from human operators to robots flexible enough to support them in high-variability tasks as they unfold.
As mentioned before, the FlexHRC architecture encompasses three levels: perception, representation, and action. The perception level relies on wearable sensors for human action recognition and on point cloud data for perceiving the objects in the scene. The action level embraces four components: a robot execution manager, which decouples action planning from robot motion planning and maps symbolic actions to the robot controller's command interface; a task priority framework to control the robot; a differential equation solver to simulate and evaluate the robot behaviour on the fly; and a sampling-based method for robot path planning. The representation level depends on AND/OR graphs for representing and reasoning upon human-robot cooperation models online, a task manager to plan, adapt, and make decisions about the robot's behaviour, and a knowledge base to store cooperation and workspace information.
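The AND/OR graph idea behind the representation level can be sketched in a few lines: AND nodes require all of their children to be achieved, OR nodes only one. The node names, gate encoding, and table-assembly fragment below are illustrative assumptions, not the FlexHRC implementation itself.

```python
# Minimal sketch of an AND/OR graph for cooperation task representation.
# Node names and the assembly fragment are hypothetical examples.

class Node:
    def __init__(self, name, gate="AND", children=None):
        self.name = name
        self.gate = gate              # "AND": all children needed; "OR": any one
        self.children = children or []
        self.done = False             # set when an agent achieves this node

    def feasible(self):
        """A node is achieved when it is marked done, or when its
        children satisfy the gate condition."""
        if self.done:
            return True
        if not self.children:
            return False
        results = [c.feasible() for c in self.children]
        return all(results) if self.gate == "AND" else any(results)

# Hypothetical table-assembly fragment: the table is assembled when the top
# is placed AND a leg is screwed in (by either agent, hence the OR node).
leg = Node("leg screwed", gate="OR",
           children=[Node("human screws leg"), Node("robot screws leg")])
top = Node("top placed")
table = Node("table assembled", gate="AND", children=[top, leg])

top.done = True
leg.children[1].done = True       # the robot performed the screwing action
print(table.feasible())           # -> True
```

The OR node is what gives the cooperation model its team-level flexibility: either the human or the robot can satisfy a subtask, and the graph traversal remains valid regardless of who acts.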
We evaluated the FlexHRC functionalities against the application's desired objectives. This evaluation is accompanied by several experiments, namely a collaborative screwing task, coordinated transportation of objects in a cluttered environment, a collaborative table assembly task, and object positioning tasks.
The main contributions of this work are: (i) the design and implementation of FlexHRC, which enables the functional requirements necessary for the shop-floor assembly application, such as task- and team-level flexibility, scalability, adaptability, and safety, to name just a few; (ii) the development of the task representation, which integrates a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic; (iii) an in-the-loop simulation-based decision-making process for the operations of collaborative robots coping with the variability of human operator actions; (iv) the robot's adaptation to the human's on-the-fly decisions and actions via human action recognition; and (v) robot behaviour that is predictable to the human user, thanks to the task-priority-based control framework, the introduced path planner, and the natural and intuitive communication of the robot with the human.
Home alone: autonomous extension and correction of spatial representations
In this paper we present an account
of the problems faced by a mobile robot given
an incomplete tour of an unknown environment,
and introduce a collection of techniques which can
generate successful behaviour even in the presence
of such problems. Underlying our approach is the
principle that an autonomous system must be motivated
to act to gather new knowledge, and to validate
and correct existing knowledge. This principle is
embodied in Dora, a mobile robot which features
the aforementioned techniques: shared representations,
non-monotonic reasoning, and goal generation
and management. To demonstrate how well this collection of techniques works in real-world situations, we present a comprehensive analysis of the Dora system's performance over multiple tours in an indoor environment. In this analysis, Dora successfully completed 18 of 21 attempted runs, with all but 3 of these successes requiring one or more of the integrated techniques to recover from problems.
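The motive-driven behaviour described above, generating goals from gaps in the spatial representation and selecting which to pursue, can be sketched as follows. The data layout, goal names, and gain-to-cost heuristic are assumptions for illustration, not the Dora system's actual mechanism.

```python
# Illustrative sketch of motive-driven goal generation and management:
# gaps in the spatial map become candidate goals, and a simple
# gain-to-cost heuristic (an assumption here) picks one to activate.

from dataclasses import dataclass

@dataclass
class Goal:
    description: str
    information_gain: float    # expected knowledge gained (hypothetical scale)
    cost: float                # e.g. travel distance to the place

def generate_goals(spatial_map):
    """Turn gaps in the spatial representation into candidate goals."""
    goals = []
    for place in spatial_map:
        if place["unexplored"]:
            goals.append(Goal(f"explore {place['id']}", 0.8, place["distance"]))
        if place["category"] is None:
            goals.append(Goal(f"categorise {place['id']}", 0.5, place["distance"]))
    return goals

def manage(goals):
    """Activate the goal with the best gain-to-cost ratio."""
    return max(goals, key=lambda g: g.information_gain / (1.0 + g.cost))

spatial_map = [
    {"id": "p1", "unexplored": True,  "category": "corridor", "distance": 2.0},
    {"id": "p2", "unexplored": False, "category": None,       "distance": 0.5},
]
print(manage(generate_goals(spatial_map)).description)   # -> categorise p2
```

The point of the sketch is the principle stated in the abstract: the robot is motivated to act by its own knowledge gaps, so a nearby uncategorised place can win out over a distant unexplored one.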
Robotic ubiquitous cognitive ecology for smart homes
Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real-world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and pro-actively assist users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.
Autonomous 3D Exploration of Large Structures Using an UAV Equipped with a 2D LIDAR
This paper addressed the challenge of exploring large, unknown, and unstructured
industrial environments with an unmanned aerial vehicle (UAV). The resulting system combined
well-known components and techniques with a new manoeuvre to use a low-cost 2D laser to measure
a 3D structure. Our approach combined frontier-based exploration, the Lazy Theta* path planner, and
a flyby sampling manoeuvre to create a 3D map of large scenarios. One of the novelties of our system
is that all the algorithms relied on the multi-resolution of the octomap for the world representation.
We used a Hardware-in-the-Loop (HitL) simulation environment to collect accurate measurements
of the capability of the open-source system to run online and on-board the UAV in real-time. Our
approach is compared to different reference heuristics under this simulation environment showing
better performance in regards to the amount of explored space. With the proposed approach, the UAV
is able to explore 93% of the search space under 30 min, generating a path without repetition that
adjusts to the occupied space covering indoor locations, irregular structures, and suspended obstaclesUnión Europea Marie Sklodowska-Curie 64215Unión Europea MULTIDRONE (H2020-ICT-731667)Uniión Europea HYFLIERS (H2020-ICT-779411
Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups
A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the necessity of a precise external localization. Instead, the proposed method relies on a top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on a simple yet stable visual navigation using images from an onboard monocular camera. The MPC based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method consists of a representation of the entire 3D formation as a convex hull projected along a desired path that has to be followed by the group. Such an approach provides a collision-free solution and respects the requirement of direct visibility between the team members. Uninterrupted visibility is crucial for the employed top-view localization, and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified by simulations and hardware experiments presented in the paper.
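The geometric core of leader-follower formation driving can be illustrated with a toy computation: each follower holds a fixed offset expressed in the leader's body frame, rotated into the world frame as the leader moves. The function name, offsets, and planar simplification are assumptions; the paper's method additionally wraps this in MPC, fault recovery, and visual relative localization.

```python
# Toy sketch of the leader-follower geometry: followers hold body-frame
# offsets from the leader, rotated into the world frame by the leader's
# heading. Planar (2D) simplification of the paper's 3D formation.

import math

def follower_targets(leader_pos, leader_heading, offsets):
    """Rotate each body-frame offset (ox, oy) into the world frame and
    add it to the leader position."""
    c, s = math.cos(leader_heading), math.sin(leader_heading)
    return [(leader_pos[0] + c * ox - s * oy,
             leader_pos[1] + s * ox + c * oy) for ox, oy in offsets]

# Two UGVs trailing a leader heading along +y (heading of 90 degrees).
targets = follower_targets((0.0, 0.0), math.pi / 2,
                           offsets=[(-1.0, 0.5), (-1.0, -0.5)])
print([(round(x, 2), round(y, 2)) for x, y in targets])
# -> [(-0.5, -1.0), (0.5, -1.0)]
```

Because the offsets are fixed in the leader's frame, the formation shape (and hence its projected convex hull) is preserved along any path the leader follows, which is what lets the method reason about obstacle clearance for the whole group at once.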