This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer
We address the problem of controlling the workspace of a 3-DoF mobile robot.
In a human-robot shared space, robots should navigate in a human-acceptable way
according to the users' demands. For this purpose, we employ virtual borders,
i.e. non-physical borders, that allow a user to restrict the robot's
workspace. To this end, we propose an interaction method based on a laser
pointer to intuitively define virtual borders. This interaction method uses a
previously developed framework based on robot guidance to change the robot's
navigational behavior. Furthermore, we extend this framework to increase its
flexibility by supporting different types of virtual borders, i.e. polygons
and curves separating an area. We evaluated our method with 15 non-expert users
concerning correctness, accuracy and teaching time. The experimental results
revealed a high accuracy and linear teaching time with respect to the border
length while correctly incorporating the borders into the robot's navigational
map. Finally, our user study showed that non-expert users can employ our
interaction method.
Comment: Accepted at 2019 Third IEEE International Conference on Robotic Computing (IRC), supplementary video: https://youtu.be/lKsGp8xtyI
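The polygonal border type described above suggests a straightforward map-level integration. As a rough illustration (not the authors' implementation), a user-defined polygon can be stamped into a 2D occupancy grid so that the planner treats the enclosed cells as forbidden; the grid layout, resolution, and ROS-style cell values below are assumptions:

```python
# Hedged sketch: mark all cells inside a polygonal virtual border as occupied,
# so standard grid-based planners route around the restricted area.
import numpy as np
from matplotlib.path import Path

def apply_virtual_border(grid, polygon_xy, resolution=0.05, origin=(0.0, 0.0)):
    """Mark every cell whose centre lies inside polygon_xy (metres) as occupied."""
    h, w = grid.shape
    xs = origin[0] + (np.arange(w) + 0.5) * resolution   # cell-centre x coords
    ys = origin[1] + (np.arange(h) + 0.5) * resolution   # cell-centre y coords
    xx, yy = np.meshgrid(xs, ys)
    centres = np.column_stack([xx.ravel(), yy.ravel()])
    inside = Path(polygon_xy).contains_points(centres).reshape(h, w)
    grid[inside] = 100   # 100 = occupied in the ROS occupancy-grid convention
    return grid

# Example: keep the robot out of a 1 m x 1 m square of a 10 m x 10 m map.
grid = np.zeros((200, 200), dtype=np.int8)
border = [(4.0, 4.0), (5.0, 4.0), (5.0, 5.0), (4.0, 5.0)]
apply_virtual_border(grid, border)
```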
A Framework for Interactive Teaching of Virtual Borders to Mobile Robots
The increasing number of robots in home environments leads to an emerging
coexistence between humans and robots. Robots undertake common tasks and
support the residents in their everyday life. People appreciate the presence of
robots in their environment as long as they keep control over them. One
important aspect is the control of a robot's workspace. Therefore, we introduce
virtual borders to precisely and flexibly define the workspace of mobile
robots. First, we propose a novel framework that allows a person to
interactively restrict a mobile robot's workspace. To show the validity of this
framework, we provide a concrete implementation based on visual markers.
Afterwards, the mobile robot is capable of performing its tasks while
respecting the new virtual borders. The approach is accurate, flexible and less
time-consuming than explicit robot programming. Hence, even non-experts are
able to teach virtual borders to their robots, which is especially interesting
for applications like vacuum-cleaning or service robots in home environments.
Comment: 7 pages, 6 figures
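For the second border type from the earlier abstract, a curve separating an area, one plausible realisation is to rasterise the user's polyline as a thin wall of occupied cells. The Bresenham rasteriser and cell values below are illustrative assumptions, not the framework's actual code:

```python
# Hedged sketch: draw a separating curve into the occupancy grid as a one-cell
# thick wall, so the planner cannot cross from one side to the other.
import numpy as np

def stamp_separating_curve(grid, waypoints_xy, resolution=0.05, origin=(0.0, 0.0)):
    """Rasterise a polyline (world coordinates, metres) as occupied cells."""
    to_cell = lambda p: (int((p[0] - origin[0]) / resolution),
                         int((p[1] - origin[1]) / resolution))
    for a, b in zip(waypoints_xy[:-1], waypoints_xy[1:]):
        (x0, y0), (x1, y1) = to_cell(a), to_cell(b)
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
        err = dx + dy
        while True:                       # classic integer Bresenham line walk
            grid[y0, x0] = 100            # occupied cell along the border
            if (x0, y0) == (x1, y1):
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy; x0 += sx
            if e2 <= dx:
                err += dx; y0 += sy
    return grid
```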
An adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In previous work, we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work, we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry, we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
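The heading-estimation step described above can be sketched with standard multi-view geometry tools. The snippet below recovers the relative rotation between matched features via the essential matrix and reads off the yaw; it assumes a calibrated pinhole model and planar motion for simplicity, whereas the paper works with omni-directional spherical views:

```python
# Hedged sketch of heading estimation from feature matches between the
# current view and a stored reference view, using OpenCV's epipolar tools.
import cv2
import numpy as np

def estimate_heading(pts_cur, pts_ref, K):
    """Return yaw (radians) of the current view relative to the reference.

    pts_cur, pts_ref: (N, 2) float arrays of matched pixel coordinates.
    K: 3x3 camera intrinsics matrix.
    """
    E, mask = cv2.findEssentialMat(pts_cur, pts_ref, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_cur, pts_ref, K, mask=mask)
    # For a robot on a ground plane, heading is (approximately) the rotation
    # about the camera's vertical axis.
    return np.arctan2(R[0, 2], R[2, 2])
```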
Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality
We address the problem of interactively controlling the workspace of a mobile
robot to ensure a human-aware navigation. This is especially of relevance for
non-expert users living in human-robot shared spaces, e.g. home environments,
since they want to keep control of their mobile robots, such as vacuum
cleaning or companion robots. Therefore, we introduce virtual borders that are
respected by a robot while performing its tasks. For this purpose, we employ an
RGB-D Google Tango tablet as a human-robot interface in combination with an
augmented reality application to flexibly define virtual borders. We evaluated
our system with 15 non-expert users concerning accuracy, teaching time and
correctness and compared the results with other baseline methods based on
visual markers and a laser pointer. The experimental results show that our
method features an equally high accuracy while reducing the teaching time
significantly compared to the baseline methods. This holds for different border
lengths, shapes and variations in the teaching process. Finally, we
demonstrated the correctness of the approach, i.e. the mobile robot changes its
navigational behavior according to the user-defined virtual borders.
Comment: Accepted at 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), supplementary video: https://youtu.be/oQO8sQ0JBR
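One plausible way to score the accuracy criterion used in these evaluations is area overlap between the taught border and a ground-truth polygon. The Jaccard metric below is an illustrative assumption, not necessarily the measure used in the paper:

```python
# Hedged sketch: accuracy of a taught virtual border as the Jaccard index
# (intersection over union) of the taught and ground-truth polygons.
from shapely.geometry import Polygon

def border_accuracy(taught_xy, truth_xy):
    """Jaccard index in [0, 1]; 1.0 means the taught border matches exactly."""
    taught, truth = Polygon(taught_xy), Polygon(truth_xy)
    union = taught.union(truth).area
    return taught.intersection(truth).area / union if union > 0 else 0.0

# Example: a taught 1 m square shifted 10 cm against the ground truth.
print(border_accuracy([(0.1, 0.0), (1.1, 0.0), (1.1, 1.0), (0.1, 1.0)],
                      [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]))
```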
Long-term experiments with an adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrates the persistent performance of the proposed system in real changing environments, including an analysis of its long-term stability.
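The multi-store memory model mentioned above can be sketched as a pair of stores with promotion and decay: features enter a short-term store, are promoted to the long-term store after repeated confirmation, and decay when no longer re-observed. The thresholds and update rule below are assumptions for illustration, not the paper's actual mechanism:

```python
# Hedged sketch of a multi-store update for reference-view features.
PROMOTE_AFTER = 3    # confirmations needed before entering the long-term store
FORGET_BELOW = -5    # decay level at which a long-term feature is dropped

def update_stores(short_term, long_term, observed_ids):
    """One update step; both stores map feature id -> confirmation/decay score."""
    for fid in observed_ids:
        if fid in long_term:
            long_term[fid] = min(long_term[fid] + 1, PROMOTE_AFTER)  # refresh
        else:
            short_term[fid] = short_term.get(fid, 0) + 1
            if short_term[fid] >= PROMOTE_AFTER:    # promote to long-term store
                long_term[fid] = PROMOTE_AFTER
                del short_term[fid]
    for fid in list(long_term):
        if fid not in observed_ids:
            long_term[fid] -= 1                     # decay when not re-observed
            if long_term[fid] <= FORGET_BELOW:
                del long_term[fid]                  # feature is forgotten
```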
Local Motion Planner for Autonomous Navigation in Vineyards with a RGB-D Camera-Based Algorithm and Deep Learning Synergy
With the advent of agriculture 3.0 and 4.0, researchers are increasingly
focusing on the development of innovative smart farming and precision
agriculture technologies by introducing automation and robotics into the
agricultural processes. Autonomous agricultural field machines have been
gaining significant attention from farmers and industries to reduce costs,
human workload, and required resources. Nevertheless, achieving sufficient
autonomous navigation capabilities requires the simultaneous cooperation of
different processes: localization, mapping, and path planning are just some of
the steps that aim at providing the machine with the right set of skills to
operate in semi-structured and unstructured environments. In this context, this
study presents a low-cost local motion planner for autonomous navigation in
vineyards based only on an RGB-D camera, low-range hardware, and a dual-layer
control algorithm. The first algorithm exploits the disparity map and its depth
representation to generate a proportional control for the robotic platform.
Concurrently, a second backup algorithm, based on representation learning and
resilient to illumination variations, can take control of the machine in case
of a momentary failure of the first block. Moreover, due to the dual
nature of the system, after initial training of the deep learning model with an
initial dataset, the strict synergy between the two algorithms opens the
possibility of exploiting new automatically labeled data, coming from the
field, to extend the existing model knowledge. The machine learning algorithm
has been trained and tested, using transfer learning, with images acquired
during different field surveys in northern Italy and then optimized
for on-device inference with model pruning and quantization. Finally, the
overall system has been validated with a customized robot platform in the
relevant environment.
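The first control layer described above lends itself to a compact sketch: find the freest corridor in the disparity image and steer toward it proportionally. The gains, thresholds, and region of interest below are illustrative assumptions, not the paper's parameters:

```python
# Hedged sketch of disparity-based proportional control between vine rows.
import numpy as np

def proportional_cmd(disparity, k_ang=1.5, v_lin=0.4, near_thresh=30.0):
    """Return (linear, angular) velocities from an HxW disparity image."""
    h, w = disparity.shape
    roi = disparity[h // 2 :, :]             # lower half: the ground ahead
    # Large disparity = close obstacle; fraction of near pixels per column.
    nearness = (roi > near_thresh).mean(axis=0)
    free = 1.0 - nearness                    # per-column free-space score
    # Centroid of free space, normalised to [-1, 1] (0 = straight ahead).
    cols = np.arange(w)
    centroid = (free * cols).sum() / max(free.sum(), 1e-6)
    error = (centroid - (w - 1) / 2) / ((w - 1) / 2)
    # Negative sign: steer toward the free-space centroid (ROS yaw convention);
    # reduce forward speed while turning hard.
    return v_lin * (1.0 - abs(error)), -k_ang * error
```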
Robot Navigation in Unseen Spaces using an Abstract Map
Human navigation in built environments depends on symbolic spatial
information, which has unrealised potential to enhance robot navigation
capabilities. Information sources such as labels, signs, maps, planners, spoken
directions, and navigational gestures communicate a wealth of spatial
information to the navigators of built environments; a wealth of information
that robots typically ignore. We present a robot navigation system that uses
the same symbolic spatial information employed by humans to purposefully
navigate in unseen built environments with a level of performance comparable to
humans. The navigation system uses a novel data structure called the abstract
map to imagine malleable spatial models for unseen spaces from spatial symbols.
Sensorimotor perceptions from a robot are then employed to provide purposeful
navigation to symbolic goal locations in the unseen environment. We show how a
dynamic system can be used to create malleable spatial models for the abstract
map, and provide an open source implementation to encourage future work in the
area of symbolic navigation. The symbolic navigation performance of humans and a
robot is evaluated in a real-world built environment. The paper concludes with
a qualitative analysis of human navigation strategies, providing further
insights into how the symbolic navigation capabilities of robots in unseen
built environments can be improved in the future.
Comment: 15 pages, published in IEEE Transactions on Cognitive and Developmental Systems (http://doi.org/10.1109/TCDS.2020.2993855), see https://btalb.github.io/abstract_map/ for access to software
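The dynamic system behind the abstract map can be loosely illustrated as a spring model: symbolic hints become springs between imagined place coordinates, which are relaxed iteratively so the layout stays malleable as new symbols arrive. The energy model and constants below are assumptions for illustration; see the linked software for the actual implementation:

```python
# Hedged sketch of a malleable spatial model as a relaxing spring network.
import numpy as np

def relax(places, springs, rest=3.0, step=0.1, iters=200):
    """places: dict name -> np.array([x, y]); springs: list of (a, b) pairs."""
    for _ in range(iters):
        for a, b in springs:
            d = places[b] - places[a]
            dist = np.linalg.norm(d) + 1e-9
            force = step * (dist - rest) * d / dist   # Hooke's-law pull/push
            places[a] += force
            places[b] -= force
    return places

# Example: three symbolic places linked by spoken-direction hints.
layout = {p: np.random.randn(2) for p in ["Lobby", "Room 5", "Room 7"]}
relax(layout, [("Lobby", "Room 5"), ("Room 5", "Room 7")])
```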