Globally Guided Trajectory Planning in Dynamic Environments
Navigating mobile robots through environments shared with humans is
challenging. From the perspective of the robot, humans are dynamic obstacles
that must be avoided. These obstacles make the collision-free space nonconvex,
which leads to two distinct passing behaviors per obstacle (passing left or
right). For local planners, such as receding-horizon trajectory optimization,
each behavior presents a local optimum in which the planner can get stuck. This
may result in slow or unsafe motion even when a better plan exists. In this
work, we identify trajectories for multiple locally optimal driving behaviors,
by considering their topology. This identification is made consistent over
successive iterations by propagating the topology information. The most
suitable high-level trajectory guides a local optimization-based planner,
resulting in fast and safe motion plans. We validate the proposed planner on a
mobile robot in simulation and real-world experiments.
Comment: 7 pages, 6 figures, accepted to IEEE International Conference on Robotics and Automation (ICRA) 202
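The two passing behaviors correspond to distinct topology classes of the trajectory relative to each obstacle. A minimal sketch of that idea, with a hypothetical `passing_side` helper (an illustrative stand-in, not the authors' implementation):

```python
def passing_side(trajectory, obstacle_path):
    """Label a candidate trajectory as passing a moving obstacle on the
    'left' or 'right' by accumulating the z-component of the cross
    product between the obstacle's velocity and the robot's position
    relative to the obstacle. A simplified stand-in for the topology
    test used by topology-aware local planners."""
    score = 0.0
    for (rx, ry), ((ox, oy), (vx, vy)) in zip(trajectory, obstacle_path):
        # cross(v_obstacle, p_robot - p_obstacle): positive means the
        # robot is on the obstacle's left at this time step
        score += vx * (ry - oy) - vy * (rx - ox)
    return "left" if score > 0 else "right"


# Toy scenario: obstacle moves along the x-axis; one candidate stays
# above its heading (left), the other below it (right).
obstacle = [((float(t), 0.0), (1.0, 0.0)) for t in range(5)]
above = [(float(t), 1.0) for t in range(5)]
below = [(float(t), -1.0) for t in range(5)]
```

Propagating such labels between planning iterations is what keeps the local optimizer committed to one behavior instead of oscillating between the two local optima.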
CTopPRM: Clustering Topological PRM for Planning Multiple Distinct Paths in 3D Environments
In this paper, we propose a new method called Clustering Topological PRM
(CTopPRM) for finding multiple homotopically distinct paths in 3D cluttered
environments. Finding such distinct paths, e.g., going around an obstacle from
a different side, is useful in many applications. Among others, using multiple
distinct paths is necessary for optimization-based trajectory planners where
found trajectories are restricted to only a single homotopy class of a given
path. Distinct paths can also be used to guide sampling-based motion planning
and thus increase the effectiveness of planning in environments with narrow
passages. A graph-based representation called a roadmap is commonly used
for path planning and for finding multiple distinct paths. However,
challenging environments with multiple narrow passages require a densely
sampled roadmap to capture the connectivity of the environment. Searching such
a dense roadmap for multiple paths is computationally too expensive. Therefore,
the majority of existing methods construct only a sparse roadmap which,
however, struggles to find all distinct paths in challenging environments. To
this end, we propose CTopPRM, which creates a sparse graph by clustering an
initially sampled dense roadmap. Such a reduced roadmap allows fast
identification of homotopically distinct paths captured in the dense roadmap.
We show that, compared to existing methods, CTopPRM improves the
probability of finding all distinct paths by almost 20% in the tested
environments within the same run-time. The source code of our method is
released as an open-source package.
Comment: in IEEE Robotics and Automation Letters
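As an illustration of the clustering idea (a sketch under simplifying assumptions, not the released CTopPRM code), a dense roadmap can be collapsed into a sparse cluster graph in which homotopically distinct routes appear as different simple paths:

```python
from collections import defaultdict


def reduce_roadmap(edges, cluster_of):
    """Collapse a dense roadmap into a sparse cluster graph.
    `edges` is an iterable of (u, v) node pairs; `cluster_of` maps each
    roadmap node to a cluster id. Hypothetical helper illustrating the
    clustering step, not the authors' implementation."""
    reduced = defaultdict(set)
    for u, v in edges:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu != cv:
            reduced[cu].add(cv)
            reduced[cv].add(cu)
    return reduced


def distinct_paths(reduced, start, goal):
    """Enumerate simple paths between clusters by iterative DFS; each
    path is a candidate for a homotopically distinct route."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            paths.append(path)
            continue
        for nxt in sorted(reduced[node]):
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return paths


# Toy environment: clusters 'A' (above an obstacle) and 'B' (below it)
# connect start cluster 'S' to goal cluster 'G' by two distinct routes.
cluster_of = {0: 'S', 1: 'S', 2: 'A', 3: 'A', 4: 'B', 5: 'G'}
edges = [(0, 1), (1, 2), (2, 3), (3, 5), (1, 4), (4, 5)]
paths = distinct_paths(reduce_roadmap(edges, cluster_of), 'S', 'G')
```

Searching the handful of cluster nodes instead of the dense roadmap is what makes the enumeration of distinct paths fast.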
Real-Time Navigation for Bipedal Robots in Dynamic Environments
The popularity of mobile robots has been steadily growing, with these robots
being increasingly utilized to execute tasks previously completed by human
workers. For bipedal robots to see this same success, robust autonomous
navigation systems need to be developed that can execute in real-time and
respond to dynamic environments. These systems can be divided into three
stages: perception, planning, and control. A holistic navigation framework for
bipedal robots must successfully integrate all three components of the
autonomous navigation problem to enable robust real-world navigation. In this
paper, we present a real-time navigation framework for bipedal robots in
dynamic environments. The proposed system addresses all components of the
navigation problem: We introduce a depth-based perception system for obstacle
detection, mapping, and localization. A two-stage planner is developed to
generate collision-free trajectories robust to unknown and dynamic
environments, and the planned trajectories are executed through the Digit
bipedal robot's walking gait controller. The navigation framework is
validated through a series of
simulation and hardware experiments that contain unknown environments and
dynamic obstacles.
Comment: Submitted to 2023 IEEE International Conference on Robotics and
Automation (ICRA). For associated experiment recordings see
https://www.youtube.com/watch?v=WzHejHx-Kz
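A compressed sketch of the two-stage planning idea (hypothetical function names; the grid search and waypoint truncation here are simplifications of the paper's trajectory-level system):

```python
from collections import deque


def global_plan(grid, start, goal):
    """Stage 1: breadth-first search on an occupancy grid (1 = static
    obstacle), returning a shortest cell path. Illustrative stand-in
    for the global planning stage."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None


def local_plan(path, dynamic_obstacles):
    """Stage 2: truncate the global path before the first waypoint that
    a newly observed dynamic obstacle now blocks, which would trigger a
    replan on the next cycle."""
    for i, wp in enumerate(path):
        if wp in dynamic_obstacles:
            return path[:i]
    return path


# Toy map with one static obstacle in the center.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = global_plan(grid, (0, 0), (2, 2))
```

Running both stages inside the perception-to-control loop at each cycle is what lets the framework react to obstacles that appear after the global plan was made.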
Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data
Object manipulation actions represent an important share of the Activities of
Daily Living (ADLs). In this work, we study how to enable service robots to use
human multi-modal data to understand object manipulation actions, and how they
can recognize such actions when humans perform them during human-robot
collaboration tasks. The multi-modal data in this study consists of videos,
hand motion data, applied forces as represented by the pressure patterns on the
hand, and measurements of the bending of the fingers, collected as human
subjects performed manipulation actions. We investigate two different
approaches. In the first one, we show that the multi-modal signal (motion,
finger bending, and hand pressure) generated by the action can be decomposed into a set
of primitives that can be seen as its building blocks. These primitives are
used to define 24 multi-modal primitive features. The primitive features can in
turn be used as an abstract representation of the multi-modal signal and
employed for action recognition. In the second approach, the visual features
are extracted from the data using a pre-trained image classification deep
convolutional neural network. The visual features are subsequently used to
train the classifier. We also investigate whether adding data from other
modalities produces a statistically significant improvement in the classifier
performance. We show that both approaches produce comparable performance.
This implies that image-based methods can successfully recognize human actions
during human-robot collaboration. On the other hand, in order to provide
training data for the robot so it can learn how to perform object manipulation
actions, multi-modal data provides a better alternative.
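A toy sketch of the first (primitive-based) approach, assuming one 1-D signal per modality; the 24 multi-modal primitive features in the paper are richer than these simple counts:

```python
def primitive_sequence(signal, eps=0.05):
    """Decompose a 1-D signal into motion primitives ('up', 'down',
    'flat') from the sign of successive differences. A simplified
    stand-in for the paper's primitive decomposition."""
    prims = []
    for a, b in zip(signal, signal[1:]):
        d = b - a
        prims.append('up' if d > eps else 'down' if d < -eps else 'flat')
    return prims


def primitive_features(channels, eps=0.05):
    """Count each primitive per channel (e.g. motion, finger bending,
    hand pressure) to form an abstract feature vector that a standard
    classifier can consume."""
    feats = []
    for signal in channels:
        prims = primitive_sequence(signal, eps)
        feats.extend(prims.count(p) for p in ('up', 'down', 'flat'))
    return feats
```

Because the feature vector abstracts away raw signal values, the same representation can be compared across subjects and across repetitions of an action.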