Voronoi-based space partitioning for coordinated multi-robot exploration
Recent multi-robot exploration algorithms usually rely on occupancy grids as their core world representation. However, such grids are not well suited to environments that are very large or whose boundaries are not well delimited at the start of the exploration. In contrast, polygonal representations do not have these limitations. Previously, the authors proposed an exploration algorithm that partitions unknown space into as many regions as there are available robots by applying K-Means clustering to an occupancy grid representation, and showed that this approach leads to higher robot dispersion than other approaches, which is potentially beneficial for quick coverage of wide areas. In this paper, the original K-Means clustering over grid cells, which is the most expensive stage of the aforementioned exploration algorithm, is replaced with a Voronoi-based partitioning algorithm applied to polygons. The computational cost of the exploration algorithm is thus significantly reduced for large maps. An empirical evaluation and comparison of both partitioning approaches is presented. This work is partially supported by the Government of Spain under MCYT DPI2004-07993-C03-03. Ling Wu is supported by an FPI scholarship from the Spanish Ministry of Education and Science.
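The partitioning step described above, assigning free space directly to the nearest robot rather than iterating K-Means over grid cells, can be sketched on a small discrete grid (a minimal Python illustration; the function and variable names are ours, not the paper's):

```python
import math

def voronoi_partition(cells, robot_positions):
    """Assign each free cell to its nearest robot, yielding a
    discrete Voronoi partition of the known free space."""
    regions = {i: [] for i in range(len(robot_positions))}
    for cell in cells:
        # Index of the robot closest to this cell.
        nearest = min(range(len(robot_positions)),
                      key=lambda i: math.dist(cell, robot_positions[i]))
        regions[nearest].append(cell)
    return regions

# A 4x4 grid of free cells split between two robots.
cells = [(x, y) for x in range(4) for y in range(4)]
robots = [(0.0, 0.0), (3.0, 3.0)]
regions = voronoi_partition(cells, robots)
```

One reason such a partition is cheap is that it needs no iterative re-clustering: a single pass over the cells suffices. The paper's version additionally operates on polygons rather than grid cells, which is what reduces cost on large maps.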
Multi-Robot Multi-Room Exploration with Geometric Cue Extraction and Spherical Decomposition
This work proposes an autonomous multi-robot exploration pipeline that
coordinates the behaviors of robots in an indoor environment composed of
multiple rooms. Contrary to simple frontier-based exploration approaches, we
aim to enable robots to methodically explore and observe an unknown set of
rooms in a structured building, keeping track of which rooms are already
explored and sharing this information among robots to coordinate their
behaviors in a distributed manner. To this end, we propose (1) a geometric cue
extraction method that processes 3D map point cloud data and detects the
locations of potential cues such as doors and rooms, (2) a spherical
decomposition for open spaces used for target assignment. Using these two
components, our pipeline effectively assigns tasks among robots, and enables a
methodical exploration of rooms. We evaluate the performance of our pipeline
using a team of up to 3 aerial robots, and show that our method outperforms the
baseline by 36.6% in simulation and 26.4% in real-world experiments.
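The spherical decomposition of open space could, for instance, be realized greedily: seed a sphere at the free point with the largest obstacle clearance, discard what it covers, and repeat. This is an illustrative sketch under our own assumptions, not the paper's exact procedure:

```python
import math

def spherical_decomposition(free_pts, obstacle_pts, min_radius=0.5):
    """Greedy sketch: repeatedly seed a sphere at the free point
    farthest from any obstacle, size it by that clearance, and
    remove the points it covers. The greedy rule and parameter
    names are illustrative assumptions."""
    def clearance(p):
        return min(math.dist(p, o) for o in obstacle_pts)

    spheres = []
    remaining = list(free_pts)
    while remaining:
        centre = max(remaining, key=clearance)
        radius = clearance(centre)
        if radius < min_radius:
            break  # remaining space too cluttered for a useful sphere
        spheres.append((centre, radius))
        remaining = [p for p in remaining if math.dist(p, centre) > radius]
    return spheres

# Four free points between two obstacle corners.
free_pts = [(1, 1), (1, 2), (2, 1), (2, 2)]
obstacles = [(0, 0), (4, 4)]
spheres = spherical_decomposition(free_pts, obstacles)
```

Each resulting sphere can then serve as one assignable exploration target, which is the role the decomposition plays in target assignment above.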
Soft Morphological Computation
Soft Robotics is a relatively new area of research, where progress in material science has powered the next generation of robots, exhibiting biological-like properties such as soft/elastic tissues, compliance, resilience and more besides. One of the issues when employing soft robotics technologies is the soft nature of the interactions arising between the robot and its environment. These interactions are complex, and their dynamics are non-linear and hard to capture with known models. In this thesis we argue that complex soft interactions can actually be beneficial to the robot, and give rise to rich stimuli which can be used for the resolution of robot tasks. We further argue that the usefulness of these interactions depends on statistical regularities, or structure, that appear in the stimuli. To this end, robots should appropriately employ their morphology and their actions to influence the system-environment interactions such that structure can arise in the stimuli. In this thesis we show that learning processes can be used to perform such a task. Following this rationale, this thesis proposes and supports the theory of Soft Morphological Computation (SoMComp), by which a soft robot should appropriately condition, or 'affect', the soft interactions to improve the quality of the physical stimuli arising from them. SoMComp is composed of four main principles: Soft Proprioception, Soft Sensing, Soft Morphology and Soft Actuation. Each of these principles is explored in the context of haptic object recognition or object handling in soft robots. Finally, this thesis provides an overview of this research and its future directions. AHDB CP17
Constrained Collective Movement in Human-Robot Teams
This research focuses on improving human-robot co-navigation for teams of robots and humans navigating together as a unit while accomplishing a desired task. Frequently, the team's co-navigation is strongly influenced by a predefined Standard Operating Procedure (SOP), which acts as a high-level guide for where agents should go and what they should do. In this work, I introduce the concept of Constrained Collective Movement (CCM) of a team to describe how members of the team perform inter-team and intra-team navigation to execute a joint task while balancing environmental and application-specific constraints. This work advances robots' abilities to participate alongside humans in applications such as urban search and rescue, firefighters searching for people in a burning building, and military teams performing a building-clearing operation. Incorporating robots on such teams could reduce the number of human lives put in danger while increasing the team's ability to conduct beneficial tasks such as carrying life-saving equipment to stranded people.
Most previous work on generating more complex collaborative navigation for human-robot teams focuses solely on model-based methods. These methods usually suffer from the need to hard-code the rules to follow, which can require substantial time and domain knowledge and can lead to unnatural behavior.
This dissertation investigates merging high-level model-based knowledge representation with low-level behavior cloning to achieve CCM of a human-robot team performing collaborative co-navigation. To evaluate the approach, experiments are performed in simulation with the detail-rich game design engine Unity. Experiments show that the designed approach can learn elements of high-level behaviors with accuracies up to 88%. Additionally, the approach is shown to learn low-level robot control behaviors with accuracies up to 89%.
To the best of my knowledge, this is the first attempt to blend classical AI methods with state-of-the-art machine learning methods for human-robot team collaborative co-navigation. This not only allows for better human-robot team co-navigation, but also has implications for improving other teamwork-based human-robot applications such as joint manufacturing and social assistive robotics.
Coordinated Robot Navigation via Hierarchical Clustering
We introduce the use of hierarchical clustering for relaxed, deterministic
coordination and control of multiple robots. Traditionally an unsupervised
learning method, hierarchical clustering offers a formalism for identifying and
representing spatially cohesive and segregated robot groups at different
resolutions by relating the continuous space of configurations to the
combinatorial space of trees. We formalize and exploit this relation,
developing computationally effective reactive algorithms for navigating through
the combinatorial space in concert with geometric realizations for a particular
choice of hierarchical clustering method. These constructions yield
computationally effective vector field planners for both hierarchically
invariant as well as transitional navigation in the configuration space. We
apply these methods to the centralized coordination and control of n perfectly sensed and actuated Euclidean spheres in a d-dimensional ambient space (for arbitrary n and d). Given a desired configuration supporting a desired hierarchy, we construct a hybrid controller which is quadratic in n and algebraic in d and prove that its execution brings all but a measure zero set of initial configurations to the desired goal with the guarantee of no collisions along the way.
Comment: 29 pages, 13 figures, 8 tables, extended version of a paper in preparation for submission to a journal
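The hierarchy over robot groups described above can be illustrated with a basic single-linkage agglomerative clustering of robot positions (our own minimal sketch; the paper's specific clustering method and tree representation may differ):

```python
import math

def agglomerate(positions):
    """Single-linkage agglomerative clustering of robot positions,
    returning a nested-tuple tree of the original indices as a
    stand-in for a cluster hierarchy."""
    # Each cluster carries (subtree, member positions).
    clusters = [(i, [p]) for i, p in enumerate(positions)]
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single linkage: minimum inter-member distance.
                d = min(math.dist(p, q)
                        for p in clusters[a][1] for q in clusters[b][1])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        merged = ((clusters[a][0], clusters[b][0]),
                  clusters[a][1] + clusters[b][1])
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)]
        clusters.append(merged)
    return clusters[0][0]

# Two nearby robots and one distant robot.
tree = agglomerate([(0, 0), (0, 1), (5, 5)])
```

Here the two nearby robots merge first, so the tree separates the spatially cohesive pair from the distant robot, which is the kind of resolution-dependent grouping the abstract refers to.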
Towards a Probabilistic Roadmap for Multi-robot Coordination
In this paper, we discuss the problem of multi-robot coordination and propose an approach for coordinated multi-robot motion planning using a probabilistic roadmap (PRM) based on adaptive cross sampling (ACS). The proposed approach, called ACS-PRM, is a sampling-based method consisting of three steps: C-space sampling, roadmap building and motion planning. In contrast to previous approaches, our approach is designed to plan separate kinematic paths for multiple robots to reduce congestion and collisions, thereby improving system efficiency. Our approach has been implemented and evaluated in simulation. The experimental results demonstrate that the total planning time can be markedly reduced by our ACS-PRM approach compared with previous approaches.
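A plain PRM, of which ACS-PRM is a variant with adaptive cross sampling, can be sketched as follows; uniform sampling on the unit square stands in for the paper's ACS step, and all names are ours:

```python
import math
import random
from collections import deque

def build_prm(n_samples, k, is_free, seed=0):
    """Minimal PRM sketch: sample collision-free configurations in
    the unit square, connect each to its k nearest neighbours, and
    return the roadmap as an adjacency dict over node indices."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:
        q = (rng.random(), rng.random())
        if is_free(q):
            nodes.append(q)
    graph = {i: set() for i in range(len(nodes))}
    for i, q in enumerate(nodes):
        near = sorted((j for j in graph if j != i),
                      key=lambda j: math.dist(q, nodes[j]))[:k]
        for j in near:
            graph[i].add(j)
            graph[j].add(i)
    return nodes, graph

def roadmap_path(graph, start, goal):
    """Breadth-first search over roadmap node indices."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Obstacle-free toy workspace: every sample is collision-free.
nodes, graph = build_prm(20, 3, lambda q: True)
```

In a coordinated setting like the paper's, each robot would query such a roadmap for its own kinematic path, with the sampling step biased (as in ACS) to keep the robots' paths from congesting the same corridors.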