Streaming Scene Maps for Co-Robotic Exploration in Bandwidth Limited Environments
This paper proposes a bandwidth tunable technique for real-time probabilistic
scene modeling and mapping to enable co-robotic exploration in communication
constrained environments such as the deep sea. The parameters of the system
enable the user to characterize the scene complexity represented by the map,
which in turn determines the bandwidth requirements. The approach is
demonstrated using an underwater robot that learns an unsupervised scene model
of the environment and then uses this scene model to communicate the spatial
distribution of various high-level semantic scene constructs to a human
operator. Preliminary experiments in an artificially constructed tank
environment as well as simulated missions over a 10 m × 10 m coral reef
using real data show the tunability of the maps to different bandwidth
constraints and science interests. To our knowledge this is the first paper to
quantify how the free parameters of the unsupervised scene model impact both
the scientific utility of and bandwidth required to communicate the resulting
scene model.

Comment: 8 pages, 6 figures, accepted for presentation in IEEE Int. Conf. on
Robotics and Automation, ICRA '19, Montreal, Canada, May 2019
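The bandwidth tunability described above can be illustrated with a toy sketch (not the paper's code): a scene map is a grid of semantic labels, and the number of scene categories k, a free parameter of the unsupervised scene model, trades descriptive richness against the bytes needed to transmit the map. The uniform quantizer here merely stands in for the learned model, and all names are hypothetical.

```python
import random
import zlib

random.seed(0)
# Stand-in per-cell observations for a 64x64 map (real input would be image features).
features = [random.random() for _ in range(64 * 64)]

def quantize(values, k):
    """Assign each cell to one of k semantic categories.
    Uniform binning stands in for the unsupervised scene model."""
    return bytes(min(int(v * k), k - 1) for v in values)

# Coarser models (small k) compress to fewer bytes; richer models cost more bandwidth.
for k in (2, 8, 32):
    msg = zlib.compress(quantize(features, k))
    print(f"k={k:2d} categories -> {len(msg)} bytes to transmit")
```

The monotone growth of message size with k mirrors the paper's point that the scene model's free parameters jointly set scientific utility and bandwidth cost.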
The Development of LLMs for Embodied Navigation
In recent years, the rapid advancement of Large Language Models (LLMs) such
as the Generative Pre-trained Transformer (GPT) has attracted increasing
attention due to their potential in a variety of practical applications. The
application of LLMs with Embodied Intelligence has emerged as a significant
area of focus. Among the myriad applications of LLMs, navigation tasks are
particularly noteworthy because they demand a deep understanding of the
environment and quick, accurate decision-making. LLMs can augment embodied
intelligence systems with sophisticated environmental perception and
decision-making support, leveraging their robust language and image-processing
capabilities. This article offers an exhaustive summary of the symbiosis
between LLMs and embodied intelligence with a focus on navigation. It reviews
state-of-the-art models, research methodologies, and assesses the advantages
and disadvantages of existing embodied navigation models and datasets. Finally,
the article elucidates the role of LLMs in embodied intelligence, based on
current research, and forecasts future directions in the field. A comprehensive
list of studies in this survey is available at
https://github.com/Rongtao-Xu/Awesome-LLM-E
An adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
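The heading estimation step this abstract mentions can be sketched as follows. This is an assumption-laden illustration, not the paper's implementation: matched visual features are modeled as unit bearing vectors on two spherical views, and the relative rotation (hence the heading change) is recovered by solving Wahba's problem with an SVD (the Kabsch algorithm). All function names are hypothetical.

```python
import numpy as np

def estimate_rotation(bearings_a, bearings_b):
    """Return R such that bearings_b ≈ bearings_a rotated by R (Kabsch algorithm).
    Rows of each array are matched unit bearing vectors on the view sphere."""
    H = bearings_a.T @ bearings_b              # 3x3 correlation of matched directions
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Synthetic check: rotate random bearings by 30 degrees about the vertical axis.
rng = np.random.default_rng(1)
a = rng.normal(size=(50, 3))
a /= np.linalg.norm(a, axis=1, keepdims=True)  # project onto the unit sphere
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
b = a @ R_true.T                               # bearings seen from the rotated pose
R_est = estimate_rotation(a, b)
heading = np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0]))
print(f"estimated heading change: {heading:.1f} degrees")
```

With noise-free correspondences the recovered heading matches the true 30-degree rotation; in practice, outlier rejection over the feature matches would precede this step.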