Spatial context-aware person-following for a domestic robot
Domestic robots are a focus of research as service providers in households and
even as robotic companions that share the living space with humans. A major
capability of mobile domestic robots is the joint exploration of space. One
challenge in this task is how to let robots move through space in reasonable,
socially acceptable ways so that their motion supports interaction and
communication as part of the joint exploration. As a step towards this
challenge, we have developed a context-aware following behavior that takes
these social aspects into account and combined it with a multi-modal
person-tracking method to switch between three basic following approaches,
namely direction-following, path-following, and parallel-following. These are
derived from observations of human-human following schemes and are activated
depending on the current spatial context (e.g. free space) and the relative
position of the interacting human. A combination of the elementary behaviors
is performed in real time with our mobile robot in different environments.
First experimental results demonstrate the practicability of the proposed
approach.
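The switching idea in the abstract can be illustrated with a minimal sketch. The thresholds, function name, and inputs below are assumptions for illustration, not the authors' actual controller:

```python
# Hypothetical sketch of context-dependent switching among the three
# following modes named above. Thresholds and inputs are illustrative
# assumptions, not taken from the paper.

def select_following_mode(free_width_m: float, human_bearing_deg: float) -> str:
    """Pick a following behavior from the spatial context.

    free_width_m: estimated free corridor width around the robot (metres).
    human_bearing_deg: bearing of the tracked person relative to the
        robot's heading (0 = straight ahead, +/-90 = to the side).
    """
    if free_width_m >= 2.0 and abs(human_bearing_deg) > 45:
        # Enough room to walk beside the person.
        return "parallel-following"
    if free_width_m < 1.0:
        # Narrow passage: retrace the person's exact path.
        return "path-following"
    # Default: head directly toward the person's current position.
    return "direction-following"

print(select_following_mode(2.5, 80))   # wide space, person at the side
print(select_following_mode(0.8, 5))    # narrow corridor
print(select_following_mode(1.5, 10))   # moderate space, person ahead
```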
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
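The "de-facto standard formulation" referred to above is maximum a posteriori (MAP) estimation over a factor graph. As a hedged sketch in common notation (symbols assumed, not quoted from the paper): given variables X (robot states and map) and measurements Z,

```latex
% MAP estimation of the variables X given the measurements Z:
X^{\star} = \operatorname*{arg\,max}_{X} \; p(X \mid Z)
          = \operatorname*{arg\,max}_{X} \; p(X) \prod_{k=1}^{m} p(z_k \mid X_k)

% Under additive Gaussian noise, z_k = h_k(X_k) + \epsilon_k, this reduces
% to nonlinear least squares over the factor graph:
X^{\star} = \operatorname*{arg\,min}_{X} \sum_{k=1}^{m}
            \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}
```

Here each factor k involves only the subset of variables X_k touched by measurement z_k, which is what makes sparse factor-graph solvers efficient.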
Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)
The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was training in geospatial data acquisition and processing for students attending Architecture and Engineering courses, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on the 25th of October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, topographic and GNSS instruments, etc., or by non-conventional systems and instruments such as UAVs, mobile mapping, etc. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR),
37 pages
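Among the motion-modeling approaches such surveys categorize, the simplest physics-based baseline is constant-velocity extrapolation. A minimal sketch (illustrative, not taken from the paper; function name and sample values are assumptions):

```python
# Constant-velocity baseline: the simplest physics-based motion model,
# commonly used as a reference point in trajectory-prediction benchmarks.

def predict_constant_velocity(track, dt, horizon):
    """Extrapolate future (x, y) positions from the last two observations.

    track: list of (x, y) positions sampled every dt seconds.
    horizon: number of future steps to predict.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # finite-difference velocity
    return [(x1 + vx * dt * k, y1 + vy * dt * k)
            for k in range(1, horizon + 1)]

# A pedestrian walking 1 m/s along x, sampled at 2 Hz:
future = predict_constant_velocity([(0.0, 0.0), (0.5, 0.0)], dt=0.5, horizon=3)
print(future)  # [(1.0, 0.0), (1.5, 0.0), (2.0, 0.0)]
```

More sophisticated methods in the survey's taxonomy replace this extrapolation with learned or context-aware models, but it remains the standard sanity-check baseline.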
A Survey on Human-aware Robot Navigation
Intelligent systems are increasingly part of our everyday lives and have been
integrated seamlessly to the point where it is difficult to imagine a world
without them. Physical manifestations of those systems on the other hand, in
the form of embodied agents or robots, have so far been used only for specific
applications and are often limited to functional roles (e.g. in industry,
entertainment, and the military). Given the current growth and innovation in
the research communities concerned with the topics of robot navigation,
human-robot-interaction and human activity recognition, it seems like this
might soon change. Robots are increasingly easy to obtain and use and the
acceptance of them in general is growing. However, the design of a socially
compliant robot that can function as a companion needs to take various areas of
research into account. This paper is concerned with the navigation aspect of a
socially-compliant robot and provides a survey of existing solutions for the
relevant areas of research as well as an outlook on possible future directions.
Comment: Robotics and Autonomous Systems, 202
Viewfinder: final activity report
The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources.
The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation.
The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the entire system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers therein.
DynaCon: Dynamic Robot Planner with Contextual Awareness via LLMs
Mobile robots often rely on pre-existing maps for effective path planning and
navigation. However, when these maps are unavailable, particularly in
unfamiliar environments, a different approach becomes essential. This paper
introduces DynaCon, a novel system designed to provide mobile robots with
contextual awareness and dynamic adaptability during navigation, eliminating
the reliance on traditional maps. DynaCon integrates real-time feedback with
an object server, prompt engineering, and navigation modules. By harnessing
the capabilities of Large Language Models (LLMs), DynaCon not only understands
patterns within given numeric series but also excels at categorizing objects
into matched spaces. This facilitates a dynamic path planner imbued with
contextual awareness. We validated the effectiveness of DynaCon through an
experiment in which a robot successfully navigated to its goal using
reasoning. Source code and experiment videos for this work can be found at:
https://sites.google.com/view/dynacon.
Comment: Submitted to ICRA 202
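The categorization step described in the abstract (matching detected objects to the space they belong in) can be illustrated with a stub. In the real system an LLM performs this step; here a keyword table stands in for the model call, and the room vocabulary is an assumption, not DynaCon's actual prompts or code:

```python
# Illustrative stand-in for LLM-based object-to-space categorization.
# A real system would query an LLM; here a hand-written keyword table
# plays that role, so the rooms and object lists are assumptions.

ROOM_OBJECTS = {
    "kitchen": {"fridge", "oven", "sink"},
    "living_room": {"sofa", "tv", "bookshelf"},
    "bedroom": {"bed", "wardrobe", "lamp"},
}

def categorize(detected_objects):
    """Score each room by how many detected objects it explains,
    and return the best-matching space."""
    scores = {room: len(objs & set(detected_objects))
              for room, objs in ROOM_OBJECTS.items()}
    return max(scores, key=scores.get)

print(categorize(["sofa", "tv", "lamp"]))  # -> living_room
```

A navigation module could then use the returned space label as a goal region when no metric map is available, which is the role this step plays in the pipeline the abstract describes.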