Automatic Waypoint generation to improve robot navigation through narrow spaces
In domestic robotics, passing through narrow areas becomes critical for safe and effective robot navigation. Due to factors like sensor noise or miscalibration, even when the free space is sufficient for the robot to pass through, it may not perceive enough clearance to navigate, which limits its operational space. One approach to this problem is to insert strategically placed waypoints within the problematic areas of the map; the robot planner considers these waypoints when generating a trajectory, helping it traverse those areas successfully. This is typically carried out by a human operator, either relying on experience or proceeding by trial and error. In this paper, we present an automatic procedure for this task that: (i) detects problematic areas in the map and (ii) generates a set of auxiliary navigation waypoints from which the robot planner can generate more suitable trajectories. Our proposal, fully compatible with the Robot Operating System (ROS), has been successfully applied to robots deployed in different houses within the H2020 MoveCare project. Moreover, we have performed extensive simulations with four state-of-the-art robots operating within real maps. The results reveal significant improvements in the number of successful navigations for the evaluated scenarios, demonstrating its efficacy in realistic situations.
Ministerio de Economía y Competitividad (DPI2017-84827-R) con Fondos FEDER. Comisión Europea (ICT-26-2016b-GA-732158). Universidad de Málaga
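The abstract does not detail the detection step, but the idea of flagging map regions that are traversable in principle yet too tight for a conservative planner can be sketched with a distance transform over the occupancy grid. The following is a minimal illustration under assumed conventions (cell size 1.0, `grid` True for free cells), not the authors' actual algorithm; `narrow_passage_waypoints` and its parameters are hypothetical names.

```python
# Hypothetical sketch of narrow-passage detection on an occupancy grid,
# loosely inspired by the idea in the abstract (not the paper's method).
import numpy as np
from scipy.ndimage import distance_transform_edt

def narrow_passage_waypoints(grid, robot_radius, margin=1.0):
    """grid: 2D bool array, True = free cell (cell size assumed 1.0).

    Returns (row, col) cells whose clearance to the nearest obstacle lies in
    [robot_radius, robot_radius + margin): traversable in principle, but
    likely rejected by a planner that inflates obstacles conservatively.
    """
    clearance = distance_transform_edt(grid)  # distance to nearest occupied cell
    tight = grid & (clearance >= robot_radius) & (clearance < robot_radius + margin)
    return np.argwhere(tight)

# Toy map: two rooms connected by a one-cell-wide corridor.
grid = np.zeros((7, 11), dtype=bool)
grid[1:6, 1:4] = True    # left room
grid[1:6, 7:10] = True   # right room
grid[3, 4:7] = True      # narrow corridor
waypoints = narrow_passage_waypoints(grid, robot_radius=1.0)
```

On this toy map the corridor cells are flagged, and a planner could then be seeded with them as intermediate goals; a real system would additionally cluster the flagged cells and keep one waypoint per passage.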
Optimizations on semantic environment management: an application for humanoid robot home assistance
This article introduces optimization mechanisms focused on environment management, object recognition, and environment interaction. Although the presented system is general, this work focuses on its application to home-assistance humanoid robots. For this purpose, a generic environment formalization procedure for semantic scenery description is introduced. As the main contribution of this work, techniques for a more efficient use of environment knowledge are proposed. In this way, the application of an area-based discrimination mechanism avoids processing large amounts of data that are useless in the current context, improving object recognition and characterizing the interactions available in the current area. Finally, the formalized description and the optimization procedure are tested and verified in a specific home scenario using a humanoid robot.
This work has been supported by the Spanish Science and Innovation Ministry MICINN under the CICYT project COBAMI: DPI2011-28507-C02-01/02. The responsibility for the content remains with the authors.
Munera Sánchez, E.; Posadas-Yagüe, J.; Poza-Lujan, J.; Blanes Noguera, F.; Simó Ten, JE. (2014). Optimizations on semantic environment management: an application for humanoid robot home assistance. In 2014 IEEE-RAS International Conference on Humanoid Robots. IEEE. 720-725. doi:10.1109/HUMANOIDS.2014.7041442
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
Model-Based Environmental Visual Perception for Humanoid Robots
The visual perception of a robot should answer two fundamental questions: What? and Where? In order to reply to these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.
Control strategies for cleaning robots in domestic applications: A comprehensive review
Service robots are built and developed for various applications to support humans as companions, caretakers, or domestic support. As the number of elderly people grows, service robots will be in increasing demand. In particular, one of the main tasks performed by elderly people, and others, is the complex task of cleaning. Therefore, cleaning tasks such as sweeping floors, washing dishes, and wiping windows have been developed for the domestic environment using service robots or robot manipulators with several control approaches. This article is primarily focused on the control methodologies used for cleaning tasks. Specifically, this work mainly discusses classical control and learning-based control methods. The classical control approaches, which consist of position control, force control, and impedance control, are commonly used for cleaning purposes in a highly controlled environment. However, classical control methods cannot be generalized to cluttered environments, so learning-based control methods can be an alternative solution. Learning-based control methods for cleaning tasks encompass three approaches: learning from demonstration (LfD), supervised learning (SL), and reinforcement learning (RL). Each of these control approaches has its own capabilities to generalize cleaning tasks to new environments. For example, LfD, which many research groups have used for cleaning tasks, can generate complex cleaning trajectories based on human demonstration. Also, SL can support the prediction of dirt areas and cleaning motions using large datasets. Finally, RL allows the robot itself to learn cleaning actions and to interact with a new environment. In this context, this article aims to provide a general overview of robotic cleaning tasks based on different types of control methods using manipulators. It also suggests future directions for cleaning tasks based on an evaluation of the control approaches.
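The impedance control the review mentions has a standard textbook form: the end-effector is made to behave like a virtual mass-spring-damper around a desired trajectory. The sketch below illustrates that classical idea in one dimension; the gains, masses, and the `impedance_force` helper are illustrative choices, not values from the surveyed work.

```python
# Minimal 1-D impedance-control sketch (textbook form, not from the review):
# commanded force F = k*(x_des - x) + d*(v_des - v) makes the tool behave
# like a spring-damper around the desired pose.
def impedance_force(x, v, x_des, v_des, k=100.0, d=20.0):
    """Virtual spring-damper: stiffness k, damping d."""
    return k * (x_des - x) + d * (v_des - v)

# Simulate pressing a wiping tool toward a surface at x_des = 0.0
# with semi-implicit Euler integration.
dt, m = 0.001, 1.0
x, v = 0.05, 0.0                      # start 5 cm away, at rest
for _ in range(5000):                 # 5 seconds of simulated time
    f = impedance_force(x, v, x_des=0.0, v_des=0.0)
    v += (f / m) * dt                 # update velocity from force
    x += v * dt                       # then position from new velocity
```

With these gains the virtual system is critically damped (d = 2*sqrt(k*m)), so the tool settles onto the surface without overshoot; lowering the stiffness k makes contact more compliant, which is why impedance control suits contact-rich cleaning tasks in structured settings.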
History of the Institut de Robòtica i Informàtica Industrial
The Institut de Robòtica i Informàtica Industrial is a joint university research institute of the Spanish National Research Council and the Universitat Politècnica de Catalunya. Founded in 1995, its scientists have over the years addressed many research topics, spanning robot kinematics, computer graphics, automatic control, energy systems, and human-robot interaction, among others. This book, prepared for its 25th anniversary, covers the institute's evolution over the years and serves as a means of appreciation to the many students, administrative personnel, research engineers, and scientists who have formed part of it.
Reasoning about space for human-robot interaction
Human-robot interaction is a research area that has grown exponentially in recent years. This brings new challenges to the robot's geometric reasoning and space-sharing abilities. The robot should not only reason about its own capacities but also consider the actual situation by looking through the human's eyes, thus "putting itself into the human's perspective".
In humans, the "visual perspective taking" ability begins to appear by 24 months of age and is used to determine whether another person can see an object or not. Implementing this kind of social ability will improve the robot's cognitive capabilities and help the robot interact better with human beings.
In this work, we present a geometric spatial reasoning mechanism that employs the psychological concepts of "perspective taking" and "mental rotation" in two general frameworks:
- Motion planning for human-robot interaction: the robot uses "egocentric perspective taking" to evaluate several configurations in which it can perform different interaction tasks.
- Face-to-face human-robot interaction: the robot uses perspective taking of the human as a geometric tool to understand human attention and intention in order to perform cooperative tasks
- …