
    Genetic Algorithm-based Robot Path Planning

    Building an intelligent robot that is able to move by itself from one location to another without colliding with obstacles is of interest in many applications. In the real world, the condition of an environment is unpredictable and changes with the presence of dynamic obstacles. This paper proposes an algorithm for robot path planning in a dynamic environment using a Genetic Algorithm (GA). The proposed algorithm is able to find an optimum path for a robot while avoiding both static and dynamic obstacles. The flexibility of the proposed algorithm is demonstrated by implementing it for a 4-way movement robot and an 8-way movement robot. The simulation results show strong performance of the algorithm when compared with the true optimum path.
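
    As a concrete illustration of the kind of technique described above, the following is a minimal Python sketch of GA-based grid path planning. The occupancy-grid format, move encodings, fitness weights and GA parameters are illustrative assumptions, not the paper's actual settings.

        import random

        MOVES_4 = [(0, 1), (1, 0), (0, -1), (-1, 0)]               # 4-way movement
        MOVES_8 = MOVES_4 + [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # 8-way movement

        def simulate(path, start, grid):
            """Walk a move sequence, counting collisions with occupied cells."""
            x, y = start
            collisions, cells = 0, [(x, y)]
            for dx, dy in path:
                x, y = x + dx, y + dy
                if not (0 <= x < len(grid) and 0 <= y < len(grid[0])) or grid[x][y]:
                    collisions += 1
                cells.append((x, y))
            return cells, collisions

        def fitness(path, start, goal, grid):
            """Lower is better: distance to goal + path length + collision penalty."""
            cells, collisions = simulate(path, start, grid)
            (gx, gy), (x, y) = goal, cells[-1]
            return abs(gx - x) + abs(gy - y) + 0.1 * len(path) + 50 * collisions

        def evolve(start, goal, grid, moves, pop=60, length=30, gens=200):
            """Evolve fixed-length move sequences toward a collision-free path."""
            popn = [[random.choice(moves) for _ in range(length)] for _ in range(pop)]
            for _ in range(gens):
                popn.sort(key=lambda p: fitness(p, start, goal, grid))
                elite, children = popn[: pop // 2], []
                while len(elite) + len(children) < pop:
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, length)        # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < 0.2:                # point mutation
                        child[random.randrange(length)] = random.choice(moves)
                    children.append(child)
                popn = elite + children
            return min(popn, key=lambda p: fitness(p, start, goal, grid))

    Swapping MOVES_4 for MOVES_8 is the only change needed to switch between the two robot variants the abstract contrasts.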

    Knowledge Representation for Robots through Human-Robot Interaction

    The representation of the knowledge needed by a robot to perform complex tasks is restricted by the limitations of perception. One possible way of overcoming this situation and designing "knowledgeable" robots is to rely on interaction with the user. We propose a multi-modal interaction framework that makes it possible to effectively acquire knowledge about the environment where the robot operates. In particular, in this paper we present a rich representation framework that can be automatically built from the metric map annotated with the indications provided by the user. Such a representation then allows the robot to ground complex referential expressions for motion commands and to devise topological navigation plans to achieve the target locations.
    Comment: Knowledge Representation and Reasoning in Robotics Workshop at ICLP 201
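
    A purely illustrative Python sketch of the idea: referential expressions are grounded against user-provided annotations of the map, and a topological plan is derived over the annotated nodes. The annotation format, the trivial substring grounding rule, and the networkx dependency are assumptions, not the paper's actual framework.

        import networkx as nx

        # Annotations acquired through dialogue: place name -> topological node.
        annotations = {"kitchen": "n3", "Bob's office": "n7", "printer room": "n9"}
        topology = nx.Graph([("n1", "n3"), ("n3", "n7"), ("n7", "n9")])

        def plan_for(command, current="n1"):
            """Ground the referenced place, then return a topological plan."""
            for name, node in annotations.items():
                if name in command:
                    return nx.shortest_path(topology, current, node)
            raise ValueError("no annotated referent found in command")

        print(plan_for("go to Bob's office"))   # -> ['n1', 'n3', 'n7']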

    Conceptual spatial representations for indoor mobile robots

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process and that is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
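
    To make the layered structure concrete, here is a minimal Python sketch assuming abstraction levels typical of this line of work (metric map, navigation graph, topological areas, conceptual layer); the field names and the four-layer split are assumptions rather than the paper's exact model.

        from dataclasses import dataclass, field

        @dataclass
        class SpatialModel:
            metric_map: list = field(default_factory=list)   # laser-built geometry
            nav_nodes: dict = field(default_factory=dict)    # node id -> (x, y)
            areas: dict = field(default_factory=dict)        # area id -> node ids
            concepts: dict = field(default_factory=dict)     # area id -> labels

            def classify(self, area_id, label):
                """Attach a concept (e.g. 'kitchen') inferred from recognized
                objects or from situated dialogue with the user."""
                self.concepts.setdefault(area_id, set()).add(label)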

    Building a grid-semantic map for the navigation of service robots through human–robot interaction

    This paper presents an interactive approach to the construction of a grid-semantic map for the navigation of service robots in an indoor environment. It is based on the Robot Operating System (ROS) framework and contains four modules, namely an Interactive Module, a Control Module, a Navigation Module and a Mapping Module. Three challenging issues were the focus of its development: (i) how human voice and robot visual information can be effectively deployed in the mapping and navigation process; (ii) how semantic names can be combined with coordinate data in an online grid-semantic map; and (iii) how a localization-evaluate-relocalization method based on the modified maximum particle weight of the particle swarm can be used for global localization. A number of experiments were carried out in both simulated and real environments, such as corridors and offices, to verify its feasibility and performance.
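
    The coupling of semantic names with grid coordinates (issue ii) can be pictured with a small Python sketch; the class and method names are illustrative, not the paper's ROS interfaces.

        class GridSemanticMap:
            def __init__(self, resolution=0.05):
                self.resolution = resolution    # metres per occupancy-grid cell
                self.labels = {}                # semantic name -> (row, col)

            def annotate(self, name, world_xy):
                """Bind a spoken place name to the robot's current grid cell."""
                x, y = world_xy
                self.labels[name] = (round(x / self.resolution),
                                     round(y / self.resolution))

            def goal_for(self, name):
                """Resolve a voice command such as 'go to the office'."""
                return self.labels.get(name)

        m = GridSemanticMap()
        m.annotate("office", (2.4, 1.1))   # user says "this is the office" here
        print(m.goal_for("office"))        # -> (48, 22)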

    Recent Developments in Monocular SLAM within the HRI Framework

    This chapter describes an approach to improving the feature initialization process in delayed inverse-depth feature initialization for monocular Simultaneous Localisation and Mapping (SLAM), using data provided by a robot’s camera plus an additional monocular sensor worn on the head of the human member of a human-robot collaborative exploratory team. The robot and the human deploy a set of sensors that, once combined, provide the data required to localize the secondary camera worn by the human. The approach and its implementation are described, along with experimental results demonstrating its performance. A discussion of the sensors commonly used in robotics, and especially in SLAM, provides background on the advantages and capabilities of the system implemented in this research.
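
    The inverse-depth parametrization at the heart of this approach encodes a feature by the camera position at first observation, the azimuth/elevation of the observation ray, and the inverse depth along it. A minimal Python sketch, using one standard angle convention (after Civera et al.) that may differ from this chapter's implementation:

        import numpy as np

        def inverse_depth_to_xyz(x_c, y_c, z_c, theta, phi, rho):
            """Recover the 3D point from an inverse-depth feature state:
            camera position (x_c, y_c, z_c), azimuth theta, elevation phi,
            and inverse depth rho = 1/d along the observation ray."""
            m = np.array([np.cos(phi) * np.sin(theta),   # unit ray direction
                          -np.sin(phi),
                          np.cos(phi) * np.cos(theta)])
            return np.array([x_c, y_c, z_c]) + (1.0 / rho) * m

    A second viewpoint, such as the human-worn camera here, can constrain this ray earlier than a single moving camera could, which is the motivation for the sensor setup described above.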

    SLAM algorithm applied to robotics assistance for navigation in unknown environments

    Background
    The combination of robotic tools with assistive technology defines a scarcely explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the navigation of the mobile robot inside an environment is commanded by a Muscle-Computer Interface (MCI).

    Methods
    In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented, along with a low-level behaviour strategy to avoid collisions with the environment and moving agents.

    Results
    The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees and two young normally limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI. The SLAM results showed a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface.

    Conclusions
    The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose role could be reduced to choosing robot destinations. Also, the mobile robot shares the kinematic model of a motorized wheelchair, an advantage that can be exploited for autonomous wheelchair navigation.
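
    For readers unfamiliar with the filter involved, here is a generic Python sketch of one EKF predict/update cycle of the kind that underlies feature-based SLAM; f, h and their Jacobians F, H stand in for the paper's motion and line/corner observation models, which the abstract does not give.

        import numpy as np

        def ekf_step(x, P, u, z, f, F, h, H, Q, R):
            """One predict/update cycle for state x with covariance P,
            control u, measurement z, and noise covariances Q, R."""
            # Predict: propagate state and covariance through the motion model.
            x_pred = f(x, u)
            F_k = F(x, u)
            P_pred = F_k @ P @ F_k.T + Q
            # Update: correct with a feature observation (line or corner).
            H_k = H(x_pred)
            y = z - h(x_pred)                        # innovation
            S = H_k @ P_pred @ H_k.T + R             # innovation covariance
            K = P_pred @ H_k.T @ np.linalg.inv(S)    # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
            return x_new, P_new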

    LU4R: Adaptive Spoken Language Understanding for Robots

    Service robots are expected to operate in specific environments where the presence of humans plays a key role. It is thus essential to enable natural and effective communication between humans and robots. One of the main features of such robotic platforms is the ability to react to spoken commands, which requires a comprehensive understanding of the user's utterance in order to trigger the robot's reaction. Moreover, the correct interpretation of linguistic interactions depends on physical, cognitive and language-dependent aspects of the environment. In this work, we present the latest version of LU4R - adaptive spoken Language Understanding 4 Robots, a Spoken Language Understanding framework for the semantic interpretation of robotic commands that is sensitive to the operational environment. The overall system is designed according to a Client/Server architecture so that it can be easily deployed on a wide range of robotic platforms. Moreover, an improved version of HuRIC - the Human-Robot Interaction Corpus - is presented; the main novelty of this paper is its extension to commands expressed in Italian. To prove the effectiveness of the system, we also present empirical results for both English and Italian computed over the new HuRIC resource.
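
    As a purely illustrative sketch of the Client/Server split, the Python client below sends a transcribed utterance and receives a semantic interpretation. The endpoint, payload fields and response format are invented for illustration and are not LU4R's actual API.

        import json
        import urllib.request

        def interpret(utterance, lang="en",
                      server="http://localhost:9090/interpret"):
            """Send a transcribed command to the SLU server and return its
            semantic interpretation (hypothetical wire format)."""
            payload = json.dumps({"hypotheses": [utterance],
                                  "lang": lang}).encode()
            req = urllib.request.Request(
                server, data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)   # e.g. frames with grounded arguments

        # interpret("porta il libro in cucina", lang="it")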

    Lifelong topological visual navigation

    The ability for a robot to navigate using vision only is appealing due to its simplicity. Traditional vision-based navigation approaches require a prior map-building step that is arduous and prone to failure, or can only exactly follow previously executed trajectories. Newer learning-based visual navigation techniques reduce the reliance on a map and instead learn navigation policies directly from image inputs. There are currently two prevalent paradigms: end-to-end approaches, which forego an explicit map representation entirely, and topological approaches, which still preserve some loose connectivity of the space. However, while end-to-end methods tend to struggle in long-distance navigation tasks, topological map-based solutions are prone to failure due to spurious edges in the graph. In this work, we propose a learning-based topological visual navigation method with graph update strategies that improves lifelong navigation performance over time. We take inspiration from sampling-based planning algorithms to build image-based topological graphs, resulting in sparser graphs with higher navigation performance compared to baseline methods. Also, unlike controllers that learn from fixed training environments, we show that our model can be fine-tuned using a relatively small dataset from the real-world environment where the robot is deployed. Finally, we demonstrate strong system performance in real-world robot navigation experiments.
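
    A minimal Python sketch of the graph-building and lifelong-update ideas, using networkx: a new image becomes a node only when no existing node already covers it, edges come from a learned reachability estimator, and repeatedly failing edges are pruned. The thresholds and the reachability(a, b) model are stand-ins for the paper's learned components.

        import networkx as nx

        def add_observation(G, img, reachability, connect=0.9, redundant=0.95):
            """Insert `img` as a node unless an existing node covers it;
            connect it to every node the estimator deems reachable."""
            scores = {n: reachability(G.nodes[n]["img"], img) for n in G.nodes}
            if scores and max(scores.values()) >= redundant:
                return max(scores, key=scores.get)   # covered: reuse that node
            node = len(G.nodes)
            G.add_node(node, img=img)
            for n, s in scores.items():
                if s >= connect:
                    G.add_edge(node, n, weight=1.0 - s)  # likely traversable
            return node

        def prune_spurious_edges(G, traversal_log):
            """Lifelong update: drop edges whose traversals keep failing."""
            for (a, b), outcomes in traversal_log.items():
                if G.has_edge(a, b) and sum(outcomes) / len(outcomes) < 0.5:
                    G.remove_edge(a, b)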