
    Online Visual Robot Tracking and Identification using Deep LSTM Networks

    Full text link
    Collaborative robots working on a common task are necessary for many applications. One of the challenges for achieving collaboration in a team of robots is mutual tracking and identification. We present a novel pipeline for online vision-based detection, tracking and identification of robots with a known and identical appearance. Our method runs in real-time on the limited hardware of the observer robot. Unlike previous works addressing robot tracking and identification, we use a data-driven approach based on recurrent neural networks to learn relations between sequential inputs and outputs. We formulate the data association problem as multiple classification problems. A deep LSTM network was trained on a simulated dataset and fine-tuned on a small set of real data. Experiments on two challenging datasets, one synthetic and one real, which include long-term occlusions, show promising results. Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017. IROS RoboCup Best Paper Award
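The idea of framing data association as per-step classification with an LSTM can be sketched as follows. This is an illustrative toy model, not the paper's implementation; the feature dimension, hidden size, and number of robot identities are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMClassifier:
    """Single LSTM cell mapping a sequence of detection features to
    per-step logits over candidate robot identities."""

    def __init__(self, in_dim, hidden, n_ids):
        s = 0.1
        self.W = rng.normal(0, s, (4 * hidden, in_dim + hidden))  # gate weights
        self.b = np.zeros(4 * hidden)
        self.Wy = rng.normal(0, s, (n_ids, hidden))               # readout
        self.hidden = hidden

    def forward(self, seq):
        h = np.zeros(self.hidden)
        c = np.zeros(self.hidden)
        outs = []
        for x in seq:
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)                    # input/forget/output/candidate
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell-state update
            h = sigmoid(o) * np.tanh(c)                    # hidden state
            outs.append(self.Wy @ h)                       # identity logits
        return np.stack(outs)

# Example: a 5-step sequence of 6-D detection features, 3 candidate robots.
model = TinyLSTMClassifier(in_dim=6, hidden=8, n_ids=3)
logits = model.forward(rng.normal(size=(5, 6)))
ids = logits.argmax(axis=1)  # predicted identity at each time step
```

Because the hidden state carries information across time steps, the classifier can in principle keep an identity assignment consistent through short occlusions, which a per-frame classifier cannot.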

    Deep learning based approaches for imitation learning.

    Get PDF
    Imitation learning refers to an agent's ability to mimic a desired behaviour by learning from observations. The field is rapidly gaining attention due to recent advances in computational and communication capabilities as well as rising demand for intelligent applications. The goal of imitation learning is to describe the desired behaviour by providing demonstrations rather than instructions. This enables agents to learn complex behaviours with general learning methods that require minimal task-specific information. However, imitation learning faces many challenges. The objective of this thesis is to advance the state of the art in imitation learning by adopting deep learning methods to address two major challenges of learning from demonstrations. The first is representing the demonstrations in a manner that is adequate for learning. We propose novel Convolutional Neural Network (CNN)-based methods to automatically extract feature representations from raw visual demonstrations and learn to replicate the demonstrated behaviour. This alleviates the need for task-specific feature extraction and provides a general learning process that is adequate for multiple problems. The second challenge is generalizing a policy to situations not seen in the training demonstrations. This is a common problem because demonstrations typically show the best way to perform a task and do not offer any information about recovering from suboptimal actions. Several methods are investigated to improve the agent's generalization ability based on its initial performance. Our contributions in this area are threefold. Firstly, we propose an active data aggregation method that queries the demonstrator in situations of low confidence. Secondly, we investigate combining learning from demonstrations and reinforcement learning. A deep reward shaping method is proposed that learns a potential reward function from demonstrations. 
Finally, memory architectures in deep neural networks are investigated to provide context to the agent when taking actions. Using recurrent neural networks addresses the dependency between the state-action sequences taken by the agent. The experiments are conducted in simulated environments on 2D and 3D navigation tasks that are learned from raw visual data, as well as a 2D soccer simulator. The proposed methods are compared to state-of-the-art deep reinforcement learning methods. The results show that deep learning architectures can learn suitable representations from raw visual data and effectively map them to atomic actions. The proposed methods for addressing generalization show improvements over using supervised learning or reinforcement learning alone. The results are thoroughly analysed to identify the benefits of each approach and the situations in which it is most suitable.
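The confidence-based active data aggregation idea described above can be sketched as a simple query loop: roll out the current policy and ask the demonstrator for a label only when the policy's confidence is low. The environment, the linear stand-in policy, the stand-in expert, and the threshold value are all hypothetical, used here only to illustrate the querying rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def policy_probs(state, weights):
    """Toy linear policy returning a probability distribution over 3 actions."""
    z = weights @ state
    e = np.exp(z - z.max())
    return e / e.sum()

def expert_action(state):
    """Stand-in demonstrator: acts on the sign of the first feature."""
    return 0 if state[0] > 0 else 1

weights = rng.normal(size=(3, 4))   # untrained policy parameters
dataset = []                        # aggregated (state, expert_action) pairs
CONF_THRESHOLD = 0.6                # query the expert below this confidence

for step in range(50):
    state = rng.normal(size=4)
    probs = policy_probs(state, weights)
    if probs.max() < CONF_THRESHOLD:
        # Low confidence: query the demonstrator and aggregate the label.
        dataset.append((state, expert_action(state)))
```

In a full system the aggregated pairs would be used to retrain the policy each round; restricting queries to low-confidence states keeps the demonstrator's workload focused on exactly the situations the policy has not yet mastered.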

    Searching and tracking people with cooperative mobile robots

    Get PDF
    The final publication is available at link.springer.com. Social robots should be able to search for and track people in order to help them. In this paper we present two different techniques for coordinated multi-robot teams searching for and tracking people. A probability map (belief) of the target person's location is maintained; to initialize and update it, two methods were implemented and tested: one based on a reinforcement learning algorithm and the other based on a particle filter. The person is tracked when visible; otherwise an exploration is carried out by balancing, for each candidate location, the belief, the distance, and whether nearby locations are already being explored by other robots of the team. The approach was validated through an extensive set of simulations using up to five agents and a large number of dynamic obstacles; furthermore, over three hours of real-life experiments with two robots searching and tracking were recorded and analysed.
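One cycle of the particle-filter variant of the belief update can be sketched as follows. This is a generic predict-update-resample loop, not the paper's code; the map size, motion noise, and Gaussian observation model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
particles = rng.uniform(0, 10, size=(N, 2))   # (x, y) hypotheses in a 10x10 map
weights = np.full(N, 1.0 / N)

def predict(particles, noise=0.3):
    """Person motion model: a random walk."""
    return particles + rng.normal(0, noise, particles.shape)

def update(particles, weights, z, sigma=0.5):
    """Reweight particles by the likelihood of a position observation z."""
    d2 = ((particles - z) ** 2).sum(axis=1)
    w = weights * np.exp(-d2 / (2 * sigma ** 2))
    w += 1e-300                                # guard against all-zero weights
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One filter cycle: the person is observed near (4, 6).
particles = predict(particles)
weights = update(particles, weights, z=np.array([4.0, 6.0]))
particles, weights = resample(particles, weights)
estimate = particles.mean(axis=0)              # belief mean over the location
```

When the person is not visible, the update step is skipped and the predict step alone spreads the belief, which is exactly what makes the belief term useful for scoring candidate exploration locations.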

    Hardware-in-the-loop simulation approach for the robot at factory lite competition proposal

    Get PDF
    Mobile robotic applications are increasing in several areas, not only in industry but also in service robotics. Industry 4.0 has further promoted the digitalization of factories, opening space for smart-factory implementations. Robotic competitions are a key way to improve research and to motivate learning. This paper addresses a new competition proposal, the Robot@Factory Lite, in the scope of the Portuguese Robotics Open. Beyond the competition, a reference robot with all its components is proposed, and a simulation environment is also provided. To minimize the gap between the simulation and the real implementation, a hardware-in-the-loop technique is proposed that allows the simulation to be controlled by a real Arduino board. Results show that the same code, and hardware, can control both the simulation model and the real robot.
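The hardware-in-the-loop exchange described above amounts to a per-cycle round trip: the simulator encodes sensor readings into a frame, the board decodes them, computes actuator commands, and replies. The sketch below mocks both the serial link and the board-side control law in plain Python; the message format and the proportional line-following law are assumptions, not the competition's actual protocol.

```python
def controller_step(line_offset):
    """Stand-in for the Arduino firmware: proportional line following."""
    KP, BASE = 0.8, 0.5
    left = BASE + KP * line_offset     # steer back toward the line
    right = BASE - KP * line_offset
    return left, right

def simulation_cycle(offset):
    """One HIL cycle: encode sensors, 'send', decode the command reply."""
    frame = f"S,{offset:.3f}\n"                     # sensor frame to the board
    value = float(frame.strip().split(",")[1])      # board parses the frame
    left, right = controller_step(value)
    reply = f"M,{left:.3f},{right:.3f}\n"           # motor frame back
    return tuple(float(v) for v in reply.strip().split(",")[1:])

# The robot has drifted to the right of the line (positive offset).
left, right = simulation_cycle(offset=0.2)
```

Because the firmware-side function only ever sees decoded sensor values and emits command frames, the same control code can be flashed unchanged onto the real board and driven by real sensors, which is the point the abstract's results make.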

    A robot localization proposal for the RobotAtFactory 4.0: A novel robotics competition within the Industry 4.0 concept

    Get PDF
    Robotic competitions are an excellent way to promote innovative solutions for current industry challenges and entrepreneurial spirit, to acquire technical and transversal skills through active teaching, and to promote this area to the public. In other words, since robotics is a multidisciplinary field, its competitions address several knowledge topics, especially in the STEM (Science, Technology, Engineering, and Mathematics) category, that are shared among students and researchers, driving technology and science further. A new competition encompassed in the Portuguese Robotics Open was created according to the Industry 4.0 concept in the production chain. In this competition, RobotAtFactory 4.0, a shop floor is used to mimic a fully automated industrial logistics warehouse and the challenges it brings. Autonomous Mobile Robots (AMRs) must operate without supervision and perform the tasks that the warehouse requests. There are different types of boxes, which dictate their partial and definitive destinations; accordingly, the AMRs should identify each box and transport it to its destination. This paper describes an approach to the indoor localization system for the competition based on the Extended Kalman Filter (EKF) and ArUco markers. Different innovation methods for the obtained observations were tested and compared in the EKF. A real robot was designed and assembled to act as a test bed for the validation of the localization system. The approach was thus validated in a real scenario using a factory floor with the official specifications provided by the competition organization. The authors are grateful to the Foundation for Science and Technology (FCT, Portugal) for financial support through national funds FCT/MCTES (PIDDAC) to CeDRI (UIDB/05757/2020 and UIDP/05757/2020) and SusTEC (LA/P/0007/2021). The project that gave rise to these results received the support of a fellowship from “la Caixa” Foundation (ID 100010434). The fellowship code is LCF/BQ/DI20/11780028. The authors also acknowledge the R&D Unit SYSTEC-Base (UIDB/00147/2020), Programmatic (UIDP/00147/2020) and the project Warehouse of the Future (WoF), with reference POCI-01-0247-FEDER-072638, co-funded by FEDER through COMPETE 2020.
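In the spirit of the approach above, one EKF correction step from a range-bearing observation of a fiducial marker with a known map position can be sketched as follows. The noise levels, marker layout, and observation values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def ekf_update(mu, Sigma, z, marker, R):
    """One EKF correction from a range-bearing observation of a marker.

    mu     : (3,) pose estimate [x, y, theta]
    Sigma  : (3, 3) pose covariance
    z      : (2,) measured [range, bearing] to the marker
    marker : (2,) known marker position in the map
    R      : (2, 2) measurement noise covariance
    """
    dx, dy = marker[0] - mu[0], marker[1] - mu[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - mu[2]])  # expected measurement
    H = np.array([[-dx / r, -dy / r, 0.0],
                  [dy / q, -dx / q, -1.0]])            # measurement Jacobian
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap the bearing residual
    S = H @ Sigma @ H.T + R                            # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)                 # Kalman gain
    mu_new = mu + K @ y
    Sigma_new = (np.eye(3) - K @ H) @ Sigma
    return mu_new, Sigma_new

# The robot believes it is at the origin; a marker at (2, 0) is measured
# slightly farther away than expected, nudging the estimate back along x.
mu = np.array([0.0, 0.0, 0.0])
Sigma = np.diag([0.2, 0.2, 0.05])
R = np.diag([0.05, 0.02])
z = np.array([2.2, 0.0])                               # measured range, bearing
mu, Sigma = ekf_update(mu, Sigma, z, marker=np.array([2.0, 0.0]), R=R)
```

The "different innovation methods" the abstract mentions would correspond to different ways of forming and gating the residual `y` before the gain is applied; in every variant the correction also shrinks the pose covariance along the observed direction.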

    Advances in Robotics, Automation and Control

    Get PDF
    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, this book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. Through this book, we also find navigation and vision algorithms, automatic handwriting comprehension and speech recognition systems that will be included in the next generation of productive systems developed by man.

    Proceedings of the 9th Conference on Autonomous Robot Systems and Competitions

    Get PDF
    Welcome to ROBOTICA 2009. This is the 9th edition of the conference on Autonomous Robot Systems and Competitions, and the third with IEEE‐Robotics and Automation Society Technical Co‐Sponsorship. Previous editions have been held since 2001 in Guimarães, Aveiro, Porto, Lisboa, Coimbra and Algarve. ROBOTICA 2009 is held on 7 May 2009 in Castelo Branco, Portugal. ROBOTICA 2009 received 32 paper submissions from 10 countries in South America, Asia and Europe. Each submission was evaluated with three reviews performed by the international program committee. 23 papers were published in the proceedings and presented at the conference; of these, 14 were selected for oral presentation and 9 for poster presentation, for a global acceptance ratio of 72%. After the conference, eight papers will be published in the Portuguese journal Robótica, and the best student paper will be published in the IEEE Multidisciplinary Engineering Education Magazine. Three prizes will be awarded at the conference: best conference paper, best student paper and best presentation, the last two sponsored by the IEEE Education Society ‐ Student Activities Committee. We would like to express our thanks to all participants: first of all to the authors, whose quality work is the essence of this conference, and to all the members of the international program committee and reviewers, who helped us with their expertise and valuable time. We would also like to deeply thank the invited speaker, Jean Paul Laumond, LAAS‐CNRS, France, for his excellent contribution in the field of humanoid robots. Finally, a word of appreciation for the hard work of the secretariat and volunteers. Our deep gratitude goes to the scientific organisations that kindly agreed to sponsor the conference and made it come true. We look forward to seeing more results of R&D work on Robotics at ROBOTICA 2010, somewhere in Portugal.