
    COACHES Cooperative Autonomous Robots in Complex and Human Populated Environments

    Public spaces in large cities are increasingly becoming complex and unwelcoming environments: they grow hostile and unpleasant to use because of overcrowding and the complexity of the information on signboards. It is in the interest of cities to make their public spaces easier to use, friendlier to visitors, and safer for the growing elderly population and for citizens with disabilities. Meanwhile, the last decade has seen tremendous progress in the development of robots for dynamic, complex and uncertain environments. The new challenge for the near future is to deploy a network of robots in public spaces to accomplish services that help humans. Inspired by these challenges, the COACHES project addresses fundamental issues in the design of a robust system of self-directed autonomous robots with high-level skills in environment modelling and scene understanding, distributed autonomous decision-making, short-term interaction with humans, and robust and safe navigation in overcrowded spaces. To this end, COACHES will provide an integrated solution to new challenges in: (1) knowledge-based representation of the environment; (2) estimation of human activities and needs using Markov and Bayesian techniques; (3) distributed decision-making under uncertainty to collectively plan assistance, guidance and delivery tasks using Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs), with efficient algorithms to improve their scalability; and (4) multi-modal, short-term human-robot interaction to exchange information and requests. The COACHES project will provide a modular architecture to be integrated in real robots. COACHES will be deployed in the city of Caen, in a mall called “Rive de l’orne”. COACHES is a cooperative system consisting of fixed cameras and mobile robots. The fixed cameras perform object detection, tracking, and detection of abnormal events (objects or behaviour). The robots combine this information with what they perceive through their own sensors to provide information through a multi-modal interface, guide people to their destinations, point out tramway stations, transport goods for elderly people, and so on. The COACHES robots will use different modalities (speech and displayed information) to interact with mall visitors, shopkeepers and mall managers. The project has enlisted an important end-user (Caen la mer) providing the scenarios where the COACHES robots and systems will be deployed, and gathers universities with complementary competences in cognitive systems (SU), robust image/video processing (VUB, UNICAEN), semantic scene analysis and understanding (VUB), collective decision-making using decentralized partially observable Markov decision processes and multi-agent planning (UNICAEN, Sapienza), and multi-modal, short-term human-robot interaction (Sapienza, UNICAEN).
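
    For reference, the Dec-POMDP model invoked in point (3) can be stated in its standard textbook form (the notation below is the usual one from the literature, not the project's):

        \[
          \mathcal{M} = \langle I,\ S,\ \{A_i\}_{i\in I},\ T,\ R,\ \{\Omega_i\}_{i\in I},\ O,\ h \rangle,
        \]
        \[
          T(s' \mid s, \vec{a}), \qquad R(s, \vec{a}), \qquad O(\vec{o} \mid s', \vec{a}),
        \]

    where $I$ indexes the robots, $S$ is the set of world states, $A_i$ and $\Omega_i$ are the actions and observations of robot $i$, and $h$ is the horizon. Each robot must choose its actions from its own action-observation history alone; this decentralization makes exact solution NEXP-complete, which is why scalable approximate algorithms are a stated goal of the project.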

    Vision-based deep execution monitoring

    Execution monitoring of high-level robot actions can be effectively improved by visually monitoring the state of the world in terms of the preconditions and postconditions that hold before and after an action is executed. Furthermore, a policy for choosing where to look, either to verify the relations that specify the pre- and postconditions or to refocus after a failure, can greatly improve robot execution in an uncharted environment. Thanks to the remarkable results of deep learning, it is now possible to rely on visual perception strongly enough to assume that the environment is observable. In this work we present visual execution monitoring for a robot executing tasks in an uncharted lab environment. The execution monitor interacts with the environment via a visual stream that uses two DCNNs to recognize the objects the robot has to deal with and manipulate, and non-parametric Bayesian estimation to discover the relations from the DCNN features. To recover from loss of focus and from failures due to missed objects, we resort to visual search policies learned via deep reinforcement learning.
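
    A minimal sketch of the monitoring loop described above, in Python; the detect_relations and visual_search callables are hypothetical stand-ins for the DCNN-plus-Bayes perception pipeline and the learned deep-RL search policy, not APIs from the paper:

        def execute_monitored(action, required_pre, expected_post,
                              detect_relations, visual_search, max_retries=3):
            """Run one high-level action under visual execution monitoring.

            required_pre / expected_post: sets of symbolic relations such as
            ("on", "cup", "table"). detect_relations() returns the set of
            relations currently observed; visual_search(missing) drives the
            camera toward the objects involved in the missing relations.
            """
            for _ in range(max_retries):
                observed = detect_relations()
                if not required_pre <= observed:
                    # Preconditions not verified: refocus on what is missing.
                    visual_search(required_pre - observed)
                    continue
                action()                                  # execute the action itself
                observed = detect_relations()
                if expected_post <= observed:             # postconditions hold
                    return True
                # Failure (e.g. a missed object): search before retrying.
                visual_search(expected_post - observed)
            return False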

    A generative traversability model for monocular robot self-guidance

    The research work disclosed in this publication is partially funded by the Strategic Educational Pathways Scholarship (Malta). The scholarship is part-financed by the European Union - European Social Fund (ESF) under Operational Programme II - Cohesion Policy 2007-2013, "Empowering People for More Jobs and a Better Quality of Life".

    In order for robots to be integrated into human active spaces and perform useful tasks, they must be capable of discriminating between traversable surfaces and obstacle regions in their surrounding environment. In this work, a principled semi-supervised (EM) framework is presented for the detection of traversable image regions for use on a low-cost monocular mobile robot. We propose a novel generative model for the occurrence of traversability cues, which are a measure of dissimilarity between safe-window and image-superpixel features. Our classification results on both indoor and outdoor image sequences demonstrate the model's generality and adaptability to multiple environments through the online learning of an exponential mixture model. We show that this appearance-based vision framework is robust and can quickly and accurately estimate the probabilistic traversability of an image using no temporal information. Moreover, the reduction in safe-window size compared to the state of the art enables a self-guided monocular robot to roam in closer proximity to obstacles.
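
    As a sketch of the online learning step, EM for a mixture of exponentials over scalar dissimilarity cues has the closed-form updates below (Python with NumPy; variable names are illustrative, not the paper's):

        import numpy as np

        def em_exponential_mixture(x, k=2, n_iter=50, seed=0):
            """Fit p(x) = sum_j w[j] * lam[j] * exp(-lam[j] * x) to
            non-negative dissimilarity cues x by expectation-maximization."""
            rng = np.random.default_rng(seed)
            w = np.full(k, 1.0 / k)
            lam = 1.0 / (x.mean() * rng.uniform(0.5, 1.5, size=k))  # rough init
            for _ in range(n_iter):
                # E-step: responsibilities r[n, j] proportional to
                # w[j] * lam[j] * exp(-lam[j] * x[n])
                log_r = np.log(w) + np.log(lam) - np.outer(x, lam)
                log_r -= log_r.max(axis=1, keepdims=True)            # stability
                r = np.exp(log_r)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: closed-form weight and rate updates
                nk = r.sum(axis=0)
                w = nk / len(x)
                lam = nk / (r * x[:, None]).sum(axis=0)
            return w, lam

    A superpixel's traversability can then be read off as its responsibility under the low-dissimilarity component, which is one way an appearance-based probability can be obtained without any temporal information.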

    Maximum likelihood estimation-assisted ASVSF through state covariance-based 2D SLAM algorithm

    The smooth variable structure filter (SVSF) is a relatively new, robust predictor-corrector method for state estimation. To be used effectively, the SVSF requires an accurate system model and exact prior knowledge of both the process and measurement noise statistics. Unfortunately, the system model is never fully accurate because of simplifications made at design time, and the small additive noises are only partially known or even unknown. This limitation can degrade the performance of the SVSF or even lead to divergence. For this reason, this paper proposes an adaptive smooth variable structure filter (ASVSF) obtained by conditioning the probability density function of the measurement on the unknown parameters at each iteration. The proposed method is applied to the localization and direct point-based observation task of a wheeled mobile robot, the TurtleBot2. Finally, realistic simulations comparing it with a conventional method show that the proposed method achieves better accuracy and stability in terms of the root mean square error (RMSE) of the estimated map coordinates (EMC) and the estimated path coordinates (EPC).
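
    For context, the underlying (non-adaptive) SVSF predictor-corrector has the structure sketched below for a linear system; this is the standard formulation from the SVSF literature, not the adaptive variant contributed by the paper:

        import numpy as np

        def svsf_step(x_est, e_prev, z, u, A, B, H, gamma, psi):
            """One SVSF step for x[k+1] = A x[k] + B u[k], z[k] = H x[k].

            gamma: convergence rate in [0, 1); psi: smoothing boundary-layer
            widths (one per measurement), trading chattering for accuracy.
            """
            x_pred = A @ x_est + B @ u                   # prediction
            e_pred = z - H @ x_pred                      # a priori measurement error
            # Corrective action, smoothed inside the boundary layer psi
            sat = np.clip(e_pred / psi, -1.0, 1.0)
            corr = (np.abs(e_pred) + gamma * np.abs(e_prev)) * sat
            x_upd = x_pred + np.linalg.pinv(H) @ corr    # update
            e_upd = z - H @ x_upd                        # a posteriori error
            return x_upd, e_upd

    The adaptive variant proposed in the paper additionally estimates the unknown noise parameters from the measurement likelihood at each iteration.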

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture, consisting of pointing at an object, is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure is performed by means of either a verbal or gestural dialogue. This skill enables the robot to pick up an object on behalf of a user who may have difficulty doing so. The overall system, which is composed of a NAO robot, a Wifibot platform, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests is completed, which allows correct performance to be assessed in terms of recognition rates, ease of use and response times.
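
    A compact sketch of the Dynamic Time Warping core behind the dynamic-gesture recognition; the inputs are per-frame feature vectors (the paper computes gesture-specific features from depth maps, but any fixed-dimension features work here):

        import numpy as np

        def dtw_distance(a, b):
            """DTW distance between feature sequences a (n, d) and b (m, d)."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame cost
                    D[i, j] = cost + min(D[i - 1, j],           # insertion
                                         D[i, j - 1],           # deletion
                                         D[i - 1, j - 1])       # match
            return D[n, m]

    A live gesture is then assigned the label of the nearest recorded template, typically with a rejection threshold so that non-gestures are ignored.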