
    What Can I Do Around Here? Deep Functional Scene Understanding for Cognitive Robots

    For robots that can interact with the physical environment through their end effectors, understanding the surrounding scene is not merely a task of image classification or object recognition. To perform actual tasks, it is critical for the robot to have a functional understanding of the visual scene. Here, we address the problem of localizing and recognizing functional areas in an arbitrary indoor scene, formulated as a two-stage deep-learning-based detection pipeline. A new scene-functionality test bed, compiled from two publicly available indoor scene datasets, is used for evaluation. Our method is evaluated quantitatively on the new dataset, demonstrating efficient recognition of functional areas in arbitrary indoor scenes. We also demonstrate that our detection model generalizes to novel indoor scenes by cross-validating it with images from the two datasets.
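    The two-stage structure mentioned above (class-agnostic proposal of candidate regions, then classification of each candidate) can be illustrated with a deliberately toy sketch. The synthetic scene, mean-activation proposal score, and template-matching "classifier" below are all illustrative stand-ins, not the paper's actual networks or data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 2-D "scene": background noise plus one bright 8x8 functional-area patch.
    scene = rng.normal(0, 0.1, (64, 64))
    scene[20:28, 40:48] += 1.0

    def propose(scene, win=8, stride=4, top_k=5):
        """Stage 1: class-agnostic window proposals, scored by mean activation."""
        boxes = []
        for r in range(0, scene.shape[0] - win + 1, stride):
            for c in range(0, scene.shape[1] - win + 1, stride):
                boxes.append((scene[r:r+win, c:c+win].mean(), r, c))
        return sorted(boxes, reverse=True)[:top_k]

    def classify(scene, boxes, template, win=8):
        """Stage 2: score each proposal against a class template (a stand-in
        for a CNN classification head) and keep the best match."""
        scored = [(-np.abs(scene[r:r+win, c:c+win] - template).mean(), r, c)
                  for _, r, c in boxes]
        return max(scored)

    template = np.full((8, 8), 1.0)      # idealized appearance of the target area
    best = classify(scene, propose(scene), template)
    # best[1], best[2] give the top-left corner of the detected functional area
    ```

    The design point the sketch preserves is the division of labor: stage 1 is cheap and class-agnostic, so stage 2 only has to run its (more expensive) per-class scoring on a handful of candidates.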

    A dynamic neural field architecture for a pro-active assistant robot

    We present a control architecture for non-verbal HRI that allows an assistant robot to exhibit pro-active and anticipatory behavior. The architecture implements the coordination of actions and goals between the human, who needs help, and the robot as a dynamic process that integrates contextual cues, shared task knowledge, and the predicted outcome of the human's motor behavior. The robot control architecture is formalized as a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations with specific functionalities. Different subpopulations encode task-relevant information about action means, action goals, and context in the form of self-sustained activation patterns. These patterns are triggered by input from connected populations and evolve continuously in time under the influence of recurrent interactions. The dynamic control architecture is validated in an assistive task in which an anthropomorphic robot acts as a personal assistant for a person with motor impairments. We show that the context-dependent mapping from action observation onto appropriate complementary actions allows the robot to cope with dynamically changing situations. This includes adaptation to different users and mutual compensation of physical limitations.
    Funding: Fundação para a Ciência e a Tecnologia (FCT) - POCI/V.5/A0119/2005; FP6-IST2 EU project JAST (proj. nr. 003747)
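    The self-sustained activation patterns the abstract refers to arise from Amari-type field dynamics, tau * du/dt = -u + h + S + integral of w(x - x') f(u(x')) dx', with local excitation and broader inhibition in the kernel w. A minimal numerical sketch (the field size, kernel, and input parameters are illustrative assumptions, not values from the paper):

    ```python
    import numpy as np

    def simulate_field(n=100, steps=600, dt=0.1, tau=1.0, h=-2.0):
        """Euler integration of a 1-D Amari field on a ring:
        tau * du/dt = -u + h + S + w @ f(u)."""
        x = np.arange(n)
        d = np.abs(x[:, None] - x[None, :])
        d = np.minimum(d, n - d)                    # periodic (ring) distance
        w = np.exp(-d**2 / (2 * 3.0**2)) - 0.10     # local excitation, global inhibition
        f = lambda u: 1.0 / (1.0 + np.exp(-u))      # sigmoidal firing rate
        u = np.full(n, h, dtype=float)              # start at the resting level h
        for t in range(steps):
            # localized input at the field center, removed halfway through the run
            S = 6.0 * np.exp(-(x - n // 2)**2 / (2 * 3.0**2)) if t < steps // 2 else 0.0
            u = u + dt / tau * (-u + h + S + w @ f(u))
        return u

    u = simulate_field()
    # after the input is removed, a self-sustained activation bump
    # remains at the stimulated site (u.argmax() near n // 2)
    ```

    The bump persisting after the stimulus is gone is what lets a population act as a working memory of an observed action goal, which is the role these fields play in the architecture described above.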

    Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches

    Robotic vision is a field where continual learning can play a significant role. An embodied agent operating in a complex environment subject to frequent and unpredictable changes must learn and adapt continuously. In the context of object recognition, for example, a robot should be able to learn (without forgetting) objects of never-before-seen classes, as well as improve its recognition capabilities as new instances of already-known classes are discovered. Ideally, continual learning should be triggered by the availability of short videos of single objects and performed on-line on on-board hardware with fine-grained updates. In this paper, we introduce a novel continual learning protocol based on the CORe50 benchmark and propose two rehearsal-free continual learning techniques, CWR* and AR1*, that can learn effectively even in the challenging case of nearly 400 small non-i.i.d. incremental batches. In particular, our experiments show that AR1* can outperform other state-of-the-art rehearsal-free techniques by more than 15% accuracy in some cases, with a very light and constant computational and memory overhead across training batches.
    Comment: Accepted at the CLVision Workshop at CVPR 2020; 12 pages, 7 figures, 5 tables, 3 algorithms
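    The core consolidation idea behind CWR*-style rehearsal-free updates (train a temporary head on each small batch, then merge it into consolidated per-class weights, weighting by how often each class was seen before) can be sketched as follows. This is a simplified toy with a prototype head and no per-batch weight centering, not the exact published algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cwr_update(cw, past, tw, classes, counts):
        """Merge per-batch head weights tw into consolidated weights cw,
        giving the past a weight proportional to sqrt(past / current) counts."""
        for c in classes:
            wp = np.sqrt(past[c] / counts[c])       # weight of past experience
            cw[c] = (cw[c] * wp + tw[c]) / (wp + 1.0)
            past[c] += counts[c]
        return cw, past

    D, C = 5, 3
    centroids = rng.normal(0, 5, size=(C, D))       # well-separated toy classes
    cw, past = np.zeros((C, D)), np.zeros(C)

    for _ in range(20):                             # small non-i.i.d. batches:
        classes = rng.choice(C, size=2, replace=False)  # each sees only 2 classes
        X = np.concatenate([centroids[c] + rng.normal(0, 1, (10, D)) for c in classes])
        y = np.repeat(classes, 10)
        tw, counts = np.zeros((C, D)), np.zeros(C)
        for c in classes:
            tw[c] = X[y == c].mean(axis=0)          # per-batch "trained" head
            counts[c] = (y == c).sum()
        cw, past = cwr_update(cw, past, tw, classes, counts)

    # nearest-prototype accuracy on fresh samples from all classes
    Xt = np.concatenate([centroids[c] + rng.normal(0, 1, (50, D)) for c in range(C)])
    yt = np.repeat(np.arange(C), 50)
    pred = ((Xt[:, None, :] - cw[None]) ** 2).sum(-1).argmin(axis=1)
    acc = (pred == yt).mean()
    ```

    Because each update touches only the classes present in the batch and stores nothing but per-class counts, the memory and compute overhead stays constant across batches, which is the property the abstract highlights.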

    Probabilistic Goal-Directed Pedestrian Prediction by Means of Artificial Neural Networks
