
    Few-Shot Visual Grounding for Natural Human-Robot Interaction

    Natural Human-Robot Interaction (HRI) is one of the key components for service robots to work in human-centric environments. In such dynamic environments, the robot needs to understand the user's intention to accomplish a task successfully. To address this, we propose a software architecture that segments a target object, indicated verbally by a human user, from a crowded scene. At the core of our system, we employ a multi-modal deep neural network for visual grounding. Unlike most grounding methods that tackle the challenge as a two-step process using pre-trained object detectors, we develop a single-stage zero-shot model that can make predictions on unseen data. We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets. Experimental results show that the proposed model performs well in terms of accuracy and speed, while showcasing robustness to variation in the natural language input. Comment: 6 pages, 4 figures, accepted at ICARSC2021
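    The abstract does not give the architecture's details, but a minimal sketch of what a single-stage grounding model can look like is shown below: a language embedding is fused with image features and a box is regressed directly, with no separate object-detection step. All layer sizes and names here are illustrative assumptions, not the authors' model.

```python
# Minimal sketch (not the authors' code) of single-stage visual grounding:
# fuse language and image features, regress a box directly, no detector.
import torch
import torch.nn as nn

class SingleStageGrounder(nn.Module):
    def __init__(self, vocab_size=10000, text_dim=256, img_channels=3):
        super().__init__()
        # Visual encoder: a tiny CNN stands in for the real backbone.
        self.visual = nn.Sequential(
            nn.Conv2d(img_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Language encoder: embedding + GRU over the verbal instruction.
        self.embed = nn.Embedding(vocab_size, text_dim)
        self.gru = nn.GRU(text_dim, text_dim, batch_first=True)
        # Fusion head: predict a normalized box (cx, cy, w, h) directly.
        self.head = nn.Sequential(
            nn.Linear(64 + text_dim, 256), nn.ReLU(),
            nn.Linear(256, 4), nn.Sigmoid(),
        )

    def forward(self, image, tokens):
        v = self.visual(image)               # (B, 64) image summary
        _, h = self.gru(self.embed(tokens))  # h: (1, B, text_dim)
        fused = torch.cat([v, h.squeeze(0)], dim=1)
        return self.head(fused)              # (B, 4) box in [0, 1]

# Example: one RGB image and a tokenized phrase ("the red mug on the left").
model = SingleStageGrounder()
box = model(torch.rand(1, 3, 224, 224), torch.randint(0, 10000, (1, 8)))
print(box.shape)  # torch.Size([1, 4])
```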

    Human-like movement of an anthropomorphic robot: problem revisited

    Human-like movement is fundamental for natural human-robot interaction and collaboration. We have developed a model for generating arm and hand movements of an anthropomorphic robot, inspired by the Posture-Based Motion-Planning Model of human reaching and grasping movements. In this paper we present some changes to the model we proposed in [4], and we test and compare different nonlinear constrained optimization techniques for solving the large-scale nonlinear constrained optimization problem that arises from the discretization of our time-continuous model. Furthermore, we test different time discretization steps. Eliana Costa e Silva was supported by FCT (grant SFRH/BD/23821/2005).
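    To make the discretization step concrete, the sketch below shows, under illustrative assumptions, how a time-continuous movement model becomes a large nonlinear constrained optimization: the joint trajectory is sampled at N time steps and solved with SciPy's SLSQP. The smoothness cost, joint limits, and boundary constraints are stand-ins, not the paper's actual model or solver.

```python
# Illustrative sketch (not the paper's model): discretizing a time-continuous
# arm-movement model yields a nonlinear constrained optimization over the
# joint angles at N time steps, solved here with SciPy's SLSQP.
import numpy as np
from scipy.optimize import minimize

N, DOF, dt = 20, 3, 0.05                 # time steps, joints, step size
theta0 = np.zeros(DOF)                   # initial posture
theta_goal = np.array([0.8, -0.4, 0.6])  # hypothetical target posture

def cost(x):
    # Smoothness objective: penalize squared accelerations (finite differences).
    q = x.reshape(N, DOF)
    acc = np.diff(q, n=2, axis=0) / dt**2
    return np.sum(acc**2)

constraints = [
    # Boundary constraints: start at theta0, end at the goal posture.
    {"type": "eq", "fun": lambda x: x.reshape(N, DOF)[0] - theta0},
    {"type": "eq", "fun": lambda x: x.reshape(N, DOF)[-1] - theta_goal},
]
bounds = [(-np.pi, np.pi)] * (N * DOF)   # joint limits

x_init = np.linspace(theta0, theta_goal, N).ravel()  # straight-line guess
res = minimize(cost, x_init, method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.success, cost(res.x))
```

    Finer time discretization (larger N) improves trajectory fidelity but grows the decision vector, which is exactly the large-scale trade-off the abstract refers to.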

    Energy Efficient Personalized Hand-Gesture Recognition with Neuromorphic Computing

    Hand gestures are a form of non-verbal communication used in social interaction, and recognizing them is therefore required for more natural human-robot interaction. Neuromorphic (brain-inspired) computing offers a low-power platform for spiking neural networks (SNNs), which can be used for the classification and recognition of gestures. This article introduces preliminary results of a novel methodology for training spiking convolutional neural networks for hand-gesture recognition, so that a humanoid robot with integrated neuromorphic hardware can personalize the interaction with a user according to the shown hand gesture. It also describes other approaches that could improve the overall performance of the model.
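    The article's network details are not given here, but the core mechanism of any SNN classifier is the spiking neuron itself. Below is a self-contained sketch of a leaky integrate-and-fire (LIF) layer driven by rate-coded input, with a gesture class read out as the most active output neuron. Sizes, constants, and the random weights are illustrative assumptions.

```python
# Minimal sketch of the spiking mechanism behind such a model: a leaky
# integrate-and-fire (LIF) layer driven by rate-coded input. All sizes
# and constants are illustrative, not the article's actual network.
import numpy as np

rng = np.random.default_rng(0)

def lif_layer(input_rates, weights, steps=100, beta=0.9, threshold=1.0):
    """Simulate one LIF layer; returns per-neuron spike counts."""
    n_out = weights.shape[0]
    mem = np.zeros(n_out)             # membrane potentials
    spike_counts = np.zeros(n_out)
    for _ in range(steps):
        # Rate coding: each input spikes with probability equal to its rate.
        spikes_in = (rng.random(input_rates.size) < input_rates).astype(float)
        mem = beta * mem + weights @ spikes_in   # leak, then integrate
        fired = mem >= threshold
        spike_counts += fired
        mem[fired] = 0.0                         # reset after a spike
    return spike_counts

# Example: classify a 64-dim gesture feature vector into 5 gesture classes;
# the prediction is whichever output neuron spikes most often.
features = rng.random(64)              # stand-in for extracted features
W = rng.normal(0, 0.1, size=(5, 64))   # untrained weights, for illustration
print(np.argmax(lif_layer(features, W)))
```

    The energy argument follows from this event-driven style: on neuromorphic hardware, computation happens only when a spike occurs, rather than on every multiply-accumulate of a dense forward pass.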

    A Multimodal Dataset for Object Model Learning from Natural Human-Robot Interaction

    Learning object models in the wild from natural human interactions is an essential ability for robots to perform general tasks. In this paper we present a robocentric multimodal dataset addressing this key challenge. Our dataset focuses on interactions where the user teaches new objects to the robot in various ways. It contains synchronized recordings of visual (3 cameras) and audio data, which provide a challenging evaluation framework for different tasks. Additionally, we present an end-to-end system that learns object models using object patches extracted from the recorded natural interactions. Our proposed pipeline follows these steps: (a) recognizing the interaction type, (b) detecting the object that the interaction is focusing on, and (c) learning the models from the extracted data. Our main contribution lies in the steps towards identifying the target object patches in the images. We demonstrate the advantages of combining language and visual features for interaction recognition, and we use multiple views to improve the object modelling. Our experimental results show that our dataset is challenging due to occlusions and domain change with respect to typical object-learning frameworks. The performance of common out-of-the-box classifiers trained on our data is low; we demonstrate that our algorithm outperforms such baselines.
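    To make the three-step pipeline concrete, here is a hypothetical skeleton of steps (a)-(c). Every function and field name below is an illustrative stand-in; the placeholder decisions do not reflect the authors' actual classifiers or detectors.

```python
# Hypothetical skeleton (not the authors' code) of the three-step pipeline:
# (a) recognize the interaction type from speech + vision, (b) detect the
# object patch the interaction focuses on, (c) update the object model.
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    name: str
    patches: list = field(default_factory=list)  # patches from all views

    def update(self, patch):
        self.patches.append(patch)

def recognize_interaction(audio_features, visual_features):
    # (a) Combine language and visual cues; a real system would classify
    # fused features. Here, a trivial placeholder decision on the audio.
    return "teaching" if audio_features.get("has_object_word") else "other"

def detect_target_patch(frames):
    # (b) Locate the object the user is showing; a real system would use
    # attention/pointing cues across the three synchronized cameras.
    return frames[0]  # placeholder: crop from the first camera

def learn_from_interaction(audio_features, frames, models):
    if recognize_interaction(audio_features, frames) != "teaching":
        return
    patch = detect_target_patch(frames)
    name = audio_features.get("object_word", "unknown")
    models.setdefault(name, ObjectModel(name)).update(patch)  # (c)

# Example: the user shows a mug and names it; the model bank is updated.
models = {}
learn_from_interaction({"has_object_word": True, "object_word": "mug"},
                       ["cam0_crop", "cam1_crop", "cam2_crop"], models)
print(models["mug"].patches)
```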