
    Service Humanoid Robotics: Review and Design of A Novel Bionic-Companionship Framework

    At present, industrial robotics focuses mainly on motion control and vision, whereas Humanoid Service Robots (HSRs) are increasingly being investigated by researchers and practitioners in the field of speech interaction. The quality of human-robot interaction (HRI) has become a pressing concern in academia. This paper proposes a novel interactive framework suitable for HSRs. The proposed framework is grounded in a novel integration of Trevarthen's Companionship Theory and neural image generation algorithms from computer vision. By integrating image-to-natural-language generation, the robot can communicate about its environment and interact more effectively with its stakeholders, shifting from mere interaction to bionic companionship. In addition, the article reviews research on neural image generation algorithms and critically surveys applications of these architectures in robotics. We believe the new interactive bionic-companionship framework can enable HSRs to develop further toward becoming robot companions.
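    As one possible instantiation of the framework's image-to-natural-language step, the sketch below captions the robot's camera view with an off-the-shelf model and folds the caption into a spoken response. The model choice (BLIP via Hugging Face transformers), the helper name describe_scene, and the dialogue wiring are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: the robot captions what it sees and grounds
# its dialogue in the scene. Model choice and wiring are assumptions.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def describe_scene(image_path: str) -> str:
    """Generate a natural-language description of the robot's camera view."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)

# The companion robot can then comment on its surroundings:
caption = describe_scene("camera_frame.jpg")  # hypothetical input frame
print(f"I can see {caption}. Shall I help you with that?")
```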

    Action-oriented Scene Understanding

    In order to allow robots to act autonomously, it is crucial that they not only describe their environment accurately but also identify how to interact with their surroundings. While we have witnessed tremendous progress in descriptive computer vision, approaches that explicitly target action are far scarcer. This cumulative dissertation approaches the goal of interpreting visual scenes "in the wild" with respect to the actions implied by the scene. We call this approach action-oriented scene understanding. It involves identifying and judging opportunities for interaction with constituents of the scene (e.g. objects and their parts) as well as understanding object functions and how interactions will impact the future. All of these aspects are addressed on three levels of abstraction: elements, perception and reasoning.

    On the elementary level, we investigate semantic and functional grouping of objects by analyzing annotated natural image scenes. We compare object label-based and visual context definitions with respect to their suitability for generating meaningful object class representations. Our findings suggest that representations generated from visual context are on par in terms of semantic quality with those generated from large quantities of text.

    The perceptive level concerns action identification. We propose a system to identify possible interactions (affordances) for robots and humans with the environment at the pixel level using state-of-the-art machine learning methods. Pixel-wise part annotations of images are transformed into 12 affordance maps. Using these maps, a convolutional neural network is trained to densely predict affordance maps from unknown RGB images. In contrast to previous work, this approach operates exclusively on RGB images during both training and testing, and yet achieves state-of-the-art performance (a minimal illustrative sketch follows this abstract).

    At the reasoning level, we extend the question from what actions are possible to what actions are plausible. For this, we gathered a dataset of household images associated with human ratings of the likelihood of eight different actions. Based on the judgements provided by the human raters, we train convolutional neural networks to generate plausibility scores for unseen images. Furthermore, having considered only static scenes up to this point, we propose a system that takes video input and predicts plausible future actions. Since this requires careful identification of relevant features in the video sequence, we analyze this aspect in detail using a synthetic dataset and several state-of-the-art video models. We identify feature learning as a major obstacle to anticipation in natural video data.

    The presented projects analyze the role of action in scene understanding from various angles and in multiple settings while highlighting the advantages of an action-oriented perspective. We conclude that action-oriented scene understanding can augment classic computer vision in many real-life applications, in particular robotics.
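    To make the perceptive-level approach concrete, here is a minimal, hypothetical PyTorch sketch of a fully convolutional encoder-decoder that densely predicts 12 affordance maps from a single RGB image. The architecture, layer sizes, and class name AffordanceNet are assumptions for illustration; the dissertation's actual network is not reproduced here.

```python
import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    """Hypothetical encoder-decoder for dense affordance prediction.
    Maps an RGB image to 12 per-pixel affordance score maps."""

    def __init__(self, num_affordances: int = 12):
        super().__init__()
        # Encoder: two stride-2 convolutions reduce resolution by 4x.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions restore the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_affordances, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # Returns per-pixel logits, one channel per affordance class.
        return self.decoder(self.encoder(rgb))

model = AffordanceNet()
frame = torch.randn(1, 3, 224, 224)      # one RGB image
logits = model(frame)                    # shape: (1, 12, 224, 224)
affordance_maps = torch.sigmoid(logits)  # independent per-pixel scores
```

    Treating affordances as independent per-pixel labels (sigmoid rather than softmax) reflects that a single surface can afford several actions at once, e.g. both support and obstruction.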