
    Social activity recognition based on probabilistic merging of skeleton features with proximity priors from RGB-D data

    Social activity expressed through body motion is a key feature of non-verbal and physical behavior, serving as a communicative signal in social interaction between individuals. Recognizing social activity is therefore important for studying both human-human communication and human-robot interaction. On this basis, this research has three goals: (1) to recognize social behavior (e.g. human-human interaction) using a probabilistic approach that merges spatio-temporal features from individual bodies with social features from the relationship between two individuals; (2) to learn priors based on the physical proximity between individuals during an interaction, following proxemics theory, to feed a probabilistic ensemble of activity classifiers; and (3) to provide a public dataset of RGB-D recordings of social daily activities, including risk situations, useful for testing assisted-living approaches, since this type of dataset is still missing. Results show that the proposed approach, which merges features with different semantics together with proximity priors, improves classification performance in terms of precision, recall and accuracy when compared with approaches that employ alternative strategies.
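    The merging step described above can be read as naive-Bayes-style fusion: per-cue likelihoods for each activity are multiplied by a prior conditioned on the proxemics zone of the inter-person distance. The Python sketch below illustrates that idea; the activity set, zone thresholds (taken from Hall's proxemics theory), and prior values are illustrative assumptions, not the paper's actual numbers.

```python
import numpy as np

# Illustrative activity set and proxemics priors (NOT the paper's values).
ACTIVITIES = ["handshake", "conversation", "help_walking"]
PROXEMICS_PRIOR = {
    "intimate": np.array([0.5, 0.1, 0.4]),   # < 0.45 m
    "personal": np.array([0.4, 0.4, 0.2]),   # < 1.2 m
    "social":   np.array([0.1, 0.8, 0.1]),   # >= 1.2 m
}

def zone(distance_m: float) -> str:
    """Map an inter-person distance to a proxemics zone (Hall's thresholds)."""
    if distance_m < 0.45:
        return "intimate"
    return "personal" if distance_m < 1.2 else "social"

def merge(individual_lik: np.ndarray, social_lik: np.ndarray,
          distance_m: float) -> np.ndarray:
    """Fuse skeleton-based and relational likelihoods with a proximity prior.

    Each argument holds P(features | activity), one entry per activity;
    the product with the zone prior is renormalized into a posterior.
    """
    posterior = individual_lik * social_lik * PROXEMICS_PRIOR[zone(distance_m)]
    return posterior / posterior.sum()

# Both cues weakly favor "conversation"; a 2.0 m distance (social zone)
# sharpens the decision.
print(merge(np.array([0.3, 0.4, 0.3]), np.array([0.2, 0.5, 0.3]), 2.0))
```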

    Learning Human-Robot Collaboration Insights through the Integration of Muscle Activity in Interaction Motion Models

    Recent progress in human-robot collaboration makes fast and fluid interactions possible, even when human observations are partial and occluded. Methods like Interaction Probabilistic Movement Primitives (Interaction ProMPs) model human trajectories through motion-capture systems. However, such a representation does not properly model tasks where similar motions handle different objects; under current approaches, a robot would not adapt its pose and dynamics for proper handling. We integrate Electromyography (EMG) into the Interaction ProMP framework and use muscular signals to augment the human observation representation. The contribution of our paper is increased task discernment when trajectories are similar but tools differ and require the robot to adjust its pose for proper handling. Interaction ProMPs are used with an augmented observation vector that integrates muscle activity. Augmented time-normalized trajectories are used in training to learn correlation parameters, and robot motions are predicted by finding the best weight combination and temporal scaling for a task. Collaborative single-task scenarios with similar motions but different objects were used and compared: in one experiment only joint angles were recorded, while in the other EMG signals were additionally integrated. Task recognition was computed for both. Observation state vectors augmented with EMG signals were able to completely identify differences across tasks, while the baseline method failed every time. Integrating EMG signals into collaborative tasks significantly increases the ability of the system to recognize nuances in the tasks that are otherwise imperceptible, by up to 74.6% in our studies. Furthermore, the integration of EMG signals for collaboration also opens the door to a wide class of human-robot physical interactions based on haptic communication that has been largely unexploited in the field.
    Comment: 7 pages, 2 figures, 2 tables. As submitted to Humanoids 201
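    A minimal sketch of the augmentation idea follows: each demonstration becomes a time-normalized trajectory whose columns span both joint angles and EMG channels, and per-dimension basis-function weights are fit by ridge regression, as in standard ProMPs. The dimensions, basis count, and random demo data are assumptions for illustration; the full Interaction ProMP machinery (weight distributions, conditioning, temporal scaling) is omitted.

```python
import numpy as np

def rbf_basis(T: int, n_basis: int = 10, width: float = 0.02) -> np.ndarray:
    """Normalized Gaussian basis functions on a unit time axis (T x n_basis)."""
    t = np.linspace(0.0, 1.0, T)[:, None]
    centers = np.linspace(0.0, 1.0, n_basis)[None, :]
    phi = np.exp(-(t - centers) ** 2 / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def promp_weights(trajectory: np.ndarray, n_basis: int = 10,
                  ridge: float = 1e-6) -> np.ndarray:
    """Project a (T x D) trajectory onto basis weights via ridge regression.

    D covers joint angles *and* EMG channels, so the weight statistics
    learned across demonstrations correlate motion with muscle activity.
    """
    phi = rbf_basis(trajectory.shape[0], n_basis)
    gram = phi.T @ phi + ridge * np.eye(n_basis)
    return np.linalg.solve(gram, phi.T @ trajectory)   # (n_basis x D)

# Illustrative demonstration: 7 joint angles + 4 EMG channels, 100 steps.
rng = np.random.default_rng(0)
demo = np.hstack([rng.standard_normal((100, 7)),           # joint angles
                  np.abs(rng.standard_normal((100, 4)))])  # rectified EMG
print(promp_weights(demo).shape)  # (10, 11): weights of the augmented vector
```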

    Semantic grid map building

    A conventional Occupancy Grid (OG) map, which contains occupied and unoccupied cells, can be enhanced by incorporating semantic labels of places to build a semantic grid map. A map with semantic information is more understandable to humans and can therefore be used for efficient communication, leading to effective human-robot interaction. This paper proposes a new approach that enables a robot to explore an indoor environment, build an occupancy grid map, and then perform semantic labeling to generate a semantic grid map. Geometrical information is obtained by classifying places into three semantic classes based on data collected by a 2D laser range finder. Classification is achieved by implementing logistic regression as a multi-class classifier, and the results are combined in a probabilistic framework. Labeling accuracy is further improved by topological correction of the robot-position map, an intermediate product, and by an outlier-removal process on the semantic grid map. Simulation on data collected in a university environment shows appealing results.
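    The classification stage can be sketched directly: hand-crafted features computed per laser scan feed a multinomial logistic regression over the three place classes, and the per-scan posteriors that land on the same grid cell are combined probabilistically. The feature set, class names, and random training data below are assumptions; the paper's actual features and combination rule may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def scan_features(ranges: np.ndarray) -> np.ndarray:
    """Toy per-scan features: mean range, range spread, and gap count."""
    return np.array([ranges.mean(), ranges.std(),
                     float((np.abs(np.diff(ranges)) > 0.5).sum())])

rng = np.random.default_rng(0)
scans = rng.random((300, 360)) * 10.0   # 300 fake 360-beam scans (meters)
X = np.array([scan_features(s) for s in scans])
y = rng.integers(0, 3, 300)             # stand-in labels: room/corridor/doorway

clf = LogisticRegression(max_iter=500).fit(X, y)  # multinomial by default

# Probabilistic combination: average the class posteriors of all scans
# observing one cell, then take the most likely semantic label.
new_scans = rng.random((5, 360)) * 10.0
posteriors = clf.predict_proba(np.array([scan_features(s) for s in new_scans]))
print(posteriors.mean(axis=0).argmax())  # semantic label for the cell
```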

    Language-based sensing descriptors for robot object grounding

    In this work, we consider an autonomous robot that is required to understand commands given by a human through natural language. Specifically, we assume that the robot is provided with an internal representation of the environment; however, this representation is unknown to the user. In this context, we address the problem of allowing a human to understand the robot's internal representation through dialog. To this end, we introduce the concept of sensing descriptors. Such representations are used by the robot to recognize unknown object properties in the given commands and to warn the user about them. Additionally, we show how these properties can be learned over time by leveraging past interactions, in order to enhance the grounding capabilities of the robot.
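    The sensing-descriptor idea lends itself to a very small sketch: each descriptor maps a property word that may appear in a command to a perceptual test the robot can actually run on its internal representation, and properties without a descriptor trigger a warning to the user. All names, property tests, and thresholds below are hypothetical illustrations, not the paper's implementation.

```python
# Hypothetical sensing descriptors: property word -> perceptual test.
SENSING_DESCRIPTORS = {
    "red":   lambda obj: obj.get("hue") == "red",
    "large": lambda obj: obj.get("volume_m3", 0.0) > 0.01,  # arbitrary cutoff
}

def ground(command_properties, objects):
    """Return objects matching all *known* properties; flag unknown ones."""
    unknown = [p for p in command_properties if p not in SENSING_DESCRIPTORS]
    if unknown:
        print(f"Warning: no sensing descriptor for {unknown}; asking the user.")
    known = [p for p in command_properties if p in SENSING_DESCRIPTORS]
    return [o for o in objects
            if all(SENSING_DESCRIPTORS[p](o) for p in known)]

world = [{"id": "cup1", "hue": "red",  "volume_m3": 0.0003},
         {"id": "box1", "hue": "blue", "volume_m3": 0.02}]
print(ground(["red", "fragile"], world))  # warns about "fragile", returns cup1
```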

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans form a symbol system and acquire semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important both for understanding human social interactions and for developing robots that can smoothly communicate with human users in the long term; the embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, one that is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.
    Comment: submitted to Advanced Robotics
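    As a toy stand-in for the multimodal categorization the survey describes, one can concatenate per-object feature vectors from several modalities and cluster them without labels, so that categories emerge from co-occurring sensory patterns. Real SER systems typically use Bayesian models such as multimodal LDA rather than the k-means used in this illustrative sketch; all data here is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
visual   = rng.random((50, 8))   # e.g. color/shape histograms
haptic   = rng.random((50, 4))   # e.g. hardness and weight proxies
auditory = rng.random((50, 6))   # e.g. impact-sound spectra

# Concatenate modalities per object and cluster with no supervision.
X = StandardScaler().fit_transform(np.hstack([visual, haptic, auditory]))
categories = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(categories[:10])           # an emergent category index per object
```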