27 research outputs found

    Improved estimation of hidden Markov model parameters from multiple observation sequences

    Get PDF
    The huge popularity of hidden Markov models in pattern recognition is due to the ability to 'learn' model parameters from an observation sequence through Baum-Welch and other re-estimation procedures. In the case of HMM parameter estimation from an ensemble of observation sequences, rather than a single sequence, we require techniques for finding the parameters which maximize the likelihood of the estimated model given the entire set of observation sequences. The importance of this study is that HMMs with parameters estimated from multiple observations are shown to be many orders of magnitude more probable than HMMs learned from any single observation sequence, so the effectiveness of HMM 'learning' is greatly enhanced. In this paper, we present techniques that usually find models significantly more likely than Rabiner's well-known method on both seen and unseen sequences.
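The core idea the abstract describes (estimating parameters that maximize the likelihood over the whole ensemble of sequences) can be illustrated with the standard pooled-statistics form of Baum-Welch: expected counts are accumulated over every sequence before renormalizing, rather than re-estimating from each sequence in isolation. The following is a minimal sketch of that idea with invented toy parameters; it is not the improved techniques presented in the paper.

```python
def forward(pi, A, B, obs):
    # alpha[t][i] = P(o_1..o_t, state_t = i)
    N = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for t in range(1, len(obs)):
        alpha.append([B[j][obs[t]] * sum(alpha[-1][i] * A[i][j] for i in range(N))
                      for j in range(N)])
    return alpha

def backward(A, B, obs, N):
    # beta[t][i] = P(o_{t+1}..o_T | state_t = i)
    beta = [[1.0] * N]
    for t in range(len(obs) - 2, -1, -1):
        beta.insert(0, [sum(A[i][j] * B[j][obs[t + 1]] * beta[0][j] for j in range(N))
                        for i in range(N)])
    return beta

def pooled_emission_update(pi, A, B, sequences, n_symbols):
    """One Baum-Welch emission-matrix update, pooling expected counts
    over ALL observation sequences before renormalizing."""
    N = len(pi)
    num = [[0.0] * n_symbols for _ in range(N)]
    den = [0.0] * N
    for obs in sequences:
        alpha, beta = forward(pi, A, B, obs), backward(A, B, obs, N)
        for t, o in enumerate(obs):
            norm = sum(alpha[t][i] * beta[t][i] for i in range(N))
            for i in range(N):
                gamma = alpha[t][i] * beta[t][i] / norm  # P(state_t = i | obs)
                num[i][o] += gamma
                den[i] += gamma
    return [[num[i][k] / den[i] for k in range(n_symbols)] for i in range(N)]
```

The single-sequence variant is the special case where `sequences` holds one list; pooling is what lets the ensemble constrain the estimate.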

    RoboTalk - Prototyping a Humanoid Robot as Speech-to-Sign Language Translator

    Get PDF
    Information science has mostly focused on sign language recognition. The current study instead examines whether humanoid robots might be fruitful avatars for sign language translation. After a review of research into sign language technologies, a survey of 50 deaf participants regarding their preferences reveals that humanoid robots represent a promising option. The authors also 3D-printed two arms of a humanoid robot, InMoov, with special joints for the index finger and thumb that provide additional degrees of freedom for expressing sign language. They programmed the robotic arms with German sign language and integrated them with a voice recognition system. This study thus provides insights into human–robot interaction in the context of sign language translation; it also contributes ideas for the enhanced inclusion of deaf people into society.

    Prosody-Based Adaptive Metaphoric Head and Arm Gestures Synthesis in Human Robot Interaction

    Get PDF
    In human-human interaction, the process of communication can be established through three modalities: verbal, non-verbal (i.e., gestures), and/or para-verbal (i.e., prosody). The linguistic literature shows that para-verbal and non-verbal cues are naturally aligned and synchronized; however, the mechanism behind this synchronization is still unexplored. The difficulty in coordinating prosody with metaphoric head-arm gestures concerns the conveyed meaning, the way gestures are performed with respect to prosodic characteristics, their relative temporal arrangement, and their coordinated organization within the phrasal structure of the utterance. In this research, we focus on the mapping between head-arm gestures and speech prosodic characteristics in order to generate robot behavior that adapts to the interacting human's emotional state. Prosody patterns and the motion curves of head-arm gestures are aligned separately into parallel Hidden Markov Models (HMMs). The mapping between speech and head-arm gestures is based on Coupled Hidden Markov Models (CHMMs), which can be seen as a multi-stream collection of HMMs characterizing the segmented prosody and head-arm gesture data. An emotional-state-based audio-video database has been created to validate this study. The results show the effectiveness of the proposed methodology.
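The coupling that makes a CHMM more than two parallel HMMs can be sketched in one prediction step: each stream's next state is conditioned on the previous states of *both* streams, which is what lets prosody and gesture influence one another. The 2-state tables below are invented for illustration and are not the authors' trained prosody/gesture model.

```python
def chmm_predict(belief, A_p, A_g):
    """One CHMM prediction step over the joint (prosody, gesture) state.

    belief[i][j] = P(prosody state i, gesture state j) at time t
    A_p[i][j][k] = P(prosody -> k | prosody = i, gesture = j)
    A_g[i][j][l] = P(gesture -> l | prosody = i, gesture = j)
    """
    Np, Ng = len(belief), len(belief[0])
    new = [[0.0] * Ng for _ in range(Np)]
    for i in range(Np):
        for j in range(Ng):
            for k in range(Np):
                for l in range(Ng):
                    # The CHMM factorization: streams transition conditionally
                    # independently given BOTH previous states.
                    new[k][l] += belief[i][j] * A_p[i][j][k] * A_g[i][j][l]
    return new
```

Setting `A_p[i][j]` to depend only on `i` (and `A_g[i][j]` only on `j`) recovers two uncoupled HMMs, which is why the cross-dependence is the interesting part.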

    Prospects of Implementing a Vhand Glove as a Robotic Controller

    Get PDF
    The Tower is an official publication of the Georgia Tech Office of Student Media and is sponsored by the Undergraduate Research Opportunities Program and the Georgia Tech Library. This article appeared in Volume 3, pages 43-51. There are numerous approaches and systems for implementing a robot controller. This project investigates the potential of using the VHand Motion Capturing Glove, developed by DGTech, as a means of controlling a programmable robot. A GUI-based application was used to identify and reflect the extended or closed state of each finger on the glove hand. A calibration algorithm was added to the existing application source code to increase the precision with which extended or closed finger positions are recognized and to improve the efficiency of hand-signal interpretation. Furthermore, the scan rate and sample size of the bit signal coming from the glove were adjusted to improve the accuracy of recognizing dynamic hand signals, i.e., defined signals containing sequential finger positions. An attempt was made to link the VHand glove to a Scribbler robot by writing the recognized hand signals to a text file that was simultaneously read by a Python-based application. The Python application then transmitted commands to the Scribbler robot via a Bluetooth serial link. However, real-time communication between the VHand glove and the Scribbler robot proved difficult to achieve, most likely due to unidentified runtime errors in the VHand signal interpretation code.
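The file-based handoff the abstract describes (recognized hand signals appended to a text file and read concurrently by a Python application) could look roughly like the sketch below. The gesture names, command strings, and polling scheme are all hypothetical, and the actual Bluetooth transmission to the Scribbler is left out since the real command protocol is not given in the abstract.

```python
# Hypothetical gesture -> robot-command table; these names are invented
# for illustration and do not come from the project's source code.
GESTURE_TO_COMMAND = {
    "all_extended": "forward",
    "all_closed": "stop",
    "index_only": "turn_left",
    "thumb_only": "turn_right",
}

def read_new_gestures(path, offset):
    """Return gestures appended to the file since `offset`,
    plus the new offset to resume from on the next poll."""
    with open(path) as f:
        f.seek(offset)
        lines = [ln.strip() for ln in f.readlines() if ln.strip()]
        return lines, f.tell()

def to_commands(gestures):
    # Unrecognized gestures are dropped rather than sent to the robot.
    return [GESTURE_TO_COMMAND[g] for g in gestures if g in GESTURE_TO_COMMAND]
```

In the real setup each command would then be written to the Bluetooth serial port; tracking the file offset between polls is what lets the reader pick up only the signals written since its last pass.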

    Stochastic Gesture Production and Recognition Model for a Humanoid Robot

    Get PDF
    Robot Programming by Demonstration (PbD) aims at developing adaptive and robust controllers that enable a robot to learn new skills by observing and imitating a human demonstration. While the vast majority of PbD work has focused on systems that learn a specific subset of tasks, our work addresses the recognition, generalization, and reproduction of tasks in a unified mathematical framework. The approach abstracts away from the task and dataset at hand to tackle the general issue of learning which features are the relevant ones to imitate. In this paper, we apply this framework to determining the optimal strategy for reproducing arbitrary gestures. The model is tested and validated on a humanoid robot, using recordings of the kinematics of the demonstrator's arm motion. The hand path and joint angle trajectories are encoded in Hidden Markov Models, and the system uses the optimal prediction of the models to generate the reproduction of the motion.
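One simple way to picture "using the optimal prediction of the models to generate the reproduction" is to regenerate a trajectory by following the most probable state path through a trained HMM and emitting each state's mean joint configuration. The greedy sketch below, with invented parameters, is a simplification for illustration, not a faithful implementation of the paper's method.

```python
def most_probable_path(pi, A, T):
    """Greedily follow the locally most probable state transitions
    for T steps, starting from the most likely initial state."""
    path = [max(range(len(pi)), key=lambda i: pi[i])]
    for _ in range(T - 1):
        prev = path[-1]
        path.append(max(range(len(A)), key=lambda j: A[prev][j]))
    return path

def reproduce(pi, A, means, T):
    # means[s] is the mean joint configuration learned for state s;
    # emitting it per visited state yields the reproduced trajectory.
    return [means[s] for s in most_probable_path(pi, A, T)]
```

A real system would typically smooth between state means (or use the full predictive distribution) rather than emit them piecewise; the sketch only shows how the encoded states drive the regenerated motion.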

    Towards Guaranteeing Safe and Efficient Human-Robot Collaboration Using Human Intent Prediction

    Full text link
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/97120/1/AIAA2012-5317.pd

    Human Intent Prediction Using Markov Decision Processes

    Full text link
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/97080/1/AIAA2012-2445.pd

    Face and Gesture Recognition for Human-Robot Interaction

    Get PDF

    Monitoring and Managing Interaction Patterns in Human-Robot Interaction

    Get PDF
    Nowadays, one of the most challenging problems in Human-Robot Interaction (HRI) is making robots able to understand humans well enough to accomplish tasks in human environments. HRI plays a very different role across robotics fields: while autonomous robots do not require a complex HRI system, it is of vital importance for service robots. The goal of this thesis is to study whether the behavioural patterns that users unconsciously apply when interacting with a robot can be used to recognize the users' intentions in a particular situation. To carry out this study, a prototype has been developed to test, in an automatic and objective way, whether the interaction patterns performed by several users in the area of service robots are useful for recognizing their intentions and disambiguating unclear situations. By analysing the verbal and non-verbal communication that the user unconsciously applies when interacting with a robot, we aim to determine automatically what the user intends.
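A very rough illustration of the disambiguation idea, not the thesis's prototype: score each candidate intent by the fraction of its characteristic verbal and non-verbal cues that were observed, and rank intents by that score. All cue and intent names below are invented for the example.

```python
# Hypothetical intent -> characteristic-cue table (invented names).
INTENT_CUES = {
    "hand_over_object": {"reach_arm", "gaze_at_robot", "say_here"},
    "request_object":   {"open_palm", "gaze_at_object", "say_give"},
}

def rank_intents(observed_cues):
    """Rank candidate intents by the fraction of their cues observed."""
    scores = {
        intent: len(cues & observed_cues) / len(cues)
        for intent, cues in INTENT_CUES.items()
    }
    return sorted(scores, key=scores.get, reverse=True)
```

Combining several weak cues this way is what lets an ambiguous situation (where no single cue is decisive) still be resolved toward the most consistent intent.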