6 research outputs found

    Towards Developing an Effective Hand Gesture Recognition System for Human Computer Interaction: A Literature Survey

    Gesture recognition is the mathematical analysis of the movement of body parts (hand or face) performed with the help of a computing device. It helps computers understand human body language and builds a more powerful link between humans and machines. Many research works have been developed in the field of hand gesture recognition, each achieving different recognition accuracies on different hand gesture datasets; however, most groups lack sufficient insight to carry these achievements over to real-time datasets. Under such circumstances, it is essential to have complete knowledge of the recognition methods used in hand gesture recognition, their strengths and weaknesses, and the development criteria as well. Many reports declare their work to be better, but a complete comparative analysis is lacking in these works. In this paper, we provide a study of representative techniques for hand gesture recognition, survey recognition methods, and present a brief introduction to hand gesture recognition. The main objective of this work is to highlight the position of the various recognition techniques, which can indirectly help in developing new techniques for solving the open issues in hand gesture recognition systems. Moreover, we present a concise description of hand gesture recognition systems, their recognition methods, and directions for future research.
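    A common family of recognition methods covered in such surveys is template matching on hand landmarks. The sketch below is purely illustrative and not taken from the survey: the gesture names, landmark coordinates, and the nearest-template rule are all hypothetical assumptions, shown only to make the idea of "recognition method" concrete.

```python
import math

# Hypothetical templates: each gesture is a list of normalized fingertip
# (x, y) positions. Real systems would derive these from a hand-tracking model.
TEMPLATES = {
    "open_palm": [(0.1, 0.9), (0.3, 1.0), (0.5, 1.0), (0.7, 1.0), (0.9, 0.9)],
    "fist":      [(0.3, 0.4), (0.4, 0.4), (0.5, 0.4), (0.6, 0.4), (0.7, 0.4)],
}

def distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding landmark points."""
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b)) / len(pose_a)

def recognize(pose):
    """Return the name of the stored template nearest to the observed pose."""
    return min(TEMPLATES, key=lambda name: distance(pose, TEMPLATES[name]))

observed = [(0.12, 0.88), (0.31, 0.98), (0.52, 0.99), (0.69, 0.97), (0.88, 0.9)]
print(recognize(observed))  # → open_palm
```

    Template matching is only one of the method families a survey like this compares against, e.g., learned classifiers trained on large gesture datasets.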

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Gesture and Speech in Interaction - 4th edition (GESPIN 4)

    The fourth edition of Gesture and Speech in Interaction (GESPIN) was held in Nantes, France. With more than 40 papers, these proceedings show just what a flourishing field of enquiry gesture studies continues to be. The keynote speeches of the conference addressed three different aspects of multimodal interaction: gesture and grammar, gesture acquisition, and gesture and social interaction. In a talk entitled Qualities of event construal in speech and gesture: Aspect and tense, Alan Cienki presented an ongoing research project on narratives in French, German and Russian, a project that focuses especially on the verbal and gestural expression of grammatical tense and aspect in narratives in the three languages. Jean-Marc Colletta's talk, entitled Gesture and Language Development: towards a unified theoretical framework, described the joint acquisition and development of speech and early conventional and representational gestures. In Grammar, deixis, and multimodality between code-manifestation and code-integration, or why Kendon's Continuum should be transformed into a gestural circle, Ellen Fricke proposed a revisited grammar of noun phrases that integrates gestures as part of the semiotic and typological codes of individual languages. From a pragmatic and cognitive perspective, Judith Holler explored the use of gaze and hand gestures as means of organizing turns at talk as well as establishing common ground, in a presentation entitled On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction.
    Among the talks and posters presented at the conference, the vast majority of topics related, quite naturally, to gesture and speech in interaction, understood both in terms of the mapping of units in different semiotic modes and of the use of gesture and speech in social interaction. Several presentations explored the effects of impairments (such as diseases or the natural ageing process) on gesture and speech. The communicative relevance of gesture and speech and audience design in natural interactions, as well as in more controlled settings like television debates and reports, was another topic addressed during the conference. Some participants also presented research on first and second language learning, while others discussed the relationship between gesture and intonation. While most participants presented research on gesture and speech from an observer's perspective, be it in semiotics or pragmatics, some nevertheless focused on another important aspect: the cognitive processes involved in language production and perception. Last but not least, participants also presented talks and posters on the computational analysis of gestures, whether involving external devices (e.g. mocap, Kinect) or concerning the use of specially designed computer software for the post-treatment of gestural data. Importantly, new links were made between semiotics and mocap data.

    Safe and Efficient Robot Action Choice Using Human Intent Prediction in Physically-Shared Space Environments.

    Emerging robotic systems are capable of autonomously planning and executing well-defined tasks, particularly when the environment can be accurately modeled. Robots supporting human space exploration must be able to safely interact with human astronaut companions during intravehicular and extravehicular activities. Given a shared workspace, efficiency can be gained by leveraging robotic awareness of its human companion. This dissertation presents a modular architecture that allows a human and robotic manipulator to efficiently complete independent sets of tasks in a shared physical workspace without the robot requiring oversight or situational awareness from its human companion. We propose that a robot requires four capabilities to act safely and optimally with awareness of its companion: sense the environment and the human within it; translate sensor data into a form useful for decision-making; use this data to predict the human's future intent; and then use this information to inform its action choice based on the robot's goals and safety constraints. We first present a series of human subject experiments demonstrating that human intent can help a robot predict and avoid conflict, and that sharing the workspace need not degrade human performance so long as the manipulator does not distract or introduce conflict. We describe an architecture that relies on Markov Decision Processes (MDPs) to support robot decision-making. A key contribution of our architecture is its decomposition of the decision problem into two parts: human intent prediction (HIP) and robot action choice (RAC). This decomposition is made possible by an assumption that the robot's actions will not influence human intent. Presuming an observer that can feed back human actions in real time, we leverage the well-known space environment and task scripts astronauts rehearse in advance to devise models for human intent prediction and robot action choice.
We describe a series of case studies for HIP and RAC using a minimal set of state attributes, including an abbreviated action-history. MDP policies are evaluated in terms of model fitness and safety/efficiency performance tradeoffs. Simulation results indicate that incorporation of both observed and predicted human actions improves robot action choice. Future work could extend to more general human-robot interaction.
    PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/107160/1/cmcghan_1.pd
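    The HIP/RAC decomposition described in the abstract can be sketched as a two-stage computation: first predict a distribution over the human's intent from observed actions, then choose the robot action maximizing expected reward under that distribution. The sketch below is purely illustrative and not the dissertation's model: the action names, zones, probabilities, and rewards are hypothetical, and the HIP stage is reduced to a toy lookup table keyed on the last observed human action.

```python
# HIP stage (hypothetical): probability the human will occupy each workspace
# zone next, conditioned on the last observed human action.
HIP_MODEL = {
    "reach_left":  {"zone_left": 0.8, "zone_right": 0.2},
    "reach_right": {"zone_left": 0.1, "zone_right": 0.9},
}

# RAC stage (hypothetical): reward for each (robot action, human zone) pair.
# Negative values encode the safety cost of working where the human likely is.
RAC_REWARD = {
    ("work_left", "zone_left"):  -5.0,
    ("work_left", "zone_right"):  2.0,
    ("work_right", "zone_left"):  2.0,
    ("work_right", "zone_right"): -5.0,
    ("wait", "zone_left"):         0.0,
    ("wait", "zone_right"):        0.0,
}

def choose_action(last_human_action):
    """Pick the robot action maximizing expected reward under predicted intent."""
    intent = HIP_MODEL[last_human_action]                # HIP: predict intent
    actions = {a for a, _zone in RAC_REWARD}
    def expected(action):                                # RAC: expected reward
        return sum(p * RAC_REWARD[(action, zone)] for zone, p in intent.items())
    return max(actions, key=expected)

print(choose_action("reach_left"))  # → work_right
```

    The decomposition is valid here precisely because HIP_MODEL does not depend on the robot's chosen action, mirroring the abstract's assumption that robot actions do not influence human intent.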