Real-Time Head Gesture Recognition on Head-Mounted Displays using Cascaded Hidden Markov Models
Head gestures are a natural means of face-to-face communication between
people, but the recognition of head gestures in virtual reality, and the use
of head gestures as an interface for interacting with virtual avatars and
environments, have rarely been investigated. In the current study, we
present an approach for real-time head gesture recognition on head-mounted
displays using Cascaded Hidden Markov Models. We conducted two experiments to
evaluate our proposed approach. In experiment 1, we trained the Cascaded Hidden
Markov Models and assessed the offline classification performance using
collected head motion data. In experiment 2, we characterized the real-time
performance of the approach by estimating the latency to recognize a head
gesture with recorded real-time classification data. Our results show that the
proposed approach is effective in recognizing head gestures. The method can be
integrated into a virtual reality system as a head gesture interface for
interacting with virtual worlds.
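The abstract does not spell out the cascaded classifier itself, but the core idea of HMM-based gesture recognition — scoring an observation sequence against one model per gesture and picking the best — can be sketched as follows. The toy "nod"/"still" models, the discretized head-motion symbols, and all parameter values are illustrative assumptions, not the authors' trained models:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: initial state probabilities, A: state transition matrix,
    B: per-state emission probabilities over discrete symbols."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()          # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_p += np.log(c)
        alpha /= c
    return log_p

def classify(obs, models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two toy models over symbols {0: head down, 1: head up}:
# "nod" favors alternating states, "still" favors staying in one state.
B = np.array([[0.9, 0.1], [0.1, 0.9]])
GESTURE_MODELS = {
    "nod":   (np.array([0.5, 0.5]), np.array([[0.1, 0.9], [0.9, 0.1]]), B),
    "still": (np.array([0.5, 0.5]), np.array([[0.9, 0.1], [0.1, 0.9]]), B),
}
```

A real system would instead train one HMM per head gesture on recorded head-motion features and run the scoring over a sliding window of the live sensor stream.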
A vision-based approach for human hand tracking and gesture recognition.
Hand gesture interfaces have become an active topic in human-computer interaction (HCI). The use of hand gestures in a human-computer interface enables operators to interact with computer environments in a natural and intuitive manner. In particular, bare-hand interpretation frees users from the cumbersome devices typically required to communicate with computers, offering ease and naturalness in HCI. Meanwhile, virtual assembly (VA) applies virtual reality (VR) techniques to mechanical assembly, constructing computer tools that help product engineers plan, evaluate, optimize, and verify the assembly of mechanical systems without the need for physical objects. Traditional devices such as keyboards and mice are no longer adequate, however, because of their inefficiency in handling three-dimensional (3D) tasks, and special VR devices, such as data gloves, have been mandatory in VA. This thesis proposes a novel gesture-based interface for VA. It develops a hybrid approach that combines an appearance-based hand localization technique with a skin tone filter to support gesture recognition and hand tracking in 3D space. With this interface, bare hands become a convenient substitute for special VR devices. Experimental results demonstrate the flexibility and robustness that the proposed method brings to HCI.
Thesis (M.Sc.), University of Windsor (Canada), 2004. Adviser: Xiaobu Yuan. Dept. of Computer Science. Paper copy at Leddy Library (Call Number: Thesis2004 .L8). Source: Masters Abstracts International, Volume 43-03, page 0883.
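The thesis combines appearance-based hand localization with a skin tone filter. A minimal sketch of such a filter, using a classic rule-based RGB skin heuristic plus a centroid step for tracking, might look like this (the thresholds and the centroid tracker are illustrative assumptions, not the thesis's actual method):

```python
import numpy as np

def skin_mask(img):
    """Boolean skin mask for an HxWx3 uint8 RGB image using a rule-based
    RGB skin classifier; threshold values here are illustrative."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    return ((r > 95) & (g > 40) & (b > 20) &
            (mx - mn > 15) & (np.abs(r - g) > 15) & (r > g) & (r > b))

def hand_centroid(mask):
    """Centroid (x, y) of the skin pixels, a crude per-frame hand position;
    returns None when no skin pixels are found."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (xs.mean(), ys.mean())
```

In practice the skin mask would be intersected with the appearance-based detector's output before tracking, so that skin-colored background regions do not pull the estimate away from the hand.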
Gesture Based Control of Semi-Autonomous Vehicles
The objective of this investigation is to explore the use of hand gestures to control semi-autonomous vehicles, such as quadcopters, using realistic, physics-based simulations. This involves identifying natural gestures to control basic functions of a vehicle, such as maneuvering and onboard equipment operation, and building simulations using the Unity game engine to investigate preferred use of those gestures. In addition to creating a realistic operating experience, human factors associated with limitations on physical hand motion and information management are also considered in the simulation development process. Testing with external participants using a recreational quadcopter simulation built in Unity was conducted to assess the suitability of the simulation and preferences between a joystick approach and the gesture-based approach. Initial feedback indicated that the simulation represented the actual vehicle performance well and that the joystick was preferred over the gesture-based approach. Improvements in the gesture-based control are documented as additional features are added to the simulation, such as basic maneuver training and additional vehicle positioning information, to help the user learn the gesture-based interface, along with active control concepts to interpret and apply vehicle forces and torques. Tests were also conducted with an actual ground vehicle to investigate whether knowledge and skill from the simulated environment transfer to a real-life scenario. To assess this, an immersive virtual reality (VR) simulation was built in Unity as a training environment for learning to control a remote-control car using gestures, followed by control of the actual ground vehicle. Observations and participant feedback indicated that range of hand movement and hand positions transferred well to the actual demonstration.
This illustrated that the VR simulation environment provides a suitable learning experience and an environment from which to assess human performance, thus also validating the observations from earlier tests. Overall results indicate that the gesture-based approach holds promise given the emergence of new technology, but additional work needs to be pursued. This includes algorithms to process gesture data to provide more stable and precise vehicle commands, and training environments to familiarize users with this new interface concept.
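The algorithms for turning noisy gesture data into stable vehicle commands could take many forms; one minimal sketch is exponential smoothing combined with a dead zone on a normalized gesture axis, which suppresses hand tremor near neutral while still passing deliberate motion through (the filter choice and parameter values are assumptions, not the study's implementation):

```python
def make_command_filter(alpha=0.3, dead_zone=0.05):
    """Return a stateful filter mapping raw gesture readings in [-1, 1]
    to vehicle commands: exponential smoothing damps jitter, and a dead
    zone around zero suppresses unintended drift near the neutral pose.
    Parameter values are illustrative."""
    state = {"y": 0.0}

    def step(x):
        state["y"] += alpha * (x - state["y"])   # exponential moving average
        return 0.0 if abs(state["y"]) < dead_zone else state["y"]

    return step
```

One such filter would typically run per control axis (e.g. pitch, roll, throttle), with `alpha` tuned against the latency the operator can tolerate: heavier smoothing is more stable but makes the vehicle feel sluggish.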
Affordances and Safe Design of Assistance Wearable Virtual Environment of Gesture
Safety and reliability are the main issues in designing wearable virtual
environments that assist technical gestures in aerospace or health
application domains. This requires integrating, within the same isomorphic
engineering framework, human requirements, system requirements, and the
rationale of their relation to the natural and artifactual environment. To
explore coupling integration and the functional organization of systems that
support technical gestures, ecological psychology first provides us a
heuristic concept: the affordance. On the other hand, the mathematical theory
of integrative physiology provides us scientific concepts: the stabilizing
auto-association principle and functional interaction. After demonstrating
the epistemological consistency of these concepts, we define an isomorphic
framework to describe and model human-systems integration dedicated to
human-in-the-loop systems engineering. We present an experimental approach to
the safe design of wearable virtual environments for gesture assistance,
based in the laboratory and in parabolic flights. From the results, we
discuss the relevance of our conceptual approach and its applications to
future wearable gesture-assistance systems engineering.
ANGELICA: Choice of Output Modality in an Embodied Agent
The ANGELICA project addresses the problem of modality choice in information presentation by embodied, humanlike agents. The output modalities available to such agents include both language and various nonverbal signals such as pointing and gesturing. For each piece of information to be presented by the agent, it must be decided whether it should be expressed using language, a nonverbal signal, or both. In the ANGELICA project a model of the different factors influencing this choice will be developed and integrated in a natural language generation system. The application domain is the presentation of route descriptions by an embodied agent in a 3D environment. Evaluation and testing form an integral part of the project. In particular, we will investigate the effect of different modality choices on the effectiveness and naturalness of the generated presentations and on the user's perception of the agent's personality.
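The model of influencing factors is still to be developed in the project, but a deliberately toy rule can illustrate the kind of decision involved; the factors, their encoding, and the thresholds below are all assumptions for illustration, not ANGELICA's actual model:

```python
def choose_modality(is_spatial, agent_visible, info_complexity):
    """Toy modality-choice rule: spatial content (e.g. a turn in a route
    description) favors a nonverbal signal such as pointing, provided the
    agent is visible to the user; complex content additionally needs
    language. All factors and cutoffs here are illustrative assumptions."""
    if is_spatial and agent_visible:
        return "both" if info_complexity > 1 else "gesture"
    return "language"
```

A learned or weighted model would replace these hard rules, but the input/output contract — content features in, a modality label out that the generation system then realizes — stays the same.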
Ambient Gestures
We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
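The pipeline described — vision recognition feeding a scripting layer that drives applications — implies a dispatch stage that maps recognized gesture names to application actions. A minimal sketch of such a stage (the gesture names, actions, and class shape are illustrative, not the paper's implementation):

```python
class GestureDispatcher:
    """Maps recognized gesture names to application actions, standing in
    for the scripting layer between recognizer and applications."""

    def __init__(self):
        self.bindings = {}

    def bind(self, gesture, action):
        """Associate a gesture name with a zero-argument callable."""
        self.bindings[gesture] = action

    def on_gesture(self, gesture):
        """Invoke the bound action, if any; unknown gestures are ignored."""
        action = self.bindings.get(gesture)
        return action() if action else None
```

Keeping the bindings in data rather than code is what lets a scripting layer rebind gestures per application without touching the recognizer.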