
    Gestures in Machine Interaction

    Unencumbered gesture interaction (UGI) describes the use of unrestricted gestures in machine interaction. The development of such technology will enable users to interact with machines and virtual environments by performing actions like grasping, pinching or waving without the need for peripherals. Advances in image processing and pattern recognition make such interaction viable and, in some applications, more practical than current keyboard, mouse and touch-screen interaction. UGI is emerging as a popular topic among Human-Computer Interaction (HCI), computer vision and gesture research, and is developing into a topic with the potential to significantly impact the future of computer interaction, robot control and gaming. This thesis investigates whether an ergonomic model of UGI can be developed and implemented on consumer devices by considering some of the barriers currently preventing such a model from being widely adopted. This research aims to address the development of freehand gesture interfaces and their accompanying syntax. Without detailed consideration of the evolution of this field, the development of unergonomic, inefficient interfaces that place undue strain on users becomes more likely. In the course of this thesis some novel design and methodological assertions are made. The Gesture in Machine Interaction (GiMI) syntax model and the Gesture-Face Layer (GFL), developed in the course of this research, have been designed to facilitate ergonomic gesture interaction. GiMI is an interface syntax model designed to enable cursor control, browser navigation commands and steering control for remote robots or vehicles. Through applying state-of-the-art image processing that facilitates three-dimensional (3D) recognition of human action, this research investigates how interface syntax can incorporate the broadest range of human actions. By advancing our understanding of ergonomic gesture syntax, this research aims to help future developers evaluate the efficiency of gesture interfaces, lexicons and syntax.
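    A gesture interface syntax of the kind the abstract describes can be thought of, at its simplest, as a lexicon mapping recognised gestures to interface commands. The sketch below illustrates that idea only; the gesture names, commands and function are hypothetical and not taken from the GiMI model itself.

```python
# Hypothetical sketch of a gesture-to-command lexicon, in the spirit of a
# syntax model such as GiMI. All names here are illustrative assumptions.
GESTURE_LEXICON = {
    "pinch":       "cursor_click",      # cursor control
    "open_palm":   "cursor_release",
    "swipe_left":  "browser_back",      # browser navigation
    "swipe_right": "browser_forward",
    "tilt_left":   "steer_left",        # steering for remote robots/vehicles
    "tilt_right":  "steer_right",
}

def interpret(gesture: str) -> str:
    """Map a recognised gesture to an interface command; unknown gestures are ignored."""
    return GESTURE_LEXICON.get(gesture, "no_op")
```

    In a real system the lexicon would be informed by ergonomic evaluation, so that frequently issued commands map to low-strain gestures.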

    Natural Walking in Virtual Reality: A Review


    Design-led approach for transferring the embodied skills of puppet stop-motion animators into haptic workspaces

    This design-led research investigates the transfer of puppet stop-motion animators’ embodied skills from the physical workspace into a digital environment. The approach is to create a digital workspace that evokes an embodied animating experience and allows puppet stop-motion animators to work in it unencumbered. The insights and outcomes of the practical explorations are discussed from the perspective of embodied cognition. The digital workspace employs haptic technology, an advanced multi-modal interface technology capable of invoking the tactile, kinaesthetic and proprioceptive senses. The overall aim of this research is to contribute, to the Human-Computer Interaction design community, design considerations and strategies for developing haptic workspaces that can seamlessly transfer and accommodate the rich embodied knowledge of non-digital skillful practitioners. Following an experiential design methodology, a series of design studies in collaboration with puppet stop-motion animators led to the development of a haptic workspace prototype for producing stop-motion animations. Each design study practically explored the transfer of different aspects of the puppet stop-motion animation practice into the haptic workspace. Beginning with an initial haptic workspace prototype, its design was refined in each study with the addition of new functionalities and new interaction metaphors which were always developed with the aim to create and maintain an embodied animating experience. The method of multiple streams of reflection was proposed as an important design tool for identifying, understanding and articulating design insights, empirical results and contextual considerations throughout the design studies. This thesis documents the development of the haptic workspace prototype and discusses the collected design insights and empirical results from the perspective of embodied cognition. 
In addition, it describes and reviews the design methodology that was adopted as an appropriate approach towards the design of the haptic workspace prototype.

    Stereoscopic bimanual interaction for 3D visualization

    Virtual Environments (VEs) have been widely used for several decades in various research fields such as 3D visualization, education, training and games. VEs have the potential to enhance visualization and act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and monocular and binocular depth cues, for human depth perception of volumetric (non-polygonal) datasets. In addition, a lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation focuses on evaluating the effects of stereoscopic and head-coupled displays on depth judgment of volumetric datasets. It also evaluates a two-handed view manipulation technique that supports simultaneous 7 degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, this dissertation evaluates techniques for auto-adjusting stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface which combines traditional tracking devices with computer-vision based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, this dissertation provides guidelines for research design for evaluating UIs and interaction techniques.
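    The 7-DOF navigation the abstract mentions (x, y, z + yaw, pitch, roll + scale) can be made concrete as a pose with three translation components, three rotation angles and one uniform scale factor. The sketch below shows one plausible way to apply such a pose to a 3D point; the function names and the Z-Y-X Euler convention are assumptions for illustration, not the dissertation's implementation.

```python
import math

def rot_matrix(yaw, pitch, roll):
    """Z-Y-X Euler rotation matrix (yaw about z, pitch about y, roll about x)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply_pose(point, translation, yaw, pitch, roll, scale):
    """Apply a 7-DOF pose (3 translation + 3 rotation + 1 uniform scale) to a 3D point."""
    R = rot_matrix(yaw, pitch, roll)
    return tuple(
        scale * sum(R[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Scaling by 2 then translating along x moves (1, 0, 0) to (3, 0, 0).
p = apply_pose((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.0, 0.0, 0.0, 2.0)
```

    The extra scale degree of freedom is what distinguishes multi-scale navigation from ordinary 6-DOF rigid-body motion: it lets the user zoom between nested scales of the environment while translating and rotating.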

    Virtual reality as an educational tool in interior architecture

    Ankara: The Department of Interior Architecture and Environmental Design and the Institute of Fine Arts of Bilkent University, 1997. Thesis (Master's), Bilkent University, 1997. Includes bibliographical references. Aktaş, Orkun (M.S.).
    This thesis discusses the use of virtual reality technology as an educational tool in interior architectural design. As a result of this discussion, it is proposed that virtual reality can be of use in aiding three-dimensional design and visualization, and may speed up the design process. It may also be of help in getting the designers/students more involved in their design projects. Virtual reality can enhance the capacity of designers to design in three dimensions. The virtual reality environment used in designing should be capable of aiding both the design and the presentation process. The tradeoffs of the technology, newly emerging trends and future directions in virtual reality are discussed.

    Freeform 3D interactions in everyday environments

    PhD Thesis. Personal computing is continuously moving away from traditional input using mouse and keyboard as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world, and focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which follows a similar goal of further reducing the gap between the real and the virtual, but predominantly focuses on output, by overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output and seamless 3D input, these have mostly been looked at separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while the input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restricting when interacting in the real world. NUI and AR can be seen as highly complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low-latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and meaningfully combine the positive qualities that are attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness and mobility in various physical settings. There are a number of technical challenges that arise when designing a mixed NUI/AR system, which I address in this work: What can we capture, and how? How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.