
    The use of gestures in computer aided design

    Computer-aided design systems are particularly useful for detailing, analysis and documentation, but are not well suited to the very early, conceptual stages of design. This paper describes investigations of novel methods of interfacing between the designer and the computer system, using stereotyped gestures to modify the dimensional, positional and orientational parameters of simple three-dimensional geometric models. A prototype implementation using a virtual reality visualisation system, enhanced by a six-degree-of-freedom real-time tracking device, is described.
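    As a hedged illustration of the interaction style this paper describes, the sketch below maps a recognised stereotyped gesture, together with a pose change reported by a six-degree-of-freedom tracker, onto one of the model's parameter groups. The gesture labels and the `pose_delta` format are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def apply_gesture(model, gesture, pose_delta):
    """Map a recognised gesture plus the tracker's pose change onto one
    parameter group of the model. The mapping is assumed, not the paper's."""
    dx, dy, dz, droll, dpitch, dyaw = pose_delta
    if gesture == "translate":          # move the model with the hand
        model["position"] += np.array([dx, dy, dz])
    elif gesture == "rotate":           # turn the model with the wrist
        model["orientation"] += np.array([droll, dpitch, dyaw])
    elif gesture == "stretch":          # pull along x to rescale
        model["dimensions"] *= 1.0 + dx
    return model

model = {"position": np.zeros(3),
         "orientation": np.zeros(3),
         "dimensions": np.ones(3)}
model = apply_gesture(model, "translate", (0.1, 0.0, 0.0, 0.0, 0.0, 0.0))
```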

    An interactive editor for hand-sketched tables

    Studies on the incremental design of graphic documents using multimodal interfaces led us to build an initial prototype: TAPAGE, a table editor driven by speech and gesture. This paper presents the processing that produces a normalised table from a freehand, hand-sketched drawing (beautification) and allows it to be corrected with gestural commands. The data processing and data structures were chosen according to several constraints: the variable quality of pen input, the human-computer interaction context, and the intended collaboration between the user and the system for interpreting the strokes.
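    To make the beautification step concrete, here is a minimal sketch of one plausible ingredient: jittery stroke coordinates are clustered and snapped to clean grid-line positions. The tolerance value and the stroke representation are assumptions; TAPAGE's actual processing is more elaborate.

```python
def snap_positions(coords, tolerance=10.0):
    """Merge nearby 1-D coordinates (e.g. the x-positions of roughly
    vertical strokes) into single grid-line positions."""
    groups = []
    for c in sorted(coords):
        if groups and c - groups[-1][-1] <= tolerance:
            groups[-1].append(c)      # same grid line, noisy repeat
        else:
            groups.append([c])        # a new grid line starts here
    return [sum(g) / len(g) for g in groups]

# Hand-sketched vertical strokes at jittery x-positions:
sketched_x = [12.0, 14.5, 98.0, 101.2, 100.1, 205.7]
print(snap_positions(sketched_x))     # ≈ [13.25, 99.77, 205.7]
```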

    Gestenerkennung mit einem Datenhandschuh

    "Gestik ist ein Teil der menschlichen Kommunikation. Wir setzen Gestik ein, um wort- und geräuschlos Signale auszusenden oder um sprachergänzend miteinander zu kommunizieren. Oftmals sind wir uns unserer Körpersprache jedoch nicht bewußt und üben gestisches Verhalten im Verständigungsprozeß automatisch aus. Inwieweit Gestik für die Interaktion mit Computern genutzt werden kann, ist bisher nur wenig erforscht, auch fehlt es an zuverlässigen, universellen Verfahren, mit denen menschliche Gesten vom Rechner erkannt und analysiert werden können. In diesem Artikel wird ein auf statistischen Methoden basierender Klassifizierer für Gesten vorgestellt." [Textauszug

    The State of Speech in HCI: Trends, Themes and Challenges

    Gestures in Machine Interaction

    Unencumbered gesture interaction (VGI) describes the use of unrestricted gestures in machine interaction. The development of such technology will enable users to interact with machines and virtual environments by performing actions like grasping, pinching or waving, without the need for peripherals. Advances in image processing and pattern recognition make such interaction viable and, in some applications, more practical than current keyboard, mouse and touch-screen interaction. VGI is emerging as a popular topic in Human-Computer Interaction (HCI), computer vision and gesture research, and is developing into a field with the potential to significantly affect the future of computer interaction, robot control and gaming. This thesis investigates whether an ergonomic model of VGI can be developed and implemented on consumer devices, by considering some of the barriers currently preventing such a model from being widely adopted. The research addresses the development of freehand gesture interfaces and an accompanying syntax: without detailed consideration of the evolution of this field, the development of un-ergonomic, inefficient interfaces that place undue strain on their users becomes more likely. In the course of this thesis some novel design and methodological assertions are made. The Gesture in Machine Interaction (GiMI) syntax model and the Gesture-Face Layer (GFL), developed during this research, are designed to facilitate ergonomic gesture interaction. GiMI is an interface syntax model enabling cursor control, browser navigation commands, and steering control for remote robots or vehicles. By applying state-of-the-art image processing that supports three-dimensional (3D) recognition of human action, the research investigates how interface syntax can incorporate the broadest range of human actions. By advancing our understanding of ergonomic gesture syntax, this research aims to help future developers evaluate the efficiency of gesture interfaces, lexicons and syntax.
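    As a sketch of what a gesture syntax of this kind can look like, the fragment below maps (posture, motion) pairs onto cursor, browser and steering commands and rejects combinations outside the syntax. The gesture names and command table are illustrative assumptions, not the actual GiMI lexicon.

```python
# Illustrative command table; not the thesis's actual GiMI syntax.
COMMAND_TABLE = {
    ("point", "move"):  "cursor.move",
    ("pinch", "hold"):  "cursor.drag",
    ("swipe", "left"):  "browser.back",
    ("swipe", "right"): "browser.forward",
    ("palm",  "tilt"):  "vehicle.steer",
}

def interpret(posture, motion):
    """Combine a recognised hand posture with its motion into a command,
    ignoring combinations that the syntax does not define."""
    return COMMAND_TABLE.get((posture, motion), "ignore")

print(interpret("swipe", "left"))   # -> browser.back
print(interpret("fist", "shake"))   # -> ignore (outside the syntax)
```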

    Interacção gestual sem superfícies de apoio

    Master's thesis in Informatics Engineering (Information Systems), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2011. Input peripherals are no longer the only way to convey intentions to a machine; it is now possible to do so with one's own body. Devices that allow gestural interaction without intermediate peripherals are becoming more common, especially in the area of video games. This tendency raises several questions to be investigated in the field of human-computer interaction. The simplistic approach of transferring interaction concepts from the classic WIMP paradigm, based on the traditional input devices of mouse and keyboard, quickly leads to unexpected problems: the characteristics of an interface designed for gestural interaction, in which there is no contact with any input device, do not fit the paradigm used for the last 40 years. We are therefore in a position to explore how gestural interaction, with or without voice, can help minimise the problems of the classic WIMP paradigm in interactions where no peripheral is touched. This work explores the field of gestural interaction, with and without voice. Through a set of applications, we conduct several studies of virtual-object manipulation based on computer vision. Objects are manipulated using two interaction modes (gesture and voice), which may or may not be integrated. We aim to analyse whether gestural interaction is appealing to users for some types of applications and actions, while for other types gestures may not be the preferred interaction modality.
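    One way to integrate the two modalities, shown below as a hedged sketch rather than the thesis's actual design, is late fusion: a spoken verb selects the operation and the hand gesture nearest in time supplies its object. The time window and event format are assumptions.

```python
FUSION_WINDOW = 1.5   # seconds within which speech and gesture pair up

def fuse(speech_events, gesture_events):
    """Pair each spoken command (time, verb) with the gesture
    (time, target) closest to it in time, if any falls in the window."""
    commands = []
    for s_time, verb in speech_events:
        nearby = [(abs(g_time - s_time), target)
                  for g_time, target in gesture_events
                  if abs(g_time - s_time) <= FUSION_WINDOW]
        if nearby:
            commands.append((verb, min(nearby)[1]))
    return commands

speech = [(10.0, "move"), (14.2, "delete")]
gesture = [(10.4, "cube"), (14.0, "sphere")]
print(fuse(speech, gesture))   # -> [('move', 'cube'), ('delete', 'sphere')]
```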

    Hand gesture recognition in uncontrolled environments

    For a long time, human-computer interaction has relied on mechanical devices that feed information into computers with low efficiency. With recent developments in image processing and machine learning, the computer vision community is ready to develop the next generation of human-computer interaction methods, including hand gesture recognition. This thesis proposes a comprehensive hand-gesture-recognition-based, semantic-level human-computer interaction framework for uncontrolled environments. The framework contains novel methods for hand posture recognition, hand gesture recognition and hand gesture spotting. The hand posture recognition method is capable of recognising predefined still hand postures against cluttered backgrounds; texture features are used in conjunction with Adaptive Boosting to form a novel feature-selection scheme that can effectively detect and select discriminative texture features from the training samples of the posture classes. A novel hand tracking method called Adaptive SURF Tracking is proposed: texture key points are used to track multiple hand candidates in the scene, and the method matches the key points of hand candidates across adjacent frames to compute the candidates' movement directions. With the gesture trajectories provided by Adaptive SURF Tracking, a novel classifier called the Partition Matrix is introduced to perform gesture classification in uncontrolled environments with multiple hand candidates; the trajectories of all hand candidates, extracted from the original video at different frame rates, are used to analyse their movements. An alternative gesture classifier based on a Convolutional Neural Network is also proposed, whose input images are approximate trajectory images reconstructed from the Adaptive SURF Tracking results. For hand gesture spotting, a forward spotting scheme is introduced to detect the starting and ending points of the predefined gestures in continuously signed gesture videos, and a Non-Sign Model is proposed to simulate meaningless hand movements between meaningful gestures. The proposed framework performs well under unconstrained scene settings, including frontal occlusions, background distractions and changing lighting conditions, and is invariant to changes in the scale, speed and location of gesture trajectories.
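    The core of the tracking idea, matching texture key points across adjacent frames and reading a movement direction from the matches, can be sketched as follows. Real SURF extraction (e.g. via OpenCV) is replaced here by assumed position and descriptor arrays; this illustrates the principle, not the thesis's implementation.

```python
import numpy as np

def movement_direction(pts_a, desc_a, pts_b, desc_b):
    """pts_*: (n, 2) key-point positions; desc_*: (n, d) descriptors.
    Match each frame-A key point to its nearest frame-B descriptor and
    return the mean displacement vector from frame A to frame B."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = dists.argmin(axis=1)           # nearest-descriptor matching
    displacements = pts_b[matches] - pts_a   # per-match motion vectors
    return displacements.mean(axis=0)

pts_a = np.array([[10.0, 20.0], [15.0, 25.0]])   # key points in frame A
pts_b = np.array([[12.0, 20.5], [17.0, 25.5]])   # key points in frame B
desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_b = np.array([[0.9, 0.1], [0.1, 0.9]])
print(movement_direction(pts_a, desc_a, pts_b, desc_b))  # -> [2.  0.5]
```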