14 research outputs found

    Usability of vision-based interfaces

    Vision-based interfaces let users interact with an interactive system through gestures, without touching it. Gestures are usually modelled in laboratories, and their usability should be tested; however, these interfaces often present usability issues, and the great diversity of their uses and of the applications in which they appear makes it difficult to decide which factors to take into account in a usability test. In this paper, we review the literature to compile and analyze the usability factors and metrics used for vision-based interfaces.

    Bimanual marking menu for near surface interactions

    We describe a mouseless, near-surface version of the Bimanual Marking Menu system. To activate the menu system, users form a pinch gesture with either their index or middle finger to initiate a left click or right click, and then mark in the 3D space near the interactive area. We demonstrate how the system can be implemented using a commodity range camera such as the Microsoft Kinect, and report on several designs of the 3D marking system. Like the multi-touch marking menu, our system offers a large number of accessible commands. Since it does not rely on contact points to operate, our system leaves the non-dominant hand available for other multi-touch interactions.
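
    The pinch-to-click mapping described above is easy to sketch. Below is a minimal illustration assuming a hand tracker (for instance one built on Kinect depth data) that reports 3D fingertip positions; the HandFrame type, the classify_pinch function, and the 3 cm threshold are hypothetical, not taken from the paper:

        from dataclasses import dataclass

        Point = tuple[float, float, float]

        @dataclass
        class HandFrame:
            # Hypothetical hand-tracker output: 3D fingertip positions in metres.
            thumb: Point
            index: Point
            middle: Point

        def _dist(a: Point, b: Point) -> float:
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        PINCH_THRESHOLD = 0.03  # 3 cm; an assumed value, to be tuned per setup

        def classify_pinch(frame: HandFrame) -> str | None:
            """Thumb-index pinch -> left click, thumb-middle pinch -> right click."""
            if _dist(frame.thumb, frame.index) < PINCH_THRESHOLD:
                return "left_click"   # activates the menu, per the paper's design
            if _dist(frame.thumb, frame.middle) < PINCH_THRESHOLD:
                return "right_click"
            return None               # no pinch: hand is idle or marking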

    Comparing Free Hand Menu Techniques for Distant Displays using Linear, Marking and Finger-Count Menus

    Distant displays such as interactive public displays (IPD) or interactive television (ITV) require new interaction techniques, as traditional input devices may be limited or missing in these contexts. Free-hand interaction, as sensed with computer vision techniques, is a promising alternative. This paper presents the adaptation of three menu techniques for free-hand interaction: the Linear menu, the Marking menu, and the Finger-Count menu. A first study, based on a Wizard-of-Oz protocol, focuses on Finger-Count postures in front of interactive television and public displays. It reveals that participants do not choose the most efficient gestures, either before or after the experiment. The results are used to develop a Finger-Count recognizer. A second experiment shows that all three techniques achieve satisfactory accuracy, and that Finger-Count requires more mental demand than the other techniques.
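
    The paper does not describe its recognizer's internals, but the core of any Finger-Count technique is mapping the number of extended fingers to a menu item. Here is a minimal sketch under an assumed hand-tracker output format; the function name, input layout, and the extension heuristic are all illustrative assumptions:

        from typing import Sequence

        Point = tuple[float, float, float]

        def _dist(a: Point, b: Point) -> float:
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        def count_extended_fingers(
            tips: Sequence[Point],    # one fingertip position per finger
            bases: Sequence[Point],   # matching base-joint position per finger
            palm: Point,              # palm-centre position
            ratio: float = 1.5,       # assumed: a tip this much farther out than its base counts as extended
        ) -> int:
            """Count fingers whose tip lies well beyond its base joint, seen from the palm."""
            return sum(
                1 for tip, base in zip(tips, bases)
                if _dist(tip, palm) > ratio * _dist(base, palm)
            )

        # A count of 1..5 then selects the corresponding item of the Finger-Count menu.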

    Using Marking Menus to Develop Command Sets for Computer Vision Based Hand Gesture Interfaces

    This paper presents the first stages of a project that studies the use of hand gestures for interaction, in an approach based on computer vision. A first prototype for exploring the use of marking menus for interaction has been built. The purpose is not menu-based interaction per se, but to study whether marking menus could, with practice, support the development of autonomous command sets for gestural interaction. Some early observations are reported, mainly concerning user fatigue and the precision of gestures. Future work is discussed, such as introducing flow menus to reduce fatigue and control menus for continuous control functions.
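
    The marking-menu principle the prototype explores is compact enough to sketch: the direction of a stroke, rather than its endpoint, selects one of N commands laid out radially, which is what lets practised users flick a mark without looking. The 8-way layout and command names below are illustrative assumptions, not the paper's command set:

        import math

        COMMANDS = ["open", "save", "copy", "paste", "cut", "undo", "redo", "close"]

        def select_command(start: tuple[float, float], end: tuple[float, float]) -> str:
            """Map the stroke from start to end onto one of 8 radial sectors."""
            angle = math.atan2(end[1] - start[1], end[0] - start[0])  # radians
            sector = round(angle / (2 * math.pi / len(COMMANDS))) % len(COMMANDS)
            return COMMANDS[sector]

        # e.g. a stroke straight to the right selects the first command:
        assert select_command((0, 0), (1, 0)) == "open"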

    La usabilidad de las interfaces basadas en visión

    Vision-based interfaces use gestures for communication between the user and the interactive system, without the need for devices requiring physical contact. Gestures are usually modelled in laboratories, and it is important to evaluate their usability, but the great diversity of uses and applications of these interfaces makes it difficult to decide which factors to take into account when measuring usability. This article presents a literature review whose aim is to compile and classify the usability factors and metrics used to validate vision-based interfaces.

    Interacção por toque em múltiplas superfícies

    Master's thesis, Engenharia Informática (Arquitectura, Sistemas e Redes de Computadores), Universidade de Lisboa, Faculdade de Ciências, 2009. Nowadays, with pixels getting cheaper, computer displays tend toward larger sizes. Wall-sized screens and other large interaction surfaces are now an option for many users, and this trend raises a number of issues to be researched in the user-interface area. The simplistic approach of transferring the main interaction concepts of the classic WIMP (Window, Icon, Menu, Pointer) design paradigm, based on the traditional mouse and keyboard input devices, quickly leads to unexpected problems. In recent years we have also witnessed a revolution with the emergence of the first commercial products supporting multi-touch interaction. Touch technology is expected to soon become standard, as is already visible in some specific markets, such as mobile phones. If we put together the possibilities opened up by the recent "touch revolution" and the transition we have been witnessing over the past few years to large screen displays, we can now explore how gestural interaction can contribute to overcoming the problems of the classical WIMP paradigm on large screen displays. In this work we explore the field of gestural interaction on large screen displays, conducting several studies in which users experience gestural interaction in various applications suited to large displays. The results show that direct manipulation through gestural interaction appeals to users for some types of applications and actions, while for other types gestures are not the preferred interaction modality. We then introduce gestural interaction in cooperative scenarios, discussing how it is more suited to some tasks, and how users cooperatively decide which tasks to perform based on the available input modalities and task characteristics.

    Estudo de modos de comando em cenários de interacção gestual

    Master's thesis, Engenharia Informática (Sistemas de Informação), Universidade de Lisboa, Faculdade de Ciências, 2010. Lately we have been witnessing a "technological revolution" in the making of devices for human-computer interaction. Input devices are no longer the only way to convey intentions to computers: it is now possible to do so with one's own body. Devices that allow touch interaction are spreading through public places, but it is not only there that the phenomenon occurs; the number of commercial products that allow this kind of interaction keeps growing, so it is important to understand the advantages and disadvantages of gestural interaction and to make it more effective. Many technologies allow the construction of tactile devices, spanning a wide range of capabilities and manufacturing costs. The study of those technologies, during this work, resulted in the construction of a low-cost multi-touch interactive table. In devices oriented toward gestural interaction, the dimensions of the interaction surface are equal to the dimensions of the screen, which demands special attention in the design of applications for those devices: the features of an interface conceived for a large screen may not be suitable for a smaller one, and vice versa. Apart from the dimensions, the kind of application can also influence the interaction paradigm. In the specific case of gestural interaction in drawing applications, there is the added difficulty of making the application understand whether a gesture is meant to draw or to execute a command. In this work, two sets of command gestures are introduced, with the goal of disambiguating the intent of gestures in drawing applications. Also presented are the conclusions of studies that aimed to test the quality of the proposed sets, as well as their suitability to screens of different sizes.
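
    The abstract does not specify the two command-gesture sets, but the draw-versus-command ambiguity it targets can be illustrated with one common disambiguation heuristic: reserve single-contact strokes for ink and route multi-contact strokes to a command recognizer. A sketch of that heuristic, not the thesis's actual design:

        from dataclasses import dataclass, field

        @dataclass
        class Stroke:
            touch_count: int  # simultaneous contacts observed during the stroke
            path: list[tuple[float, float]] = field(default_factory=list)  # sampled (x, y) points

        def interpret(stroke: Stroke) -> str:
            """Decide whether a stroke is ink or a command before any recognition runs."""
            if stroke.touch_count == 1:
                return "draw"     # single finger: add the path to the canvas
            return "command"      # multiple fingers: hand the path to the command recognizer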

    Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions

    Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction-design research for such systems is still focused on desktop and touch interactions. Current thinking on free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, such that first-time users can use the system without any training; flexible; and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, we need to understand which components of these gestures are semantically meaningful (i.e., do users distinguish between using their left and right hand, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users' mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g., changes in speed are meaningful for scroll actions in walk-up-and-use displays but not for paging or selection). The major contributions of this work are: (1) an elicitation methodology that helps users overcome biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e., those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays. Doctoral dissertation, Computer Science.
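
    Elicitation studies like the three described here are commonly analyzed with an agreement measure over the proposals collected for each referent. As one concrete example (the dissertation defines its own coding scheme; this is the widely used agreement rate of Vatavu and Wobbrock, shown purely for illustration):

        from collections import Counter

        def agreement_rate(proposals: list[str]) -> float:
            """Agreement rate for one referent:
            AR = sum_i |P_i|*(|P_i| - 1) / (|P|*(|P| - 1)),
            where the P_i are groups of identical proposals within the set P."""
            n = len(proposals)
            if n < 2:
                return 0.0
            groups = Counter(proposals)
            return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

        # e.g. 20 participants: 12 propose "swipe-left", 8 propose "point"
        print(agreement_rate(["swipe-left"] * 12 + ["point"] * 8))  # ~0.495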