
    Is movement better? Comparing sedentary and motion-based game controls for older adults

    Providing cognitive and physical stimulation for older adults is critical for their well-being. Video games offer the opportunity of engaging seniors, and research has shown a variety of positive effects of motion-based video games for older adults. However, little is known about the suitability of motion-based game controls for older adults and how their use is affected by age-related changes. In this paper, we present a study evaluating sedentary and motion-based game controls with a focus on differences between younger and older adults. Our results show that older adults can apply motion-based game controls efficiently, and that they enjoy motion-based interaction. We present design implications based on our study, and demonstrate how our findings can be applied both to motion-based game design and to general interaction design for older adults.

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques, but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design, and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
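    As a small illustration of the paper's three-way division of 3D interaction tasks (navigation, selection/manipulation, and system control), here is a hedged sketch that models the categories and maps a few concrete tasks onto them; the task names and the mapping are illustrative assumptions, not taken from the article.

```python
# A minimal sketch of the three generic 3D interaction task categories.
# The concrete task names below are illustrative, not the article's API.
from enum import Enum, auto

class Task3D(Enum):
    NAVIGATION = auto()               # travel and wayfinding in the VE
    SELECTION_MANIPULATION = auto()   # picking and transforming objects
    SYSTEM_CONTROL = auto()           # menus, mode changes, commands

def classify(task_name: str) -> Task3D:
    """Map a concrete 3D task onto one of the three generic categories."""
    lookup = {
        "fly-through": Task3D.NAVIGATION,
        "grab-and-rotate": Task3D.SELECTION_MANIPULATION,
        "open-menu": Task3D.SYSTEM_CONTROL,
    }
    return lookup[task_name]

print(classify("grab-and-rotate"))  # Task3D.SELECTION_MANIPULATION
```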

    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation.

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In virtual reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation, and the movement of the hand is interpreted differently depending on the mode. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects it. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
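    The thesis's own implementations are not shown in this abstract; the following is a minimal sketch, assuming a simple touch-canvas application, of what a mode means in practice: the same touch event is routed to a different action depending on the current mode, and mode switching is just updating that state. All names are illustrative.

```python
# Minimal sketch: one touch input, several modes, different interpretations.
from typing import Callable, Dict, Tuple

Point = Tuple[float, float]

def draw_line(p: Point) -> str:    return f"draw line starting at {p}"
def pan_canvas(p: Point) -> str:   return f"pan canvas from {p}"
def select_shape(p: Point) -> str: return f"select shape under {p}"

# The current mode determines which action a single touch produces.
MODE_ACTIONS: Dict[str, Callable[[Point], str]] = {
    "draw":   draw_line,
    "pan":    pan_canvas,
    "select": select_shape,
}

class TouchInterface:
    def __init__(self) -> None:
        self.mode = "draw"

    def switch_mode(self, mode: str) -> None:
        # Mode switching: the transition whose time cost the thesis measures.
        self.mode = mode

    def on_touch(self, p: Point) -> str:
        return MODE_ACTIONS[self.mode](p)

ui = TouchInterface()
print(ui.on_touch((10, 20)))   # interpreted as drawing
ui.switch_mode("pan")
print(ui.on_touch((10, 20)))   # same input, now interpreted as panning
```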

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and in industrial settings. The rising demand for the digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment.

    Study of the interaction with a virtual 3D environment displayed on a smartphone

    3D virtual environments (3D VEs) are increasingly used in applications such as CAD, games, and teleoperation. As smartphone hardware performance has improved, 3D applications have also been introduced on mobile devices. In addition, smartphones provide computing capabilities far beyond traditional voice communication, enabled by a wide variety of built-in sensors and by internet connectivity. As a result, compelling 3D applications can be designed by using the device's capabilities to interact with a 3D VE. Because smartphones have small, flat screens while a 3D VE is wide and dense, containing many targets of various sizes, mobile devices face several constraints when interacting with a 3D VE: the density of the environment, the depth of targets, and occlusion. The selection task must cope with these three problems in order to select a target, and it can be decomposed into three subtasks: navigation, pointing, and validation. Researchers in 3D virtual environments have therefore developed new techniques and metaphors for 3D interaction to improve the usability of 3D applications on mobile devices, to support the selection task, and to address the factors affecting selection performance. In light of these considerations, this thesis presents a state of the art of existing selection techniques in 3D VEs and of selection techniques on smartphones. It organizes selection techniques in 3D VEs around the three selection subtasks: navigation, pointing, and validation. It also describes disambiguation techniques that allow a target to be selected from a set of pre-selected objects, and then reviews interaction techniques from the literature designed for smartphones, divided into two groups: techniques performing two-dimensional selection tasks on smartphones and techniques performing three-dimensional selection tasks on smartphones. Finally, it covers techniques that use the smartphone as an input device. The thesis then discusses the problem of selection in a 3D VE displayed on a smartphone. It presents the three identified selection problems (environment density, target depth, and occlusion), establishes the improvement offered by each existing technique in solving them, and analyses the strengths of the different techniques, the way they address the problems, and their advantages and drawbacks. It also classifies selection techniques for 3D VEs according to these three problems affecting selection performance in a dense 3D VE. Except for video games, the use of 3D virtual environments on smartphones has not yet become widespread. This is due to the lack of interaction techniques for interacting with a dense 3D VE composed of many objects close to each other and displayed on a small, flat screen, and to the selection problems that arise when displaying a 3D VE on a small rather than a large screen.
    Accordingly, this thesis focuses on proposing and describing the outcome of this study: the DichotoZoom interaction technique. It compares and evaluates the proposed technique against the Circulation technique suggested in the literature, and the comparative analysis shows the effectiveness of DichotoZoom relative to its counterpart. DichotoZoom was then evaluated across the interaction modalities available on smartphones, and the thesis reports on the performance of the proposed selection technique with the following four modalities: physical buttons, graphical buttons, gestural interaction via the touchscreen, and movement of the device itself. Finally, the thesis lists our contributions to the field of 3D interaction techniques for dense 3D virtual environments displayed on small screens and proposes future work.
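    The abstract does not spell out DichotoZoom's algorithm; the name suggests a dichotomous (halving) search combined with zooming. The following is a rough sketch under that assumption, with a callback standing in for the user's choice of which half to zoom into; all names and details are illustrative, not the thesis's actual design.

```python
# Hedged sketch of a dichotomy-based selection loop: repeatedly split the
# visible candidate targets into two groups and zoom into the chosen half
# until a single target remains.
from typing import Callable, List, Tuple

Target = Tuple[str, float]  # (identifier, position along the split axis)

def dichotomous_select(targets: List[Target],
                       choose_left: Callable[[List[Target], List[Target]], bool]
                       ) -> Target:
    """Narrow the candidate set by halving until a single target is left."""
    candidates = sorted(targets, key=lambda t: t[1])
    while len(candidates) > 1:
        mid = len(candidates) // 2
        left, right = candidates[:mid], candidates[mid:]
        # In the real technique the user would answer this with a physical
        # button, a graphical button, a touch gesture, or a device movement;
        # here a callback stands in for that choice.
        candidates = left if choose_left(left, right) else right
    return candidates[0]

# Usage: pick the target named "c" among five candidates.
targets = [("a", 0.1), ("b", 0.3), ("c", 0.5), ("d", 0.7), ("e", 0.9)]
result = dichotomous_select(targets, lambda l, r: any(t[0] == "c" for t in l))
print(result)  # ('c', 0.5)
```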

    A vision-based approach for human hand tracking and gesture recognition.

    Hand gesture interfaces have become an active topic in human-computer interaction (HCI). Using hand gestures in a human-computer interface enables human operators to interact with computer environments in a natural and intuitive manner. In particular, bare-hand interpretation techniques free users from the cumbersome devices typically required to communicate with computers, offering ease and naturalness in HCI. Meanwhile, virtual assembly (VA) applies virtual reality (VR) techniques to mechanical assembly: it constructs computer tools that help product engineers plan, evaluate, optimize, and verify the assembly of mechanical systems without the need for physical objects. Traditional devices such as keyboards and mice are no longer adequate because of their inefficiency in handling three-dimensional (3D) tasks, so special VR devices, such as data gloves, have been mandatory in VA. This thesis proposes a novel gesture-based interface for VA. It develops a hybrid approach that combines an appearance-based hand localization technique with a skin tone filter to support gesture recognition and hand tracking in 3D space. With this interface, bare hands become a convenient substitute for special VR devices. Experimental results demonstrate the flexibility and robustness of the proposed method for HCI. M.Sc. thesis, University of Windsor (Canada), 2004.
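    The thesis's exact skin-tone filter is not described in this abstract; the following is a minimal sketch of a conventional HSV-threshold skin filter of the kind such a hybrid pipeline might use, written with OpenCV. The threshold values and file names are illustrative assumptions, not the thesis's parameters.

```python
# Minimal sketch: mask likely skin pixels in a camera frame so that a
# subsequent appearance-based hand localizer has less background to search.
import cv2
import numpy as np

def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely skin pixels in a BGR camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # illustrative thresholds
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes small speckles before hand localization.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Usage: keep only the skin-colored pixels of one frame.
frame = cv2.imread("hand.jpg")  # hypothetical input image with a bare hand
if frame is not None:
    hand_only = cv2.bitwise_and(frame, frame, mask=skin_mask(frame))
    cv2.imwrite("hand_skin_only.jpg", hand_only)
```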