
    Expressy : Using a Wrist-worn Inertial Measurement Unit to Add Expressiveness to Touch-based Interactions

    Expressiveness, which we define as the extent to which rich and complex intent can be conveyed through action, is a vital aspect of many human interactions. For instance, paint on canvas is said to be an expressive medium because it affords the artist the ability to convey multifaceted emotional intent through intricate manipulations of a brush. To date, touch devices have failed to offer users a level of expressiveness in their interactions that rivals that experienced by the painter and those completing other skilled physical tasks. We investigate how data about hand movement, provided by a motion sensor similar to those found in many smartwatches and fitness trackers, can be used to expand the expressiveness of touch interactions. We begin by introducing a conceptual model that formalizes a design space of possible expressive touch interactions. We then describe and evaluate Expressy, an approach that uses a wrist-worn inertial measurement unit to detect and classify qualities of touch interaction that extend beyond those offered by today's typical sensing hardware. We conclude by describing a number of sample applications, which demonstrate the enhanced, expressive interaction capabilities made possible by Expressy.
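    To make the idea concrete: the essence of the approach is to pair each touch event with the wrist-IMU signal recorded just before contact. The sketch below is a minimal illustration of that pairing, not Expressy's actual classifier; the window length, threshold, and class labels are assumptions for demonstration only.

```python
from collections import deque
from dataclasses import dataclass
import math

@dataclass
class ImuSample:
    t: float   # timestamp (s)
    ax: float  # linear acceleration (m/s^2), gravity removed
    ay: float
    az: float

class ExpressiveTouch:
    """Buffer recent wrist-IMU samples; when a touch lands, classify it
    from the acceleration energy just before contact. The window length
    and threshold are illustrative, not values from the paper."""

    def __init__(self, window_s: float = 0.15, threshold: float = 4.0):
        self.window_s = window_s
        self.threshold = threshold
        self.samples: deque = deque()

    def on_imu(self, s: ImuSample) -> None:
        self.samples.append(s)
        # Drop samples that have fallen out of the analysis window.
        while self.samples and s.t - self.samples[0].t > self.window_s:
            self.samples.popleft()

    def on_touch_down(self) -> str:
        # Peak acceleration magnitude in the window preceding contact.
        peak = max((math.hypot(s.ax, s.ay, s.az) for s in self.samples),
                   default=0.0)
        return "forceful" if peak > self.threshold else "gentle"
```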

    Exploring the Multi-touch Interaction Design Space for 3D Virtual Objects to Support Procedural Training Tasks

    Multi-touch interaction has the potential to be an important input method for realistic training in 3D environments. However, it has been little explored for 3D tasks, especially when trying to leverage realistic, real-world interaction paradigms. A systematic inquiry into what realistic gestures look like for 3D environments is required to understand how users translate real-world motions to multi-touch motions. Once those gestures are defined, it is important to see how we can leverage them to enhance training tasks. To explore the interaction design space for 3D virtual objects, we began with a study of user-defined gestures. From this work we derived a taxonomy of and design guidelines for 3D multi-touch gestures, and identified how the perspective view influences the chosen gesture. We also identified a desire to use pressure on capacitive touch screens. Since the best way to implement pressure still required investigation, our second study evaluated two different pressure estimation techniques in two different scenarios. Once we had a taxonomy of gestures, we wanted to examine whether implementing these realistic multi-touch interactions in a training environment provided training benefits. Our third study compared multi-touch interaction to standard 2D mouse interaction and to actual physical training, and found that multi-touch interaction performed better than the 2D mouse and as well as physical training. This study showed us that multi-touch training using a realistic gesture set can perform as well as training on the actual apparatus. One limitation of the first training study was that the user's perspective was constrained, allowing us to focus on isolating the gestures. Since users can change their perspective in a real-life training scenario and thereby gain spatial knowledge of components, we wanted to see if allowing users to alter their perspective helped or hindered training. Our final study compared training with Unconstrained multi-touch interaction, Constrained multi-touch interaction, or training on the actual physical apparatus. Results show that the Unconstrained multi-touch interaction and the Physical groups had significantly better performance scores than the Constrained multi-touch interaction group, with no significant difference between the Unconstrained multi-touch and Physical groups. Our results demonstrate that allowing users more freedom to manipulate objects as they would in the real world benefits training. In addition to the research already performed, we propose several avenues for future research into the interaction design space for 3D virtual objects that we believe will be of value to researchers and designers of 3D multi-touch training environments.
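    The abstract does not specify which two pressure-estimation techniques were compared. One common proxy on capacitive hardware, offered here only as a hedged illustration, derives a pseudo-pressure from the reported contact size, since fingertips flatten as they press harder; the calibration endpoints below are assumed values.

```python
def estimate_pressure(touch_major_mm: float,
                      light_mm: float = 8.0,
                      firm_mm: float = 14.0) -> float:
    """Map the contact ellipse's reported major axis to a 0..1
    pseudo-pressure: fingertips flatten as they press harder.
    The endpoints are assumed values; a real system would
    calibrate them per device (and likely per user)."""
    p = (touch_major_mm - light_mm) / (firm_mm - light_mm)
    return max(0.0, min(1.0, p))
```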

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users gain expertise, interaction designers often deploy a guided novice mode, in which users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and its associated command, they can perform it without guidance, relying instead on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour as it progresses from novice to more efficient interaction mechanisms, but also to extend the concept of intermodal transfer to different contexts. We build this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first study investigates whether designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target (recall) modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations, beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration stemming from our work.
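    For readers unfamiliar with the novice-to-expert pattern in marking menus, the sketch below shows the usual mechanism the first study manipulates: a pause reveals the guided radial menu, while a fluent stroke is recognized directly, and a penalty delay can be added to discourage the guided mode. The delay values, eight-sector mapping, and names are illustrative assumptions, not the thesis's implementation.

```python
import time

class MarkingMenu:
    """Novice/expert pattern sketch: a pause after touch-down reveals
    the radial menu (guided mode); a stroke completed before that is
    recognized directly (recall mode). The delay, optional penalty,
    and eight-sector mapping are illustrative assumptions."""

    def __init__(self, reveal_delay_s: float = 0.3, penalty_s: float = 0.0):
        self.reveal_delay_s = reveal_delay_s
        self.penalty_s = penalty_s  # extra wait to discourage guided use
        self.down_t = None
        self.menu_visible = False

    def on_down(self) -> None:
        self.down_t = time.monotonic()
        self.menu_visible = False

    def on_tick(self) -> None:
        # Called periodically while the finger is down.
        if self.down_t is not None and not self.menu_visible:
            if time.monotonic() - self.down_t >= self.reveal_delay_s + self.penalty_s:
                self.menu_visible = True  # novice: show labels to recognize

    def on_up(self, stroke_angle_deg: float) -> str:
        mode = "guided" if self.menu_visible else "recall"
        # Eight 45-degree sectors map the stroke direction to a command.
        sector = int(((stroke_angle_deg % 360) + 22.5) // 45) % 8
        self.down_t = None
        return f"{mode}:command-{sector}"
```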

    Definition of a Language and a Method for the Description and Specification of Post-W.I.M.P. User Interfaces for Interactive Cockpits (Définition d'un langage et d'une méthode pour la description et la spécification d'IHM post-W.I.M.P. pour les cockpits interactifs)

    With the advent of new technologies such as the iPad, consumer software now features richer and more innovative interfaces. These innovations concern both the input layer (e.g. multi-touch screens) and the output layer (e.g. displays). Such interfaces are categorized as post-W.I.M.P. and increase the bandwidth between users and the systems they manipulate: users can issue commands to the system more quickly, and the system can present more information, allowing users to supervise systems of greater complexity. Widespread public adoption and the maturity of these technologies make it possible to consider their integration into critical systems (such as cockpits or, more generally, command and control systems). However, the software aspects of these technologies are far from mastered, as the many malfunctions encountered by their users demonstrate. While such failures may be tolerated in games or entertainment applications, they are not acceptable in the critical systems described above. This thesis therefore addresses the development of methods, languages, techniques, and tools for the design and development of innovative and reliable interactive systems. Its contribution is the extension of a formal notation, ICO (Interactive Cooperative Objects), to describe multi-touch interaction techniques exhaustively and unambiguously, together with a demonstration of its applicability to multi-touch applications for civil aircraft. In addition to this notation, we propose a method for the design and validation of interactive systems that offer multi-touch interactions to their users. These interactive systems are built on a generic architecture that structures the models from the hardware layer of the input devices up to the application layer for command and control. These contributions are applied to a set of case studies, the most significant being a weather-management application for a civil aircraft.
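    ICO itself is a Petri-net-based notation; as a loose, language-agnostic illustration of the underlying goal (specifying every input event exhaustively and unambiguously), one can picture a multi-touch technique as an explicit transition table that rejects anything unmodelled. The states and events below are invented for a two-finger pinch and do not reproduce the ICO notation.

```python
# States and events are invented for a two-finger pinch; this is not
# the ICO notation, only an illustration of specifying input behaviour
# exhaustively and unambiguously.
TRANSITIONS = {
    ("idle",       "touch_down"): "one_finger",
    ("one_finger", "touch_down"): "pinching",
    ("one_finger", "touch_up"):   "idle",
    ("pinching",   "touch_move"): "pinching",   # e.g. update zoom factor
    ("pinching",   "touch_up"):   "one_finger",
}

def step(state: str, event: str) -> str:
    """Reject any event not explicitly modelled, so the technique's
    behaviour is fully specified rather than left implicit."""
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"unspecified transition: {key}")
    return TRANSITIONS[key]
```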

    Extending the Vocabulary of Touch Events with ThumbRock

    Compared with mouse-based interaction on a desktop interface, touch-based interaction on a mobile device is quite limited: most applications only support tapping and dragging to perform simple gestures. Finger rolling provides an alternative to tapping, but existing recognition processes rely on per-user calibration, explicit delimiters, or extra hardware, making them difficult to integrate into current touch-based mobile devices. This paper introduces ThumbRock, a ready-to-use micro-gesture that consists of rolling the thumb back and forth on the touchscreen. Our algorithm recognizes ThumbRocks with more than 96% accuracy, without calibration or explicit delimiters, by analyzing the data provided by the touchscreen at a low computational cost. The full trace of the gesture is analyzed incrementally to ensure compatibility with other events and to support real-time feedback. This also makes it possible to create a continuous control space, as we illustrate with our MicroSlider, a 1D slider manipulated with thumb-rolling gestures.
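    As a rough illustration of the kind of incremental trace analysis the paper describes, the sketch below separates a back-and-forth thumb roll from a tap or a drag using only the contact point's travel and return distance. It is a simplified stand-in, not the published algorithm; the pixel thresholds are assumptions.

```python
class ThumbRollDetector:
    """Simplified stand-in for incremental trace analysis: a thumb roll
    moves the reported contact point a short distance and then back near
    its origin, unlike a tap (little motion) or a drag (sustained one-way
    motion). Pixel thresholds are assumptions; the published recognizer
    is considerably more sophisticated."""

    def __init__(self, min_travel: float = 15.0, return_radius: float = 10.0):
        self.min_travel = min_travel        # must move at least this far
        self.return_radius = return_radius  # must end this close to start
        self.origin = (0.0, 0.0)
        self.max_dist = 0.0

    def on_down(self, x: float, y: float) -> None:
        self.origin = (x, y)
        self.max_dist = 0.0

    def on_move(self, x: float, y: float) -> None:
        ox, oy = self.origin
        d = ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5
        self.max_dist = max(self.max_dist, d)

    def on_up(self, x: float, y: float) -> str:
        ox, oy = self.origin
        back = ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5
        if self.max_dist >= self.min_travel and back <= self.return_radius:
            return "thumb_rock"
        return "tap" if self.max_dist < self.min_travel else "drag"
```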