12 research outputs found

    Empirical studies of pen tilting performance in pen-based user interfaces


    Mobile phone interaction techniques for rural economy development - a review

    Rural communities, especially in developing countries, are often neglected in terms of the facilities and services that aid their social and economic development. This is evident even in software development processes, in that these groups of users or potential users are often not taken into consideration; the resultant effect is that they may not use the software, or may use it only sparingly. The objective of this study is to identify existing research on interaction techniques and user interface design as a first step toward designing suitable mobile interactions and user interfaces for rural users. The research project is also aimed at socio-economic development and at adding value for mobile phone users in Dwesa, a rural community in South Africa. This paper presents a literature survey of interaction techniques and user interfaces, and discusses an analysis of the interaction techniques with respect to their suitability, the availability of technologies, and user capabilities for implementation in a rural context. Descriptive statistics on the interaction facilities of users' current phones in the rural community, briefly illustrating users' experiences and capabilities with different interaction modes, are also presented.
    KEY WORDS: Interaction Techniques, Mobile Phone, User Interface, ICT, Rural Development

    Move, Hold and Touch: A Framework for Tangible Gesture Interactive Systems

    Technology is spreading into our everyday world, and digital interaction beyond the screen, with real objects, lets us take advantage of our natural manipulative and communicative skills. Tangible gesture interaction exploits these skills by bridging two popular domains in Human-Computer Interaction: tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding work in this field. We propose a classification of gestures according to three relationships with objects: move, hold and touch. Following this classification, we analyzed previous work in the literature to obtain guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines, and we discuss the descriptive, evaluative and generative power of TGIF.
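
    As an illustration of the move/hold/touch classification described above, the sketch below models a tangible gesture as a combination of the three TGIF components. This is a minimal reading of the taxonomy; the class and field names are illustrative assumptions, not the paper's API.

```python
# A minimal sketch of the TGIF move/hold/touch taxonomy as a data structure.
from dataclasses import dataclass

@dataclass(frozen=True)
class TangibleGesture:
    """A gesture described by its three TGIF components relative to an object."""
    move: bool    # the object itself is displaced or reoriented
    hold: bool    # the object is grasped while the gesture is performed
    touch: bool   # the user touches the object's surface
    meaning: str  # the semantic construct associated with the gesture

# Example (hypothetical): squeezing a held steering wheel to answer a call
# combines hold + touch without moving the object.
answer_call = TangibleGesture(move=False, hold=True, touch=True,
                              meaning="accept incoming call")
```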

    Expressy: Using a Wrist-worn Inertial Measurement Unit to Add Expressiveness to Touch-based Interactions

    Expressiveness, which we define as the extent to which rich and complex intent can be conveyed through action, is a vital aspect of many human interactions. For instance, paint on canvas is said to be an expressive medium because it affords the artist the ability to convey multifaceted emotional intent through intricate manipulations of a brush. To date, touch devices have failed to offer users a level of expressiveness in their interactions that rivals that experienced by the painter and those completing other skilled physical tasks. We investigate how data about hand movement, provided by a motion sensor similar to those found in many smart watches and fitness trackers, can be used to expand the expressiveness of touch interactions. We begin by introducing a conceptual model that formalizes a design space of possible expressive touch interactions. We then describe and evaluate Expressy, an approach that uses a wrist-worn inertial measurement unit to detect and classify qualities of touch interaction that extend beyond those offered by today's typical sensing hardware. We conclude by describing a number of sample applications, which demonstrate the enhanced, expressive interaction capabilities made possible by Expressy.
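
    The sketch below illustrates the general idea of deriving an expressive quality from wrist motion around a touch: it classifies a touch as gentle or forceful from the energy of the accelerometer signal. The feature and threshold are illustrative assumptions, not Expressy's actual classifier.

```python
# A minimal sketch, assuming a short window of wrist accelerometer samples
# (in g) captured by a worn IMU around the moment of touch contact.
import math

def touch_expressiveness(accel_window, threshold=0.3):
    """Classify a touch from wrist motion energy: mean deviation of the
    acceleration magnitude from 1 g over the window."""
    energy = sum(abs(math.sqrt(x*x + y*y + z*z) - 1.0)
                 for x, y, z in accel_window) / len(accel_window)
    return "forceful" if energy > threshold else "gentle"

# A nearly still wrist stays close to 1 g, so this reads as "gentle".
print(touch_expressiveness([(0.10, 0.05, 0.99), (0.02, 0.08, 1.01)]))
```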

    EXTENDING INPUT RANGE THROUGH CLUTCHING: ANALYSIS, DESIGN, EVALUATION AND CASE STUDY

    Master's thesis (Master of Science)

    Contact-sensing Input Device Manipulation and Recall

    We study a cuboid tangible pen-like input device similar to Vogel and Casiez's Conte. A conductive 3D-printed Conte device enables touch sensing on a capacitive display, and orientation data from an enclosed inertial measurement unit (IMU) reliably distinguishes all 26 corners, edges, and sides. The device's size is constrained by the hardware required for sensing, so we evaluate the impact of the size form factor on manipulation times for contact-to-contact transitions. A controlled experiment logs manipulation times performed with three sizes of 3D-printed mock-ups of the device; computer vision techniques reliably distinguish between all 26 possible contacts, and a resistive touch sensor provides accurate timing information. In addition, a transition to touch input is tested, and a mock-up of a digital pen is included as a baseline comparison. Results show that larger devices are faster and that contact-to-contact transition time increases with the distance between contacts, but transitions to barrel edges can be slower than some end-over-end transitions. A comparison with a pen-shaped baseline indicates no loss in transition speed for most equivalent transitions. Based on our results, we discuss ideal device sizes and improvements to the simple extruded-rectangle form factor.
    Subsequently, we evaluate learning and recall of commands located on physical landmarks on the exterior of a 3D tangible input device, in comparison with a 2D spatial interface. Each of the 26 contacts is a physical spatial landmark on the exterior of Conte. A pilot study compares command learning and recall for Conte with a 2D grid interface, using small and large command sets. To facilitate novice learning, an on-screen model of Conte replicates the physical device's orientation and displays icons representing commands on the corresponding landmarks. Results show there is likely no difference between 2D and 3D spatial interface recall for a small command set, and that high recall is possible with large command sets. Applications illustrating possible use cases are discussed, as well as possible improvements to the on-screen guide based on our results.
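
    The sketch below illustrates how orientation data can distinguish the 26 contacts of a cuboid: quantizing each axis of the gravity vector (in the device's own frame) to -1/0/+1 yields 6 sides, 12 edges, and 8 corners. The dead-zone value is an illustrative assumption, not the paper's calibrated classifier.

```python
# A minimal sketch, assuming a gravity vector from the device's IMU expressed
# in the cuboid's own frame. Quantizing each axis gives one of 26 contacts:
# one nonzero axis = a side (6), two = an edge (12), three = a corner (8).
def classify_contact(gx, gy, gz, dead_zone=0.35):
    q = tuple(0 if abs(v) < dead_zone else (1 if v > 0 else -1)
              for v in (gx, gy, gz))
    kind = {1: "side", 2: "edge", 3: "corner"}[sum(map(abs, q))]
    return q, kind

print(classify_contact(0.05, -0.02, -0.99))  # ((0, 0, -1), 'side')
print(classify_contact(0.58, 0.57, 0.58))    # ((1, 1, 1), 'corner')
```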

    Adaptation et réutilisation de squelettes d'animation 3D (Adaptation and Reuse of 3D Animation Skeletons)

    Deformation skeletons are essential today for producing high-quality 3D animation. To ease the work of their animators, most production studios try to reuse similar or even identical skeletons from one character to another as much as possible. This adaptation and reuse work, generally done by hand, is a long and tedious task. This research presents two techniques to promote reuse. The first is a complete skeleton-adaptation technique based on a topological correspondence between the target character's mesh and the skeleton being adapted to it, while the second is a reuse and creation technique built around a sketching interface. Both techniques were validated on professional-quality models and skeletons, and through targeted tests with expert users.

    AUGMENTED TOUCH INTERACTIONS WITH FINGER CONTACT SHAPE AND ORIENTATION

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of, even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom: the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions, but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures, a result confirmed in a second study that used the augmented touches for a screen-lock application.
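
    The sketch below illustrates how a contact ellipse reported by a contact-sensing touchscreen could be mapped to two discrete shapes and three orientations of the kind the study examines. The thresholds and bucket boundaries are illustrative assumptions, not the study's classifier.

```python
# A minimal sketch, assuming the touchscreen reports a contact ellipse:
# major/minor axis lengths (mm) and an orientation angle from vertical.
import math

def classify_touch(major, minor, orientation_rad):
    # Shape: an elongated ellipse suggests a flat or rolled finger contact.
    shape = "oblong" if major / minor > 1.6 else "round"
    # Orientation: fold the angle into [0, 180) and bucket it three ways.
    deg = math.degrees(orientation_rad) % 180
    if deg < 30 or deg > 150:
        orientation = "vertical"
    elif 60 <= deg <= 120:
        orientation = "horizontal"
    else:
        orientation = "diagonal"
    return shape, orientation

print(classify_touch(14.0, 7.0, math.radians(85)))  # ('oblong', 'horizontal')
```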

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or a mid-air interface, affects it. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation that characterizes the mode-switching phenomenon in barehand touch-based and mid-air interfaces, explores the potential of using these input spaces together for a productivity application in VR, and concludes with a step toward defining and evaluating the multi-faceted mode concept, its characteristics and its utility, when designing user interfaces more generally.
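
    The sketch below illustrates the mode concept in Raskin's sense: the same drag input produces different results depending on the current mode. The mode names and actions are illustrative, not drawn from the thesis.

```python
# A minimal sketch of mode-dependent input interpretation.
class Canvas:
    def __init__(self):
        self.mode = "draw"  # current mode: draw | pan | select

    def switch_mode(self, mode):
        # Mode switching: the transition whose time cost the thesis studies.
        self.mode = mode

    def on_drag(self, dx, dy):
        # Identical input, mode-dependent interpretation.
        if self.mode == "draw":
            return f"draw line by ({dx}, {dy})"
        if self.mode == "pan":
            return f"pan canvas by ({dx}, {dy})"
        if self.mode == "select":
            return f"extend selection by ({dx}, {dy})"

canvas = Canvas()
print(canvas.on_drag(5, 3))   # draw line by (5, 3)
canvas.switch_mode("pan")
print(canvas.on_drag(5, 3))   # pan canvas by (5, 3)
```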

    A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems

    This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. Taking advantage of humans' innate abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly interesting for interacting in smart environments, bringing interaction with computers beyond the screen and back to the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims to support future work in the field. The Tangible Gesture Interaction Framework provides support on three levels. First, it supports theoretical reflection on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold and touch) and additional attributes, and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps in conceiving new tangible gesture interactive systems and in designing new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps in building new tangible gesture interactive systems, supporting the choice among four technological approaches (embedded and embodied, wearable, environmental, or hybrid) and providing general guidance for each.
    As an application of this framework, the thesis also presents seven tangible gesture interactive systems for three application domains: interacting with a car's In-Vehicle Infotainment System (IVIS), emotional and interpersonal communication, and interaction in a smart home. For the first domain, four systems that use gestures on the steering wheel as a means of interacting with the IVIS were designed, developed and evaluated. For the second, an anthropomorphic lamp able to recognize gestures that humans typically perform for interpersonal communication was conceived and developed, and a second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smart watch for recognizing gestures performed with objects held in the hand, in the context of the smart home, was investigated. The analysis of existing systems from the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power; the applications developed during the thesis show that it also has good generative power.
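
    The sketch below lists the four technological approaches named above as a simple enumeration. The enum names follow the abstract; the example assignment of the smart-watch system to the wearable approach is an inference from the abstract, not a claim from the thesis.

```python
# A minimal sketch of TGIF's third level: the four technological approaches
# for building tangible gesture interactive systems.
from enum import Enum

class Approach(Enum):
    EMBEDDED_AND_EMBODIED = "sensing built into the manipulated object itself"
    WEARABLE = "sensing worn by the user, e.g. a smart watch or smart t-shirt"
    ENVIRONMENTAL = "sensing placed in the environment around the user"
    HYBRID = "a combination of the other approaches"

# The smart-watch system for gestures with held objects plausibly fits the
# wearable approach, since the sensor travels with the user's hand.
smart_watch_system = Approach.WEARABLE
```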