1,888 research outputs found

    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    How can highly effective and intuitive gesture sets be determined for interactive systems tailored to end users’ preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, the functions to be controlled in an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified through a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which makes it difficult for researchers and practitioners to harness this body of knowledge effectively. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users’ gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing on the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
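    The agreement analysis mentioned here is commonly computed with the agreement rate AR of Vatavu and Wobbrock (2015). A minimal sketch of that formula; the "zoom in" proposals below are invented illustration data, not values from the review:

    ```python
    from collections import Counter

    def agreement_rate(proposals):
        """Agreement rate AR(r) for one referent (Vatavu & Wobbrock, 2015):
        AR = sum_i |P_i|(|P_i| - 1) / (|P|(|P| - 1)), where the P_i are groups
        of identical gesture proposals and P is the set of all proposals."""
        n = len(proposals)
        if n < 2:
            return 0.0
        counts = Counter(proposals)
        return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

    # Hypothetical example: 10 participants propose gestures for "zoom in".
    proposals = ["pinch-out"] * 6 + ["spread"] * 3 + ["tap"]
    print(agreement_rate(proposals))  # (6*5 + 3*2 + 0) / (10*9) = 36/90 = 0.4
    ```

    AR is 1.0 when every participant proposes the same gesture and 0.0 when no two proposals coincide, which is why it is the usual basis for picking the consensus set.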

    A novel user-based gesture vocabulary for conceptual design

    Research into hand gestures for human-computer interaction has been prolific recently, but research on hand gestures for conceptual design has focused either on gestures defined by the researchers rather than the users, or on gestures heavily influenced by what can be achieved with currently available technology. This paper reports on a study performed to identify a user-elicited vocabulary of gestures for conceptual design, disassociated from currently available technology, and on its subsequent evaluation. The study included 44 product design engineering students (3rd and 4th year, and recent graduates) and identified 1,772 gestures that were analysed to build a novel consensus vocabulary of hand gestures for conceptual design. This set was then evaluated by 10 other professionals, in order to generalise it to a wider range of users and possibly reduce the need for training. The evaluation showed that the majority of gestures added to the vocabulary were easy to perform and appropriate for the activities, but that at the implementation stage the vocabulary will require another round of evaluation to account for technology capabilities. The aim of this work is to create a starting point for a potential future system that could adapt to individual designers and allow them to use non-prescribed gestures that support rather than inhibit their conceptual design thinking, akin to the developments that happened in handwriting recognition and predictive texting.

    User-based gesture vocabulary for form creation during a product design process

    There are inconsistencies between the nature of conceptual design and the functionalities of the computational systems supporting it, which disrupt the designers’ process by focusing on technology rather than designers’ needs. A need was identified for the elicitation of hand gestures appropriate for the requirements of conceptual design, rather than gestures arbitrarily chosen or selected for ease of implementation. The aim of this thesis is to identify natural and intuitive hand gestures for conceptual design, performed by designers (3rd and 4th year product design engineering students and recent graduates) working on their own, without instruction and without limitations imposed by the facilitating technology. This was done via a user-centred study including 44 participants, in which 1,785 gestures were collected. Gestures were explored as the sole means for shape creation and manipulation in virtual 3D space. Gestures were identified, described in writing, sketched, coded based on the taxonomy used, and categorised based on hand form and the path travelled, and variants were identified. They were then statistically analysed to ascertain agreement rates between the participants, the significance of the agreement, and the likelihood of the number of repetitions in each category occurring by chance. The most frequently used and statistically significant gestures formed the consensus vocabulary for conceptual design. 
The effect of the shape of the manipulated object on the gesture performed was also observed, as was whether the sequence of gestures participants proposed differed from established CAD solid modelling practices. The vocabulary was evaluated by non-designer participants, both theoretically and in a VR environment, and the outcomes showed that the majority of gestures were appropriate and easy to perform. Participants selected their preferred gestures for each activity, and a variant of the vocabulary for conceptual design was created as an outcome, which aims to ensure that extensive training is not required, extending the ability to design beyond trained designers only.
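    The chance-occurrence check described in this abstract can be sketched as a one-sided binomial tail probability: how likely is it that k or more of n participants independently land on the same gesture category by chance? The category count (20) and threshold (10) below are illustrative assumptions, not values from the thesis:

    ```python
    from math import comb

    def p_at_least_k(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p): the chance that k or more of n
        independent participants pick the same gesture category, if each
        category were chosen uniformly at random with probability p."""
        return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

    # Hypothetical example: 44 participants, 20 plausible categories (p = 1/20).
    # A small tail probability means 10+ identical proposals are unlikely to be
    # a chance effect, supporting inclusion in the consensus vocabulary.
    print(p_at_least_k(44, 10, 1 / 20))
    ```

    This is only a sketch of the reasoning; the thesis itself may use a different null model or significance procedure.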

    Designing Hybrid Interactions through an Understanding of the Affordances of Physical and Digital Technologies

    Two recent technological advances have extended the diversity of domains and social contexts of Human-Computer Interaction: the embedding of computing capabilities into physical hand-held objects, and the emergence of large interactive surfaces, such as tabletops and wall boards. Both interactive surfaces and small computational devices usually allow for direct and space-multiplex input, i.e., for the spatial coincidence of physical action and digital output, in multiple points simultaneously. Such a powerful combination opens novel opportunities for the design of what are considered hybrid interactions in this work. This thesis explores the affordances of physical interaction as resources for the interface design of such hybrid interactions. The hybrid systems elaborated in this work are envisioned to support specific social and physical contexts, such as collaborative cooking in a domestic kitchen, or collaborative creativity in a design process. In particular, different aspects of physicality characteristic of those specific domains are explored, with the aim of promoting skill transfer across domains. First, different approaches to the design of space-multiplex, function-specific interfaces are considered and investigated. These design approaches build on related work on Graspable User Interfaces and extend the design space to direct touch interfaces such as touch-sensitive surfaces, in different sizes and orientations (i.e., tablets, interactive tabletops, and walls). The approaches are instantiated in the design of several experience prototypes, which are evaluated in different settings to assess the contextual implications of integrating aspects of physicality in the design of the interface. Such implications are observed both at the pragmatic level of interaction (i.e., patterns of users’ behaviors on first contact with the interface), and in users’ subjective responses. 
The results indicate that the context of interaction affects the perception of the affordances of the system, and that some qualities of physicality, such as the 3D space of manipulation and relative haptic feedback, can affect the feeling of engagement and control. Building on these findings, two controlled studies are conducted to observe more systematically the implications of integrating some of the qualities of physical interaction into the design of hybrid ones. The results indicate that, despite the fact that several aspects of physical interaction are mimicked in the interface, the interaction with digital media is quite different and seems to reveal existing mental models and expectations resulting from previous experience with the WIMP paradigm on the desktop PC.

    Lalang, Zes ek Kiltir - Multimodal Reference Marking in Kreol Seselwa

    This dissertation is the first cross-disciplinary study of the interaction of speech, gesture and culture in a Creolophone community. Focusing on the multimodal strategies of person and spatial reference in Kreol Seselwa (KS), it combines theoretical and methodological approaches from Creolistics, Gesture Studies, and Anthropological Linguistics. It constitutes a holistic analysis of communicative interaction in the very special linguistic, sociohistorical and sociocultural environment of the Seychelles. The overall hypothesis is that reference is an inherently dynamic process involving three levels: (1) the linguistic forms available in both speech and gesture, (2) the mobilisation of these forms in situated communicative interaction, and (3) the embedding of these forms and strategies in a micro-ecology of communication specific to the Seychelles. After introducing key theoretical notions of the study of Reference, Creole languages, Gesture, and Anthropological Linguistics, the analysis starts with the first level, gestural and spoken form features of KS. Combining previous work on KS with data from my own corpus, I describe the lexical and grammatical features relevant to reference in this language, such as the article system, number marking and the occurrence of bare nouns. The form features of KS gestures are also presented, some of which already show differences in person and spatial reference. In a second step, the study analyses the mobilisation of these reference forms in communicative interaction. The close interaction between the two modalities is demonstrated in both spatial and person reference. In spatial reference, it is shown that gesture and speech complement each other in the construction of figure-ground arrays. Furthermore, the absolute frame of reference tends to be expressed in the gestural modality, while the relative frame of reference is conveyed in speech. 
In person reference, I provide evidence that in KS the preferences for recognition and association are ranked higher than the preference for minimisation. The high level of context-dependency and the role of information structure in KS person and spatial reference are further illustrated with multimodal examples. In a third step, the patterns of multimodal reference marking are embedded in a micro-ecology of communication specific to the Seychelles. It is argued that geographic, sociocultural and sociohistorical aspects of this postcolonial society are reflected in the strategies of referring to individuals and locations. A focus is set on the factors of shared cultural knowledge, hybridity and flexibility. Finally, I discuss the implications of the results for the nature of gesture as well as the nature of reference, leading to the conclusion that reference is indeed a multimodal and dynamic process that involves not only static reference forms but is actively constructed in a communicative interaction that is embedded in a micro-ecology of communication.

    Multi-modal post-editing of machine translation

    As machine translation (MT) quality continues to improve, more and more translators switch from traditional translation from scratch to post-editing (PE) of MT output, which has been shown to save time and reduce errors. Instead of mainly generating text, translators are now asked to correct errors within otherwise helpful translation proposals, where repetitive MT errors make the process tiresome, while hard-to-spot errors make PE a cognitively demanding activity. Our contribution is three-fold: first, we explore whether interaction modalities other than mouse and keyboard could better support PE by creating and testing the MMPE translation environment. MMPE allows translators to cross out or hand-write text, drag and drop words for reordering, use spoken commands or hand gestures to manipulate text, or combine any of these input modalities. Second, our interviews revealed that translators see value in automatically receiving additional translation support when a high cognitive load (CL) is detected during PE. We therefore developed a sensor framework using a wide range of physiological and behavioral data to estimate perceived CL and tested it in three studies, showing that multi-modal eye, heart, and skin measures can be used to make translation environments cognition-aware. Third, we present two multi-encoder Transformer architectures for automatic post-editing (APE) and discuss how these can adapt MT output to a domain and thereby avoid correcting repetitive MT errors. 
Deutsche Forschungsgemeinschaft (DFG), Projekt MMP

    Gesture as a Communication Strategy in Second Language Discourse : A Study of Learners of French and Swedish

    Gesture is always mentioned in descriptions of compensatory behaviour in second language discourse, yet it has never been adequately integrated into any theory of Communication Strategies (CSs). This study suggests a method for achieving such an integration. By combining a cognitive theory of speech-associated gestures with a process-oriented framework for CSs, gesture and speech can be seen as reflections of similar underlying processes with different output modes. This approach allows oral and gestural CSs to be classified and analysed within a unified framework. The respective fields are presented in introductory surveys, and a review is provided of studies dealing specifically with compensatory gesture, in aphasia as well as in first and second language acquisition. The experimental part of this work consists of two studies. The production study examines the gestures exploited strategically by Swedish learners of French and French learners of Swedish. The subjects retold a cartoon story in their foreign language to native speakers in conversational narratives. To enable comparisons between learners and proficiency conditions at both individual and group level, subjects performed the task in both their first and their second language. The results show that, contrary to expectations in both fields, strategic gestures do not replace speech, but complement it. Moreover, although strategic gestures are used to solve lexical problems by depicting referential features, most learner gestures instead serve either to maintain visual co-reference at discourse level, or to provide metalinguistic comments on the communicative act itself. These latter functions have hitherto been ignored in CS research. Both similarities and differences can be found between oral and gestural CSs regarding the effect of proficiency, culture, task, and success. The influence of individual communicative style and strategic communicative competence is also discussed. 
Finally, native listeners’ gestural behaviour is shown to be related to the co-operative effort they invest to ensure continued interaction, which in turn depends on the proficiency levels of the non-native narrators. The evaluation study investigates native speakers’ assessments of subjects’ gestures, and the effect of gestures on evaluations of proficiency. Native speakers rank all subjects as showing normal or reduced gesture rates and ranges, irrespective of proficiency condition. The influence of gestures on proficiency assessments is modest, but tends to be positive. The results concerning the effectiveness of gestural strategies are inconclusive, however. When exposed to auditory learner data only, listeners believe gestures would improve comprehension, but when learner gestures can be seen, they are not regarded as helpful. This study stresses the need to further examine the effect of strategic behaviour on assessments, and the perception of gestures in interaction. An integrated theory of Communication Strategies has to consider that gestures operate in two ways: as local measures of communicative ‘first-aid’, and as global communication enhancement for speakers and listeners alike. A probabilistic framework is outlined, in which variability in performance as well as psycholinguistic and interactional aspects of gesture use are taken into account.

    Professors’ and students’ perceptions towards oral corrective feedback in an English language teaching program

    The purpose of this study was to highlight the perceptions that professors and students of an English language teaching undergraduate program hold towards corrective feedback in language classes. The study was carried out at a public university in the city of Pereira, Colombia, with the participation of 7 professors, both men and women, as well as 15 students of the program of different sexes, who took part in individual interviews. Classroom observations, interviews and online questionnaires were used as data collection methods in order to gather evidence of classroom events and of the professors’ and students’ perceptions. The question guiding this research was: what can be said about professors’ perceptions and students’ attitudes towards the oral corrective feedback given in language courses of an English Language Teaching undergraduate program in Pereira? The results indicate that although professors showed awareness of the importance of corrective feedback for improving speaking ability, they do not provide it deliberately; the results also show the professors’ concern about the negative effects that corrective feedback can produce in students. Finally, this study sought to demonstrate the importance of corrective feedback in the academic preparation of future English teachers, with the aim of helping their future learners improve their speaking competence within a collaborative and friendly environment.

    Investigating User Experience Using Gesture-based and Immersive-based Interfaces on Animation Learners

    Creating animation is a very exciting activity; however, the long and laborious process can be extremely challenging. Keyframe animation is a complex technique that takes a long time to complete, as the procedure involves changing the poses of characters by modifying the time and space of an action, called frame-by-frame animation. This involves the laborious, repetitive process of constantly reviewing the results of the animation to make sure the movement timing is accurate. A new approach to animation is required in order to provide a more intuitive animating experience. With the evolution of interaction design and the Natural User Interface (NUI) becoming widespread in recent years, a NUI-based animation system is expected to offer better usability and efficiency that would benefit animation. This thesis investigates the effectiveness of gesture-based and immersive-based interfaces as part of animation systems. A practice-based element of this research is a prototype of the hand gesture interface, created on the basis of experiences from reflective practice. An experimental design is employed to investigate the usability and efficiency of gesture-based and immersive-based interfaces in comparison with a conventional GUI/WIMP application. The findings showed that gesture-based and immersive-based interfaces are able to attract animators in terms of the efficiency of the system, but there was no difference in usability preference between the two interfaces. Most of our participants were comfortable with NUIs and the new technologies used in the animation process, but for detailed work and fine control of the application, the conventional GUI/WIMP was preferable. Despite the awkwardness of devising gesture-based and immersive-based interfaces for animation, the concept of the system showed potential for a faster animation process, an enjoyable learning system, and stimulating interest in a kinaesthetic learning experience.