19 research outputs found

    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    How can we determine highly effective and intuitive gesture sets for interactive systems, tailored to end users' preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, the functions to control in an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified through a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which makes it challenging for researchers and practitioners to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users' gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
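    To make the consensus analysis concrete, the following minimal sketch (in Python) computes the agreement rate AR(r) of Vatavu and Wobbrock (2015), the measure such studies commonly apply; the referent and proposals below are hypothetical, not data from the review.

    # Minimal sketch of agreement-rate analysis for one referent, following
    # AR(r) = |P|/(|P|-1) * sum((|Pi|/|P|)^2) - 1/(|P|-1), where P is the
    # set of proposals elicited for referent r and the Pi are its groups of
    # identical (or equivalent) proposals. Proposals here are hypothetical.
    from collections import Counter

    def agreement_rate(proposals: list[str]) -> float:
        n = len(proposals)
        if n < 2:
            return 1.0  # a single proposal trivially agrees with itself
        groups = Counter(proposals).values()
        return n / (n - 1) * sum((g / n) ** 2 for g in groups) - 1 / (n - 1)

    # Six hypothetical proposals for the referent "volume up":
    print(agreement_rate(["swipe up", "swipe up", "swipe up",
                          "circle", "swipe up", "circle"]))  # ~0.47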

    A software suite supporting the design of gesture elicitation studies

    How can we support designers and developers in identifying the most appropriate gestures for gestural user interfaces, depending on their context of use? To address this research question, we developed GEStory and GESistant. To feed GEStory and implement GESistant, we conducted two systematic literature reviews (SLR) of Gesture Elicitation Studies (GES): a macroscopic analysis of 216 papers based on their metadata, such as authors, definitions, year of publication, type of publication, participants, referents, parts of the body involved (finger, hand, wrist, arm, head, leg, foot, and whole body), and number of proposed gestures; and a microscopic analysis of 267 papers analyzing and classifying the referents, the final gestures forming the consensus set, their representation, and their characterization. We also propose an assessment of the credibility of these studies as a measure for categorizing their strength of impact. Based on the information analyzed in our SLR, we identify opportunities for new studies focused on gesture elicitation with end users. As a result, we present our own new GES that contributes to the literature. GEStory acts as an interactive design space for gestural interaction that informs researchers and practitioners about existing preferred gestures in different contexts of use and enables the identification of gaps and opportunities for new studies. GESistant is a cloud computing platform that supports gesture elicitation studies distributed in time and space, structured around the GES workflow.

    Analysis of User-defined Radar-based Hand Gestures Sensed through Multiple Materials

    Radar sensing can penetrate non-conducting materials, such as glass, wood, and plastic, which makes it appropriate for recognizing gestures in environments with poor visibility, limited accessibility, and privacy sensitivity. While the performance of radar-based gesture recognition in these environments has been extensively researched, the preferences that users express for these gestures are less known. To analyze such gestures simultaneously according to their user preference and their system recognition performance, we conducted three gesture elicitation studies, each with 30 participants, to identify user-defined, radar-based gestures sensed through three distinct materials: the glass of a shop window, the wood of an office door, and polyvinyl chloride in an emergency scenario. On this basis, we created a dataset of nine selected gesture classes, with 20 participants repeating the same gesture twice, captured by radar through the three materials, i.e., glass, wood, and polyvinyl chloride. To compare recognition rates across these conditions with sensing variations, a specifically tailored procedure was defined and conducted with one-shot radar calibration to train and evaluate a gesture recognizer. 'Wood' achieved the best recognition rate (96.44%), followed by 'Polyvinyl chloride' and 'Glass'. We then perform a preference-performance analysis of the gestures by combining the agreement rate from the elicitation studies and the recognition rate from the evaluation.
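    As a hedged illustration of the preference-performance analysis described above, the sketch below (in Python) bins gesture classes into quadrants around the median agreement rate (user preference) and the median recognition rate (system performance); all gesture names and numbers are hypothetical, not the study's data.

    # Minimal sketch: classify each gesture class by whether it sits above
    # or below the median agreement rate and median recognition rate.
    from statistics import median

    gestures = {            # class: (agreement rate, recognition rate)
        "swipe":  (0.41, 0.97),
        "push":   (0.28, 0.99),
        "circle": (0.35, 0.88),
        "wave":   (0.12, 0.93),
    }

    ar_med = median(ar for ar, _ in gestures.values())
    rr_med = median(rr for _, rr in gestures.values())

    for name, (ar, rr) in gestures.items():
        pref = "preferred" if ar >= ar_med else "less preferred"
        perf = "well recognized" if rr >= rr_med else "poorly recognized"
        print(f"{name}: {pref}, {perf}")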

    A Gesture Elicitation Study of Nose-Based Gestures

    Presently, miniaturized sensors can be embedded in any small-size wearable to recognize movements on some parts of the human body. For example, an electrooculography-based sensor in smart glasses recognizes finger movements on the nose. To explore the interaction capabilities, this paper conducts a gesture elicitation study as a between-subjects experiment involving one group of 12 females and one group of 12 males, expressing their preferred nose-based gestures for 19 Internet-of-Things tasks. Based on classification criteria, the 912 elicited gestures are clustered into 53 unique gestures across 23 categories, forming a taxonomy and a consensus set of 38 final gestures, which we complement with six design guidelines for researchers and practitioners. To test whether the measurement method impacts these results, the agreement scores and rates, computed to determine the gestures most agreed upon by participants, are compared with the Condorcet and the de Borda count methods; we observe that the results remain consistent, sometimes with a slightly different order. To test whether the results are sensitive to gender, inferential statistics suggest that no significant difference exists between males and females for agreement scores and rates.
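    For readers unfamiliar with the de Borda count used above as a cross-check, this sketch (in Python) shows the idea on hypothetical ballots: each participant ranks the candidate gestures for a referent, each rank position converts into points, and the totals give an alternative consensus order to compare against agreement scores.

    # Minimal sketch of a de Borda count over ranked gesture preferences.
    # With k candidates, a ballot awards k-1 points to its first choice,
    # down to 0 for its last. Ballots here are hypothetical.
    from collections import defaultdict

    ballots = [  # one ranked list per participant
        ["tap nose", "swipe nose", "pinch nose"],
        ["tap nose", "pinch nose", "swipe nose"],
        ["swipe nose", "tap nose", "pinch nose"],
    ]

    scores = defaultdict(int)
    for ballot in ballots:
        k = len(ballot)
        for position, gesture in enumerate(ballot):
            scores[gesture] += k - 1 - position

    for gesture, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(gesture, score)  # tap nose 5, swipe nose 3, pinch nose 1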

    CROSSIDE: A Cross-Surface Collaboration by Sketching Design Space

    This paper introduces, motivates, defines, and exemplifies CROSSIDE, a design space for representing the capabilities of software for collaborative sketching in a cross-surface setting, i.e., when stakeholders are interacting with and across multiple interaction surfaces, ranging from low-end devices such as smartwatches and mobile phones to high-end devices such as wall displays. By determining the greatest common denominator in terms of system properties across forty-one references, the design space is structured according to seven dimensions, including user configurations, surface configurations, input interaction techniques, work methods, tangibility, and device configurations. This design space aims to satisfy three virtues: descriptive (i.e., the ability to systematically describe any particular work in cross-surface interaction by sketching), comparative (i.e., the ability to consistently compare two or more works belonging to this area), and generative (i.e., the ability to generate new ideas by identifying potentially interesting, under-covered areas). A radar diagram graphically depicts the design space along these three virtues.

    Analysis of User-Defined Radar-Based Hand Gestures Sensed Through Multiple Materials

    Radar sensing can penetrate non-conducting materials, such as glass, wood, and plastic, which makes it appropriate for recognizing gestures in environments with poor visibility, limited accessibility, and privacy sensitivity. While the performance of radar-based gesture recognition in these environments has been extensively researched, the preferences that users express for these gestures are less known. To analyze such gestures simultaneously according to their user preference and their system recognition performance, we conducted three gesture elicitation studies, each with n1 = 30 participants, to identify user-defined, radar-based gestures sensed through three distinct materials: the glass of a shop window, the wood of an office door, and polyvinyl chloride in an emergency scenario. On this basis, we created a new dataset of nine selected gesture classes, with n2 = 20 participants repeating the same gesture twice, captured by radar through the three materials, i.e., glass, wood, and polyvinyl chloride. To uniformly compare recognition rates across these conditions with sensing variations, a specifically tailored procedure was defined and conducted with one-shot radar calibration to train and evaluate a gesture recognizer. 'Wood' achieved the best recognition rate (96.44%), followed by 'Polyvinyl chloride' and 'Glass'. We then perform a preference-performance analysis of the gestures by combining the agreement rate from the elicitation studies and the recognition rate from the evaluation.

    CROSSIDE: A Design Space for Characterizing Cross-Surface Collaboration by Sketching

    This paper introduces, motivates, defines, and exemplifies CROSSIDE, a design space for representing the capabilities of software for collaborative sketching in a cross-surface setting, i.e., when stakeholders are interacting with and across multiple interaction surfaces, ranging from low-end devices such as smartwatches and mobile phones to high-end devices such as wall displays. By determining the greatest common denominator in terms of system properties across forty-one references, the design space is structured according to seven dimensions, including user configurations, surface configurations, input interaction techniques, work methods, tangibility, and device configurations. This design space aims to satisfy three virtues: descriptive (i.e., the ability to systematically describe any particular work in cross-surface interaction by sketching), comparative (i.e., the ability to consistently compare two or more works belonging to this area), and generative (i.e., the ability to generate new ideas by identifying potentially interesting, under-covered areas). A radar diagram graphically depicts the design space along these three virtues to enable a visual representation of one or more instances.

    Exploring user-defined gestures for lingual and palatal interaction

    Individuals with motor disabilities can benefit from an alternative means of interacting with the world: using their tongue. The tongue possesses precise movement capabilities within the mouth, allowing individuals to designate targets on the palate. This form of interaction, known as lingual interaction, enables users to perform basic functions by using their tongue to indicate positions. The purpose of this work is to identify the lingual and palatal gestures proposed by end users. To achieve this goal, our initial step was to examine the relevant literature on the subject, including clinical studies on the motor capacity of the tongue, devices for detecting tongue movement, and current lingual interfaces (e.g., for controlling a wheelchair). Then, we conducted a Gesture Elicitation Study (GES) involving twenty-four (N = 24) participants, who proposed lingual and palatal gestures to perform nineteen (19) Internet of Things (IoT) referents, thus obtaining a corpus of 456 gestures. These gestures were clustered into similarity classes (80 unique gestures) and analyzed by dimension, nature, complexity, thinking time, and goodness of fit. Using the Agreement Rate methodology, we present a set of sixteen (16) gestures for a lingual and palatal interface, which serves as a basis for further comparison with gestures suggested by people with motor disabilities.

    Theoretically-Defined vs. User-Defined Squeeze Gestures

    This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects, focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimensional taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study yielding a set of N = 32 participants × 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitation studies by explaining why we started from a theoretical analysis (systematically exploring a design space of potential squeeze gestures) and ended with an empirical analysis (conducting a gesture elicitation study afterward): the intersection of the results from these two sources confirms or disconfirms consensus gestures. Based on these findings, we extract from the taxonomy a subset of recommended gestures that gives rise to design implications for gesture interaction with squeezable objects.
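    The confirm/disconfirm step can be pictured as a simple set intersection between the theoretically enumerated gesture classes and the empirically elicited ones; the sketch below (in Python) uses hypothetical class names, not the paper's taxonomy.

    # Minimal sketch: intersect theory-derived and empirically elicited
    # gesture classes to see which candidates are confirmed.
    theoretical = {"squeeze-center", "squeeze-corner", "twist", "fold",
                   "press-two-hands", "stretch"}
    elicited = {"squeeze-center", "twist", "fold", "shake"}

    confirmed = theoretical & elicited      # predicted and observed
    unused = theoretical - elicited         # predicted, never proposed
    unanticipated = elicited - theoretical  # proposed, outside the taxonomy

    print("confirmed:", sorted(confirmed))
    print("theoretical only:", sorted(unused))
    print("elicited only:", sorted(unanticipated))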

    Informing Future Gesture Elicitation Studies for Interactive Applications that Use Radar Sensing

    We show how two recently introduced visual tools, RepliGES and GEStory, can be used conjointly to inform possible replications of Gesture Elicitation Studies (GES), with a case study centered on gestures that can be sensed with radars. Starting from a GES identified in GEStory, we employ the dimensions of the RepliGES space to enumerate eight possible ways to replicate that study, towards gaining new insights into end users' preferences for gesture-based interaction in applications that use radar sensors.