28 research outputs found

    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    How can highly effective and intuitive gesture sets be determined for interactive systems tailored to end users’ preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, i.e., the functions to control in an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified through a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which makes it challenging for researchers and practitioners to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users’ gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
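    The agreement analysis mentioned above is typically quantified per referent. As one illustration only, here is a minimal sketch of the widely used agreement rate of Vatavu and Wobbrock, assuming each participant's proposal for a referent has already been reduced to a categorical gesture label; the review itself considers several consensus measures, and the labels in the example are invented:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent, following the formula commonly
    used in gesture elicitation studies (Vatavu & Wobbrock, 2015).
    `proposals` is a list of gesture labels, one per participant.
    Illustrative example only; not taken from the reviewed corpus."""
    n = len(proposals)
    if n < 2:
        return 1.0 if n == 1 else 0.0
    groups = Counter(proposals)  # identical proposals form the groups P_i
    squared_shares = sum((size / n) ** 2 for size in groups.values())
    return (n / (n - 1)) * squared_shares - 1 / (n - 1)

# Example: 6 participants propose gestures for the referent "volume up"
print(agreement_rate(["swipe-up", "swipe-up", "swipe-up", "circle", "circle", "tap"]))
```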

    Designing Efficient and Customizable Radar-based Gesture Interfaces

    No full text
    Radar sensors have many advantages over traditional vision-based sensors for gesture recognition. They work in poor lighting and weather conditions, raise fewer privacy concerns than vision-based sensors, and can be integrated into everyday objects. However, due to the size and complexity of radar data, most radar-based systems rely on deep-learning techniques for gesture recognition. These systems take time to train and make it challenging to support quick customization by the user, such as changing the gesture set of an application. In this thesis, we investigate whether we can create efficient and customizable radar-based gesture interfaces by reducing the size of raw radar data and relying on simple template-matching algorithms for gesture recognition. We have already implemented a pipeline that handles these steps and tested its performance on a dataset of 20 gestures performed by three participants in front of a cheap, off-the-shelf FMCW radar. The next steps include developing a software environment for testing recognition techniques on radar gestures, optimizing our pipeline for real-time gesture recognition, and investigating two new use cases: environments where the radar is obstructed by some materials (wood, glass, and PVC) and breathing pattern recognition.
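    As a rough illustration of the kind of pipeline described, here is a minimal template-matching sketch, assuming the raw radar frames have already been reduced to a one-dimensional feature sequence (for example, per-frame reflected energy); the feature choice, resampling length, and nearest-neighbour rule are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np

def resample(seq, n=64):
    """Linearly resample a 1-D feature sequence (e.g., per-frame radar energy;
    an illustrative feature, not necessarily the one used in the thesis)
    to a fixed length so sequences of different durations become comparable."""
    seq = np.asarray(seq, dtype=float)
    old = np.linspace(0.0, 1.0, len(seq))
    new = np.linspace(0.0, 1.0, n)
    return np.interp(new, old, seq)

def recognize(candidate, templates):
    """Nearest-neighbour template matching: return the label of the stored
    template closest (Euclidean distance) to the candidate sequence.
    `templates` maps labels to lists of example sequences."""
    cand = resample(candidate)
    best_label, best_dist = None, np.inf
    for label, examples in templates.items():
        for ex in examples:
            d = np.linalg.norm(cand - resample(ex))
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label, best_dist

# Usage: recognize(new_sequence, {"swipe": [seq1, seq2], "push": [seq3]})
```

    Adding a new gesture class then amounts to storing a few labeled example sequences, which is what makes quick user customization plausible with this family of recognizers.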

    Radar HGR Dataset

    No full text
    A small dataset of 16 hand gestures recorded with two different radars and a Leap Motion Controller.

    Mid-air Gesture Recognition by Ultra-Wide Band Radar Echoes

    No full text
    Microwave radar sensors offer several advantages over wearable and image-based sensors for human-computer interaction, such as privacy preservation, high reliability regardless of ambient and lighting conditions, and a larger field of view. However, the raw signals produced by such radars are high-dimensional and very complex to process and interpret for gesture recognition. For these reasons, machine learning techniques have mainly been used for gesture recognition, but they require a significant number of gesture templates for training and calibration that are specific to each radar. To address these challenges in the context of mid-air gesture interaction, we introduce a data processing pipeline for hand gesture recognition adopting a model-based approach that combines full-wave electromagnetic modeling and inversion. Thanks to this model, gesture recognition is reduced to handling two dimensions: the hand-radar distance and the relative dielectric permittivity, which depends on the hand only (e.g., size, surface, electric properties, orientation). We are developing a software environment that accommodates the main stages of our pipeline towards final gesture recognition. We have already tested it on a dataset of 16 gesture classes with 5 templates per class recorded with the Walabot, a lightweight, off-the-shelf array radar. We are now studying how well user-defined radar gestures resulting from gesture elicitation studies can be recognized by our gesture recognition engine.
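    Once a gesture is reduced to a two-dimensional trajectory of (hand-radar distance, apparent permittivity) samples over time, recognition can be handled with very small template sets. The sketch below compares such trajectories with dynamic time warping; DTW is an illustrative choice on my part, not necessarily the recognizer used in this work:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories, each an array
    of shape (T, 2) holding (distance, apparent permittivity) per radar frame."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(candidate, templates):
    """Return the label of the stored template trajectory with the smallest
    DTW distance; `templates` maps labels to lists of trajectories."""
    return min(((dtw_distance(candidate, t), label)
                for label, trajs in templates.items() for t in trajs))[1]
```

    In practice the distance and permittivity dimensions would first be normalized to comparable scales, since they differ by orders of magnitude.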

    An integrated development environment for gesture-based interactive applications: instantiation to radar gestures

    No full text
    Radar sensors combine a small form factor and low power consumption with the ability to sense gestures through opaque surfaces and under unfavorable lighting conditions, all while preserving user privacy. These unique advantages could make them a credible and useful alternative to other types of sensors, such as cameras or inertial sensors, for gesture recognition. However, research on radar-based gestural interfaces is in its infancy, and many challenges that hinder their seamless integration and widespread adoption remain to be solved. In particular, the lack of standardization translates into a plethora of custom radar sensors and techniques, preventing efficient collaboration between developers. In addition, radar signals can be noisy and complex to analyze, and do not transpose well from one radar to another. This thesis investigates and advances the state of radar-based gesture interaction in two stages. First, it explores the use of radar sensors for gesture recognition by reporting results from a targeted literature review and two systematic literature reviews, unveiling a large variety of radar sensors, gesture sets, and gesture recognition techniques, as well as the many challenges of real-time gesture recognition. Second, it provides tools and methods that facilitate the development of highly usable radar-based gesture interfaces by bridging the gap between researchers and practitioners. In particular, it introduces QuantumLeap, a modular framework for gesture recognition that acts as an intermediate layer between (radar) sensors and gesture-based applications and facilitates the performance and efficiency evaluation of gesture recognizers. Additionally, it proposes a user-centered development method for gesture-based applications and applies it to a multimedia application. Finally, it introduces and evaluates a new gesture recognition pipeline that implements advanced full-wave electromagnetic modeling and inversion to retrieve physical characteristics of gestures that are independent of the source, antennas, and radar-hand interactions. (FSA - Sciences de l'ingénieur) -- UCL, 202
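    The intermediate-layer idea can be pictured as a small set of interfaces that decouple sensors from recognizers and applications. The sketch below is a generic illustration of such a layer under assumed names (Sensor, Recognizer, GestureEvent); it does not reproduce QuantumLeap's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional, Protocol, Sequence

@dataclass
class GestureEvent:
    label: str         # recognized gesture class
    confidence: float  # recognizer score in [0, 1]

class Sensor(Protocol):
    def frames(self) -> Iterable[Sequence[float]]:
        """Yield feature frames (e.g., processed radar frames)."""
        ...

class Recognizer(Protocol):
    def feed(self, frame: Sequence[float]) -> Optional[GestureEvent]:
        """Consume one frame; return an event when a gesture is detected."""
        ...

def run(sensor: Sensor, recognizer: Recognizer,
        on_gesture: Callable[[GestureEvent], None]) -> None:
    """Pump frames from any sensor through any recognizer into the application,
    so sensors, recognizers, and applications can be swapped independently.
    Interface names are assumptions for illustration, not QuantumLeap's API."""
    for frame in sensor.frames():
        event = recognizer.feed(frame)
        if event is not None:
            on_gesture(event)
```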

    FORTE: Few Samples for Recognizing Hand Gestures on a Smartphone-attached Radar

    No full text
    A dataset of 20 hand, arm, and body gestures recorded with an off-the-shelf radar (Walabot Developer EU/CE).

    Hand Gesture Recognition for an Off-the-Shelf Radar by Electromagnetic Modeling and Inversion

    No full text
    Microwave radar sensors have several advantages over wearable and image-based sensors for human-computer interaction, such as privacy preservation, high reliability regardless of ambient and lighting conditions, and a larger field of view. However, the raw signals produced by such radars are high-dimensional and relatively complex to interpret. Advanced data processing, including machine learning techniques, is therefore necessary for gesture recognition. While these approaches can reach high gesture recognition accuracy, using artificial neural networks requires a significant number of gesture templates for training, and calibration is radar-specific. To address these challenges, we present a novel data processing pipeline for hand gesture recognition that combines advanced full-wave electromagnetic modelling and inversion with machine learning. In particular, the physical model accounts for the radar source, the radar antennas, the radar-target interactions, and the target itself, i.e., the hand in our case. To make this processing feasible, the hand is emulated by an equivalent infinite planar reflector, for which analytical Green’s functions exist. The apparent dielectric permittivity, which depends on the hand size, electric properties, and orientation, determines the wave reflection amplitude based on the distance from the hand to the radar. Through full-wave inversion of the radar data, the physical distance as well as this apparent permittivity are retrieved, thereby reducing the dimension of the radar dataset by several orders of magnitude while keeping the essential information. Finally, the estimated distance and apparent permittivity as a function of gesture time are used to train the machine learning algorithm for gesture recognition. This physically-based dimension reduction enables the use of simple gesture recognition algorithms, such as template-matching recognizers, that can be trained in real time and provide competitive accuracy with only a few samples. We evaluate the significant stages of our pipeline on a dataset of 16 gesture classes, with 5 templates per class, recorded with the Walabot, a lightweight, off-the-shelf array radar. We also compare these results with an ultra-wideband radar made of a single horn antenna and a lightweight vector network analyzer, and with a Leap Motion Controller.
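    The following sketch illustrates the spirit of the inversion step with a deliberately simplified far-field model: a planar reflector whose normal-incidence reflection coefficient is set by the apparent permittivity and whose phase is set by the two-way travel distance. It does not reproduce the paper's full-wave Green's function formulation; the frequency grid, the absence of antenna and gain terms, and the grid-search ranges are all assumptions made for the example:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def forward_echo(freqs, distance, eps_r):
    """Very simplified monostatic echo model: a planar half-space reflector at
    `distance` with apparent relative permittivity `eps_r`. The normal-incidence
    reflection coefficient sets the amplitude, the two-way travel sets the phase.
    This stands in for the paper's full-wave model, which it does not reproduce."""
    gamma = (1.0 - np.sqrt(eps_r)) / (1.0 + np.sqrt(eps_r))
    k = 2.0 * np.pi * freqs / C
    return gamma * np.exp(-1j * 2.0 * k * distance)

def invert(freqs, measured, dists, eps_grid):
    """Grid-search inversion: find the (distance, permittivity) pair whose
    modeled echo best matches the measured frequency-domain frame in the
    least-squares sense."""
    best, best_err = None, np.inf
    for d in dists:
        for eps in eps_grid:
            err = np.sum(np.abs(measured - forward_echo(freqs, d, eps)) ** 2)
            if err < best_err:
                best, best_err = (d, eps), err
    return best

# Synthetic self-test with a hypothetical frequency band: each radar frame is
# reduced to two numbers, so a gesture becomes a (distance, permittivity)
# sequence over time that template-matching recognizers can handle.
freqs = np.linspace(6e9, 8e9, 64)
measured = forward_echo(freqs, 0.35, 25.0)
print(invert(freqs, measured, np.linspace(0.1, 1.0, 91), np.linspace(2.0, 40.0, 77)))
```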

    Engineering Slidable Graphical User Interfaces with Slime

    No full text
    Intra-platform plasticity typically assumes that the display of a computing platform remains fixed and rigid during interaction, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
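    As an illustration of what a widget selection rule can look like, the sketch below maps one attribute of the domain model to a concrete widget depending on its type and on the current display configuration; the attribute types, configuration names, and widget names are invented for the example and are not Slime's actual rule set:

```python
def select_widget(attribute_type: str, configuration: str) -> str:
    """Toy widget selection rule: pick a concrete widget for one attribute of
    the domain model, given the current display configuration (e.g., whether
    a lateral display is extended). All names are illustrative only."""
    compact = configuration == "laptop-only"  # lateral displays retracted
    rules = {
        "boolean": "checkbox",
        "enum":    "dropdown" if compact else "radio-group",
        "date":    "date-field" if compact else "calendar",
        "text":    "text-field",
    }
    return rules.get(attribute_type, "text-field")

# e.g., select_widget("enum", "laptop+right-display") -> "radio-group"
```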

    SnappView, A Software Development Kit for Supporting End-user Mobile Interface Review

    No full text
    This paper presents SnappView, an open-source software development kit that facilitates end-user review of graphical user interfaces for mobile applications and streamlines their input into a continuous design life cycle. SnappView structures this user interface review process into four cumulative stages: (1) a developer creates a mobile application project with user interface code instrumented by only a few instructions governing SnappView and deploys the resulting application on an application store; (2) any tester, such as an end user, a designer, or a reviewer, while interacting with the instrumented user interface, shakes the mobile device to freeze and capture its screen and to provide insightful multimodal feedback, such as textual comments, critiques, suggestions, drawings by stroke gestures, or voice or video recordings, with a level of importance; (3) the screenshot is captured together with the application, browser, and status data and sent with the feedback to the SnappView server; and (4) a designer then reviews the collected and aggregated feedback data and passes them to the developer to address the raised usability problems. Another cycle then initiates an iterative design. This paper presents the motivations and the process for performing mobile application review based on SnappView. Based on this process, we deployed "WeTwo", a real-world mobile application for finding various personal activities, on the App Store over a one-month period with 420 active users. This application served for a user experience evaluation conducted with N1=14 developers to reveal the advantages and shortcomings of the toolkit from a development point of view. The same application was also used in a usability evaluation conducted with N2=22 participants to reveal the advantages and shortcomings from an end-user viewpoint.
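    Stage (3) boils down to shipping a structured feedback payload from the instrumented application to a server. The sketch below shows what such a payload could look like as plain JSON sent over HTTP; the field names and the endpoint URL are hypothetical, not SnappView's actual schema or API:

```python
import base64
import json
import urllib.request

def send_feedback(screenshot_png: bytes, comment: str, importance: int,
                  app_info: dict, server_url: str = "https://example.org/feedback"):
    """Bundle a captured screenshot, the tester's comment, an importance level,
    and application/status context into one JSON payload and POST it.
    Field names and URL are hypothetical placeholders."""
    payload = {
        "screenshot": base64.b64encode(screenshot_png).decode("ascii"),
        "comment": comment,        # could also carry drawings, audio, or video
        "importance": importance,  # e.g., 1 (minor) to 5 (blocking)
        "context": app_info,       # application, screen, and status data
    }
    request = urllib.request.Request(
        server_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```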