
    Usability of vision-based interfaces

    Vision-based interfaces can employ gestures to interact with an interactive system without touching it. Gestures are frequently modelled in laboratories, and usability testing should be carried out. However, these interfaces often present usability issues, and the great diversity of their uses and of the applications in which they appear makes it difficult to decide which factors to take into account in a usability test. In this paper, we review the literature to compile and analyze the usability factors and metrics used for vision-based interfaces.

    Gestures in Machine Interaction

    Unencumbered gesture interaction (VGI) describes the use of unrestricted gestures in machine interaction. The development of such technology will enable users to interact with machines and virtual environments by performing actions like grasping, pinching or waving without the need for peripherals. Advances in image processing and pattern recognition make such interaction viable and, in some applications, more practical than current keyboard, mouse and touch-screen interaction. VGI is emerging as a popular topic amongst Human-Computer Interaction (HCI), computer-vision and gesture research, and is developing into a topic with the potential to significantly impact the future of computer interaction, robot control and gaming. This thesis investigates whether an ergonomic model of VGI can be developed and implemented on consumer devices by considering some of the barriers currently preventing such a model of VGI from being widely adopted. This research aims to address the development of freehand gesture interfaces and their accompanying syntax. Without detailed consideration of the evolution of this field, the development of un-ergonomic, inefficient interfaces capable of placing undue strain on their users becomes more likely. In the course of this thesis some novel design and methodological assertions are made. The Gesture in Machine Interaction (GiMI) syntax model and the Gesture-Face Layer (GFL), developed in the course of this research, have been designed to facilitate ergonomic gesture interaction. GiMI is an interface syntax model designed to enable cursor control, browser navigation commands and steering control for remote robots or vehicles. Through applying state-of-the-art image processing that facilitates three-dimensional (3D) recognition of human action, this research investigates how interface syntax can incorporate the broadest range of human actions. By advancing our understanding of ergonomic gesture syntax, this research aims to help future developers evaluate the efficiency of gesture interfaces, lexicons and syntax.
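
    As a concrete illustration of what a gesture syntax can look like in code, the sketch below parses a short sequence of recognised gesture tokens into one of the kinds of interface command the thesis mentions (cursor actions, browser navigation, steering). The token names and grammar are invented for the example and are not the GiMI model itself.

```python
# Hypothetical gesture-syntax parser: a tuple of recognised gesture
# tokens is looked up in a small command grammar. Token and command
# names are illustrative only.
COMMANDS = {
    ("attend", "point", "push"): "click",
    ("attend", "swipe_left"): "browser_back",
    ("attend", "swipe_right"): "browser_forward",
    ("attend", "tilt_left"): "steer_left",
}

def parse(tokens: tuple[str, ...]) -> str | None:
    """Return the command matching a gesture token sequence, if any."""
    return COMMANDS.get(tokens)

print(parse(("attend", "swipe_left")))  # -> 'browser_back'
```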

    Tracking Down the Intuitiveness of Gesture Interaction in the Truck Domain

    Touchless hand gesture control potentially leads to safer, more comfortable and more intuitive Human Vehicle Interaction (HVI) if relevant ergonomic requirements are met. To achieve intuitive interaction, and thus to favor user acceptance, the gesture interface should conform to user expectations and enable users to apply their prior knowledge. This particularly concerns the gestures used for input. The conducted experiment investigates which gestures subjects tend to use for various functions of a truck and how these gestures are affected by the subjects’ prior knowledge. In total, 17 potential functions were considered for this purpose. Within the experiment, 74 subjects performed gestures for each of these functions while being recorded on video. The video data shows a variety of gestures differing in hand pose, execution space, and palm orientation. Nevertheless, several interindividual similarities in gesturing can be observed, which made it possible to analyze the gestures in terms of the prior knowledge applied. The results show that gestures differ according to the sources of prior knowledge, such as culture and instincts. Depending on the function, the gestures observed within the experiment are based on gestures of quasi-direct manipulation, emblematic gestures, instinctive gestures, standardized gestures and gestures expressing the users’ mental model. However, the applicability of these gestures is limited by the capabilities of gesture recognition and depends on how the user interface is designed.

    Assessing the effectiveness of direct gesture interaction for a safety critical maritime application

    Multi-touch interaction, in particular multi-touch gesture interaction, is widely believed to give a more natural interaction style. We investigated the utility of multi-touch interaction in the safety-critical domain of maritime dynamic positioning (DP) vessels. We conducted initial paper prototyping with domain experts to gain insight into natural gestures; we then conducted observational studies aboard a DP vessel during operational duties and two rounds of formal evaluation of prototypes, the second on a motion-platform ship simulator. Despite following a careful user-centred design process, the final results show that traditional touch-screen button and menu interaction was quicker and less error-prone than gestures. Furthermore, the moving environment accentuated this difference, and we observed initial-use problems and handedness asymmetries for some multi-touch gestures. On the positive side, our results showed that users were able to suspend gestural interaction more naturally, thus improving situational awareness.

    A Systematic Review of User Mental Models on Applications Sustainability

    In Human-Computer Interaction (HCI), a user’s mental model affects application sustainability. This study’s goal is to find and assess previous work on user mental models and how they relate to the sustainability of applications. A systematic review process was used to identify 641 initial articles, which were then screened based on inclusion and exclusion criteria. The review shows that a user’s mental model has an impact on the creation of applications not only within HCI, but also in other domains such as Enterprise Innovation Ecology, Explainable Artificial Intelligence (XAI), Information Systems (IS), and various others. The examined articles discussed company managers’ difficulties in prioritising innovation and ecology, and the necessity of understanding users’ mental models to build and evaluate intelligent systems. The reviewed articles mostly used experiments, questionnaires, observations, and interviews, applying qualitative, quantitative, or mixed-method methodologies. This study highlights the importance of user mental models in application sustainability: by understanding user mental models, developers can create applications that suit user demands, fit with cognitive-psychology principles, and improve human-AI collaboration. The study also emphasises the importance of user mental models for the long-term viability and sustainability of applications, and provides significant insights for application developers and researchers in building more user-centric and sustainable applications.

    Multi-Level Representation of Gesture as Command for Human Computer Interaction

    The paper addresses the multiple forms of representation that human gesture takes at different levels of human-computer interaction, ranging from gesture acquisition to the mathematical model for analysis, the pattern for recognition and the database record, up to end-level application event triggers. A mathematical model for gesture as command is presented. We also identify and provide particular models for four different types of gestures by considering both posture information and the underlying motion trajectories. The problem of constructing gesture dictionaries is further addressed by taking into account similarity measures and dictionary discriminative features.
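
    As a concrete illustration of the kind of multi-level representation the paper describes, the sketch below pairs a posture label with a sampled motion trajectory and uses a simple mean point-to-point distance for dictionary matching. The class and function names, and the distance measure itself, are illustrative assumptions rather than the paper's actual model.

```python
# Illustrative gesture-as-command record: a posture label plus a sampled
# 2D motion trajectory, matched against a dictionary by a simple
# trajectory distance. Not the paper's actual mathematical model.
from dataclasses import dataclass
import math

@dataclass
class Gesture:
    posture: str                            # e.g. "open_palm", "pinch"
    trajectory: list[tuple[float, float]]   # sampled (x, y) motion path

def trajectory_distance(a, b):
    """Mean point-to-point distance after crude resampling to equal length."""
    n = min(len(a), len(b))
    total = 0.0
    for i in range(n):
        xa, ya = a[int(i * len(a) / n)]
        xb, yb = b[int(i * len(b) / n)]
        total += math.hypot(xa - xb, ya - yb)
    return total / n

def match(query: Gesture, dictionary: list[Gesture]) -> Gesture | None:
    """Return the closest dictionary entry sharing the query's posture."""
    candidates = [g for g in dictionary if g.posture == query.posture]
    if not candidates:
        return None
    return min(candidates,
               key=lambda g: trajectory_distance(query.trajectory, g.trajectory))
```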

    The Gestural Control of Audio Processing

    Gesture-enabled devices have become so ubiquitous in recent years that commands such as ‘pinch to zoom-in on an image’ are part of most people’s gestural vocabulary. Despite this, gestural interfaces have been used sparingly within the audio industry. The aim of this research project is to evaluate the effectiveness of a gestural interface for the control of audio processing, in particular the ability of a gestural system to streamline workflow and rationalise the number of control parameters, thus reducing the complexity of Human-Computer Interaction (HCI). A literature review of gestural technology explores the ways in which it can improve HCI, before focussing on areas of implementation in audio systems. Case studies of previous research projects were conducted to evaluate the benefits and pitfalls of gestural control over audio. The findings from these studies indicated that the scope of this project should be limited to two-dimensional gestural control. An elicitation of gestural preferences was performed to identify expert users’ gestural associations. These data were used to compile a taxonomy of gestures and their most widely intuitive parameter mappings. A novel interface was then produced using a popular tablet computer, facilitating the control of equalisation, compression and gating. Objective testing determined the performance of the gestural interface in comparison to traditional WIMP (Windows, Icons, Menus, Pointer) techniques, producing a benchmark for the system under test. Further testing was carried out to observe the effects of graphical user interfaces (GUIs) in a gestural system, in particular the suitability of skeuomorphic (knobs and faders) designs in modern DAWs (Digital Audio Workstations). A novel visualisation method, deemed more suitable for gestural interaction, is proposed and tested. Semantic descriptors are explored as a means of further improving the speed and usability of gestural interfaces through the simultaneous control of multiple parameters. This rationalisation of control moves towards the implementation of gestural shortcuts and ‘continuous pre-sets’.
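
    As a minimal sketch of two-dimensional gestural control of an audio parameter, the function below maps normalised touch coordinates to an equaliser band's centre frequency and gain. The mapping ranges and the function name are assumptions for illustration, not the interface built in the project.

```python
# Hypothetical 2D gesture-to-EQ mapping: x position sweeps the centre
# frequency logarithmically, y position sets the gain. Ranges are
# assumptions for illustration.
def xy_to_eq(x: float, y: float,
             f_min: float = 20.0, f_max: float = 20_000.0,
             gain_range_db: float = 12.0) -> tuple[float, float]:
    """Map normalised touch coordinates (0..1) to (frequency_hz, gain_db)."""
    freq = f_min * (f_max / f_min) ** x      # logarithmic frequency sweep
    gain = (y - 0.5) * 2.0 * gain_range_db   # screen centre = 0 dB
    return freq, gain

# A drag towards the upper right boosts a high band:
print(xy_to_eq(0.8, 0.9))  # -> (~5024 Hz, +9.6 dB)
```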

    Using a Bayesian Framework to Develop 3D Gestural Input Systems Based on Expertise and Exposure in Anesthesia

    Interactions with a keyboard and mouse fall short of human capabilities, and what is lacking in the technological revolution is a surge of new and natural ways of interacting with computers. In-air gestures are a promising input modality as they are expressive, easy to use, quick to use, and natural for users. It is known that gestural systems should be developed within a particular context, as gesture choice is dependent on that context; however, there is little research investigating other individual factors which may influence gesture choice, such as expertise and exposure. Anesthesia providers’ hands have been linked to bacterial transmission; therefore, this research investigates gestural technology in the context of anesthetic tasks. The objective of this research is to understand how expertise and exposure influence gestural behavior and to develop Bayesian statistical models that can accurately predict which intuitive gestures users would choose in anesthesia based on expertise and exposure. Expertise and exposure may influence gesture responses for individuals; however, there is little to no work investigating how these factors influence intuitive gesture choice and how to use this information to predict intuitive gestures for system design. If researchers can capture users’ gesture variability within a particular context based on expertise and exposure, then statistical models can be developed to predict how users may gesturally respond to a computer system, and those predictions can be used to design a gestural system which anticipates a user’s response and thus affords intuitiveness to multiple user groups. This allows designers to understand the end user more completely and to implement intuitive gesture systems that are based on expected natural responses. Ultimately, this dissertation investigates the human-factors challenges associated with gestural system development within a specific context and offers statistical approaches to understanding and predicting human behavior in a gestural system. Two experimental studies and two Bayesian analyses were completed in this dissertation. The first experimental study investigated the effect of expertise within the context of anesthesiology. The main finding of this study was that domain expertise is influential when developing 3D gestural systems, as novices and experts differ in their intuitive gesture-function mappings as well as in the reaction times needed to generate an intuitive mapping. The second study investigated the effect of exposure for controlling a computer-based presentation and found a learning effect of gestural control: participants were significantly faster at generating intuitive mappings as they gained exposure to the system. The two Bayesian analyses took the form of Bayesian multinomial logistic regression models in which intuitive gesture choice was predicted from the contextual task and either expertise or exposure. The Bayesian analyses generated posterior predictive probabilities for all combinations of task, expertise level, and exposure level and showed that gesture choice can be predicted to some degree. This work provides further insights into how 3D gestural input systems should be designed and how Bayesian statistics can be used to model human behavior.
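
    The dissertation's models are Bayesian multinomial logistic regressions; as a simpler, self-contained stand-in, the sketch below uses a conjugate Dirichlet-multinomial model for a single (task, expertise) cell, which yields posterior predictive gesture probabilities in closed form. The gesture labels and counts are invented for illustration.

```python
# Dirichlet-multinomial stand-in for the dissertation's Bayesian models:
# with a symmetric Dirichlet(alpha) prior over K gesture options, the
# posterior predictive probability of gesture g after observing counts
# is (n_g + alpha) / (N + K * alpha). Labels and counts are invented.
from collections import Counter

def posterior_predictive(counts: dict[str, int], alpha: float = 1.0) -> dict[str, float]:
    """Closed-form posterior predictive gesture probabilities."""
    k = len(counts)
    n = sum(counts.values())
    return {g: (c + alpha) / (n + k * alpha) for g, c in counts.items()}

# Elicited gesture choices for one task from a hypothetical expert group:
observed = Counter({"swipe_left": 14, "point": 5, "grab": 3})
print(posterior_predictive(observed))
# -> {'swipe_left': 0.6, 'point': 0.24, 'grab': 0.16}
```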

    End-user action-sound mapping design for mid-air music performance

    How to design the relationship between a performer’s actions and an instrument’s sound response has been a consistent theme in Digital Musical Instrument (DMI) research. Previously, mapping was seen purely as an activity for DMI creators, but more recent work has exposed mapping design to DMI musicians, with many in the field introducing software to facilitate end-user mapping, democratising this aspect of the DMI design process. This end-user mapping process provides musicians with a novel avenue for creative expression, and offers a unique opportunity to examine how practising musicians approach mapping design.

    Most DMIs suffer from a lack of practitioners beyond their initial designer, and there are few that are used by professional musicians over extended periods. The Mi.Mu Gloves are one of the few examples of a DMI that is used by a dedicated group of practising musicians, many of whom use the instrument in their professional practice, with a significant aspect of creative practice with the gloves being end-user mapping design. The research presented in this dissertation investigates end-user mapping practice with the Mi.Mu Gloves and what influences glove musicians’ design decisions based on the context of their music performance practice, examining the question: how do end-users of a glove-based mid-air DMI design action-sound mapping strategies for musical performance?

    In the first study, the mapping practice of existing members of the Mi.Mu Glove community is examined. Glove musicians performed a mapping design task, which revealed marked differences in the mapping designs of expert and novice glove musicians: novices designed mappings that evoked conceptual metaphors of spatial relationships between movement and music, while more experienced musicians focused on designing ergonomic mappings that minimised performer error.

    The second study examined the initial development period of glove mapping practice. A group of novice glove musicians were tracked in a longitudinal study. The findings supported the previous observation that novices design mappings using established conceptual metaphors, and revealed that transparency and the audience’s ability to perceive their mappings were important to novice glove musicians. However, creative mapping was hindered by system reliability and the novices’ poorly trained posture recognition.

    The third study examined the mapping practice of expert glove musicians, who took part in a series of interviews. Findings from this study supported earlier observations that expert glove musicians focus on error minimisation and ergonomic, simple controls, but also revealed that the expert musicians embellished these simple controls with performative ancillary gestures to communicate aesthetic meaning. The expert musicians also suffered from system reliability issues and had developed a series of gestural techniques to mitigate accidental triggering.

    The fourth study examined the effects of system-related error in depth. A laboratory study was used to investigate how system-related errors impacted a musician’s ability to acquire skill with the gloves, finding that a 5% rate of system error had a significant effect on skill acquisition.

    Learning from these findings, a series of design heuristics are presented, applicable for use in the fields of DMI design, mid-air interaction design and end-user mapping design.
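
    As a minimal sketch of the kind of end-user action-sound mapping studied here, the snippet below maps recognised postures to sound events and uses a confidence threshold to suppress the accidental triggers the expert musicians reported. The posture names, mapping and threshold are illustrative assumptions; this is not the Mi.Mu Gloves software.

```python
# Hypothetical end-user action-sound mapping: recognised postures fire
# sound events, and a confidence threshold suppresses likely
# misrecognitions (accidental triggers). Not the Mi.Mu Gloves API.
MAPPING = {
    "fist":      ("note_on", 60),   # closed fist plays middle C
    "open_hand": ("note_off", 60),  # open hand releases it
    "point_up":  ("cc", 1),         # pointing up moves a modulation control
}

def handle_posture(posture: str, confidence: float, threshold: float = 0.8):
    """Fire the mapped sound event only for confident recognitions."""
    if confidence < threshold:
        return None                 # below threshold: ignore as likely error
    return MAPPING.get(posture)

print(handle_posture("fist", 0.93))  # -> ('note_on', 60)
print(handle_posture("fist", 0.55))  # -> None (suppressed)
```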