
    Understanding Public Evaluation: Quantifying Experimenter Intervention

    Public evaluations are popular because some research questions can only be answered by turning “to the wild.” Different approaches place experimenters in different roles during deployment, which has implications for the kinds of data that can be collected and the potential bias introduced by the experimenter. This paper expands our understanding of how experimenter roles impact public evaluations and provides an empirical basis for considering different evaluation approaches. We completed an evaluation of a playful gesture-controlled display – not to understand interaction at the display but to compare different evaluation approaches. The conditions placed the experimenter in one of three roles – steward observer, overt observer, and covert observer – to measure the effect of experimenter presence and analyse the strengths and weaknesses of each approach.

    Kinect-ed Piano

    We describe a gesturally-controlled improvisation system for an experimental pianist, developed over several laboratory sessions and used during a performance [1] at the 2011 Conference on New Interfaces for Musical Expression (NIME). We discuss the system's architecture, its performative advantages and limitations, and reflect on the lessons learned throughout its development. KEYWORDS: piano; improvisation; gesture recognition; machine learning
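
    The abstract does not include code; the sketch below is a minimal, hypothetical illustration of the final mapping stage such a gesturally-controlled system might use, assuming a Kinect-style tracker that reports a normalised hand height each frame. The function names, parameter values, and sample stream are illustrative, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' system): map a Kinect-style hand
# position stream to piano note events. All names and values are illustrative.

def hand_to_note(hand_y: float, low: int = 48, high: int = 84) -> int:
    """Map a normalised hand height (0.0 = lowest, 1.0 = highest)
    onto a MIDI pitch in [low, high]."""
    hand_y = min(max(hand_y, 0.0), 1.0)
    return low + round(hand_y * (high - low))

def velocity_from_speed(dy: float, dt: float, scale: float = 400.0) -> int:
    """Derive a MIDI velocity (1-127) from vertical hand speed."""
    v = abs(dy / dt) * scale if dt > 0 else 0.0
    return max(1, min(127, int(v)))

# Example: a short stream of (time, hand_y) samples from the tracker.
samples = [(0.00, 0.20), (0.05, 0.35), (0.10, 0.60), (0.15, 0.62)]
prev_t, prev_y = samples[0]
for t, y in samples[1:]:
    note = hand_to_note(y)
    vel = velocity_from_speed(y - prev_y, t - prev_t)
    print(f"t={t:.2f}s  note={note}  velocity={vel}")
    prev_t, prev_y = t, y
```

    A real system would add gesture recognition and a trained model between tracker and sound engine; the sketch only shows a simple mapping from motion to note and velocity.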

    Rethinking Eye-blink: Assessing Task Difficulty through Physiological Representation of Spontaneous Blinking

    Continuous assessment of task difficulty and mental workload is essential in improving the usability and accessibility of interactive systems. Eye tracking data has often been investigated to achieve this ability, with reports on the limited role of standard blink metrics. Here, we propose a new approach to the analysis of eye-blink responses for automated estimation of task difficulty. The core module is a time-frequency representation of eye blinks, which aims to capture the richness of information reflected in blinking. In our first study, we show that this method significantly improves the sensitivity to task difficulty. We then demonstrate how to form a framework in which the represented patterns are analyzed with multi-dimensional Long Short-Term Memory recurrent neural networks for their non-linear mapping onto difficulty-related parameters. This framework outperformed other methods that used hand-engineered features. The approach works with any built-in camera, without requiring specialized devices. We conclude by discussing how Rethinking Eye-blink can benefit real-world applications.
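
    As a rough illustration of the pipeline named above (not the paper's implementation), the sketch below builds a time-frequency representation of a blink signal with a short-time spectrogram and feeds it to a recurrent model. A standard LSTM stands in for the paper's multi-dimensional LSTM, and the frame rate, window sizes, model dimensions, and synthetic blink train are all assumptions.

```python
# Hedged sketch, not the paper's implementation: a time-frequency
# representation of a blink signal passed to an LSTM regressor.
# Frame rate, window sizes, model sizes, and data are illustrative.

import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

fs = 30                       # assumed camera frame rate (Hz)
t = np.arange(0, 60, 1 / fs)  # one minute of video
blinks = (np.random.rand(t.size) < 0.01).astype(float)  # synthetic blink train

# Time-frequency representation: short-time spectrogram of the blink signal.
freqs, times, Sxx = spectrogram(blinks, fs=fs, nperseg=64, noverlap=32)
features = torch.tensor(Sxx.T, dtype=torch.float32).unsqueeze(0)  # (1, time, freq)

class BlinkDifficultyLSTM(nn.Module):
    """LSTM that maps the spectrogram sequence to a scalar difficulty score."""
    def __init__(self, n_freq: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_freq, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        _, (h, _) = self.lstm(x)   # final hidden state summarises the sequence
        return self.head(h[-1])

model = BlinkDifficultyLSTM(n_freq=features.shape[-1])
print(model(features))  # untrained output, shape (1, 1)
```

    Trained on labelled recordings, the final hidden state would be regressed onto difficulty-related parameters; here the model is untrained and the sketch only demonstrates the data flow.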

    Designing constraints: composing and performing with digital musical systems

    This paper investigates two central terms in Human Computer Interaction (HCI) – affordances and constraints – and studies their relevance to the design and understanding of digital musical systems. It argues that in the analysis of complex systems, such as new interfaces for musical expression (NIME), constraints are a more productive analytical tool than the common HCI usage of affordances. Constraints are seen as limitations enabling the musician to encapsulate a specific search space of both physical and compositional gestures, proscribing complexity in favor of a relatively simple set of rules that engender creativity. By exploring the design of three different digital musical systems, the paper defines constraints as a core attribute of mapping, whether in instruments or compositional systems. The paper describes the aspiration for designing constraints as twofold: to save time, as musical performance is typically a real-time process, and to minimize the performer's cognitive load. Finally, it discusses skill and virtuosity in the realm of new interfaces for musical expression with regard to constraints.
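
    To make the idea of a designed constraint concrete, here is a small hypothetical sketch (not taken from the paper) in which a continuous controller value is mapped onto a deliberately limited pitch set: the constraint shrinks the performer's search space to ten allowed notes. The scale choice, root note, and function name are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): a constrained mapping that
# restricts a continuous controller value to a small musical search space,
# here a two-octave minor-pentatonic pitch set. Names are hypothetical.

PENTATONIC = [0, 3, 5, 7, 10]          # scale degrees (semitones above the root)

def constrained_pitch(control: float, root: int = 57, octaves: int = 2) -> int:
    """Map a controller value in [0, 1] onto one of ten allowed pitches
    instead of the full continuous range: the constraint does the curating."""
    allowed = [root + 12 * o + d for o in range(octaves) for d in PENTATONIC]
    control = min(max(control, 0.0), 1.0)
    index = min(int(control * len(allowed)), len(allowed) - 1)
    return allowed[index]

print([constrained_pitch(c / 10) for c in range(11)])
```

    Restricting the mapping in this way reflects the paper's framing of constraints as enabling rather than merely limiting: the performer explores a small, musically coherent space instead of the full continuum.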