
    The feet in human-computer interaction: a survey of foot-based interaction

    Foot-operated computer interfaces have been studied since the inception of human-computer interaction. Thanks to the miniaturisation and decreasing cost of sensing technology, there is increasing interest in exploring this alternative input modality, but no comprehensive overview of its research landscape exists. In this survey, we review the literature on interfaces operated by the lower limbs. We investigate the characteristics of users and how they affect the design of such interfaces. Next, we describe and analyse foot-based research prototypes and commercial systems in terms of how they capture input and provide feedback. We then analyse the interactions between users and systems from the perspective of the actions performed in those interactions. Finally, we discuss our findings and use them to identify open questions and directions for future research.

    TumTá and Pisada: Digital Dance and Music Instruments Inspired by Popular Brazilian Traditions

    This paper presents the development process of TumTá, a wearable Digital Dance and Music Instrument that triggers sound samples from foot stomps, and Pisada, a dance-enabled MIDI pedalboard. They were developed between 2012 and 2017 for use by Helder Vasconcelos, a dancer and musician formed by the traditions of Cavalo Marinho and Maracatu Rural from Pernambuco, Brazil. The design was inspired by traditional instruments like the Zabumba and by the gestural vocabulary of Cavalo Marinho, so as to make music and dance at the same time. The development process is described across three prototyping phases, each driven by a different approach: building blocks, artisanal, and digital fabrication. We analyze the process of designing digital technology inspired by Brazilian traditions, present the lessons learned, and discuss future work.
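The paper does not include code; as a hedged illustration of the core idea (triggering a sound sample when a foot stomp is detected), the sketch below assumes a wearable accelerometer stream and MIDI output. The threshold, debounce window, and sensor-reading function are assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' implementation): detect foot stomps in an
# accelerometer stream and trigger a MIDI note, as TumTá triggers samples.
import time
import mido  # MIDI I/O library

STOMP_THRESHOLD_G = 2.5   # acceleration spike treated as a stomp (assumed)
DEBOUNCE_S = 0.15         # ignore re-triggers within this window (assumed)

def read_accel_magnitude():
    """Placeholder for the wearable's accelerometer driver (hypothetical)."""
    raise NotImplementedError

def run(port_name="TumTa-Sketch"):
    out = mido.open_output(port_name, virtual=True)  # virtual MIDI port
    last_trigger = 0.0
    while True:
        g = read_accel_magnitude()
        now = time.monotonic()
        if g > STOMP_THRESHOLD_G and now - last_trigger > DEBOUNCE_S:
            # Map stomp strength to velocity; note 36 is the GM kick drum.
            velocity = min(127, int(g * 20))
            out.send(mido.Message('note_on', note=36, velocity=velocity))
            last_trigger = now
```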

    From head to toe: body movement for human-computer interaction

Our bodies are the medium through which we experience the world around us, so human-computer interaction can benefit greatly from the richness of body movements and postures as an input modality. In recent years, the widespread availability of inertial measurement units and depth sensors has led to a plethora of applications for the body in human-computer interaction. However, these works have focused mainly on using the upper body for explicit input. This thesis investigates the research space of full-body human-computer interaction through three propositions.

The first proposition is that there is more to be inferred from users' natural movements and postures, such as the quality of activities and psychological states. We develop this proposition in two domains. First, we explore how to support users in performing weight-lifting activities. We propose a system that classifies different ways of performing the same activity; an object-oriented, model-based framework for formally specifying activities; and a system that automatically extracts an activity model by demonstration. Second, we explore how to automatically capture nonverbal cues for affective computing. We developed a system that annotates motion and gaze data according to the Body Action and Posture coding system. We show that quality analysis can add another layer of information to activity recognition, and that systems that support the communication of quality information should strive to support how we implicitly communicate movement through nonverbal communication. Further, we argue that by working at a higher level of abstraction, affect recognition systems can more directly translate findings from other areas into their algorithms, and can also contribute new knowledge to those fields.

The second proposition is that the lower limbs can provide an effective means of interacting with computers beyond assistive technology. To address the problem of the dispersed literature on the topic, we conducted a comprehensive survey of the lower body in HCI through the lenses of users, systems, and interactions. To address the lack of a fundamental understanding of foot-based interactions, we conducted a series of studies that quantitatively characterise several aspects of foot-based interaction, including Fitts's Law performance models, the effects of movement direction, foot dominance and visual feedback, and the overhead incurred by using the feet together with the hands. To enable these studies, we developed a foot tracker based on a Kinect mounted under the desk. We show that the lower body can serve as a valuable complementary modality for computer input.

The third proposition is that by treating body movements as multiple modalities, rather than a single one, we can enable novel user experiences. We develop this proposition in the domain of 3D user interfaces, as it requires input with multiple degrees of freedom and offers a rich set of complex tasks. We propose an approach for tracking the whole body up close by splitting the sensing of different body parts across multiple sensors. Our setup allows tracking gaze, head, mid-air gestures, multi-touch gestures, and foot movements. We investigate specific applications of multimodal combinations in the domain of 3DUI: how gaze and mid-air gestures can be combined to improve selection and manipulation tasks; how the feet can support the canonical 3DUI tasks; and how a multimodal sensing platform can inspire new 3D game mechanics. We show that combining multiple modalities can enhance task performance; that offloading certain tasks to alternative modalities not only frees the hands but also allows simultaneous control of multiple degrees of freedom; and that by sensing different modalities separately, we achieve more detailed and precise full-body tracking.
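The thesis reports Fitts's Law performance models for foot pointing. As a hedged illustration of what such a model looks like (the data below are invented, not the thesis's measurements), this sketch fits the Shannon formulation MT = a + b·log2(D/W + 1) to pointing trials by least squares.

```python
# Sketch: fitting a Fitts's Law model (Shannon formulation) to pointing data.
# The trial measurements below are invented placeholders.
import numpy as np

# (distance D, target width W, movement time MT in seconds) - hypothetical
trials = [(200, 20, 0.62), (400, 20, 0.81), (400, 40, 0.70), (800, 40, 0.93)]

D = np.array([t[0] for t in trials], dtype=float)
W = np.array([t[1] for t in trials], dtype=float)
MT = np.array([t[2] for t in trials], dtype=float)

ID = np.log2(D / W + 1.0)            # index of difficulty, in bits
b, a = np.polyfit(ID, MT, 1)         # least-squares line MT = a + b * ID
throughput = ID.mean() / MT.mean()   # crude throughput estimate (bits/s)
print(f"MT = {a:.3f} + {b:.3f} * ID, throughput ~ {throughput:.2f} bits/s")
```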

    Interactive Tango Milonga: An Interactive Dance System for Argentine Tango Social Dance

    When dancers are granted agency over music, as in interactive dance systems, the actors are most often concerned with the problem of creating a staged performance for an audience. However, the practice of Argentine tango social dance is most concerned with participants' internal experience and their relationship to the broader tango community. In this dissertation I explore creative approaches to enriching the sense of connection, that is, the experience of oneness with a partner and complete immersion in music and dance, for Argentine tango dancers by providing agency over musical activities through the use of interactive technology. Specifically, I create an interactive dance system that allows tango dancers to affect and create music via their movements in the context of social dance. The motivations for this work are threefold: 1) to intensify the embodied experience of the interplay between dance and music, individual and partner, couple and community; 2) to create a shared experience of the conventions of tango dance; and 3) to innovate Argentine tango social dance practice for the purposes of education and increasing musicality in dancers.
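The dissertation describes dancers affecting music through their movements. As a loose, hypothetical illustration of one such mapping (not the author's design), the sketch below scales a movement-energy estimate to a MIDI control-change value; the energy range and controller number are assumptions.

```python
# Hypothetical sketch: map dancers' movement energy to a MIDI controller,
# loosely illustrating movement-driven musical control. Not the author's system.
import mido

def energy_to_cc(energy, lo=0.0, hi=4.0):
    """Scale a movement-energy estimate (assumed units/range) to MIDI CC 0-127."""
    clipped = max(lo, min(hi, energy))
    return int(127 * (clipped - lo) / (hi - lo))

out = mido.open_output("Tango-Sketch", virtual=True)
for energy in (0.5, 1.8, 3.2):   # stand-in values for per-beat motion energy
    out.send(mido.Message('control_change', control=1, value=energy_to_cc(energy)))
```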

    The Year-Long Adventures of the Blue Shoes & Their Friends

    While participating in a Teacher Workshop organized by Georgina Valverde at the Art Institute of Chicago in 2013, Michael Hill began a one-year artistic and pedagogical odyssey, making original images (always featuring some aspect of one or more athletic shoes) and posting them daily to a visual blog he created to help kick-start writing projects among the many student athletes he tutored at the University of Nebraska-Lincoln. He started the year self-identifying as "scholar/teacher," but at year's end Michael looked in the mirror and said, OK, still "scholar/teacher," but also "artist." Here are the workshop organizer's foreword, the scholar's introduction, the teacher's formal lesson plan, 52 plates from the artist's blog, and a proxy example of student work. MICHAEL R. HILL earned two doctorates at the University of Nebraska-Lincoln and was for ten years a tutor in the UNL Department of Athletics. His specialties include archival research, human spatial behavior, visual sociology, and the theories, methods, and histories of the social sciences. Hill is a writer/researcher/artist at D&H Sociologists in St. Joseph, Michigan, and a docent in the Krasl Art Center's K-12 Understanding Art Program. GEORGINA VALVERDE is an established Chicago artist and Assistant Director of Teacher Programs at the Art Institute of Chicago.

    As light as your footsteps: altering walking sounds to change perceived body weight, emotional state and gait

    An ever more sedentary lifestyle is a serious problem in our society. Enhancing people's exercise adherence through technology remains an important research challenge. We propose a novel approach to a system supporting walking that draws from basic findings in neuroscience research. Our shoe-based prototype senses a person's footsteps and alters, in real time, the frequency spectrum of the sound they produce while walking. The resulting sounds are consistent with those produced by either a lighter or a heavier body. Our user study showed that modified walking sounds change one's perceived body weight and lead to a related gait pattern. In particular, augmenting the high frequencies of the sound leads to the perception of having a thinner body and enhances the motivation for physical activity, inducing a more dynamic swing and a shorter heel strike. Here we discuss the opportunities and the questions our findings open.
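The paper's prototype alters the frequency spectrum of footstep sounds, for example augmenting the high frequencies. As a hedged sketch of that kind of processing (not the authors' pipeline; the cutoff and gain are assumptions), the code below boosts a sound's high band by adding back a scaled high-passed copy of the signal.

```python
# Sketch: boost the high frequencies of a footstep sound, in the spirit of the
# paper's spectral alteration. Cutoff and gain below are assumed, not reported.
import numpy as np
from scipy.signal import butter, sosfilt

def boost_highs(signal, sample_rate, cutoff_hz=2000.0, gain=2.0):
    """Add a scaled high-passed copy of the signal back to itself."""
    sos = butter(4, cutoff_hz, btype='highpass', fs=sample_rate, output='sos')
    highs = sosfilt(sos, signal)
    boosted = signal + gain * highs
    return boosted / np.max(np.abs(boosted))   # normalise to avoid clipping

# Usage with a synthetic 'footstep': a short windowed noise burst at 44.1 kHz.
fs = 44100
n = int(0.2 * fs)
step = np.random.randn(n) * np.hanning(n)
lighter_sounding = boost_highs(step, fs)
```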

    As Light as You Aspire to Be: Changing body perception with sound to support physical activity

    Supporting exercise adherence through technology remains an important HCI challenge. Recent work showed that altering walking sounds leads people to perceive themselves as thinner/lighter and happier, and to walk more dynamically. While this novel approach shows potential for supporting physical activity, it raises critical questions that affect technology design. We ran two studies in the context of exertion (gym step, stair climbing) to investigate how individual factors shape the effect of sound and how long the after-effects last. The results confirm that the effects of sound on body perception occur even in physically demanding situations and through ubiquitous wearable devices. We also show that the effect of sound interacted with participants' body weight and masculinity/femininity aspirations, but not with gender. Additionally, changes in body perception did not hold once the feedback stopped; however, changes in body feelings and behaviour appeared to persist for longer. We discuss the results in terms of the malleability of body perception and highlight opportunities for supporting exercise adherence.

    Tools for expressive gesture recognition and mapping in rehearsal and performance

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 97-101).
    As human movement is an incredibly rich mode of communication and expression, performance artists working with digital media often use performers' movement and gestures to control and shape that digital media as part of a theatrical, choreographic, or musical performance. In my own work, I have found that strong, semantically meaningful mappings between gesture and sound or visuals are necessary to create compelling performance interactions. However, existing systems for developing mappings between incoming data streams and output media have extremely low-level concepts of "gesture." The actual programming process focuses on low-level sensor data, such as the voltage values of a particular sensor, which limits the user's thinking process, requires significant programming experience, and loses the expressive, meaningful, and metaphor-rich content of the movement. To remedy these difficulties, I have created a new framework and development environment for gestural control of media in rehearsal and performance, allowing users to create clear and intuitive mappings in a simple and flexible manner by using high-level descriptions of gestures and of gestural qualities. This approach, the Gestural Media Framework, recognizes continuous gesture and translates Laban Effort Notation into the realm of technological gesture analysis, allowing for the abstraction and encapsulation of sensor data into movement descriptions. As part of the evaluation of this system, I choreographed four performance pieces that used the system throughout the performance and rehearsal process to map dancers' movements to the manipulation of sound and visual elements. This work was supported by the MIT Media Laboratory.
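The Gestural Media Framework abstracts sensor data into movement descriptions via Laban Effort qualities. As a rough, hypothetical illustration of one such quality (not Jessop's implementation; the thresholds and scoring are invented), the sketch below classifies a window of accelerometer magnitudes as "light" or "strong" Weight effort.

```python
# Hypothetical sketch of one Laban Effort quality: classify a movement window
# as "light" or "strong" Weight effort from acceleration magnitudes (in g).
# Thresholds are invented; the actual framework is far richer than this.
import numpy as np

def weight_effort(accel_magnitudes, strong_threshold_g=1.8):
    """Crude Weight-effort score: robust peak acceleration above ~1 g rest."""
    peak = np.percentile(accel_magnitudes, 95)   # robust peak estimate
    label = "strong" if peak > strong_threshold_g else "light"
    return label, peak

# Usage with simulated data: a gentle gesture vs. a forceful one.
gentle = 1.0 + 0.2 * np.abs(np.random.randn(200))
forceful = 1.0 + 1.5 * np.abs(np.random.randn(200))
print(weight_effort(gentle), weight_effort(forceful))
```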

    Embodied interaction with visualization and spatial navigation in time-sensitive scenarios

    Paraphrasing the theory of embodied cognition, all aspects of our cognition are determined primarily by contextual information and the means of physical interaction with data and information. In hybrid human-machine systems involving complex decision-making, continuously maintaining a high level of attention while employing a deep understanding of the task performed, as well as its context, is essential. According to the theory of embodied cognition proposed by Lakoff, utilizing embodied interaction to interact with machines has the potential to promote thinking and learning. Additionally, a hybrid human-machine system utilizing natural and intuitive communication channels (e.g., gestures, speech, and body stances) should afford an array of cognitive benefits outstripping more static forms of interaction (e.g., a computer keyboard). This research proposes such a computational framework based on a Bayesian approach; the framework infers the operator's focus of attention from the operator's physical expressions. Specifically, this work aims to assess the effect of embodied interaction on attention during the solution of complex, time-sensitive spatial navigation problems. Toward the goal of assessing the operator's level of attention, we present a method linking the operator's interaction utility, inference, and reasoning. The level of attention was inferred through networks coined Bayesian Attentional Networks (BANs). BANs are structures describing cause-effect relationships between the operator's attention, physical actions, and decision-making. The proposed framework also generates a representative BAN, called the Consensus (Majority) Model (CMM); the CMM consists of an iteratively derived and agreed-upon graph among candidate BANs obtained from experts and from an automatic learning process. Finally, the best combinations of interaction modalities and feedback were determined through particular utility functions. This methodology was applied to a spatial navigation scenario in which operators interacted with dynamic images through a series of decision-making processes. Real-world experiments were conducted to assess the framework's ability to infer the operator's level of attention. Users were instructed to complete a series of spatial navigation tasks using an assigned pairing of an interaction modality from five categories (vision-based gesture, glove-based gesture, speech, feet, or body balance) and a feedback modality from two (visual or auditory). Experimental results confirmed that physical expressions are a determining factor in the quality of the solutions to a spatial navigation problem. Moreover, the combination of foot gestures with visual feedback resulted in the best task performance (p < .001). Results also showed that the embodied, multimodal interface decreased execution errors in the cyber-physical scenarios (p < .001). We therefore conclude that appropriate use of interaction and feedback modalities allows operators to maintain their focus of attention, reduce errors, and enhance task performance in decision-making problems.
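The dissertation infers attention through Bayesian Attentional Networks. As a toy, hypothetical illustration of that style of inference (not the dissertation's models; all probabilities below are invented), the sketch computes P(attention | observed action) by enumeration over a two-node network.

```python
# Toy sketch: infer attention level from an observed physical action with a
# two-node Bayesian network (attention -> action). All probabilities invented.

# Prior over attention states (hypothetical).
p_attention = {"high": 0.6, "low": 0.4}

# P(action | attention): likelihood of each observable action (hypothetical).
p_action_given_attention = {
    ("deliberate_gesture", "high"): 0.7,
    ("deliberate_gesture", "low"): 0.2,
    ("idle_posture", "high"): 0.3,
    ("idle_posture", "low"): 0.8,
}

def posterior_attention(action):
    """Bayes' rule by enumeration: P(attention | action)."""
    joint = {a: p_attention[a] * p_action_given_attention[(action, a)]
             for a in p_attention}
    evidence = sum(joint.values())
    return {a: j / evidence for a, j in joint.items()}

print(posterior_attention("deliberate_gesture"))  # high attention more likely
```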