
    Mobility is the Message: Experiments with Mobile Media Sharing

    This thesis explores new mobile media sharing applications by building, deploying, and studying their use. While we share media in many different ways both on the web and on mobile phones, there are few ways of sharing media with people physically near us. Three systems were designed, built, and studied: Push!Music, Columbus, and Portrait Catalog, together with a fourth, commercially available system – Foursquare. This thesis offers four contributions: First, it explores the design space of co-present media sharing through these four test systems. Second, through user studies of these systems, it reports on how they come to be used. Third, it explores new ways of conducting trials as the technical mobile landscape has changed. Last, it looks at how the technical solutions demonstrate different lines of thinking from how similar solutions might look today. Through a Human-Computer Interaction methodology of design, build, and study, we look at the systems through the lens of embodied interaction and examine how they come to be in use. Using Goffman’s understanding of social order, we see how these mobile media sharing systems allow people to actively present themselves through media. In turn, using McLuhan’s way of understanding media, we reflect on how these new systems enable a new type of medium distinct from web-centric media, and how this relates directly to mobility. While media sharing takes place everywhere in Western society, it is still tied to the way media is shared through computers, which, although often mobile, do not consider mobile settings. The systems in this thesis treat mobility as an opportunity for design. It remains to be seen how mobile media sharing will come to present itself in people’s everyday lives and, when it does, how we will come to understand it and how it will transform society as a medium distinct from those before it. This thesis gives a glimpse of what this future may look like.

    Social retrieval of music content in multi-user performance

    An emerging trend in interactive music performance consists of the audience directly participating in the performance by means of mobile devices. This is a step forward with respect to concepts like active listening and collaborative music making: non-expert members of an audience are enabled to directly participate in a creative activity such as the performance. This requires the availability of technologies for capturing and analysing in real time the natural behaviour of the users/performers, with particular reference to non-verbal expressive and social behaviour. This paper presents a prototype of a non-verbal expressive and social search engine and active listening system, enabling two teams of non-expert users to act as performers. The performance consists of real-time sonic manipulation and mixing of music pieces selected according to features characterising performers’ movements captured by mobile devices. The system is described with specific reference to the SIEMPRE Podium Performance, a non-verbal socio-mobile music performance presented at the Art & ICT Exhibition that took place in Vilnius (Lithuania) in November 2013.

    Toward a model of computational attention based on expressive behavior: applications to cultural heritage scenarios

    Our project goals consisted in the development of attention-based analysis of human expressive behavior and the implementation of real-time algorithms in EyesWeb XMI, in order to improve the naturalness of human-computer interaction and context-based monitoring of human behavior. To this aim, a perceptual model that mimics human attentional processes was developed for expressivity analysis and modeled by entropy. Museum scenarios were selected as an ecological test-bed for three experiments focusing on visitor profiling and visitor flow regulation.

    Sound for Fantasy and Freedom

    Sound is an integral part of our everyday lives. Sound tells us about physical events in the environment, and we use our voices to share ideas and emotions through sound. When navigating the world on a day-to-day basis, most of us use a balanced mix of stimuli from our eyes, ears, and other senses to get along. We do this totally naturally and without effort. In the design of computer game experiences, however, most attention has traditionally been given to vision rather than to this balanced mix of stimuli. The risk is that this emphasis neglects types of interaction with the game needed to create an immersive experience. This chapter summarizes the relationship between sound properties, GameFlow, and immersive experience, and discusses two projects in which the Interactive Institute, Sonic Studio has balanced perceptual stimuli and game mechanics to inspire and create new game concepts that liberate users and their imagination.

    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Noise cancelling headphones & the neoliberal subject

    Active noise cancelling (ANC) headphones grant an individual the ability to define and create personal sonic borders in real time. While this promise offers individuals a form of sonic escapism, I suggest that the technology is cloaked in neoliberal cultural values which promote individualized thinking, capital interest attained through increased focus, control of both the consumer and their sonic environment, and a Euro-centric perception of rationality and knowledge formation (J. H. Clarke et al., 2007; Gane, 2008; Houghton, 2019; Lazzarato, 2009). The technology dissolves opportunities for embodied sonic connection to land, community, and nonhuman agents, which are strengthened through attentive and unmediated listening practices (Classen, 1999; Feld, 2012; Gross, 2014; Robinson, 2020; Simpson, 2011). Through a case study of Bose’s 700 NC and Apple’s AirPods Pro noise-cancelling headphones, this thesis works to uncover the ways in which the technology reproduces neoliberal ideologies, utilizing critical discourse analysis (CDA) (Amoussue & Allagbe, 2018; Fairclough, 2001; Van Dijk, 2003) to consider how both companies advertise their noise-cancelling headphones and prioritize the neoliberal subject. Additionally, a collection of soundwalks is performed to compare the promises offered by the marketing campaigns through autoethnographic research (Behrendt, 2018; Sterne, 2003; Westerkamp, 2006). To juxtapose these neoliberal values and to offer moments for decolonial perspectives, this thesis addresses Indigenous, specifically Anishinaabe, literature on listening and sonic dimensions to consider the ways in which unmediated listening may offer moments of embodied knowledge which emerge from and through critical self-reflexivity, an awareness of an individual’s listening positionality, and a perspective on spatial intersubjectivity.

    Running to Your Own Beat: An Embodied Approach to Auditory Display Design

    Personal fitness trackers represent a multi-billion-dollar industry, predicated on devices for assisting users in achieving their health goals. However, most current products only offer activity tracking and measurement of performance metrics, which do not ultimately address the need for technique-related assistive feedback in a cost-effective way. Addressing this gap in the design space for assistive run-training interfaces is also crucial in combating the negative effects of Forward Head Position, a condition linked to mobile device use whose incidence in the population is growing rapidly. As such, Auditory Displays (ADs) offer an innovative set of tools for creating such a device for runners. ADs present the opportunity to design interfaces which allow natural, unencumbered motion, detached from the mobile or smartwatch screen, thus making them ideal for providing real-time assistive feedback for correcting head posture during running. However, issues with AD design have centred around overall usability and user experience; therefore, in this thesis an ecological and embodied approach to AD design is presented as a vehicle for designing an assistive auditory interface for runners which integrates seamlessly into their everyday environments.

    Corseto: A Kinesthetic Garment for Designing, Composing for, and Experiencing an Intersubjective Haptic Voice

    We present a novel intercorporeal experience - an intersubjective haptic voice. Through an autobiographical design inquiry, based on singing techniques from the classical opera tradition, we created Corsetto, a kinesthetic garment for transferring somatic reminiscences of vocal experience from an expert singer to a listener. We then composed haptic gestures, enacted in the Corsetto, emulating the upper-body movements of a live singer performing Morton Feldman’s piece Three Voices. The gestures in the Corsetto added a haptics-based 'fourth voice' to the immersive opera performance. Finally, we invited audience members to wear the Corsetto during live performances; afterwards, they engaged in micro-phenomenological interviews. The analysis revealed how the Corsetto managed to bridge inner and outer bodily sensations, creating a feeling of a shared intercorporeal experience and dissolving boundaries between listener, singer, and performance. We propose that 'intersubjective haptics' can be a generative medium not only for singing performances but also for other possible intersubjective experiences.