21 research outputs found

    Audio in place: media, mobility & HCI: creating meaning in space

    Audio-based content, location and mobile technologies can offer a multitude of interactional possibilities when combined in innovative and creative ways. It is important not to underestimate the impact of the interplay between location, place and sound. Even if intangible and ephemeral, sounds shape the way in which we experience the built as well as the natural world. As technology offers us the opportunity to augment and access the world, mobile technologies offer us the opportunity to interact while moving through it. They are technologies that can mediate, provide and locate experience in the world. Vision, and to some extent the tactile senses, have been the dominant modalities discussed in experiential terms within HCI. This workshop suggests that there is a need to better understand how sound can be used for shaping and augmenting the experiential qualities of places through mobile computing.
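    One concrete way such a combination can work, offered purely as an illustrative sketch rather than anything proposed by the workshop, is to geofence audio content to places and trigger it as a listener moves; the zone coordinates, radii and clip names below are hypothetical.

```python
# Minimal locative-audio sketch: play a clip when the listener enters its geofence.
# Zone coordinates, radii and clip names are hypothetical placeholders.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical sound zones: (lat, lon, radius in metres, audio clip)
SOUND_ZONES = [
    (54.0104, -2.7877, 30.0, "river_ambience.wav"),
    (54.0110, -2.7850, 20.0, "market_voices.wav"),
]

def zones_entered(lat, lon):
    """Return the clips whose geofence contains the current position."""
    return [clip for (zlat, zlon, r, clip) in SOUND_ZONES
            if haversine_m(lat, lon, zlat, zlon) <= r]

if __name__ == "__main__":
    print(zones_entered(54.0104, -2.7878))  # -> ['river_ambience.wav']
```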

    Drawing design futures for shape-changing interfaces

    Shape-changing interfaces have the potential to change the world by giving tangible form to computational interactions: but what will we use them for, and what are the implications of adopting such technologies? This doctorate investigates the current breadth of research prototypes, their classifications, limitations and possibilities, with the ultimate goal of informing application design and usage for shape-change. This interdisciplinary enquiry employs mixed methodologies, such as sketching user scenarios and creating design fictions, to inform the field, whilst public-facing workshops allow for fresh perspectives on future design and use cases.

    More playful user interfaces: interfaces that invite social and physical interaction


    Analysis and Classification of Shape-Changing Interfaces for Design and Application-based Research

    Shape-changing interfaces are physically tangible, interactive devices, surfaces, or spaces that allow for rich, organic, and novel experiences with computational devices. Over the last 15 years, research has produced functional prototypes across many application areas; reviews have identified themes and possible future directions but have not yet examined possible design or application-based research. Here, we gather this information together to provide a reference for designers and researchers wishing to build upon existing prototyping work, through synthesis and discussion of existing shape-changing interface reviews and a comprehensive analysis and classification of 84 shape-changing interfaces. Eight categories of prototype are identified, alongside recommendations for the field.

    Endemic Machines: Acoustic adaptation and evolutionary agents


    Configuring Corporeality: Performing bodies, vibrations and new musical instruments.

    How can we define the relationship between human bodies, sound and technological instruments in musical performance? This enquiry investigates the question through an iterative mode of research. Aesthetic and technical insights on sound and body art performance with new musical instruments combine with analytical views on technological embodiment in philosophy and cultural studies. The focus is on corporeality: the physiological, phenomenological and cultural basis of embodied practices. The thesis proposes configuration as an analytical device and a blueprint for artistic creation. Configuration defines the relationship of the human being and technology as one where they affect each other's properties through a continuous, situated negotiation. In musical performance, this involves a performer's intuition, cognition, and sensorimotor skills; an instrument's material, musical and computational properties; and sound's vibrational and auditory qualities. Two particular kinds of configuration feature in this enquiry. One arises from an experiment on the effect of vibration on the sensorimotor system and is fully developed through a subsequent installation for one visitor at a time. The other emerges from a scientific study of gesture expressivity through muscle physiological sensing and is consolidated into an ensuing body art performance for sound and light. Both artworks rely upon intensely intimate sensorial and physical experiences, uses and abuses of the performer's body, and bioacoustic sound feedback as a material force. This work contends that particular configurations in musical performance reinforce, alter or disrupt the societal criteria against which human bodies and technologies are assessed. Its contributions are: the notion of configuration, which affords an understanding of human-machine co-dependence and its politics; two sound-based artworks, joining and expanding musical performance and body art; and two experiments, with their hardware and software tools, providing insights into physiological computing methods for corporeal human-computer interaction.
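    As a rough illustration of how sound's vibrational qualities might be put to work in such a configuration (this is not the thesis's own system), the sketch below maps the short-term energy of an audio signal to the intensity of a hypothetical vibration actuator, so that sound is felt as well as heard; the frame size, mapping curve and actuator interface are all assumptions.

```python
# Illustrative sketch: drive a vibration actuator's duty cycle from audio energy.
# Frame size, mapping curve and the (absent) actuator API are assumptions.
import numpy as np

FRAME = 512  # samples per control frame

def frame_energy(audio, frame=FRAME):
    """Short-term RMS energy per frame of a mono audio signal in [-1, 1]."""
    n = len(audio) // frame
    frames = audio[: n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def to_vibration(energy, gamma=0.5):
    """Map RMS energy to a 0..1 actuator duty cycle (gamma lifts quiet passages)."""
    return np.clip(energy, 0.0, 1.0) ** gamma

if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0, 1, sr, endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * 60 * t) * (t > 0.5)  # low tone entering halfway
    duty = to_vibration(frame_energy(tone))
    print(duty[:3], duty[-3:])  # near-zero at the start, strong vibration at the end
```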

    Non-verbal Communication with Physiological Sensors: The Aesthetic Domain of Wearables and Neural Networks

    Historically, communication implies the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it is commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular, actions that are involuntary. This notion has circulated widely through various interdisciplinary computing research fields, from which multiple studies have arisen correlating non-verbal activity with socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors, measuring the ‘invisible’ bioelectrical changes that occur inside the body. This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. It begins with a thorough discussion of state-of-the-art technologies and established design principles in this area, which is then applied to a novel approach alongside a selection of practice works that complement it. We advocate for aesthetic experience, experimenting with abstract representations. Unlike prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange, unconfined to the verbal domain. Given the preliminary proposition of non-representation, we justify a correspondence with modern Machine Learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture. Where related studies in the past have successfully provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; we foresee a vast potential here. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data. We present the following proofs of concept: Breathing Correspondence, a wearable biofeedback system inspired by Somaesthetic design principles; Latent Steps, a real-time autoencoder that represents bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public-space interventions, analysing physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace alternative perspectives already established within Affective Computing research. From there, these concepts are developed further, bridging theories from contemporary creative and technical practices with the advancement of biomedical technologies.
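    As a rough illustration of the neural-network approach mentioned above (not the thesis's actual code), the sketch below trains a tiny autoencoder on low-dimensional sensor frames and exposes its latent code, which could then drive granular audio or visual feedback; the layer sizes, sensor dimensionality and training data are assumptions.

```python
# Illustrative sketch, assuming PyTorch: an autoencoder compresses wearable-sensor
# frames into a 2-D latent code that could act as a biofeedback control signal.
import torch
import torch.nn as nn

class SensorAutoencoder(nn.Module):
    def __init__(self, n_channels=8, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_channels, 16), nn.ReLU(),
            nn.Linear(16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16), nn.ReLU(),
            nn.Linear(16, n_channels),
        )

    def forward(self, x):
        z = self.encoder(x)      # latent code: the candidate feedback signal
        return self.decoder(z), z

if __name__ == "__main__":
    model = SensorAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    frames = torch.randn(256, 8)          # stand-in for recorded sensor frames
    for _ in range(200):                  # brief reconstruction training loop
        recon, _ = model(frames)
        loss = nn.functional.mse_loss(recon, frames)
        opt.zero_grad(); loss.backward(); opt.step()
    _, z = model(frames[:1])              # the 2-D code could map to synth parameters
    print(z.detach().numpy())
```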

    Sketching as a support mechanism for the design and development of shape-changing interfaces

    Shape-changing interfaces are a novel computational technology which incorporates physical, tangible, and dynamic surfaces to create a truly three-dimensional experience. As is often the case with other novel hardware, the current research focus is on iterative hardware design, with devices taking many years to reach potential markets. Whilst the drive to develop novel hardware is vital, this usually occurs without consulting end-users. Due to the prototypical nature of shape-change, there is no specific current practice of User-Centred Design (UCD). If this is not addressed, the resulting field may consist of undirected, research-focused hardware with little real-world value to users. Therefore, the goal of this thesis is to develop an approach to inform the direction of shape-change research, one which uses simple, accessible tools and techniques to connect researcher and user. I propose the development of an anticipatory, pre-UCD methodology to frame the field. Sketching is an established methodology; it is also accessible, universal, and provides a low-fidelity tool-kit. I therefore propose an exploration of how sketching can support the design and development of shape-changing interfaces. The challenge is approached over five stages: 1) analysing and categorising shape-changing prototypes to provide the first comprehensive overview of the field; 2) conducting a systematic review of sketching and HCI research to validate merging sketching, and its associated UCD techniques, with highly technological computing research; 3) using these techniques to explore whether non-expert, potential end-users can ideate applications for shape-change; 4) investigating how researchers can utilise subjective sketching for shape-change; and 5) building on ideation and subjective sketching to gather detailed, sketched data from non-expert users with which to generate requirements and models for shape-change. To conclude, I discuss the dialogue between researcher and user, and show how sketching can bring these groups together to inform and elucidate research in this area.

    Imagining & Sensing: Understanding and Extending the Vocalist-Voice Relationship Through Biosignal Feedback

    The voice is body and instrument. Third-person interpretation of the voice by listeners, vocal teachers, and digital agents is centred largely around audio feedback. For a vocalist, physical feedback from within the body provides an additional interaction. The vocalist’s understanding of their multi-sensory experiences is through tacit knowledge of the body. This knowledge is difficult to articulate, yet awareness and control of the body are innate. Amid the ever-increasing emergence of technology which quantifies or interprets physiological processes, we must also remain conscious of embodiment and human perception of these processes. Focusing on the vocalist-voice relationship, this thesis expands knowledge of human interaction and of how technology influences our perception of our bodies. To unite these different perspectives in the vocal context, I draw on mixed methods from cognitive science, psychology, music information retrieval, and interactive system design. Objective methods such as vocal audio analysis provide a third-person observation. Subjective practices such as micro-phenomenology capture the experiential, first-person perspectives of the vocalists themselves. This quantitative-qualitative blend provides details not only on novel interaction, but also an understanding of how technology influences existing understanding of the body. I worked with vocalists to understand how they use their voice through abstract representations, use mental imagery to adapt to altered auditory feedback, and teach fundamental practice to others. Vocalists use multi-modal imagery, for instance understanding physical sensations through auditory sensations. The understanding of the voice exists in a pre-linguistic representation which draws on embodied knowledge and lived experience from outside contexts. I developed a novel vocal interaction method which uses measurement of laryngeal muscular activations through surface electromyography. Biofeedback was presented to vocalists through sonification. Acting as an indicator of vocal activity for both conscious and unconscious gestures, this feedback allowed vocalists to explore their movement through sound. This formed new perceptions but also questioned existing understanding of the body. The thesis also uncovers ways in which vocalists are in control of and controlled by their bodies, work with and against them, and feel like a single entity at times and totally separate entities at others. I conclude this thesis by demonstrating a nuanced account of human interaction and perception of the body through vocal practice, as an example of how technological intervention enables exploration of and influence over embodied understanding. This further highlights the need to understand the human experience in embodied interaction, rather than relying solely on digital interpretation, when introducing technology into these relationships.
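    To make the sonification idea concrete, the following minimal sketch (not the system built in the thesis) smooths a surface-EMG-like signal into an activation envelope and maps it to the pitch and loudness of a synthesised tone; the sampling rates, pitch range and synthetic input are assumptions.

```python
# Illustrative EMG-to-sound sketch: rectified, smoothed muscle activity modulates
# the frequency and amplitude of a sine tone. Rates and ranges are assumptions.
import numpy as np

SR_AUDIO = 44100   # audio sample rate (Hz)
SR_EMG = 1000      # assumed sEMG sample rate (Hz)

def emg_envelope(emg, alpha=0.01):
    """Rectify and exponentially smooth a raw sEMG trace."""
    env, out = 0.0, []
    for x in np.abs(emg):
        env = alpha * x + (1 - alpha) * env
        out.append(env)
    return np.array(out)

def sonify(env, f_low=200.0, f_high=800.0):
    """Map envelope (0..1) to a sine tone: higher activation -> higher and louder."""
    n_audio = int(len(env) * SR_AUDIO / SR_EMG)
    env_audio = np.interp(np.linspace(0, len(env) - 1, n_audio),
                          np.arange(len(env)), env)
    freq = f_low + np.clip(env_audio, 0, 1) * (f_high - f_low)
    phase = 2 * np.pi * np.cumsum(freq) / SR_AUDIO
    return np.clip(env_audio, 0, 1) * np.sin(phase)

if __name__ == "__main__":
    t = np.linspace(0, 2, 2 * SR_EMG)
    fake_emg = np.random.randn(t.size) * (0.2 + 0.8 * (t > 1.0))  # activation burst
    audio = sonify(emg_envelope(fake_emg))
    print(audio.shape, float(audio.max()))  # write to a WAV file to listen
```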