13 research outputs found

    Surface Electromyography for Direct Vocal Control

    This paper introduces a new method for direct control using the voice, via measurement of vocal muscular activation with surface electromyography (sEMG). Digital musical interfaces based on the voice have typically used indirect control, in which features extracted from audio signals control the parameters of sound generation, as in audio-to-MIDI controllers. By contrast, focusing on the musculature of the singing voice allows direct muscular control or, alternatively, combined direct and indirect control in an augmented vocal instrument. In this way we aim to preserve both the intimate relationship a vocalist has with their instrument and key timbral and stylistic characteristics of the voice, while expanding its sonic capabilities. This paper discusses other digital instruments that effectively utilise a combination of indirect and direct control, as well as the history of controllers involving the voice. Subsequently, we present a new method of direct control derived from the physiological aspects of singing via sEMG, and discuss its capabilities. Finally, we outline future developments of the system, along with its use in performance studies, interactive live vocal performance, and educational and practice tools.
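    A hedged sketch of the direct-control idea, in Python: compute an RMS envelope from a block of raw sEMG samples and map it linearly onto a synthesis parameter. Every name and range here (rms_envelope, emg_to_cutoff, the 200-4000 Hz span) is an illustrative assumption, not the paper's actual pipeline.

        import numpy as np

        def rms_envelope(block):
            """Root-mean-square amplitude of one block of raw sEMG samples."""
            return float(np.sqrt(np.mean(block ** 2)))

        def emg_to_cutoff(envelope, env_max=0.5, lo=200.0, hi=4000.0):
            """Map a muscle-activation envelope to a filter cutoff in Hz,
            clamped to [lo, hi]; direct control, no audio analysis involved."""
            norm = min(max(envelope / env_max, 0.0), 1.0)
            return lo + norm * (hi - lo)

        # Synthetic data standing in for one block from a real sensor:
        block = np.random.randn(256) * 0.1
        print(emg_to_cutoff(rms_envelope(block)))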

    Time's up for the Myo? The smartwatch as a ubiquitous alternative for audio-gestural analyses

    The utility of gestural technologies in broadening analytical and expressive interface possibilities has been documented extensively, both within the sphere of NIME and beyond. Wearable gestural sensors have proved integral components of many past NIMEs. Previous implementations have typically made use of specialist IMU- and EMG-based gestural technologies; few have proved as singularly popular as the Myo armband. An informal review of the NIME archives found that the Myo has featured in 21 NIME publications since an initial declaration of the Myo's promise as “a new standard controller in the NIME community” by Nymoen et al. in 2015. Ten of these were published after the Myo's discontinuation in 2018, including three as recently as 2022. This paper details an assessment of smartwatch-based IMU and audio logging as a ubiquitous, accessible alternative to the IMU capabilities of the Myo armband. Six violinists were recorded performing a number of exercises using VioLogger, a purpose-built application for the Apple Watch; participants were simultaneously recorded using a Myo armband and a freestanding microphone. Initial testing on this pilot dataset indicated promising results for audio-gestural analysis: both implementations demonstrated similar efficacy for MLP-based bow-stroke classification.
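    As a rough illustration of the classification step mentioned above, the sketch below trains a small MLP on per-stroke IMU feature vectors with scikit-learn. The data is synthetic and the feature/label design is assumed; the paper's actual features and classes may differ.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 12))    # e.g. accel/gyro summary stats per stroke
        y = rng.integers(0, 3, size=300)  # e.g. three bow-stroke classes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))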

    Myo Mapper: a Myo armband to OSC mapper

    Myo Mapper is a free, open-source, cross-platform application that maps data from the Myo armband gestural device into Open Sound Control (OSC) messages. It represents a 'quick and easy' solution for exploring the Myo's potential for realising new interfaces for musical expression. Together with details of the software, this paper reports applications in which Myo Mapper has been successfully used, along with a qualitative evaluation. We then propose guidelines for using Myo data in interactive artworks, based on insights gained from the works described and the evaluation. Findings show that Myo Mapper empowers artists and non-expert developers to easily take advantage of high-level features of Myo data for realising interactive artistic works. It also facilitates the recognition of poses and gestures beyond those included with the product, via third-party interactive machine learning software.
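    The general Myo-to-OSC idea can be sketched in a few lines of Python with the python-osc package; this is not Myo Mapper's actual code, and the OSC address, port, and source of orientation frames are assumptions.

        from pythonosc.udp_client import SimpleUDPClient

        client = SimpleUDPClient("127.0.0.1", 9000)  # host/port of the OSC receiver

        def send_orientation(yaw, pitch, roll):
            """Forward one orientation frame as a single OSC message."""
            client.send_message("/myo/orientation", [yaw, pitch, roll])

        send_orientation(0.12, -0.40, 0.85)  # one example frame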

    Designing Gestures for Continuous Sonic Interaction

    We present a system that allows users to try different ways of training neural networks and temporal models to associate gestures with time-varying sound. We created a software framework for this and evaluated it in a workshop-based study. Building upon research in sound tracing and mapping-by-demonstration, we asked participants to design gestures for performing time-varying sounds using a multimodal inertial measurement (IMU) and muscle sensing (EMG) device. We presented users with two classical techniques from the literature, Static Position regression and Hidden Markov-based temporal modelling, and propose a new technique for capturing gesture anchor points on the fly as training data for neural network-based regression, called Windowed Regression. Our results show trade-offs between accurate, predictable reproduction of source sounds and exploration of the gesture-sound space. Several users were attracted to our Windowed Regression technique. This paper will be of interest to musicians engaged in going from sound design to gesture design, and offers a workflow for interactive machine learning.
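    A minimal sketch of this kind of gesture-to-sound regression, assuming (illustratively) that each anchor point contributes one fixed-length window of sensor frames paired with the synthesis parameters active at capture time; the window length, channel count, and parameter meanings are placeholders, not the paper's design.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        WIN, N_CH, N_PARAMS = 10, 8, 3  # window length, sensor channels, sound params (assumed)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, WIN * N_CH))  # flattened windows of IMU+EMG frames
        y = rng.uniform(size=(200, N_PARAMS))   # parameters active when each window was captured

        reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=800, random_state=1)
        reg.fit(X, y)

        new_window = rng.normal(size=(WIN, N_CH))      # one incoming window at performance time
        print(reg.predict(new_window.reshape(1, -1)))  # continuous sound parameters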

    Singing Knit: Soft Knit Biosensing for Augmenting Vocal Performances

    This paper discusses the design of the Singing Knit, a wearable knit collar for measuring a singer's vocal interactions through surface electromyography. We improve the ease and comfort of multi-electrode bio-sensing systems by adapting knit e-textile methods. The goal of the design was to preserve the capabilities of rigid-electrode sensing while addressing its shortcomings, focusing on comfort and reliability during extended wear, practicality and convenience in performance settings, and aesthetic value. We use conductive, silver-plated nylon jersey fabric electrodes in a full rib knit accessory for sensing laryngeal muscular activation. We discuss the iterative design and material decision-making process as a method for building integrated soft-sensing wearable systems for similar settings. Additionally, we discuss how design choices made throughout the construction process reflect its use in a musical performance context.

    Embodied interaction with guitars: instruments, embodied practices and ecologies

    In this thesis I investigate the embodied performance preparation practices of guitarists in order to design and develop tools that support them. To do so, I employ a series of human-centred design methodologies, such as design ethnography, participatory design, and soma design. The initial ethnographic study I conducted involved observing guitarists preparing to perform, individually and with their bands, in their habitual places of practice. I also interviewed these musicians about their preparation activities. The findings of this study allowed me to chart an ecology of tools and resources employed in the process, as well as pinpoint a series of design opportunities for augmenting guitars, namely supporting (1) encumbered interactions, (2) contextual interactions, and (3) connected interactions. Going forward with the design process, I focused on remediating encumbered interactions that emerge during performance preparation with multimedia devices, particularly during instrumental transcription. I then prepared and ran a series of hands-on co-design workshops with guitarists to discuss five media controller prototypes: instrument-mounted controls, pedal-based controls, voice-based controls, gesture-based controls, and “music-based” controls. This study highlighted the value that guitarists place on their guitars and on their existing practice spaces, tools, and resources, as participants critically reflected on how these interaction modalities would support or disturb their existing embodied preparation practices with the instrument. In parallel with this study, I participated in a soma design workshop (and then prepared my own) in which I harnessed my first-person perspective of guitar playing to guide the design process. By exploring a series of embodied ideation and somatic methods, as well as materials and sensors across several points of contact between our bodies and the guitar, we collaboratively ideated a series of design concepts for the guitar across both workshops, such as breathing guitars, stretchy straps, and soft pedals. I then continued to develop and refine the Stretchy Strap concept into a guitar strap augmented with electronic-textile stretch sensors, harnessing it as an embodied media controller to remediate encumbered interaction during musical transcription with the guitar when using secondary multimedia resources. The device was subsequently evaluated by guitarists in a home practice space, providing insights into nuanced aspects of its embodied use, such as how certain media control actions, like play and pause, are well supported by the bodily gestures enacted with the strap, whilst other actions, like rewinding the playback or setting in and out points for a loop, are better supported by existing peripherals such as keyboards and mice, as these activities do not necessarily happen in the flow of the embodied practice of musical transcription.
Reflecting on the overall design process, a series of considerations is extracted for designing embodied interactions with guitars: (1) considering the instrument and its potential for augmentation, i.e., the shape of the guitar, its materials, and its cultural identity; (2) considering the embodied practices with the instrument, i.e., the body and the subjective felt experience of the guitarist during their skilled embodied practices with the instrument, and how these determine its expert use according to a particular instrumental tradition and/or musical practice; and (3) considering the practice ecology of the guitarist, i.e., the tools, resources, and spaces they use according to their practice.

    Non-verbal Communication with Physiological Sensors. The Aesthetic Domain of Wearables and Neural Networks

    Historically, communication implies the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it is commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular actions that are involuntary. This notion has circulated widely through various interdisciplinary computing research fields, giving rise to multiple studies correlating non-verbal activity with socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors, measuring the ‘invisible’ bioelectrical changes that occur inside the body. This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. It begins with a thorough discussion of state-of-the-art technologies and established design principles in this area, which is then applied to a novel approach, complemented by a selection of practice works. We advocate for aesthetic experience, experimenting with abstract representations. Unlike prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange, unconfined to the verbal domain. Given the preliminary proposition of non-representation, we justify a correspondence with modern Machine Learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture. Where related studies have successfully provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; we foresee far greater potential. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data. We present the following proofs of concept: Breathing Correspondence, a wearable biofeedback system inspired by somaesthetic design principles; Latent Steps, a real-time autoencoder that represents bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public-space interventions, analysing physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace alternative perspectives already established within Affective Computing research. From there, these concepts are developed further, bridging theories from contemporary creative and technical practices with advances in biomedical technologies.
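    The kind of model Latent Steps describes can be sketched as a small autoencoder over sensor frames; the framework (PyTorch), layer sizes, and two-dimensional latent are assumptions for illustration, not the thesis's architecture.

        import torch
        import torch.nn as nn

        class SensorAutoencoder(nn.Module):
            """Compress low-dimensional sensor frames into a small latent code."""
            def __init__(self, n_in=8, n_latent=2):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_in, 16), nn.ReLU(),
                                             nn.Linear(16, n_latent))
                self.decoder = nn.Sequential(nn.Linear(n_latent, 16), nn.ReLU(),
                                             nn.Linear(16, n_in))

            def forward(self, x):
                z = self.encoder(x)  # the latent code drives the biofeedback
                return self.decoder(z), z

        model = SensorAutoencoder()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        frames = torch.randn(64, 8)  # synthetic stand-in for a buffer of sensor frames
        for _ in range(200):         # reconstruction training loop
            recon, _ = model(frames)
            loss = nn.functional.mse_loss(recon, frames)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            _, latent = model(frames[:1])
        print(latent)  # 2-D representation of one frame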

    ESCOM 2017 Proceedings
