
    Reflections on Eight Years of Instrument Creation with Machine Learning

    Machine learning (ML) has been used to create mappings for digital musical instruments for over twenty-five years, and numerous ML toolkits have been developed for the NIME community. However, little published work has studied how ML has been used in sustained instrument building and performance practices. This paper examines the experiences of instrument builder and performer Laetitia Sonami, who has been using ML to build and refine her Spring Spyre instrument since 2012. Using Sonami’s current practice as a case study, this paper explores the utility, opportunities, and challenges involved in using ML in practice over many years. This paper also reports the perspective of Rebecca Fiebrink, the creator of the Wekinator ML tool used by Sonami, revealing how her work with Sonami has led to changes to the software and to her teaching. This paper thus contributes a deeper understanding of the value of ML for NIME practitioners, and it can inform design considerations for future ML toolkits as well as NIME pedagogy. Further, it provides new perspectives on familiar NIME conversations about mapping strategies, expressivity, and control, informed by a dedicated practice over many years.

    Machine Learning, Music and Creativity: An Interview with Rebecca Fiebrink

    Rebecca Fiebrink is a Senior Lecturer at Goldsmiths, University of London, where she designs new ways for humans to interact with computers in creative practice. As a computer scientist and musician, much of her work focuses on applications of machine learning to music, addressing research questions such as: ‘How can machine learning algorithms help people to create new musical instruments and interactions?’ and ‘How does machine learning change the type of musical systems that can be created, the creative relationships between people and technology, and the set of people who can create new technologies?’ Much of Fiebrink’s work is also driven by a belief in the importance of inclusion, participation, and accessibility. She frequently uses participatory design processes, and she is currently involved in creating new accessible technologies with people with disabilities, designing inclusive machine learning curricula and tools, and applying participatory design methodologies in the digital humanities. Fiebrink is the developer of the Wekinator: open-source software for real-time interactive machine learning, whose current version has been downloaded over 10,000 times. She is the creator of a MOOC titled “Machine Learning for Artists and Musicians.” She was previously an Assistant Professor at Princeton University, where she co-directed the Princeton Laptop Orchestra. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule. She has performed with a variety of musical ensembles playing flute, keyboard, and laptop. She holds a PhD in Computer Science from Princeton University.

    Creative Diversity: Expanding Software for 3D Human Avatar Design

    This research designs 3D human avatar generation software for amateur creative users. Currently available software simplifies interaction for users without 3D modeling skills by limiting the range of bodies that can be created to the boundaries of normative physicality. Rather than artificially limiting user output, we are creating open-source software that expands the range of bodies that can be represented in the program, following a user-centered design process to implement direct manipulation techniques extrapolated from artistic practice. This paper describes the background context, aims, and current research activities related to creating this software.

    Creating Latent Spaces for Modern Music Genre Rhythms Using Minimal Training Data

    In this paper we present R-VAE, a system designed for the exploration of latent spaces of musical rhythms. Unlike most previous work in rhythm modeling, R-VAE can be trained with small datasets, enabling rapid customization and exploration by individual users. R-VAE employs a data representation that encodes simple and compound meter rhythms. To the best of our knowledge, this is the first time that a network architecture has been used to encode rhythms with these characteristics, which are common in some modern popular music genres.
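    The abstract does not spell out R-VAE's data representation, but one common way to encode rhythms mixing simple and compound meter is a binary onset grid whose per-beat resolution is divisible by both 2 and 3. A minimal sketch, under that assumption (the 48-tick resolution and function names are illustrative, not details from R-VAE):

```python
import numpy as np

TICKS_PER_BAR = 48  # divisible by both 4 and 3: fits simple and compound subdivisions

def encode_rhythm(onsets, beats_per_bar=4):
    """Encode onset times (in beats) as a binary vector on a fine grid.

    With 12 ticks per beat, straight (simple-meter) eighths and
    triplet (compound-meter) onsets both land exactly on grid positions,
    so one fixed-length vector can represent either feel.
    """
    vec = np.zeros(TICKS_PER_BAR, dtype=np.float32)
    ticks_per_beat = TICKS_PER_BAR // beats_per_bar
    for t in onsets:
        idx = int(round(t * ticks_per_beat)) % TICKS_PER_BAR
        vec[idx] = 1.0
    return vec

# straight eighth notes vs. eighth-note triplets over one 4/4 bar
straight = encode_rhythm([0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5])
triplets = encode_rhythm([0, 1/3, 2/3, 1, 4/3, 5/3])
```

Vectors like these could then be fed to a small variational autoencoder, whose low-dimensional latent space is what the paper explores.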

    Data as design tool. How understanding data as a user interface can make end-user design more accessible, efficient, effective, and embodied, while challenging machine learning conventions.

    We often assume "data" is something that is collected or measured from a passive source. In machine learning, we talk about "ground truth" data, because we assume the data represents something true and real; we aim to analyse and represent data appropriately, so that it will yield a window through which we can better understand some latent property of the world. In this talk, I will describe an alternative understanding of data, in which data is something that people can actively, subjectively, and playfully manipulate. Applying modelling algorithms to intentionally manipulated data—such as examples of human movements, sounds, or social media feeds—enables everyday people to build new types of real-time interactions, including new musical instruments, sonifications, or games. In these contexts, data becomes an interface through which people communicate embodied practices, design goals, and aesthetic preferences to computers. This interface can allow people to design new real-time systems more efficiently, to explore a design space more fully, and to create systems with a particular “feel,” while also making design accessible for non-programmers.

    MetaVR: Understanding metaphors in the mind and relation to emotion through immersive, spatial interaction

    Metaphorical thinking acts as a bridge between embodiment and abstraction and helps to flexibly organize human knowledge and behavior. Yet its role in embodied human-computer interface design, and its potential for supporting goals such as self-awareness and well-being, have not been extensively explored in the HCI community. We have designed a system called MetaVR to support the creation and exploration of immersive, multimodal, metaphoric experiences, in which people’s bodily actions in the physical world are linked to metaphorically relevant actions in a virtual reality world. As a team of researchers in interaction, neuroscience, and linguistics, we have created MetaVR to support research exploring the impact of such metaphoric interactions on human emotion and well-being. We have used MetaVR to create a proof-of-concept interface for immersive, spatial interactions underpinned by the WELL-BEING is VERTICALITY conceptual mapping—the known association of ‘good’=‘up’ and ‘bad’=‘down’. Researchers and developers can currently interact with this proof of concept to configure various metaphoric interactions or personifications that have positive associations (e.g., ‘being like a butterfly’ or ‘being like a flower’) and also involve vertical motion (e.g., a butterfly might fly upwards, or a flower might bloom upwards). Importantly, the metaphoric interactions supported in MetaVR do not link human movement to VR actions in one-to-one ways, but rather use abstracted relational mappings in which events in VR (e.g., the blooming of a virtual flower) are contingent not merely on a “correct” gesture being performed, but on aspects of verticality exhibited in human movement (e.g., in a very simple case, the time a person’s hands spend above some height threshold). This work thus serves as a small-scale vehicle for us to research how such interactions may impact well-being. Relatedly, it highlights the potential of using virtual embodied interaction as a tool to study cognitive processes involved in more deliberate/functional uses of metaphor and how this relates to emotion processing. By demonstrating MetaVR and metaphoric interactions designed with it at CHI Interactivity, and by offering the MetaVR tool to other researchers, we hope to inspire new perspectives, discussion, and research within the HCI community about the role that such metaphoric interaction may play, in interfaces designed for well-being and beyond.
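    The contingent, abstracted verticality mapping described above can be sketched in a few lines; the function name, height threshold, and timing constants here are hypothetical illustrations, not values taken from MetaVR:

```python
def bloom_amount(hand_heights, threshold=1.2, dt=1.0 / 60, full_bloom_s=3.0):
    """Fraction in [0, 1] that a virtual flower has bloomed.

    The VR event depends on accumulated verticality — the total time the
    hands spend above a height threshold (metres), sampled every dt
    seconds — rather than on recognising any single "correct" gesture.
    """
    time_above = sum(dt for h in hand_heights if h > threshold)
    return min(1.0, time_above / full_bloom_s)

# 90 frames (1.5 s at 60 Hz) above the threshold, then 90 below: half bloomed
print(bloom_amount([1.5] * 90 + [0.8] * 90))
```

Because the mapping accumulates over time, any movement style that keeps the hands raised advances the bloom, which is the "abstracted relational" property the abstract emphasises.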

    The Effects of a Soundtrack on Board Game Player Experience

    Board gaming is a popular hobby that increasingly features the inclusion of technology, yet little research has sought to understand how board game player experience is impacted by digital augmentation or to inform the design of new technology-enhanced games. We present a mixed-methods study exploring how the presence of music and sound effects impacts the player experience of a board game. We found that the soundtrack increased the enjoyment and tension experienced by players during game play. We also found that a soundtrack provided atmosphere surrounding the gaming experience, though players did not necessarily experience this as enhancing the world-building capabilities of the game. We discuss how our findings can inform the design of new games and soundtracks as well as future research into board game player experience.

    Interactive Machine Learning for End-User Innovation

    User interaction with intelligent systems need not be limited to interaction where pre-trained software has intelligence “baked in.” End-user training, including interactive machine learning (IML) approaches, can enable users to create and customise systems themselves. We propose that the user experience of these users is worth considering. Furthermore, the user experience of system developers—people who may train and configure both learning algorithms and their user interfaces—also deserves attention. We additionally propose that IML can improve user experiences by supporting user-centred design processes, and that there is a further role for user-centred design in improving interactive and classical machine learning systems. We are developing this approach and embodying it through the design of a new User Innovation Toolkit, in the context of the European Commission-funded project RAPID-MIX.
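    As a sketch of the end-user training loop such IML toolkits support (a generic illustration, not the actual implementation of the Wekinator or the RAPID-MIX toolkit), a nearest-neighbour mapping makes "training" effectively instantaneous, so users can refine a system simply by adding more examples:

```python
import numpy as np

class IMLMapper:
    """Tiny interactive-machine-learning mapping: the user adds input→output
    example pairs on the fly; prediction is k-nearest-neighbour interpolation,
    so retraining costs nothing and the model can be refined iteratively."""

    def __init__(self, k=3):
        self.k = k
        self.X, self.y = [], []

    def add_example(self, x, y):
        # the user's demonstration step: "when I do this, output that"
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(np.asarray(y, dtype=float))

    def predict(self, x):
        # runs inside the real-time interaction loop
        X = np.stack(self.X)
        y = np.stack(self.y)
        d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
        k = min(self.k, len(d))
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)          # inverse-distance weighting
        return (w[:, None] * y[idx]).sum(axis=0) / w.sum()

# hypothetical usage: mapping a 2-D sensor pose to a synth parameter
m = IMLMapper(k=2)
m.add_example([0.0, 0.0], [100.0])   # e.g. low pitch at one pose
m.add_example([1.0, 1.0], [800.0])   # high pitch at another
print(m.predict([0.0, 0.0]))         # ≈ [100.], recalling the demonstrated pose
```

The point of the sketch is the workflow, not the algorithm: because adding an example and retraining are one cheap step, the end user, rather than a developer, drives the design iteration.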

    Toward Supporting End-User Design of Soundscape Sonifications

    In this paper, we explore the potential for everyday Twitter users to design and use soundscape sonifications as an alternative, “calm” modality for staying informed of Twitter activity. We first present the results of a survey assessing how 100 Twitter users currently use and change audio notifications. We then present a study in which 9 frequent Twitter users employed two user interfaces—with varying degrees of automation—to design, customize, and use soundscape sonifications of Twitter data. This work suggests that soundscapes have great potential for creating a calm technology for maintaining awareness of Twitter data, and that soundscapes can be useful in helping people without prior experience in sound design think about sound in sophisticated ways and engage meaningfully in sonification design.
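    One simple way to realise such a "calm" mapping (purely illustrative; the layer names, metrics, and saturating curve are assumptions, not the interfaces from the study) is to drive each soundscape layer's gain with a saturating function of recent activity, so the soundscape responds to the first few events but never becomes intrusive:

```python
# hypothetical ambient layers mapped to hypothetical Twitter activity metrics
LAYERS = {"rain": "mentions", "birdsong": "likes", "wind": "retweets"}

def layer_gains(activity, half_point=10.0):
    """Map per-minute activity counts to per-layer gains in [0, 1].

    The curve n / (n + half_point) rises quickly for the first few events
    and then levels off: 0 events → gain 0, half_point events → gain 0.5,
    and the gain approaches (but never reaches) 1 as activity grows.
    """
    gains = {}
    for layer, metric in LAYERS.items():
        n = activity.get(metric, 0)
        gains[layer] = n / (n + half_point)
    return gains

print(layer_gains({"mentions": 10, "likes": 0, "retweets": 30}))
```

A real system would smooth these gains over time and cross-fade audio layers, but the saturating shape is what keeps the display peripheral rather than alarming.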

    Introduction to the Special Issue on Human-Centered Machine Learning

    Machine learning is one of the most important and successful techniques in contemporary computer science. Although it can be applied to myriad problems of human interest, research in machine learning is often framed in an impersonal way, as merely algorithms being applied to model data. However, this viewpoint hides considerable human work of tuning the algorithms, gathering the data, deciding what should be modeled in the first place, and using the outcomes of machine learning in the real world. Examining machine learning from a human-centered perspective includes explicitly recognizing human work, as well as reframing machine learning workflows based on situated human working practices, and exploring the co-adaptation of humans and intelligent systems. A human-centered understanding of machine learning in human contexts can lead not only to more usable machine learning tools, but to new ways of understanding what machine learning is good for and how to make it more useful. This special issue brings together nine papers that present different ways to frame machine learning in a human context. They represent very different application areas (from medicine to audio) and methodologies (including machine learning methods, HCI methods, and hybrids), but they all explore the human contexts in which machine learning is used. This introduction summarizes the papers in this issue and draws out some common themes.