
    Interaction Design for Digital Musical Instruments

    The thesis aims to elucidate the process of designing interactive systems for musical performance that combine software and hardware in an intuitive and elegant fashion. The original contribution to knowledge consists of: (1) a critical assessment of recent trends in digital musical instrument design, (2) a descriptive model of interaction design for the digital musician and (3) a highly customisable multi-touch performance system that was designed in accordance with the model. Digital musical instruments are composed of a separate control interface and a sound generation system that exchange information. When designing the way in which a digital musical instrument responds to the actions of a performer, we are creating a layer of interactive behaviour that is abstracted from the physical controls. Often, the structure of this layer depends heavily upon:
    1. the accepted design conventions of the hardware in use;
    2. established musical systems, acoustic or digital;
    3. the physical configuration of the hardware devices and the grouping of controls that such configuration suggests.
    This thesis proposes an alternative approach to the design of digital musical instrument behaviour: examining the implicit characteristics of its composite devices. When we separate the conversational ability of a particular sensor type from its hardware body, we can look in a new way at the actual communication tools at the heart of the device. We can subsequently combine these separate pieces using a series of generic interaction strategies in order to create rich interactive experiences that are not immediately obvious or directly inspired by the physical properties of the hardware. This research ultimately aims to enhance and clarify the existing toolkit of interaction design for the digital musician.
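    The idea of separating a sensor's conversational ability from its hardware body, then recombining streams through generic interaction strategies, can be made concrete in a short sketch. The sketch below is illustrative only: the names (SensorStream, crossfade_strategy, the tilt-to-granular mapping) are our assumptions, not the thesis's actual system.

```python
# A minimal sketch, with hypothetical names throughout, of treating a
# sensor's "conversational ability" separately from its hardware body.
from dataclasses import dataclass

@dataclass
class SensorStream:
    """A hardware-agnostic description of what a sensor can 'say'."""
    name: str
    continuous: bool                 # e.g. fader/tilt vs. momentary button
    range_: tuple[float, float]      # raw units reported by the device

def normalise(stream: SensorStream, raw: float) -> float:
    """Map a raw reading into 0..1 so strategies stay device-independent."""
    lo, hi = stream.range_
    return (raw - lo) / (hi - lo)

def crossfade_strategy(inputs: dict[str, float]) -> dict[str, float]:
    """A generic interaction strategy: one continuous stream blends two
    sound parameters, regardless of which physical device supplies it."""
    x = inputs["tilt"]
    return {"grain_density": x, "filter_cutoff": 1.0 - x}

tilt = SensorStream("tilt", continuous=True, range_=(-90.0, 90.0))
params = crossfade_strategy({"tilt": normalise(tilt, 30.0)})
print(params)  # {'grain_density': 0.666..., 'filter_cutoff': 0.333...}
```

    Because the strategy only sees normalised streams, the same crossfade could be driven by a fader, an accelerometer or a touch position without changing its logic, which is the point of abstracting behaviour away from the physical controls.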

    Breaking the workflow: Design heuristics to support the development of usable digital audio production tools: framing usability heuristics for contemporary purposes

    The investigation that follows presents the results of a series of workshops with professional musicians and music producers. The work elicits musicians' requirements for software systems, exploring how to design systems that support creativity and collaboration while remaining usable: effective, efficient and satisfying to the user. The format models that of similar workshops, where a three-pronged approach is taken to focus on three different types of creativity: exploratory, combinatorial and transformational. Participants describe a story that defines different user roles and expectations. Focus groups help to refine and combine the existing experiences and begin to identify ways in which systems can be made more usable and can support more creative ways of working. We also examine the broader notion of usability, defining and describing different user types and how their views of usability may differ or even be at odds. Our findings show that while existing systems are very good at supporting traditional usability metrics, they may not consider the broader implications of a considered and holistic user experience.

    AVUI: Designing a toolkit for audiovisual interfaces

    The combined use of sound and image has a rich history, from audiovisual artworks to research exploring the potential of data visualization and sonification. However, we lack standard tools or guidelines for audiovisual (AV) interaction design, particularly for live performance. We propose the AVUI (AudioVisual User Interface), where sound and image are used together in a cohesive way in the interface, and an enabling technology, the ofxAVUI toolkit. AVUI guidelines and ofxAVUI were developed in a three-stage process, together with AV producers: 1) participatory design activities; 2) prototype development; 3) encapsulation of the prototype as a plug-in, evaluation, and roll-out. Best practices identified include: reconfigurable interfaces and mappings; object-oriented packaging of AV and UI; diverse sound visualization; flexible media manipulation and management. The toolkit and a mobile app developed using it have been released as open source. The guidelines and toolkit demonstrate the potential of AVUI and offer designers a convenient framework for AV interaction design.
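    To suggest what "object-oriented packaging of AV and UI" can mean in practice, here is a minimal conceptual sketch. It is not the ofxAVUI API (ofxAVUI is an openFrameworks/C++ addon); the AVObject class and its mapping parameter are hypothetical, chosen only to illustrate one object owning a sound parameter, its visualization, and the UI input that drives both.

```python
# Conceptual sketch of the AVUI idea: sound, its visualization, and the
# UI control that manipulates them are packaged as a single object with
# a reconfigurable mapping. Names here are illustrative, not ofxAVUI.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AVObject:
    param_name: str                                   # sound parameter owned
    value: float = 0.0                                # normalised 0..1
    mapping: Callable[[float], float] = lambda v: v   # reconfigurable

    def on_ui_input(self, v: float) -> None:
        """A UI gesture updates sound and visuals via one shared value."""
        self.value = self.mapping(v)

    def audio(self) -> float:
        return self.value              # would feed the synthesis engine

    def visual(self) -> str:
        """Trivial 'visualization': bar length tracks the same value."""
        return "#" * int(self.value * 10)

vol = AVObject("volume", mapping=lambda v: v ** 2)   # perceptual curve
vol.on_ui_input(0.8)
print(vol.audio(), vol.visual())  # ~0.64 ######
```

    Swapping the mapping function at run time gives the reconfigurability the guidelines call for, while keeping audio and visuals cohesive because both read the same state.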

    Exciting Instrumental Data: Toward an Expanded Action-Oriented Ontology for Digital Music Performance

    Musical performance using digital musical instruments has obscured the relationship between observable musical gestures and the resultant sound, because the sound-producing mechanisms of digital musical instruments are hidden within the digital music making system. The difficulty in observing embodied artistic expression is especially acute for instruments composed entirely of digital components. Despite this characteristic of digital music performance practice, this thesis argues that it is possible to bring digital musical performance further within our action-oriented ontology by understanding the digital musician through the lens of LĂ©vi-Strauss’ notion of the bricoleur. Furthermore, by examining musical gestures with these instruments through a multi-tiered analytical framework that accounts for the physical computing elements necessarily present in all digital music making systems, we can further understand and appreciate the intricacies of digital music performance practice and culture.
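    The abstract does not spell out the tiers of its analytical framework, but the kind of layering it implies can be suggested with a toy structure. The four tiers below (physical action, sensed data, mapping, sonic result) are our assumption of what such an analysis separates, not the thesis's published framework.

```python
# Purely illustrative sketch of a multi-tiered view of one digital music
# gesture, making the hidden physical-computing layers explicit.
from dataclasses import dataclass

@dataclass
class GestureTiers:
    physical_action: str   # what an observer can see
    sensed_data: str       # what the physical computing layer captures
    mapping: str           # the hidden software relation
    sonic_result: str      # what the audience hears

strike = GestureTiers(
    physical_action="hand taps a pad",
    sensed_data="piezo voltage spike, 10-bit ADC reading",
    mapping="velocity -> sample playback gain",
    sonic_result="drum sample at matching loudness",
)
print(strike)
```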

    Algorithmic composition of music in real-time with soft constraints

    Music has been the subject of formal approaches for a long time, ranging from Pythagoras’ elementary research on tonal systems to J. S. Bach’s elaborate formal composition techniques. Especially in the 20th century, much music was composed based on formal techniques: algorithmic approaches for composing music were developed by composers like A. Schoenberg as well as in the scientific community. A variety of mathematical techniques have been employed for composing music, e.g. probability models, artificial neural networks or constraint-based reasoning.
    More recently, interactive music systems have become popular: existing songs can be replayed with musical video games, and original music can be interactively composed with easy-to-use applications running e.g. on mobile devices. However, applications which algorithmically generate music in real-time based on user interaction are mostly experimental and limited in either interactivity or musicality. There are many enjoyable applications, but there are also many opportunities for improvements and novel approaches.
    The goal of this work is to provide a general and systematic approach for specifying and implementing interactive music systems. We introduce an algebraic framework for interactively composing music in real-time with a reasoning technique called ‘soft constraints’: this technique allows modeling and solving a large range of problems and is particularly well suited for problems with soft and concurrent optimization goals. Our framework is based on well-known theories for music and soft constraints and allows specifying interactive music systems by declaratively defining ‘how the music should sound’ with respect to both user interaction and musical rules. Based on this core framework, we introduce an approach for interactively generating music similar to existing melodic material. With this approach, musical rules can be defined by playing notes (instead of writing code) in order to make interactively generated melodies comply with a certain musical style. We introduce an implementation of the algebraic framework in .NET and present several concrete applications: ‘The Planets’ is an application controlled by a table-based tangible interface where music can be interactively composed by arranging planet constellations. ‘Fluxus’ is an application geared towards musicians which allows training melodic material that can be used to define musical styles for applications geared towards non-musicians. Based on musical styles trained by the Fluxus sequencer, we introduce a general approach for transforming spatial movements into music and present two concrete applications: the first is controlled by a touch display, the second by a motion tracking system. Finally, we investigate how interactive music systems can be used in the area of pervasive advertising in general and how our approach can be used to realize ‘interactive advertising jingles’.
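    The core mechanism, scoring candidate notes against several soft and possibly conflicting preferences and picking the best combined score, can be sketched briefly. The particular constraints and weights below (in_style, smooth, follow_user) are illustrative assumptions; the thesis's algebraic framework is considerably more general.

```python
# Minimal sketch of melody generation with soft constraints: each soft
# preference scores every candidate next note, and the note with the
# best weighted sum wins, even when the preferences conflict.
SCALE = {60, 62, 64, 65, 67, 69, 71, 72}   # C major, MIDI note numbers

def in_style(note: int) -> float:
    """Soft preference: stay within the trained scale."""
    return 1.0 if note in SCALE else 0.0

def smooth(prev: int, note: int) -> float:
    """Soft preference: favour small melodic intervals."""
    return 1.0 / (1.0 + abs(note - prev))

def follow_user(target: int, note: int) -> float:
    """Soft preference: track the pitch the user is pointing at."""
    return 1.0 / (1.0 + abs(note - target))

def next_note(prev: int, user_target: int,
              weights: tuple[float, float, float] = (1.0, 0.5, 2.0)) -> int:
    w_style, w_smooth, w_user = weights
    return max(
        range(55, 80),
        key=lambda n: (w_style * in_style(n)
                       + w_smooth * smooth(prev, n)
                       + w_user * follow_user(user_target, n)),
    )

print(next_note(prev=64, user_target=70))  # 69: near the target, in scale
```

    Because the goals are soft, an out-of-scale user target is pulled towards the style rather than rejected, which is the behaviour the abstract describes for conflicting interaction and musical rules.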

    Montage As A Participatory System: Interactions with the Moving Image

    Full version unavailable due to 3rd party copyright restrictions.
    Recent developments in network culture suggest a weakening of hierarchical narratives of power and representation. Online technologies of distributed authorship appear to nurture a complex, speculative, contradictory and contingent realism. Yet there is a continuing deficit where the moving image is concerned, its very form appearing resistant to the dynamic throughputs and change models of real-time interaction. If the task is not to suspend but to encourage disbelief as a condition in the user, how can this be approached as a design problem? In the attempt to build a series of design projects suggesting open architectures for the moving image, might a variety of (pre-digital) precursors from the worlds of art, architecture and film offer the designer models for inspiration or adaptation? A series of projects have been undertaken, each investigating the composite moving image, specifically in the context of real-time computation and interaction. This arose from a desire to interrogate the qualia of the moving image within interactive systems, relative to a range of behaviours and/or observer positions, which attempt to situate users as conscious compositors. This is explored in the thesis through reflection on a series of experimental interfaces designed for real-time composition in performance, exhibition and online contexts.
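    One way to picture montage as a participatory system is a run-time loop in which viewer activity steers which shot is cut in next. The sketch below is a deliberately crude illustration of that idea; the shot pools and the energy threshold are invented, and the thesis's experimental interfaces are far richer.

```python
# Toy sketch of participatory montage: rather than a fixed edit, the
# next shot is chosen at run time from a pool, steered by viewer input.
import random

SHOTS = {
    "calm": ["wide_landscape", "slow_pan_interior", "still_portrait"],
    "agitated": ["handheld_crowd", "rapid_closeup", "flash_cut_street"],
}

def next_shot(viewer_energy: float) -> str:
    """Pick from the pool whose mood matches current interaction energy."""
    mood = "agitated" if viewer_energy > 0.5 else "calm"
    return random.choice(SHOTS[mood])

# Viewer activity (mouse speed, motion tracking, etc.) drives the cut:
for energy in (0.1, 0.9, 0.4):
    print(next_shot(energy))
```

    Even this crude loop shows the shift the abstract describes: the user becomes a conscious compositor because their behaviour, not a fixed timeline, determines the sequence.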

    Soundtrack-controlled cinematographic systems

    PhD Thesis.