Sculpting Unrealities: Using Machine Learning to Control Audiovisual Compositions in Virtual Reality
This thesis explores the use of interactive machine learning (IML) techniques to control audiovisual compositions within the emerging medium of virtual reality (VR). Accompanying the text is a portfolio of original compositions and open-source software. These research outputs represent the practical elements of the project that help to shed light on the core research question: how can IML techniques be used to control audiovisual compositions in VR? To find answers to this question, it was broken down into its constituent elements. To situate the research, an exploration of the contemporary field of audiovisual art locates the practice between the areas of visual music and generative AV. This exploration of the field results in a new method of categorising the constituent practices. The practice of audiovisual composition is then explored, focusing on the concept of equality. It is found that, throughout the literature, audiovisual artists aim to treat audio and visual material equally. This is interpreted as a desire for balance between the audio and visual material. This concept is then examined in the context of VR. A feeling of presence is found to be central to this new medium and is identified as an important consideration for the audiovisual composer in addition to the senses of sight and sound. Several new terms are formulated which provide the means by which the compositions within the portfolio are analysed. A control system based on IML techniques, called the Neural AV Mapper, is developed. This is used to develop a compositional methodology through the creation of several studies. The outcomes from these studies are incorporated into two live performance pieces, Ventriloquy I and Ventriloquy II. These pieces showcase the use of IML techniques to control audiovisual compositions in a live performance context. The lessons learned from these pieces are incorporated into the development of the ImmersAV toolkit.
This open-source software toolkit was built specifically to allow for the exploration of the IML control paradigm within VR. The toolkit provides the means by which the immersive audiovisual compositions, Obj_#3 and Ag Fás Ar Ais Arís, are created. Obj_#3 takes the form of an immersive audiovisual sculpture that can be manipulated in real time by the user. The title of the thesis references the physical act of sculpting audiovisual material. It also refers to the ability of VR to create alternate realities that are not bound to the physics of real life. This exploration of unrealities emerges as an important aspect of the medium. The final piece in the portfolio, Ag Fás Ar Ais Arís, takes the knowledge gained from the earlier work and pushes the boundaries to maximise the potential of the medium and the material.
Diffusion Evolved: New Musical Interfaces Applied to Diffusion Performance
This exegesis takes a critical look at the performance paradigm of sound diffusion. In making a shift away from the sixty-year-old practice of performing on a mixing desk or other fader-bank console, it proposes and outlines a goal of intuitive and transparent relationships between performance gesture and spatial trajectory. This is achieved by coupling two previously segmented fields within electroacoustic music: spatialisation and interface design. This research explains how connecting the two fields and embracing contemporary technological developments, with a goal of increasing the liveness and gestural input that currently limit sound diffusion practice, could extend the art form into a virtuosic and compelling gestural performance art. The exegesis introduces and describes the author’s research and development of tactile.space, a new multitouch tool developed on the Bricktable for live sound diffusion. tactile.space is intended as a contribution to the growing research area of user interfaces developed specifically for the performance of sound in space. It affords performers a new level of gestural interaction with the space of the concert hall and the audience members, and it redefines multiple standardised interactions between the performer and the space, the gesture, the audience, and the sound in a diffusion concert.
Interfaces for human-centered production and use of computer graphics assets
The abstract is in the attachment.
Bendit_I/O: A System for Extending Mediated and Networked Performance Techniques to Circuit-Bent Devices
Circuit bending—the act of modifying a consumer device's internal circuitry in search of new, previously-unintended responses—provides artists with a chance to subvert expectations for how a certain piece of hardware should be utilized, asking them to view everyday objects as complex electronic instruments. Along with the ability to create avant-garde instruments from unique and nostalgic sound sources, the practice of circuit bending serves as a methodology for exploring the histories of discarded objects through activism, democratization, and creative resurrection. While a rich history of circuit bending continues to inspire artists today, the recent advent of smart musical instruments and the growing number of hybrid tools available for creating connective musical experiences through networks asks us to reconsider the ways in which repurposed devices can continue to play a role in modern sonic art.
Bendit_I/O serves as a synthesis of the technologies and aesthetics of the circuit bending and Networked Musical Performance (NMP) practices. The framework extends techniques native to the practices of telematic and network art to hacked hardware so that artists can design collaborative and mediated experiences that incorporate old devices into new realities. Consisting of user-friendly hardware and software components, Bendit_I/O aims to be an entry point for novice artists into both of the creative realms it brings together.
This document presents details on the components of the Bendit_I/O framework along with an analysis of their use in three new compositions. Additional research serves to place the framework in historical context through literature reviews of previous work undertaken in the circuit bending and networked musical performance practices. Additionally, a case is made for performing hacked consumer hardware across a wireless network, emphasizing how extensions to current circuit bending and NMP practices provide the ability to probe our relationships with hardware through collaborative, mediated, and multimodal methods.
Interaction Design for Digital Musical Instruments
The thesis aims to elucidate the process of designing interactive systems for musical performance that combine software and hardware in an intuitive and elegant fashion. The original contribution to knowledge consists of: (1) a critical assessment of recent trends in digital musical instrument design, (2) a descriptive model of interaction design for the digital musician and (3) a highly customisable multi-touch performance system that was designed in accordance with the model.
Digital musical instruments are composed of a separate control interface and a sound generation system that exchange information. When designing the way in which a digital musical instrument responds to the actions of a performer, we are creating a layer of interactive behaviour that is abstracted from the physical controls. Often, the structure of this layer depends heavily upon:
1. The accepted design conventions of the hardware in use
2. Established musical systems, acoustic or digital
3. The physical configuration of the hardware devices and the grouping of controls that such configuration suggests
This thesis proposes an alternate way to approach the design of digital musical instrument behaviour – examining the implicit characteristics of its composite devices. When we separate the conversational ability of a particular sensor type from its hardware body, we can look in a new way at the actual communication tools at the heart of the device. We can subsequently combine these separate pieces using a series of generic interaction strategies in order to create rich interactive experiences that are not immediately obvious or directly inspired by the physical properties of the hardware.
This research ultimately aims to enhance and clarify the existing toolkit of interaction design for the digital musician.
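The thesis's own descriptive model is not reproduced in this abstract, but the idea it describes — separating a sensor's communication stream from its hardware body and recombining streams through generic interaction strategies — can be sketched in a few lines. Everything here (`InteractionLayer`, `scale`, `invert`, the sensor and parameter names) is hypothetical illustration, not drawn from the thesis:

```python
# Hypothetical sketch: an interaction layer that treats every sensor
# as an abstract normalized stream and routes it through generic,
# device-independent mapping strategies.

def scale(lo, hi):
    """Map a normalized 0-1 control value into the range [lo, hi]."""
    return lambda x: lo + x * (hi - lo)

def invert(mapping):
    """Generic strategy: flip a control's direction before mapping."""
    return lambda x: mapping(1.0 - x)

class InteractionLayer:
    def __init__(self):
        self.routes = {}  # (sensor, parameter) -> mapping function

    def connect(self, sensor, parameter, mapping):
        self.routes[(sensor, parameter)] = mapping

    def process(self, sensor, value):
        """Fan a normalized sensor value out to every parameter it is
        routed to, returning {parameter: mapped_value}."""
        return {param: fn(value)
                for (s, param), fn in self.routes.items() if s == sensor}

layer = InteractionLayer()
layer.connect("touch_x", "cutoff_hz", scale(200.0, 8000.0))
layer.connect("touch_x", "reverb_mix", invert(scale(0.0, 1.0)))
print(layer.process("touch_x", 0.5))
# {'cutoff_hz': 4100.0, 'reverb_mix': 0.5}
```

Because the routing table knows nothing about the physical device behind `"touch_x"`, the same strategies could be reattached to any sensor stream — the decoupling the thesis argues for.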
Sound based social networks
The sound environment is an echo of the activity and character of each place, often carrying information additional to that made available to the eyes (both new and redundant). It is, therefore, an intangible and volatile acoustic fingerprint of a place, or simply an acoustic snapshot of a single event. This rich resource, full of meaning and subtlety, Schafer called the Soundscape.
The exploratory research project presented here addresses the Soundscape in the context of Mobile Online Social Networking, aiming to determine the extent of its applicability to the establishment and/or strengthening of new and existing social links. This research goal demanded an interdisciplinary approach, which we have anchored in three main stems: Soundscapes, Mobile Sound and Social Networking. These three areas pave the scientific ground for this study and are introduced in the first part of the thesis. An extensive survey of state-of-the-art projects related to this research is also presented, gathering examples from different but adjacent areas such as mobile sensing, wearable computing, sonification, social media and context-aware computing. This survey validates that our approach is at once scientifically opportune and unique.
Furthermore, in order to assess the role of Soundscapes in the context of Social Networking, an experimental procedure was implemented based on an Online Social Networking mobile application, enriched with environmental sensing mechanisms able to capture and analyse the surrounding Soundscape and users' movements. Two main goals guided this prototypal research tool: collecting data regarding users' activity (both sonic and kinetic) and providing users with a real experience of using a Sound-Based Social Network, in order to collect informed opinions about this unique type of Social Networking. The application – Hurly-Burly – senses the surrounding Soundscape and analyses it using machine audition techniques, classifying it according to four categories: speech, music, environmental sounds and silence. Additionally, it determines the sound pressure level of the sensed Soundscape in dB(A)eq. This information is then broadcast to the user's entire online social network, allowing each member to visualise and audition a representation of the collected data. An individual record for each user is kept on a web server and can be accessed through an online application, which displays the continuous acoustic profile of each user along a timeline graph. The experimental procedure included three different test groups, each forming a social network with a cluster coefficient equal to one.
After the implementation and result-analysis stages, we concluded that Soundscapes can play a role in the Online Social Networking paradigm, especially where mobile applications are concerned. It has been shown that current off-the-shelf mobile technology presents a promising opportunity for accomplishing this kind of task (continuous monitoring, life logging and environment sensing), but battery limitations and multitasking constraints remain the bottleneck hindering the widespread adoption of successful applications. Additionally, online privacy is something users are not enthusiastic about letting go: sharing captured sound itself, instead of high-level representations of the sound, would deter users from adopting such applications. We also demonstrated that users who are more aware of the Soundscape concept are also more inclined to regard it as playing an important role in OSN. This means that more pedagogy towards the acoustic phenomenon is needed, and this type of research takes a step further in that direction.
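The abstract reports levels in dB(A)eq, the A-weighted equivalent continuous sound level. Setting aside the A-weighting stage (which requires the IEC 61672 filter curve), the equivalent level of a block of pressure samples reduces to a mean-square computation. A minimal sketch, assuming calibrated samples and a hypothetical reference pressure `p_ref`; this is an illustration of the quantity, not the Hurly-Burly implementation:

```python
import math

def leq_db(samples, p_ref=1.0):
    """Equivalent continuous sound level (Leq) of a block of pressure
    samples, in dB relative to p_ref. The A-weighting that makes this
    dB(A)eq proper is omitted: it would be an IEC 61672 filter applied
    to the samples before this step."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(mean_square / (p_ref * p_ref))

# A constant-amplitude sine has mean square A^2 / 2, so a full-scale
# sine sits about 3 dB below a full-scale constant signal:
sine = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
print(round(leq_db(sine), 1))  # -3.0
```

In a continuously sensing application, this per-block figure is what would be logged and broadcast rather than the raw audio itself, which also sidesteps the privacy concern the study identifies.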
Designing and Composing for Interdependent Collaborative Performance with Physics-Based Virtual Instruments
Interdependent collaboration is a system of live musical performance in which performers can directly manipulate each other’s musical outcomes. While most collaborative musical systems implement electronic communication channels between players that allow for parameter mappings, remote transmissions of actions and intentions, or exchanges of musical fragments, they interrupt the energy continuum between gesture and sound, breaking our cognitive representation of gesture to sound dynamics.
Physics-based virtual instruments allow for acoustically and physically plausible behaviors that are related to (and can be extended beyond) our experience of the physical world. They inherently maintain and respect a representation of the gesture to sound energy continuum.
This research explores the design and implementation of custom physics-based virtual instruments for realtime interdependent collaborative performance. It leverages the inherently physically plausible behaviors of physics-based models to create dynamic, nuanced, and expressive interconnections between performers. Design considerations, criteria, and frameworks are distilled from the literature in order to develop three new physics-based virtual instruments and associated compositions intended for dissemination and live performance by the electronic music and instrumental music communities. Conceptual, technical, and artistic details and challenges are described, and reflections and evaluations by the composer-designer and performers are documented.
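The instruments themselves are not specified in this abstract, but the gesture-to-sound energy continuum such models preserve can be illustrated with the simplest possible physics-based sound model: a damped mass-spring oscillator excited by a gestural force signal. The parameter values below are arbitrary assumptions chosen only to keep the discrete simulation stable and audible in range:

```python
# Hypothetical minimal physics-based sound model: one mass on a damped
# spring, driven sample-by-sample by a gestural force. Symplectic Euler
# integration (velocity updated before position) keeps it stable.

def mass_spring(force, sr=48000, mass=1.0, stiffness=1.0e7, damping=20.0):
    """Simulate one mass-spring-damper; `force` is the per-sample
    gesture input and the mass displacement is the output signal."""
    dt = 1.0 / sr
    x, v = 0.0, 0.0
    out = []
    for f in force:
        a = (f - stiffness * x - damping * v) / mass  # Newton: F = ma
        v += a * dt
        x += v * dt
        out.append(x)
    return out

# A 'pluck': a brief force impulse, after which the model rings freely
# at roughly sqrt(stiffness/mass)/(2*pi) ~ 503 Hz.
gesture = [1.0] * 10 + [0.0] * 4790
signal = mass_spring(gesture)
```

Because the output is the displacement produced by the input force, energy injected by the gesture is what sounds — there is no mapping layer to interrupt the continuum, which is the property the research builds on.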
Storytelling: The Human Experience Through Data-Driven Instruments
Digital Portfolio Dissertation files include: the dissertation text document; zipped computer files; and 7 performance videos linked for streaming at this URL on the University of Oregon Panopto service: https://uoregon.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx#folderID=%22eba8a7df-69b3-4e67-a924-b0a001853032%22. Archival copies of the videos have been preserved by the UO Libraries.
This Digital Portfolio Dissertation is a collection of seven original real-time, interactive, multichannel compositions featuring data-driven instruments. The dissertation includes video recordings of seven individual performances, the associated files needed for the performance of each work, and a descriptive text document for each of the seven compositions. The text document in this digital portfolio describes the storytelling components, the musical ideas and compositional structure of each composition, the design and implementation of each data-driven instrument, the sonic materials and data-mapping strategies, and other extra-musical elements associated with each composition.