88 research outputs found

    Amergent Music: behavior and becoming in technoetic & media arts

    Merged with duplicate records 10026.1/1082 and 10026.1/2612 on 15.02.2017 by CS (TIS).
    Technoetic and media arts are environments of mediated interaction and emergence, where meaning is negotiated by individuals through a personal examination and experience—or becoming—within the mediated space. This thesis examines these environments from a musical perspective and considers how sound functions as an analog to this becoming. Five distinct, original musical works explore how the emergent dynamics of mediated, interactive exchange can be leveraged towards the construction of musical sound. In the context of this research, becoming can be understood relative to Henri Bergson’s description of the appearance of reality—something that is making or unmaking but is never made. Music conceived on a linear model is essentially fixed in time. It is unable to recognize or respond to the becoming of interactive exchange, which is marked by frequent and unpredictable transformation. This research abandons linear musical approaches and looks to generative music as a way to reconcile the dynamics of mediated interaction with a musical listening experience. The specifics of this relationship are conceptualized in the structaural coupling model, which borrows from Maturana & Varela’s “structural coupling.” The person interacting and the generative musical system are compared to autopoietic unities, each responding to mutual perturbations while maintaining independence and autonomy. Musical autonomy is sustained through generative techniques and organized within a psychogeographical framework. In the way that cities invite use and communicate boundaries, the individual sounds of a musical work create an aural context that is legible to the listener, rendering the consequences or implications of any choice audible.
This arrangement of sound, as it relates to human presence in a technoetic environment, challenges many existing assumptions, including the idea that “the sound changes.” Change can be viewed as a movement predicated on behavior. Amergent music is brought forth through kinds of change, or sonic movement, more robustly explored as a dimension of musical behavior. Listeners hear change, but it is the result of behavior that arises from within an autonomous musical system relative to the perturbations sensed within its environment. Amergence propagates through the effects of emergent dynamics coupled to the affective experience of continuous sonic transformation.

    Scanning Spaces: Paradigms for Spatial Sonification and Synthesis

    In 1962 Karlheinz Stockhausen’s “Concept of Unity in Electronic Music” introduced a connection between the parameters of intensity, duration, pitch, and timbre using an accelerating pulse train. In 1973 John Chowning discovered that complex audio spectra could be synthesized by increasing vibrato rates past 20 Hz. In both cases the notion of acceleration to produce timbre was critical to the discovery. Although both composers also utilized sound spatialization in their works, spatial parameters were not unified with their synthesis techniques. This dissertation examines software studies and multimedia works involving the use of spatial and visual data to produce complex sound spectra. The culmination of these experiments, Spatial Modulation Synthesis, is introduced as a novel, mathematical control paradigm for audio-visual synthesis, providing unified control of spatialization, timbre, and visual form using high-speed sound trajectories. The unique visual sonification and spatialization rendering paradigms of this dissertation necessitated the development of an original audio-sample-rate graphics rendering implementation, which, unlike typical multimedia frameworks, provides an exchange of audio-visual data without downsampling or interpolation.
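    Chowning's observation that pushing a vibrato rate past 20 Hz turns modulation into timbre is captured by the basic FM equation y(t) = sin(2πf_c·t + I·sin(2πf_m·t)), where sidebands appear at f_c ± k·f_m. A minimal sketch of that relationship (function names are mine, not from the dissertation):

    ```python
    import math

    def fm_sample(t, fc, fm, index):
        """One sample of simple FM: a carrier phase-modulated by a sinusoid.

        Below roughly 20 Hz the modulator is heard as vibrato; past 20 Hz
        the same equation produces audible sidebands at fc +/- k*fm.
        """
        return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

    def render(fc, fm, index, sr=44100, dur=0.01):
        """Render dur seconds of the FM signal at sample rate sr."""
        return [fm_sample(n / sr, fc, fm, index) for n in range(int(sr * dur))]
    ```

    With fm = 6 Hz the output of `render(440, 6, 2.0)` is heard as vibrato on the 440 Hz carrier; with fm = 110 Hz the identical equation yields a harmonic spectrum, which is the acceleration-to-timbre transition the abstract describes.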

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation could be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control for a UAV swarm using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy, and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, a reward function is added that combines global flocking maintenance, a mutual reward, and a collision penalty.
We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walk to control the communication between a team of robots with swarming behavior for musical creation.
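The reward structure named above (global flocking maintenance, a mutual reward term, and a collision penalty) could be sketched as a single scalar function of the swarm state. The weights, reference distances, and function names below are hypothetical illustrations, not values from the thesis:

```python
import math

def flocking_reward(positions, leader_pos, d_ref=1.0, d_min=0.3):
    """Illustrative reward mixing the three terms named in the abstract:
    cohesion toward the leader (flocking maintenance), mutual spacing
    near a reference distance d_ref (mutual reward), and a penalty when
    any pair of UAVs closes within d_min (collision penalty)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Flocking maintenance: penalize mean distance from the leader.
    cohesion = -sum(dist(p, leader_pos) for p in positions) / len(positions)

    # Mutual reward: penalize deviation of pairwise spacing from d_ref.
    pairs = [(positions[i], positions[j])
             for i in range(len(positions))
             for j in range(i + 1, len(positions))]
    mutual = -sum(abs(dist(a, b) - d_ref) for a, b in pairs) / max(len(pairs), 1)

    # Collision penalty: large fixed cost per pair closer than d_min.
    collisions = sum(1 for a, b in pairs if dist(a, b) < d_min)

    return cohesion + mutual - 10.0 * collisions
```

A DDPG critic would be trained against returns accumulated from such a reward under centralized training, while each follower's actor acts only on the local information it collects.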

    Expanding Eco-Visualization: Sculpting Corn Production

    This dissertation expands upon the definition of eco-visualization (EV) artwork. EV was originally defined in 2006 by Tiffany Holmes as a way to display the real-time consumption statistics of key environmental resources with the goal of promoting ecological literacy. I assert that the final forms of EV artworks are not necessarily dependent on technology and can differ in terms of media used: they can be sculptural, video-based, or static two-dimensional forms that communicate interpreted environmental information. There are two main categories of EV: one that is predominantly screen-based and another that employs a variety of modes of representation to visualize environmental information. EVs are political acts, situated in a charged climate of rising awareness, operating within the context of environmentalism and sustainability. I discuss a variety of EV works within the frame of ecopsychology, including EcoArtTech’s Eclipse and Keith Deverell’s Building Run; Andrea Polli’s Cloud Car and Particle Falls; Nathalie Miebach’s series The Sandy Rides; and Natalie Jeremijenko’s Mussel Choir. The range of EV works provided models for my creative project, Sculpting Corn Production, and a foundation from which I developed a creative methodology. Created in an effort to overcome my experience of solastalgia, Sculpting Corn Production is a series of discrete paper sculptures focusing on American industrial corn farming. This EV also functions as a way for me to understand our devastated monoculture landscapes and the politics, economics, and related ecological dimensions of our food production.

    Immersive analytics for oncology patient cohorts

    This thesis proposes a novel interactive immersive analytics tool and methods to interrogate cancer patient cohorts in an immersive virtual environment, namely Virtual Reality to Observe Oncology data Models (VROOM). The overall objective is to develop an immersive analytics platform comprising a data analytics pipeline from raw gene expression data to immersive visualisation on virtual and augmented reality platforms utilising a game engine; Unity3D has been used to implement the visualisation. The work in this thesis could provide oncologists and clinicians with an interactive visualisation and visual analytics platform that helps them drive their analysis of treatment efficacy and achieve the goal of evidence-based personalised medicine. The thesis integrates the latest discoveries and developments in cancer patient prognosis, immersive technologies, machine learning, decision support systems and interactive visualisation to form an immersive analytics platform for complex genomic data. The experimental paradigm followed is the understanding of transcriptomics in cancer samples. Specifically, the thesis investigates gene expression data to determine the biological similarity revealed by patients' tumour transcriptomic profiles, which indicate the genes active in different patients.
    In summary, the thesis contributes: i) a novel immersive analytics platform for patient cohort data interrogation in a similarity space, where the similarity space is based on the patients' biological and genomic similarity; ii) an effective immersive environment optimisation design based on a usability study of exocentric and egocentric visualisation, audio and sound design optimisation; iii) an integration of trusted and familiar 2D biomedical visual analytics methods into the immersive environment; iv) a novel use of game theory as the decision-making engine to support the analytics process, and an application of optimal transport theory to missing data imputation to ensure the preservation of the data distribution; and v) case studies showcasing the real-world application of the visualisation and its effectiveness.

    Collaborating with the Behaving Machine: simple adaptive dynamical systems for generative and interactive music

    Situated at the intersection of interactive computer music and generative art, this thesis is inspired by research in Artificial Life and Autonomous Robotics and applies some of the principles and methods of these fields in a practical music context. As such, the project points toward a paradigm for computer music research and performance which complements current mainstream approaches and builds upon existing creative applications of Artificial Life research. Many artists have adopted engineering techniques from the field of Artificial Life research, as they seem to support a richer interactive experience with computers than is often achieved in digital interactive art. Moreover, the low-level aspects of life which the research programme aims to model are often evident in these artistic appropriations in the form of bizarre and abstract but curiously familiar digital forms that somehow, despite their silicon make-up, appear to accord with biological convention. The initial aesthetic motivation for this project was very personal and stemmed from interests in adaptive systems and improvisation and a desire to unite the two. In simple terms, I wanted to invite these synthetic critters up on stage and play with them. There has been some similar research in the musical domain, but it has focused on a very small selection of specific models and techniques, which have predominantly been applied as compositional tools rather than for use in live generative music. This thesis considers the advantages of the Alife approach for contemporary computer musicians and offers specific examples of simple adaptive systems as components for both compositional and performance tools. These models have been implemented in a range of generative and interactive works which are described here. These include generative sound installations, interactive installations and a performance system for collaborative man-machine improvisation.
Public response at exhibitions and concerts suggests that the approach taken here holds much promise.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
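    The MR decomposition of Life can be illustrated in miniature: a mapper emits one neighbour-count contribution per adjacent cell of every live cell, and a reducer sums contributions per cell and applies the birth/survival rules. This is a toy in-memory sketch of that pattern, not the authors' optimized streaming implementation, and it omits strip partitioning; the function names are illustrative:

    ```python
    from collections import defaultdict

    def life_map(live_cells):
        """Mapper: each live cell emits an 'alive' flag for itself (value 0)
        plus one neighbour-count contribution (value 1) to each of its
        eight neighbouring cells."""
        for (x, y) in live_cells:
            yield (x, y), 0  # flag: this cell is currently alive
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (dx, dy) != (0, 0):
                        yield (x + dx, y + dy), 1

    def life_reduce(pairs):
        """Reducer: sum neighbour contributions per cell, then apply
        Conway's rules (survive on 2-3 live neighbours, birth on exactly 3)."""
        counts = defaultdict(int)
        alive = set()
        for cell, v in pairs:
            if v == 0:
                alive.add(cell)
            else:
                counts[cell] += v
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in alive)}
    ```

    Run on a horizontal blinker, `life_reduce(life_map({(0, 0), (1, 0), (2, 0)}))` returns the vertical blinker `{(1, -1), (1, 0), (1, 1)}`. In the streaming setting each of these functions would read key-value pairs from stdin and write them to stdout, with the MR framework performing the shuffle between them.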