Reduction in Computer Music: Bodies, Temporalities, and Generative Computation
In the age of pervasive computing, the way our body interacts with reality needs to be reconceptualized. The reduction of embodiment is a problem for computer music, since this music relies heavily on different layers of (digital) technology and mediation in order to be produced and performed. The article shows that such mediation should not be conceived of as an obstacle but rather as a constitutive element of a permanent, complex negotiation between the artist, the machinery, and the audience, aimed at shaping a different temporality for musical language (as developed by the Italian artist Caterina Barbieri).

Federica Buongiorno, 'Reduction in Computer Music: Bodies, Temporalities, and Generative Computation', in The Case for Reduction, ed. by Christoph F. E. Holzhey and Jakob Schillinger, Cultural Inquiry, 25 (Berlin: ICI Berlin Press, 2022), pp. 175-90 <https://doi.org/10.37050/ci-25_09>
Improvising with the Threnoscope: integrating code, hardware, GUI, network, and graphic scores
Live coding emphasises improvisation. It is an art practice that merges the act of musical composition and performance into a public act of projected writing. This paper introduces the Threnoscope system, which includes a live coding micro-language for drone-based microtonal composition. The paper discusses the aims and objectives of the system, elucidates the design decisions, and introduces in particular the code score feature present in the Threnoscope. The code score is a novel element in the design of live coding systems, allowing for improvisation through a graphic score by rendering a visual representation of past and future events in a real-time performance. The paper demonstrates how the system's methods can be mapped ad hoc to GUI- or hardware-based control.
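The code score idea described above can be sketched in miniature: a registry of scheduled events rendered as a timeline of past and future events relative to the current moment in a performance. The sketch below is illustrative Python, not the Threnoscope's actual SuperCollider implementation; all class, method, and event names are invented for the example.

```python
# Illustrative sketch of a "code score": scheduled musical events with
# onset times, rendered as a timeline marking each event as past or
# future relative to the current performance time. Names are invented;
# this is not the Threnoscope's API.

class CodeScore:
    def __init__(self):
        self.events = []  # list of (onset_seconds, description)

    def schedule(self, onset, description):
        self.events.append((onset, description))
        self.events.sort()  # keep events in chronological order

    def render(self, now):
        """Return one timeline line per event, tagged past or next."""
        lines = []
        for onset, desc in self.events:
            tag = "past" if onset < now else "next"
            lines.append(f"{tag}  t={onset:>5.1f}s  {desc}")
        return lines

score = CodeScore()
score.schedule(0.0, "drone start (just intonation, 3/2)")
score.schedule(8.0, "add partial at 5/4")
score.schedule(20.0, "fade drone over 10s")

for line in score.render(now=10.0):
    print(line)
```

A real system would redraw this view continuously, but the core idea is the same: the performer improvises against a visible representation of what has happened and what is about to happen.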
Supporting virtuosity and flow in computer music
As we begin to realise the sonic and expressive potential of the computer, HCI researchers face the challenge of designing rewarding and accessible user experiences that enable individuals to explore complex creative domains such as music.
In performance-based music systems such as sequencers, a disjunction exists between the musician's specialist skill with performance hardware and the generic usability techniques applied in the design of the software. The creative process is not only fragmented across multiple physical (and virtual) devices, but divided across creativity and productivity phases separated by the act of recording.
Integrating psychologies of expertise and intrinsic motivation, this thesis proposes a design shift from usability to virtuosity, using theories of 'flow' (Csikszentmihalyi, 1996) and feedback 'liveness' (Tanimoto, 1990) to identify factors that facilitate learning and creativity in digital notations and interfaces, leading to a set of design heuristics to support virtuosity in notation use. Using the cognitive dimensions of notations framework (Green, 1996), models of the creative user experience are developed, working towards a theoretical framework for HCI in music systems, and specifically computer-aided composition.
Extensive analytical methods are used to look at corollaries of virtuosity and flow in real-world computer music interaction, notably in soundtracking, a software-based composing environment offering a rapid edit-audition feedback cycle, enabled by the user's skill in manipulating the text-based notation (and program) through the computer keyboard. The interaction and development of more than 1,000 sequencer and tracker users was recorded over a period of 2 years, to investigate the nature and development of skill and technique, look for evidence of flow experiences, and establish the use and role of both visual and musical feedback in music software. Quantitative analyses of interaction data are supplemented with a detailed video study of a professional tracker composer, and a user survey that draws on psychometric methods to evaluate flow experiences in the use of digital music notations, such as sequencers and trackers.
Empirical findings broadly support the proposed design heuristics, and enable the development of further models of liveness and flow in notation use. Implications for UI design are discussed in the context of existing music systems, and of supporting digitally-mediated creativity in other domains based on notation use.
Grappling with movement models: performing arts and slippery contexts
The ways we leave, recognise, and interpret marks of human movement are deeply entwined with layerings of collective memory. Although we retroactively order chronological sediments to map shareable stories, our remediations often emerge unpredictably from a multidimensional mnemonic fabric: contemporary ideas can resonate with ancient aspirations and initiatives, and foreign fields of investigation can inform ostensibly unrelated endeavours. Such links reinforce the debunking of grand narratives, and resonate with quests for the new kinds of thinking needed to address the mix of living, technological, and semiotic systems that makes up our wider ecology. As a rapidly evolving field, movement-and-computing is exceptionally open to, and in need of, this diversity.

This paper argues for awareness of the analytical apparatus we sometimes too unwittingly bring to bear on our research objects, and for the value of transdisciplinary and tangential thinking in diversifying our research questions. With a view to articulating new, shareable questions rather than proposing answers, it examines wider questions of problem-framing. It emphasises the importance of, quite literally, grounding movement: of recognising its environmental implications and qualities. Informed by work on expressive gesture and the creative use of instruments in domains including puppetry and music, this paper also insists on the complexity and heterogeneity of the research strands that are indissociably bound up in our corporeal-technological movement practices.
Algorithmic Compositional Methods and their Role in Genesis: A Multi-Functional Real-Time Computer Music System
Algorithmic procedures have been applied in computer music systems to generate compositional products using conventional musical formalism, extensions of such musical formalism and extra-musical disciplines such as mathematical models. This research investigates the applicability of such algorithmic methodologies for real-time musical composition, culminating in Genesis, a multi-functional real-time computer music system written for Mac OS X in the SuperCollider object-oriented programming language, and contained in the accompanying DVD. Through an extensive graphical user interface, Genesis offers musicians the opportunity to explore the application of the sonic features of real-time sound-objects to designated generative processes via different models of interaction such as unsupervised musical composition by Genesis and networked control of external Genesis instances. As a result of the applied interactive, generative and analytical methods, Genesis forms a unique compositional process, with a compositional product that reflects the character of its interactions between the sonic features of real-time sound-objects and its selected algorithmic procedures.
Within this thesis, the technologies involved in algorithmic methodologies used for compositional processes, and the concepts that define their constructs, are described, with consequent detailing of their selection and application in Genesis; audio examples of these algorithmic compositional methods are demonstrated on the accompanying DVD. To demonstrate the real-time compositional abilities of Genesis, free explorations with instrumentalists, along with studio recordings of the compositional processes available in Genesis, are presented in audiovisual examples contained on the accompanying DVD. The evaluation of the Genesis system's capability to form a real-time compositional process, thereby maintaining real-time interaction between the sonic features of real-time sound-objects and its selected algorithmic compositional methods, focuses on existing evaluation techniques founded in HCI and the qualitative issues such evaluation methods present. In terms of the compositional products generated by Genesis, the challenges in quantifying and qualifying its outputs are identified, demonstrating the intricacies of assessing generative methods of compositional processes and their impact on a resulting compositional product. The thesis concludes by considering further advances and applications of Genesis, and by inviting further dissemination of the Genesis system and promotion of research into evaluative methods of generative techniques, in the hope that this may provide additional insight into the relative success of products generated by real-time algorithmic compositional processes.
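One common family of generative procedures of the kind surveyed in work like this is the Markov chain over musical material. The sketch below is a minimal illustration in Python rather than SuperCollider, and the transition table and note names are invented for the example; it does not reproduce any method from Genesis itself.

```python
import random

# A minimal first-order Markov chain over note names: each note is
# chosen from the notes that may follow the previous one. The
# transition table is illustrative, not taken from Genesis.

TRANSITIONS = {
    "C": ["D", "E", "G"],
    "D": ["C", "E"],
    "E": ["D", "G", "C"],
    "G": ["C", "E"],
}

def markov_melody(start, length, seed=0):
    """Generate a melody of `length` notes starting from `start`."""
    rng = random.Random(seed)  # seeded for reproducible output
    note, melody = start, [start]
    for _ in range(length - 1):
        note = rng.choice(TRANSITIONS[note])
        melody.append(note)
    return melody

print(markov_melody("C", 8))
```

A real-time system would additionally feed analysed features of incoming sound back into such a process, which is the interactive loop the abstract describes.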
Proceedings of the 1st International Conference on Live Coding
Open Access peer-reviewed papers on live coding published at the 1st International Conference on Live Coding (ICLC) in Leeds.
Multiparametric interfaces for fine-grained control of digital music
Digital technology provides a very powerful medium for musical creativity, and the way in which we interface and interact with computers has a huge bearing on our ability to realise our artistic aims. The standard input devices available for the control of digital music tools tend to afford a low quality of embodied control; they fail to realise our innate expressiveness and dexterity of motion. This thesis looks at ways of capturing more detailed and subtle motion for the control of computer music tools; it examines how this motion can be used to control music software, and evaluates musicians' experience of using these systems.
Two new musical controllers were created, based on a multiparametric paradigm where multiple, continuous, concurrent motion data streams are mapped to the control of musical parameters. The first controller, Phalanger, is a markerless video tracking system that enables the use of hand and finger motion for musical control. EchoFoam, the second system, is a malleable controller, operated through the manipulation of conductive foam. Both systems use machine learning techniques at the core of their functionality. These controllers are front ends to RECZ, a high-level mapping tool for multiparametric data streams.
The development of these systems and the evaluation of musicians' experience of their use constructs a detailed picture of multiparametric musical control. This work contributes to the developing intersection between the fields of computer music and human-computer interaction. The principal contributions are the two new musical controllers, and a set of guidelines for the design and use of multiparametric interfaces for the control of digital music. This work also acts as a case study of the application of HCI user experience evaluation methodology to musical interfaces.
The results highlight important themes concerning multiparametric musical control. These include the use of metaphor and imagery, choreography and language creation, individual differences, and uncontrol. They highlight how this style of interface can fit into the creative process, and advocate a pluralistic approach to the control of digital music tools where different input devices fit different creative scenarios.
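The multiparametric paradigm described above, where several continuous, concurrent input streams are mapped many-to-many onto musical parameters, can be sketched very simply. The weights, stream names, and parameter names below are invented for illustration; this is not RECZ's actual mapping model.

```python
# Illustrative many-to-many mapping: each musical parameter blends all
# input streams, so a single gesture shapes several parameters at once.
# Inputs are assumed normalised to the range 0..1.

INPUTS = ["x", "y", "pressure"]
PARAMS = ["cutoff_hz", "grain_size_ms", "reverb_mix"]

WEIGHTS = {  # how much each input contributes to each parameter
    "cutoff_hz":     {"x": 0.7, "y": 0.2, "pressure": 0.1},
    "grain_size_ms": {"x": 0.1, "y": 0.8, "pressure": 0.1},
    "reverb_mix":    {"x": 0.0, "y": 0.3, "pressure": 0.7},
}
RANGES = {  # target range for each musical parameter
    "cutoff_hz": (200.0, 8000.0),
    "grain_size_ms": (5.0, 200.0),
    "reverb_mix": (0.0, 1.0),
}

def map_frame(frame):
    """Map one frame of input values onto all musical parameters."""
    out = {}
    for param in PARAMS:
        mix = sum(WEIGHTS[param][i] * frame[i] for i in INPUTS)
        lo, hi = RANGES[param]
        out[param] = lo + mix * (hi - lo)
    return out

print(map_frame({"x": 0.5, "y": 0.2, "pressure": 0.9}))
```

In practice such mappings are often learned rather than hand-weighted, as with the machine learning techniques at the core of Phalanger and EchoFoam, but the fixed-weight version shows the basic shape of the idea.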