
    Real-time MIDI Performance Evaluation for Beginning Piano Students

    MIDI is a standard digital protocol for communicating musical events. In this paper, we examine the construction of a system that validates musical performance characteristics, without compromising musical interpretation, through the use and evaluation of MIDI messages. Beginning music students often have a difficult time translating written musical characteristics into the sound they imply. Even though a teacher can effectively guide a student through this learning process, progress is typically slow, as evaluation of musical performances happens only once a week during a thirty-minute lesson. Prior research has shown that a model such as the proposed system increases the pace of learning. While some commercial, Windows-based applications do exist, we propose a solution that evaluates in real time, giving the student immediate feedback rather than feedback only at the end of a performance. We take a detailed look at the development process of our application, Blunote, built in a Linux environment on ALSA (Advanced Linux Sound Architecture) for the beginning piano student.
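The abstract does not include Blunote's code, but the kind of real-time check it describes can be sketched in a few lines: parse raw MIDI note-on bytes (the byte layout follows the MIDI 1.0 specification) and flag played notes that deviate from an expected score. All function names and the tolerance value below are our own illustrative assumptions, not the paper's implementation.

```python
NOTE_ON = 0x90  # status nibble for MIDI note-on messages

def parse_note_on(msg: bytes):
    """Return (channel, note, velocity) for a note-on, else None."""
    status, note, velocity = msg[0], msg[1], msg[2]
    # A note-on with velocity 0 is conventionally a note-off, so skip it too.
    if status & 0xF0 == NOTE_ON and velocity > 0:
        return (status & 0x0F, note, velocity)
    return None

def evaluate(played, expected, tolerance_ms=80):
    """Compare played (note, time_ms) events against the expected score."""
    feedback = []
    for (note, t), (exp_note, exp_t) in zip(played, expected):
        ok = note == exp_note and abs(t - exp_t) <= tolerance_ms
        feedback.append((note, "ok" if ok else "check pitch/timing"))
    return feedback

# Middle C (note 60) played on channel 0 at a moderately loud velocity:
print(parse_note_on(bytes([0x90, 60, 100])))  # (0, 60, 100)
```

In a real ALSA application the bytes would arrive from a sequencer port as the student plays, which is what makes the immediate, per-note feedback described above possible.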

    THE RISE OF WAVETABLE SYNTHESIS IN COMMERCIAL MUSIC AND ITS CREATIVE APPLICATIONS

    Wavetable synthesis is a powerful tool for music creation that helps composers and producers develop their own unique sounds. Though wavetable synthesis has been used in music since the early 1980s, advancements in computer technologies in the 2000s and the subsequent releases of software synthesizers in the late 2000s and early 2010s have led to the increased presence of wavetable synthesis in commercial music. This thesis presents a historical overview of the use of wavetable synthesis in commercial music and demonstrates the accessibility and power that wavetable synthesis delivers in music creation. The demonstration portion of this thesis features two original compositions in the style of electronic dance music (EDM) that prominently incorporate original wavetable instruments created from recordings of two motorized vehicles, as well as an overview of the processes of their creation.
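The core mechanism behind the technique this thesis surveys can be sketched briefly: a single cycle of a waveform is stored in a table, and a phase accumulator reads it back at the desired pitch, interpolating between adjacent samples. This is a minimal illustration under our own assumptions, not code from the thesis; a real wavetable instrument would store recorded single cycles (such as the motorized-vehicle recordings described above) and morph between multiple tables.

```python
TABLE_SIZE = 2048
# One cycle of a sawtooth wave as the stored table, ramping from -1 toward 1.
saw_table = [2.0 * i / TABLE_SIZE - 1.0 for i in range(TABLE_SIZE)]

def render(table, freq_hz, sample_rate, n_samples):
    """Read the wavetable at a given pitch using a phase accumulator."""
    out = []
    phase = 0.0
    step = freq_hz * len(table) / sample_rate  # table positions per sample
    for _ in range(n_samples):
        i = int(phase)
        frac = phase - i
        a, b = table[i], table[(i + 1) % len(table)]
        out.append(a + frac * (b - a))  # linear interpolation
        phase = (phase + step) % len(table)
    return out

samples = render(saw_table, 440.0, 44100, 64)  # 64 samples of A4
```

Changing the table contents, rather than the read-back logic, is what gives wavetable synthesis its characteristic flexibility of timbre.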

    The Analogue Computer as a Voltage-Controlled Synthesiser

    This paper re-appraises the role of analogue computers within electronic and computer music and provides some pointers to future areas of research. It begins by introducing the idea of analogue computing and placing it in the context of sound and music applications. This is followed by a brief examination of the classic constituents of an analogue computer, contrasting these with the typical modular voltage-controlled synthesiser. Two examples are presented, leading to a discussion of some parallels between these two technologies. This is followed by an examination of the current state of the art in analogue computation and its prospects for applications in computer and electronic music.

    hpDJ: An automated DJ with floorshow feedback

    Many radio stations and nightclubs employ Disk-Jockeys (DJs) to provide a continuous uninterrupted stream or “mix” of dance music, built from a sequence of individual song-tracks. In the last decade, commercial pre-recorded compilation CDs of DJ mixes have become a growth market. DJs exercise skill in deciding an appropriate sequence of tracks and in mixing 'seamlessly' from one track to the next. Online access to large-scale archives of digitized music via automated music information retrieval systems offers users the possibility of discovering many songs they like, but the majority of consumers are unlikely to want to learn the DJ skills of sequencing and mixing. This paper describes hpDJ, an automatic method by which compilations of dance music can be sequenced and seamlessly mixed by computer, with minimal user involvement. The user may specify a selection of tracks, and may give a qualitative indication of the type of mix required. The resultant mix can be presented as a continuous single digital audio file, whether for burning to CD, for play-out from a personal playback device such as an iPod, or for play-out to rooms full of dancers in a nightclub. Results from an early version of this system have been tested on an audience of patrons in a London nightclub, with very favourable results. Subsequent to that experiment, we designed technologies which allow the hpDJ system to monitor the responses of crowds of dancers/listeners, so that hpDJ can dynamically react to those responses from the crowd. The initial intention was that hpDJ would monitor the crowd’s reaction to the song-track currently being played, and use that response to guide its selection of subsequent song-tracks in the mix. In that version, it is assumed that all the song-tracks exist in some archive or library of pre-recorded files.
However, once reliable crowd-monitoring technology is available, it becomes possible to use the crowd-response data to dynamically “remix” existing song-tracks (i.e., alter the track in some way, tailoring it to the response of the crowd) and even to dynamically “compose” new song-tracks suited to that crowd. Thus, the music played by hpDJ to any particular crowd of listeners on any particular night becomes a direct function of that particular crowd’s particular responses on that particular night. On a different night, the same crowd of people might react in a different way, leading hpDJ to create different music. Thus, the music composed and played by hpDJ could be viewed as an “emergent” property of the dynamic interaction between the computer system and the crowd, and the crowd could then be viewed as having collectively collaborated on composing the music that was played on that night. This en masse collective composition raises some interesting legal issues regarding the ownership of the composition (i.e., who, exactly, is the author of the work?), but revenue-generating businesses can nevertheless plausibly be built from such technologies.
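One building block of the "seamless mixing" described above can be sketched as an equal-power crossfade between the tail of one track and the head of the next. This is our own illustrative sketch, not hpDJ's method (which would also beat-match the two tracks' tempos before blending); the sample data and function name are assumptions.

```python
import math

def equal_power_crossfade(tail, head):
    """Blend two equal-length sample lists with roughly constant loudness."""
    assert len(tail) == len(head)
    n = len(tail)
    out = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 1.0
        gain_out = math.cos(t * math.pi / 2)  # outgoing track fades 1 -> 0
        gain_in = math.sin(t * math.pi / 2)   # incoming track fades 0 -> 1
        out.append(tail[i] * gain_out + head[i] * gain_in)
    return out

mix = equal_power_crossfade([1.0] * 100, [1.0] * 100)
```

The cosine/sine gain pair satisfies gain_out**2 + gain_in**2 == 1 at every point, which keeps the summed energy of two uncorrelated signals constant through the transition, avoiding the mid-fade dip a linear crossfade produces.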

    Analog Violin Audio Synthesizer

    In the past decade, music electronics have almost completely shifted from analog to digital technology. Digital keyboards and effects provide more sound capabilities than their analog predecessors, while also reducing size and cost. However, many musicians still prefer analog instruments due to the perception that they produce superior sound quality. Many musicians spend extra money and accommodate the extra space required for analog technologies instead of digital. Furthermore, audio synthesizers are commonly controlled with the standard piano keyboard interface. Many musicians can perform sufficiently on a keyboard, but requiring a specific skill set limits the size of the market for a product. Also, when reproducing instruments such as a violin, a keyboard will not suffice in simulating a controllable vibrato from a fretless fingerboard. There is a need for an interface that allows the user to successfully reproduce the sound of the desired instrument. The violin is just one example of an instrument that cannot be completely reproduced on a keyboard; cellos, trombones, and slide guitars all have features that a keyboard cannot simulate in real time. The Analog Violin Synthesizer uses oscillators and analog technology to reproduce the sound of a violin. The user controls the synthesizer with a continuous touch sensor, representing the fretless violin fingerboard. The continuous interface allows for a violin sound played as a standard note, or a warmer sound with adjustable vibrato, based on how the user moves his or her hand. This product provides an innovative next step in the use of analog technology in sound synthesis. However, as digital technology continues to improve, this product could potentially cross over into digital, with the continued use of the touch interface. Currently, there are products that utilize touch input; however, they are often used for sound effects and atmospheric sounds.
Rarely are they used to allow for the digital playability of a synthesized acoustic instrument.
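The synthesizer described above is analog hardware, but the continuous-sensor idea it rests on can be sketched digitally: a 0..1 position on the "fingerboard" sets the base pitch, and small periodic hand motion becomes vibrato via frequency modulation. All names, ranges, and the pitch mapping below are our own assumptions for illustration.

```python
import math

def sensor_to_freq(position, low_hz=196.0, high_hz=3136.0):
    """Map a 0..1 sensor position to frequency on an exponential scale
    (G3..G7 here), since pitch perception is logarithmic in frequency."""
    return low_hz * (high_hz / low_hz) ** position

def vibrato_tone(position, depth_semitones, rate_hz, sample_rate, n_samples):
    """Render a sine tone whose pitch wobbles periodically around the base."""
    base = sensor_to_freq(position)
    out = []
    phase = 0.0
    for n in range(n_samples):
        t = n / sample_rate
        # Periodic pitch deviation in semitones, converted to a frequency ratio.
        dev = depth_semitones * math.sin(2 * math.pi * rate_hz * t)
        freq = base * 2 ** (dev / 12)
        phase += 2 * math.pi * freq / sample_rate
        out.append(math.sin(phase))
    return out

tone = vibrato_tone(position=0.5, depth_semitones=0.3,
                    rate_hz=5.0, sample_rate=44100, n_samples=441)
```

In the analog design, the same mapping is done by voltage: the sensor position drives an oscillator's control voltage, and the player's hand motion supplies the modulation directly.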

    Toward alive art

    Electronics is about to change the idea of art, and drastically so. We know this is going to happen - we can feel it. Much less clear to most of us are the hows, whens, and whys of the change. In this paper, we attempt to analyze the mechanisms and dynamics of the coming cultural revolution, focusing on the «artistic space» where the revolution is taking place, on the interactions between the artistic act and the space in which the act takes place, and on the way in which the act modifies the space and the space the act. We briefly discuss the new category of «electronic artists». We then highlight what we see as the logical process connecting the past, the present, and our uncertain future. We examine the relationship between art and previous technologies, pointing to the evolutionary as well as the revolutionary impact of new means of expression. Against this background we propose a definition for what we call «Alive Art», going on to develop a tentative profile of the performers (the «Alivers»). In the last section, we describe two examples of Alive Artworks, pointing out the central role of what we call the «Alive Art Effect», in which creation appears relatively independent of the artist, so that the artist's unique creative role is not always immediate or directly induced by his or her activity. We emphasize that the artist's activities may result in unpredictable processes more or less free of the artist's will.

    Electronics, music and computers

    Technical report. Electronic and computer technology has had, and will continue to have, a marked effect on the field of music. Through the years scientists, engineers, and musicians have applied available technology to new musical instruments, innovative musical sound production, sound analysis, and musicology. At the University of Utah we have designed and are implementing a communication network involving an electronic organ and a small computer to provide a tool to be used in music performance, the learning of music theory, the investigation of music notation, the composition of music, the perception of music, and the printing of music.

    Reduction in Computer Music:Bodies, Temporalities, and Generative Computation

    In the age of pervasive computing, the way our body interacts with reality needs to be reconceptualized. The reduction of embodiment is a problem for computer music, since this music relies heavily on different layers of (digital) technology and mediation in order to be produced and performed. The article shows that such mediation should not be conceived of as an obstacle but rather as a constitutive element of a permanent, complex negotiation between the artist, the machinery, and the audience, aimed at shaping a different temporality for musical language, as developed by the Italian artist Caterina Barbieri.
    Federica Buongiorno, ‘Reduction in Computer Music: Bodies, Temporalities, and Generative Computation’, in The Case for Reduction, ed. by Christoph F. E. Holzhey and Jakob Schillinger, Cultural Inquiry, 25 (Berlin: ICI Berlin Press, 2022), pp. 175-90 <https://doi.org/10.37050/ci-25_09>