
    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, on 28–31 May 2019 and was organized by the Application of Information and Communication Technologies Research group (ATIC) of the University of Malaga (UMA). The associated SMC 2019 Summer School took place on 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest covered a wide selection of areas related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, and more.

    In the wake of my steps


    Vector synthesis: a media archaeological investigation into sound-modulated light

    Vector Synthesis is a computational art project inspired by theories of media archaeology, by the history of computer and video art, and by the use of discarded and obsolete technologies such as the Cathode Ray Tube monitor. This text explores the military and techno-scientific legacies at the birth of modern computing, and charts attempts by artists of the subsequent two decades to decouple these tools from their destructive origins. Using this history as a basis, the author then describes a media archaeological, real-time performance system using audio synthesis and vector graphics display techniques to investigate direct, synesthetic relationships between sound and image. Key to this system, realized in the Pure Data programming environment, is a didactic, open source approach which encourages reuse and modification by other artists within the experimental audiovisual arts community. Holzer, Derek.
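    The core technique described here, treating two audio channels as the X and Y deflection signals of a vector display, can be sketched outside Pure Data as well. The following Python snippet is a minimal illustration, not part of the author's Pd system; the frequencies, phase, and output filename are arbitrary choices, and the soundfile library is assumed to be available. It writes a stereo WAV file whose left channel drives X and whose right channel drives Y, tracing a Lissajous figure when played into an oscilloscope in XY mode.

```python
# Minimal sketch: a stereo signal whose two channels act as X/Y
# deflection voltages for an oscilloscope or CRT in XY mode.
# Illustrative only; not taken from the Vector Synthesis Pure Data patches.
import numpy as np
import soundfile as sf  # assumed dependency; any WAV writer would do

SR = 48000                                   # sample rate in Hz
DUR = 5.0                                    # duration in seconds
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

fx, fy = 200.0, 300.0                        # a 2:3 frequency ratio traces a stable figure
x = np.sin(2 * np.pi * fx * t)               # left channel -> horizontal deflection
y = np.sin(2 * np.pi * fy * t + np.pi / 4)   # right channel -> vertical deflection

sf.write("lissajous_xy.wav", np.column_stack([x, y]), SR)
```

    Modulating the two channels with richer synthesis techniques, as the text discusses, yields correspondingly richer figures on the screen.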

    BRIX₂ - A Versatile Toolkit for Rapid Prototyping and Education in Ubiquitous Computing

    Zehe S. BRIX₂ - A Versatile Toolkit for Rapid Prototyping and Education in Ubiquitous Computing. Bielefeld: Universität Bielefeld; 2018.

    Development of an Augmented Reality musical instrument

    Nowadays, Augmented Reality and Virtual Reality are concepts that people are becoming increasingly aware of due to their application in the video-game industry (especially in the case of VR). This rise is partly due to the falling cost of Head Mounted Displays, which are consequently becoming more accessible to the public and to developers worldwide. These novelties, along with the frenetic development of Information Technologies applied to essentially all markets, have also made digital artists and manufacturers aware of the never-ending interaction possibilities these paradigms provide, and a variety of systems have appeared that offer innovative creative capabilities. Owing to the author's personal interest in music and the technologies surrounding its creation by digital means, this document covers the application of the Virtuality-Reality Continuum paradigms (VR and AR) to the field of interfaces for musical expression. More precisely, it covers the development of an electronic drumset that integrates Arduino-compatible hardware with a 3D visualisation application (built on Unity) to create a complete, functioning musical instrument. The system presented in this document attempts to combine three-dimensional visual feedback with tangible interaction based on hitting, which is directly translated into sound and visuals in the sound-generation application. Furthermore, the document provides an in-depth study of the technologies and areas that are ultimately applied to the target system: hardware concerns, time requirements, approaches to the creation of NIMEs (New Interfaces for Musical Expression), Virtual Musical Instrument (VMI) design, musical-data transmission protocols (MIDI and OSC), and 3D modelling constitute its fundamental topics. The conclusions reflect on the difficulties found along the project, the unfulfilled objectives, and the deviations from the initial concept that occurred during development. Future work paths are listed and briefly described, and personal comments are included, as well as some modest advice for readers interested in tackling an ambitious project on their own.
    Nowadays, the concepts of Augmented Reality (AR) and Virtual Reality (VR) are increasingly familiar to the general public, largely because of their application to video games, where development for HMD devices is booming. This popularity is due in large part to the falling price of such devices, which are becoming ever more accessible to the public and to developers worldwide. These novelties, added to the frenetic development of the IT industry, have caught the attention of artists and companies who see in these paradigms (VR and AR) an opportunity to provide new and unlimited forms of interaction and of creating art. Given the author's personal interest in music and the technologies that enable music creation by digital means, this project explores the application of Milgram's Virtuality-Reality Continuum paradigms (AR and VR) to the field of interfaces for musical creation. More concretely, it details the development of an electronic drum kit that combines a tangible interface built with Arduino-compatible hardware with a sound-generation and visualization application developed on top of Unity. The system pursues natural interaction by embedding the hardware in a pair of drumsticks, which detect hits on any kind of surface and convert them into MIDI messages that the sound-generation system uses to give the user both visual and auditory feedback; the system is thus distinguished by favouring an interaction in which physical objects (e.g. a bed) are actually struck, whereas similar systems base their interaction on "air-drumming". The system also seeks to overcome some of the main drawbacks associated with drummers and their usually troublesome instrument, such as space limitations, the lack of flexibility in the sounds that can be generated, and the high cost of the equipment. The document further details several aspects of the system in question, giving the reader a complete overview of systems similar to the one proposed, and describes the most important aspects of the project's development, such as protocols for transmitting musical data (MIDI and OSC), control algorithms, design guidelines for musical interfaces (NIMEs), and 3D modelling. A complete Software Engineering process is included to keep the work formal and to try to guarantee a more organized development, and the methodology used in this process is discussed. Finally, the document reflects on the difficulties encountered, enumerates possibilities for future work, and concludes with some personal remarks derived from this research.
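    As a rough illustration of the data path described above, a detected stick hit converted into a MIDI message that the Unity-based application turns into sound and visuals, the following Python sketch maps a hit with an estimated peak acceleration onto a velocity-scaled note-on. The note number, channel, scaling, and use of the mido library are assumptions made for this illustration; the actual system runs on Arduino-compatible hardware and is not implemented in Python.

```python
# Hypothetical sketch of the hit -> MIDI note-on path described above.
# Values and library choice are illustrative; the real system maps Arduino sensor data.
import mido

DRUM_NOTE = 38        # General MIDI acoustic snare
MIDI_CHANNEL = 9      # channel 10 (zero-indexed) is the GM percussion channel

def send_hit(port, peak_acceleration, max_acceleration=16.0):
    """Convert a hit's peak acceleration into a velocity-scaled note-on/off pair."""
    velocity = int(min(peak_acceleration / max_acceleration, 1.0) * 127)
    port.send(mido.Message('note_on', channel=MIDI_CHANNEL, note=DRUM_NOTE, velocity=velocity))
    port.send(mido.Message('note_off', channel=MIDI_CHANNEL, note=DRUM_NOTE))

with mido.open_output() as port:      # opens the default MIDI output port
    send_hit(port, peak_acceleration=12.0)
```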

    Exploration in robotic musical instrument design

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007. Includes bibliographical references (leaves 169-173). This thesis presents several works involving robotic musical instruments. Robots have long been used in industry for performing repetitive tasks or jobs requiring superhuman strength; more recently, however, robots have found a niche as musical instruments. The works presented here address the musicality of these instruments, their use in various settings, and the relationship of a robotic instrument to its human player in terms of mapping and translating gesture to sound. The primary project, The Chandelier, addresses both hardware and software issues and builds directly on experience with two other works, The Marshall Field's Flower Show and Jeux Deux. The Marshall Field's Flower Show is an installation for several novel musical instruments and controllers. Presented here is a controller and mapping system for a Yamaha Disklavier player piano that allows real-time manipulation of musical variations on famous compositions. The work is presented in the context of the exhibit, but also discussed in terms of its underlying software and technology. Jeux Deux is a concerto for hyperpiano, orchestra, and live computer graphics. The software and mapping schema for this piece are presented in this thesis as a novel method for live interaction, in which a human player duets with a computer-controlled player piano. Results are presented in the context of live performance. The Chandelier is the culmination of these past works and presents a full-scale prototype of a new robotic instrument. This instrument explores design methodology, interaction, and the relationship, and disconnect, of a human player controlling a robotic instrument. The design of hardware and software, and some mapping schemata, are discussed and analyzed in terms of playability, musicality, and use in public installation and individual performance. Finally, a proof-of-concept laser harp is presented as a low-cost alternative musical controller. This controller is easily constructed from off-the-shelf parts and is analyzed in terms of its sensing abilities and playability. Michael A. Fabio.
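    The laser harp mentioned at the end rests on a simple sensing idea: a photodetector per beam reports whether its beam is blocked, and a newly blocked beam triggers a note. The Python sketch below is a hypothetical illustration of that edge-detection logic only; it is not code from the thesis, and the sensor readings are simulated rather than read from real photodiodes.

```python
# Hypothetical beam-break logic for a laser-harp style controller.
# Real hardware would replace read_sensors() with ADC/photodiode input.
import random

NOTES = [60, 62, 64, 65, 67, 69, 71, 72]   # one MIDI note per beam (C major scale)
THRESHOLD = 0.5                            # below this level, a beam counts as blocked

def read_sensors(num_beams):
    """Stand-in for photodiode readings, normalised to 0..1 (1 = beam unbroken)."""
    return [random.random() for _ in range(num_beams)]

def scan(previous_blocked):
    """Return the new blocked-state list and note-ons for beams that just became blocked."""
    levels = read_sensors(len(NOTES))
    blocked = [level < THRESHOLD for level in levels]
    events = [NOTES[i] for i, (was, now) in enumerate(zip(previous_blocked, blocked))
              if now and not was]
    return blocked, events

blocked = [False] * len(NOTES)
for _ in range(10):                         # a few polling cycles
    blocked, events = scan(blocked)
    for note in events:
        print("note on:", note)
```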

    Engineering systematic musicology : methods and services for computational and empirical music research

    One of the main research questions of *systematic musicology* is concerned with how people make sense of their musical environment. It is concerned with signification and meaning-formation and relates musical structures to effects of music. These fundamental aspects can be approached from many different directions. One could take a cultural perspective, where music is considered a phenomenon of human expression, firmly embedded in tradition. Another approach would be a cognitive perspective, where music is considered an acoustical signal whose perception involves categorizations linked to representations and learning. A performance perspective, where music is the outcome of human interaction, is an equally valid view. To understand a phenomenon, it often makes sense to combine multiple perspectives. The methods employed within each of these approaches turn questions into concrete musicological research projects, and it is safe to say that today many of these methods draw upon digital data and tools. Some of those general methods are feature extraction from audio and movement signals, machine learning, classification, and statistics. However, the problem is that, very often, the *empirical and computational methods require technical solutions* beyond the skills of researchers who typically have a humanities background. At that point, these researchers need access to specialized technical knowledge to advance their research. My PhD work should be seen within the context of that tradition. In many respects I adopt a problem-solving attitude to problems posed by research in systematic musicology. This work *explores solutions that are relevant for systematic musicology* by engineering solutions for measurement problems in empirical research and by developing research software that facilitates computational research. These solutions are placed in an engineering-humanities plane. The first axis of the plane contrasts *services* with *methods*. Methods *in* systematic musicology propose ways to generate new insights into music-related phenomena or contribute to how research can be done. Services *for* systematic musicology, on the other hand, support or automate research tasks and thereby change the scope of research: a shift in scope allows researchers to cope with larger data sets, which offers a broader view of the phenomenon. The second axis indicates how important Music Information Retrieval (MIR) techniques are in a solution; MIR techniques are contrasted with various techniques to support empirical research. My research resulted in a total of thirteen solutions placed in this plane, and the descriptions of seven of these are bundled in this dissertation. Three fall into the methods category and four into the services category. For example, Tarsos presents a method to compare performance practice with theoretical scales on a large scale, while SyncSink is an example of a service.
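    To make the kind of method concrete: Tarsos compares performance practice with theoretical scales by extracting pitch from recordings and folding the estimates into a pitch-class histogram. The Python sketch below is a minimal, hypothetical illustration of that idea using librosa; it is not Tarsos itself (a separate tool), and the file name is a placeholder.

```python
# Minimal sketch of a folded pitch histogram, the kind of representation used
# to compare a performance against theoretical scales. Not Tarsos itself;
# "recording.wav" is a placeholder path.
import numpy as np
import librosa

y, sr = librosa.load("recording.wav", sr=None, mono=True)
f0 = librosa.yin(y, fmin=80, fmax=1000, sr=sr)          # frame-wise pitch estimates in Hz

cents = 1200 * np.log2(f0 / 440.0)                      # Hz -> cents relative to A4
folded = np.mod(cents, 1200)                            # fold everything into one octave
histogram, edges = np.histogram(folded, bins=120, range=(0, 1200))  # 10-cent bins

top_bins = np.argsort(histogram)[-5:]                   # five most populated bins
print("most frequent pitch classes (cents):", sorted(int(edges[i]) for i in top_bins))
```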

    Proceedings of the 11th Workshop on Ubiquitous Music (UbiMus 2021)

    The 11th UbiMus — Ubiquitous Music Workshop (https://dei.fe.up.pt/ubimus/) was held at the Center for High Artistic Performance, the home of the Orquestra Jazz Matosinhos (OJM) in Portugal, during September 6–8, 2021. It was organized by the Sound and Music Computing (SMC) Group of the Faculty of Engineering, University of Porto and INESC TEC, Portugal, and OJM, in collaboration with NAP, Federal University of Acre, Brazil. Due to mobility restrictions resulting from the Covid-19 pandemic, a hybrid format was adopted for this year's workshop to accommodate the remote participation of delegates and authors who could not attend in Matosinhos.

    Abstraction of representation in live theater

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 151-158). Early in Tod Machover's opera Death and the Powers, the main character, Simon Powers, is subsumed into a technological environment of his own creation. The theatrical set comes alive in the form of robotic, visual, and sonic elements that allow the actor to extend his range and influence across the stage in unique and dynamic ways. The environment must compellingly assume the behavior and expression of the absent Simon. This thesis presents a new approach called Disembodied Performance that adapts ideas from affective psychology, cognitive science, and the theatrical tradition to create a framework for thinking about the translation of stage presence. An implementation of a system informed by this methodology is demonstrated. In order to distill the essence of this character, we recover performance parameters in real time from physiological sensors, voice, and vision systems. This system allows the offstage actor to express emotion and interact with others onstage. The Disembodied Performance approach takes a new direction in augmented performance by employing a nonrepresentational abstraction of a human presence that fully translates a character into an environment. The technique and theory presented also have broad-reaching applications outside of theater for personal expression, telepresence, and storytelling. Peter Alexander Torpey.
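    The mapping layer sketched in the abstract, performance parameters recovered from sensors, voice, and vision and translated into behavior of the set, can be illustrated with a deliberately simplified example. The feature names, weights, and OSC address below are hypothetical and are not taken from the thesis; they only show the general shape of blending normalized expressive features into a single control value sent to a show-control application.

```python
# Hypothetical sketch: blend normalized performance features (vocal intensity,
# gesture energy, ...) into one control value and send it over OSC.
# Feature names, weights, address, and port are illustrative, not from the thesis.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # placeholder host/port for a show-control app

WEIGHTS = {"vocal_intensity": 0.5, "gesture_energy": 0.3, "breath_rate": 0.2}

def stage_intensity(features):
    """Weighted blend of normalized (0..1) features, clamped to the 0..1 range."""
    value = sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return max(0.0, min(1.0, value))

# One frame of (made-up) sensor-derived features:
frame = {"vocal_intensity": 0.8, "gesture_energy": 0.4, "breath_rate": 0.6}
client.send_message("/stage/intensity", stage_intensity(frame))
```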