12 research outputs found

    Development and exploration of a timbre space representation of audio

    Sound is an important part of the human experience and provides valuable information about the world around us. Auditory human-computer interfaces do not have the same richness of expression and variety as audio in the world, and it has been said that this is primarily due to a lack of reasonable design tools for audio interfaces. There are a number of good guidelines for audio design and a strong psychoacoustic understanding of how sounds are interpreted. There are also a number of sound manipulation techniques developed for computer music. This research takes these ideas as the basis for an audio interface design system. A proof of concept of this system has been developed in order to explore the design possibilities the new system allows. The core of this novel audio design system is the timbre space, which provides a multi-dimensional representation of a sound. Each sound is represented as a path in the timbre space, and this path can be manipulated geometrically. Several timbre spaces are compared to determine which of them is best suited to audio interface design. The various transformations available in the timbre space are discussed, and the perceptual relevance of two novel transformations is explored by encoding "urgency" as a design parameter. This research demonstrates that the timbre space is a viable option for audio interface design and provides novel features that are not found in current audio design systems. A number of problems with the approach and some suggested solutions are discussed. The timbre space opens up new possibilities for audio designers to explore combinations of sounds, and sound design based on perceptual cues rather than synthesiser parameters.
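To make the path idea concrete, a minimal sketch treats a sound as an array of per-frame points in a timbre space and applies two geometric manipulations. The five dimensions, the random "path", and both operations are illustrative assumptions, not the actual timbre spaces or transformations compared in the thesis:

```python
import numpy as np

# Hypothetical timbre space: each sound is a path, i.e. one point per
# analysis frame in a low-dimensional perceptual space.
rng = np.random.default_rng(0)
path = rng.normal(size=(100, 5))  # 100 frames x 5 timbre dimensions

def scale_dimension(path, dim, factor):
    """Geometrically exaggerate one timbre dimension about the path mean."""
    out = path.copy()
    mean = out[:, dim].mean()
    out[:, dim] = mean + factor * (out[:, dim] - mean)
    return out

def interpolate_paths(a, b, t):
    """Blend two sounds by linearly interpolating their paths (0 <= t <= 1)."""
    n = min(len(a), len(b))
    return (1 - t) * a[:n] + t * b[:n]

brighter = scale_dimension(path, dim=2, factor=1.5)  # stretch one dimension
morphed = interpolate_paths(path, brighter, 0.5)     # halfway between the two
```

Resynthesising audio from a manipulated path is the hard part and depends on the chosen timbre space; the sketch covers only the geometric side.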

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010

    Third International Conference on Technologies for Music Notation and Representation TENOR 2017

    The third International Conference on Technologies for Music Notation and Representation focuses on a set of specific research issues associated with music notation that were elaborated at the first two editions of TENOR, in Paris and Cambridge. The theme of the conference is vocal music, while the pre-conference workshops focus on innovative technological approaches to music notation.

    Methods and Technologies for the Analysis and Interactive Use of Body Movements in Instrumental Music Performance

    List of related publications: http://www.federicovisi.com/publications/
    A constantly growing corpus of interdisciplinary studies supports the idea that music is a complex multimodal medium that is experienced not only by means of sounds but also through body movement. From this perspective, musical instruments can be seen as technological objects coupled with a repertoire of performance gestures. This repertoire is part of an ecological knowledge shared by musicians and listeners alike. It is part of the engine that guides musical experience and has considerable expressive potential. This thesis explores technical and conceptual issues related to the analysis and creative use of music-related body movements in instrumental music performance. The complexity of this subject required an interdisciplinary approach, which includes the review of multiple theoretical accounts, quantitative and qualitative analysis of data collected in motion capture laboratories, the development and implementation of technologies for the interpretation and interactive use of motion data, and the creation of short musical pieces that actively employ the movement of the performers as an expressive musical feature. The theoretical framework is informed by embodied and enactive accounts of music cognition as well as by systematic studies of music-related movement and expressive music performance. The assumption that the movements of a musician are part of a shared knowledge is empirically explored through an experiment aimed at analysing the motion capture data of a violinist performing a selection of short musical excerpts. A group of subjects with no prior experience playing the violin is then asked to mime a performance following the audio excerpts recorded by the violinist. Motion data is recorded, analysed, and compared with the expert's data. This is done both quantitatively, through data analysis, and qualitatively, by relating the motion data to other high-level features and structures of the musical excerpts. Solutions to issues regarding capturing and storing movement data and its use in real-time scenarios are proposed. For the interactive use of motion-sensing technologies in music performance, various wearable sensors have been employed, along with different approaches for mapping control data to sound synthesis and signal processing parameters. In particular, novel approaches for the extraction of meaningful features from raw sensor data and the use of machine learning techniques for mapping movement to live electronics are described. To complete the framework, an essential element of this research project is the composition and performance of études that explore the creative use of body movement in instrumental music from a Practice-as-Research perspective. This works as a test bed for the proposed concepts and techniques. Mapping concepts and technologies are challenged in a scenario constrained by the use of musical instruments, and different mapping approaches are implemented and compared. In addition, techniques for notating movement in the score, and the impact of interactive motion sensor systems on instrumental music practice from the performer's perspective, are discussed. Finally, the chapter concluding the part of the thesis dedicated to practical implementations describes a novel method for mapping movement data to sound synthesis. This technique is based on the analysis of multimodal motion data collected from multiple subjects, and its design draws from the theoretical, analytical, and practical works described throughout the dissertation. Overall, the parts and the diverse approaches that constitute this thesis work in synergy, contributing to the ongoing discourses on the study of musical gestures and the design of interactive music systems from multiple angles.
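As a toy illustration of extracting a feature from raw motion data and mapping it to a live-electronics parameter, the sketch below smooths accelerometer magnitude into a movement-energy curve and maps it onto a filter cutoff. The sensor layout, window size, and mapping range are assumptions for illustration, not the methods developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
accel = rng.normal(size=(200, 3))  # 200 samples of hypothetical (x, y, z) data

def movement_energy(accel, window=10):
    """Acceleration magnitude, smoothed with a moving-average window."""
    mag = np.linalg.norm(accel, axis=1)
    kernel = np.ones(window) / window
    return np.convolve(mag, kernel, mode="same")

def to_cutoff_hz(energy, lo=200.0, hi=4000.0):
    """Map normalised movement energy onto a filter-cutoff range in Hz."""
    e = (energy - energy.min()) / (np.ptp(energy) + 1e-12)
    return lo + e * (hi - lo)

cutoff = to_cutoff_hz(movement_energy(accel))  # one control value per sample
```

A machine-learning mapping, as described in the thesis, would replace the fixed `to_cutoff_hz` function with a model trained on example movements.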

    Models and analysis of vocal emissions for biomedical applications: 5th International Workshop: December 13-15, 2007, Firenze, Italy

    The MAVEBA Workshop proceedings, published every two years, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, and biomedical engineering methods for the analysis of voice signals and images in support of clinical diagnosis and the classification of vocal pathologies. The Workshop has the sponsorship of Ente Cassa Risparmio di Firenze, COST Action 2103, the Biomedical Signal Processing and Control journal (Elsevier), and the IEEE Biomedical Engineering Society. Special issues of international journals have been, and will be, published, collecting selected papers from the conference.

    Creating music by listening

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005. Includes bibliographical references (p. 127-139). Machines have the power and potential to make expressive music on their own. This thesis aims to computationally model the process of creating music using experience from listening to examples. Our unbiased, signal-based solution models the life cycle of listening, composing, and performing, turning the machine into an active musician instead of simply an instrument. We accomplish this through an analysis-synthesis technique combining perceptual and structural modeling of the musical surface, which leads to a minimal data representation. We introduce a music cognition framework that results from the interaction of psychoacoustically grounded causal listening, a time-lag embedded feature representation, and perceptual similarity clustering. Our bottom-up analysis aims to be generic and uniform by recursively revealing metrical hierarchies and structures of pitch, rhythm, and timbre. Training is suggested for top-down unbiased supervision, and is demonstrated with the prediction of the downbeat. This musical intelligence enables a range of original manipulations including song alignment, music restoration, cross-synthesis or song morphing, and ultimately the synthesis of original pieces. by Tristan Jehan. Ph.D.
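The time-lag embedded feature representation can be illustrated generically: each analysis frame is stacked with its immediate predecessors so that similarity comparisons see short-term temporal context rather than isolated frames. The frame count, feature dimension, and lag below are arbitrary assumptions, not Jehan's exact representation:

```python
import numpy as np

def time_lag_embed(frames, lag=4):
    """Concatenate each frame with its lag-1 predecessors into one vector."""
    T, _ = frames.shape
    return np.stack([frames[i:i + lag].ravel() for i in range(T - lag + 1)])

features = np.random.default_rng(2).normal(size=(50, 13))  # 50 frames x 13 bands
embedded = time_lag_embed(features, lag=4)  # 47 windows of 4 stacked frames
```

Similarity clustering would then run on the embedded vectors, grouping frames by how they sound in context rather than in isolation.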

    Lensless Single-shot Pixel Super-resolution Phase Microscopy

    This doctoral thesis is dedicated to the development of a miniature phase microscope capable of subpixel resolution without using lenses. In place of lenses, a custom-made diffractive element, a binary phase mask with small pixels, is used along with state-of-the-art phase-retrieval algorithms to achieve high-resolution reconstruction of complex-valued objects. In phase retrieval, the test object is illuminated by a coherent light source, and the transmitted wavefront changes depending on the object's characteristics. From the diffracted intensity pattern captured by the sensor, the complex wavefront can be reconstructed. This is, however, an ill-posed problem, since conventional sensors capture only the intensity of the light radiation and the phase is lost. Moreover, because the scheme is in-line, the pattern contains several overlapping diffraction orders. The overlapping images and the ill-posedness make reconstruction from the diffraction patterns challenging. Most methods in the literature address this problem by registering several decorrelated diffraction patterns on the sensor, a so-called multi-exposure method. In contrast to those techniques, the first contribution of the thesis is the development of a single-exposure approach to phase retrieval using lensless wavefront modulation. The modulation is achieved by a single random binary phase mask positioned between the object and the sensor. The second contribution of the thesis is an optical system design for the proposed algorithm to achieve super-resolution reconstruction. The system design includes the investigation of the system parameters, such as distance tuning, noise-influence analysis, and modulation mask selection. The third contribution of the thesis is a novel approach based on wavefront separation to further reduce noise and enhance the resolving power: we computationally separate the carrier and object wavefronts, which had not been considered before. The performance of the novel approach and algorithm is demonstrated in simulations and physical experiments. We report an experimental computational super-resolution of 2 µm lines of a USAF phase target, which is 3.45x smaller than the resolution following from the Nyquist-Shannon sampling theorem for the camera pixel size of 3.45 µm. To the best of our knowledge, the 2 µm resolution is beyond the state-of-the-art resolution for single-shot phase retrieval techniques reported so far, even in configurations with lenses.
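For readers unfamiliar with phase retrieval, the classic Gerchberg-Saxton alternating projection conveys the core idea: iterate between the object and sensor planes, enforcing the known amplitude constraint in each. This simplified sketch uses plain FFT propagation with no phase mask and no super-resolution, so it is only a minimal, distant relative of the single-shot mask-modulated algorithms developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground truth: a unit-amplitude object with an unknown smooth phase.
n = 64
true_phase = 2.0 * np.outer(np.hanning(n), np.hanning(n))
obj = np.exp(1j * true_phase)

# The "measurement" is the amplitude of the diffraction pattern only.
meas_amp = np.abs(np.fft.fft2(obj))

def residual(phase):
    """Mismatch between the current estimate's pattern and the measurement."""
    return np.mean(np.abs(np.abs(np.fft.fft2(np.exp(1j * phase))) - meas_amp))

phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))  # random initial guess
err0 = residual(phase)
for _ in range(200):
    u = np.exp(1j * phase)                   # object plane: enforce |u| = 1
    U = np.fft.fft2(u)
    U = meas_amp * np.exp(1j * np.angle(U))  # sensor plane: enforce measured |U|
    phase = np.angle(np.fft.ifft2(U))
err = residual(phase)  # the Fourier-domain error is non-increasing
```

The random binary phase mask in the thesis plays the role of breaking the ambiguities that make this bare two-plane iteration ill-posed for a single exposure.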

    Effets audionumériques adaptatifs : théorie, mise en œuvre et usage en création musicale numérique.

    Chair: Myriam Desainte-Catherine, LABRI, Université Bordeaux 1. Reviewers: Philippe Depalle, SPCL, McGill University, Montréal (Canada); Xavier Serra, MTG, Universitat Pompeu Fabra, Barcelona (Spain). Invited: Emmanuel Favreau, INA-GRM, Paris; Patrick Boussard, GENESIS S.A., Aix-en-Provence.
    This PhD thesis addresses the theory, the implementation and the musical use of adaptive digital audio effects. In the first part, we situate the subject in the context of sound transformations. A great number of signal processing techniques complement each other and together provide a complete set of algorithms for sound transformation. These transformations are applied along the perceptual dimensions of musical sound, namely dynamics, duration, pitch, spatialisation and timbre. For some effects, the control evolves in an automatic or periodic way, and this control is integrated into the algorithm. The control left to the user concerns certain parameters of the algorithm, and is exercised through physical controllers, such as knobs and switches, or through virtual controllers, such as graphical interfaces on computer screens. A major topic in sound synthesis today is mapping: how to map gesture-transducer data to the parameters of the synthesis algorithm. Our study is situated at the intersection of digital audio effects, adaptive and gestural control, and sound features. In the second part, we present adaptive digital audio effects as we formalised and developed them. These are effects whose controls are automated according to sound features. We studied and used many processing algorithms, some in real time and some out of real time, and improved them to accept time-varying control values. Consideration was given to choosing a classification meaningful to the musician, which led naturally to a perceptual taxonomy. In parallel, we studied sound features and descriptors, and the ways of controlling an effect, both by the sound and by gesture. We brought together numerous sound features used in psychoacoustics, in analysis-synthesis, and for sound segmentation, sound classification and retrieval, and automatic music transcription. We propose a generalised control structure for adaptive effects with two levels. The first level is the adaptation level: sound features control the effect through mapping functions. We provide a set of warping functions (non-linear transfer functions) that transform the time evolution of sound-feature curves, as well as feature-combination functions and specific warping functions used to warp a control curve according to specific rules. The second level is gesture control, applied to the mapping functions between sound features and controls, either during combination or during specific warping. This study yields a generalisation of the control of digital audio effects, as well as toolboxes for composition and their use in a musical context. Numerous experiments and sound examples were produced, among them an adaptive spatialisation controlled by a dancer and an adaptive stereophonic equaliser. The experiments confirm the interest of such adaptive and gestural control, for example to change the expressiveness of a musical phrase, or to create new sounds.
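A minimal adaptive effect in the spirit of the two-level control described above might drive a gain from an RMS loudness feature passed through a non-linear warping function. The block size, the sigmoid warping, and the ducking mapping are illustrative assumptions, not the thesis's toolboxes:

```python
import numpy as np

rng = np.random.default_rng(4)
# Test signal: noise with a rising envelope, 1 s at 44.1 kHz.
signal = rng.normal(size=44100) * np.linspace(0.1, 1.0, 44100)

def block_rms(x, block=512):
    """Per-block RMS loudness feature."""
    trimmed = x[: len(x) // block * block].reshape(-1, block)
    return np.sqrt(np.mean(trimmed ** 2, axis=1))

def warp(feature, steepness=8.0):
    """Non-linear transfer function shaping the feature's time curve."""
    f = (feature - feature.min()) / (np.ptp(feature) + 1e-12)
    return 1.0 / (1.0 + np.exp(-steepness * (f - 0.5)))

# Adaptation level: louder blocks are attenuated more (a simple ducker).
gain = 1.0 - 0.8 * warp(block_rms(signal))
blocks = signal[: gain.size * 512].reshape(-1, 512)
processed = (blocks * gain[:, None]).ravel()
```

The second, gestural level would then act on the mapping itself, e.g. letting a performer vary the `steepness` of the warping rather than touching the audio directly.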

    Applications of axial and radial compressor dynamic system modeling

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, February 2001. Includes bibliographical references (p. 255-262). The presented work is a compilation of four projects related to axial and centrifugal compression systems, linked by the underlying dynamic system modeling approach common to all of them. Two types of models suitable for modeling the dynamic behavior of axial and centrifugal compression systems are introduced: a compact single semi-actuator disk model, Model I, and a new modular multi semi-actuator disk model, Model II. The first project analyzes aerodynamically induced whirling forces in axial-flow compressors, and a new unsteady low-order model is introduced to predict the destabilizing whirling forces. The model consists of two parts: compressor Model I with the effect of tip-clearance-induced distortion, and an aerodynamically induced force model. The modeling results are compared to experimental data obtained from the GE Aircraft Engines test program on compressor whirl. Previously outstanding whirl-instability issues are resolved, including prediction of the direction and magnitude of rotor whirl-inducing forces; such issues are important in the design of modern axial-flow compressors. Additional insight is gained from the model on the effects of forced rotor whirl. In particular, a non-dimensional parameter is deduced that determines the direction of rotor whirl tendency in both compressors and turbines due to tangential blade loading forces. The second project is a first-of-a-kind feasibility study of an active stall control experiment with a magnetic bearing servo-actuator in the NASA Glenn high-speed single-stage compressor test facility. Together with CFD and experimental data, the tip-clearance-sensitive compressor Model I was used in a stochastic estimation and control analysis to determine the required magnetic bearing performance for compressor stall control. A magnetic bearing servo-actuator was designed that fulfilled the performance specifications, setting a milestone in magnetic bearing development for aero-engine applications. Control laws were then developed to stabilize the compressor shaft. In a second control loop, a constant-gain controller was implemented to stabilize rotating stall. A detailed closed-loop simulation at 100% corrected design speed resulted in a 2.3% reduction of stalling mass flow, which is comparable to results obtained in the same compressor using unsteady air injection. The third project is the investigation of unsteady impeller-diffuser interaction effects on compressor stability. First, the unsteady blade-row interaction in axial compressors is analyzed using Model II. The results reveal a new signature of pre-stall waves that travel backward, altering the system dynamics when rotor and stator are moderately coupled. The physical mechanism for this behavior is explained from first principles and a coupling criterion is presented. The theory is then applied to centrifugal compressors, in particular the NASA CC3 high-speed centrifugal compressor, in which experiments are conducted to verify the model predictions. The measurements show the predicted behavior and confirm the existence of backward-traveling stall precursors. The fourth project is an experimental demonstration of stability enhancement in the NASA CC3 high-speed centrifugal compressor with air injection. Based ... by Zoltán Spakovszky. Ph.D.
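For orientation, the flavor of compression-system dynamic modeling can be shown with Greitzer's classic lumped-parameter surge model, a far simpler relative of the semi-actuator-disk Models I and II above. The cubic characteristic shape, the B parameter, and the throttle constant here are illustrative assumptions:

```python
import numpy as np

def psi_c(phi):
    """Cubic compressor characteristic in the standard Moore-Greitzer form."""
    x = phi / 0.25 - 1.0
    return 0.3 + 0.5 * (1.0 + 1.5 * x - 0.5 * x ** 3)

def simulate(B=0.5, k=0.6, dt=1e-3, steps=20000):
    """Forward-Euler integration of the nondimensional Greitzer model."""
    phi, psi = 0.3, psi_c(0.3)  # flow coefficient, plenum pressure rise
    for _ in range(steps):
        dphi = B * (psi_c(phi) - psi)                  # duct momentum balance
        dpsi = (phi - k * np.sqrt(max(psi, 0.0))) / B  # plenum mass balance
        phi, psi = phi + dt * dphi, psi + dt * dpsi
    return phi, psi

phi_end, psi_end = simulate()
```

With these parameters the operating point sits on the negatively sloped part of the characteristic and the system settles; a larger B or a more closed throttle moves it into the regime where surge limit cycles appear.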