Chorus Digitalis: polyphonic gestural singing
Chorus Digitalis is a choir of gesture-controlled digital singers. It is based on Cantor Digitalis, a gesture-controlled singing voice synthesizer, and on the Méta-Mallette, an environment designed for collective electronic music and video performances. Cantor Digitalis is an improved formant synthesizer using the RT-CALM voice source model and source-filter interaction mechanisms. Chorus Digitalis results from the integration of voice synthesis into the Méta-Mallette environment. Each virtual voice is controlled by both a graphic tablet and a joystick. Polyphonic singing performances of Chorus Digitalis with four players will be given at the conference. The Méta-Mallette and Cantor Digitalis are implemented in Max/MSP.
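The abstract above describes Cantor Digitalis as a formant synthesizer built on a source-filter architecture. As a rough illustration of that architecture (not of RT-CALM itself), the sketch below excites a cascade of two-pole digital resonators, one per formant, with a crude glottal pulse train; all frequencies, bandwidths, and function names here are illustrative assumptions:

```python
import math

def resonator(x, f_formant, bandwidth, sr=16000):
    """Two-pole digital resonator approximating one vocal-tract formant."""
    r = math.exp(-math.pi * bandwidth / sr)   # pole radius from bandwidth
    theta = 2 * math.pi * f_formant / sr      # pole angle from centre frequency
    b, c = 2 * r * math.cos(theta), -r * r
    a = 1 - b - c                             # Klatt-style unity gain at DC
    y, y1, y2 = [], 0.0, 0.0
    for s in x:
        v = a * s + b * y1 + c * y2
        y.append(v)
        y1, y2 = v, y1
    return y

def vowel(f0=110.0, formants=((700, 130), (1220, 70)), dur=0.1, sr=16000):
    """Pulse-train source filtered through cascaded formant resonators
    (values loosely [a]-like; a textbook sketch, not the RT-CALM source)."""
    n = int(dur * sr)
    period = int(sr / f0)
    source = [1.0 if i % period == 0 else 0.0 for i in range(n)]  # crude glottal pulses
    out = source
    for f, bw in formants:
        out = resonator(out, f, bw, sr)
    return out
```

In the real system the source is a proper glottal flow model rather than an impulse train, and the tablet/joystick gestures drive the pitch and formant parameters continuously.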
Drawing melodies: Evaluation of chironomic singing synthesis
Cantor Digitalis, a real-time formant synthesizer controlled by a graphic tablet and a stylus, is used to assess melodic precision and accuracy in singing synthesis. Melodic accuracy and precision are measured in three experiments with groups of 20 and 28 subjects. The subjects' task is to sing musical intervals and short melodies, at various tempi, using chironomy (hand-controlled singing), mute chironomy (without audio feedback), and their own voices. The results show the high accuracy and precision obtained by all subjects for chironomic control of singing synthesis. Some subjects performed significantly better in chironomic singing than in natural singing, while others showed comparable proficiency. In the chironomic condition, mean note accuracy is less than 12 cents and mean interval accuracy less than 25 cents for all subjects. Comparing chironomy and mute chironomy shows that the skills used for writing and drawing carry over to chironomic singing, but that audio feedback helps interval accuracy. Analysis of blind chironomy (without visual reference) indicates that visual feedback greatly helps both note and interval accuracy and precision. This study demonstrates the capabilities of chironomy as a precise and accurate means of controlling singing synthesis.
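The accuracy figures above are expressed in cents, the logarithmic pitch unit used throughout such evaluations (100 cents = one equal-tempered semitone). A minimal helper for computing these deviations:

```python
import math

def cents(f: float, f_ref: float) -> float:
    """Deviation of frequency f from reference f_ref, in cents.

    100 cents = one equal-tempered semitone; 1200 cents = one octave.
    """
    return 1200.0 * math.log2(f / f_ref)
```

For example, a sung note one semitone above a 440 Hz target deviates by exactly 100 cents, so a mean note error under 12 cents is roughly an eighth of a semitone.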
OrJo and the Méta-Mallette 4.0
This article describes the OrJo research project (2009-2012, Orchestre de Joysticks), which brings together four partners, PUCE MUSE, the LAM (UPMC), the LIMSI (CNRS, associated with UPMC and Université Paris-Sud), and 3Dlized, around four main objectives:
1. produce four versions of the platform software to suit different uses,
2. offer a collection of virtual sound and visual instruments,
3. improve the graphical representation of the virtual instruments,
4. practise, exchange, and preserve a repertoire on the Méta-Librairie.
The project examines several new practices, such as orchestral performance with virtual instruments, the exchange and transmission of interactive scores for these orchestras, and the contribution of stereoscopic 3D to visual music. It also discusses the development of virtual instruments, including the LIMSI voice synthesis instruments and the physical-model, topological-model, and statistical-model instruments of the Lutheries Acoustique Musique (LAM) team.
This article is also an invitation to use the Méta-Mallette platform [12] via its (free) SDK and the Méta-Librairie exchange site.
Gestural control of prosody and voice quality
Today, expressive speech synthesis constitutes a new challenge. A direct means of giving synthesis expressive capabilities consists of controlling it in real time through an operator's gestures. Our research was conducted along two main axes: on the one hand, the modification of pitch and duration of recorded speech, either natural or synthetic, and on the other hand, the synthesis of vowels based on a glottal source model. Concerning prosody modification, we implemented a real-time algorithm capable of jointly modifying the pitch and duration of recorded speech. With this tool, we carried out experiments to validate and evaluate prosody modification. Our experimental results showed that, using manual gestures, we were able to reproduce a given intonation accurately. Our second research axis focused on the development of a vowel synthesizer whose voice quality can be modified within a perceptual space. The first development was the implementation of a real-time version of a glottal source waveform synthesizer, together with vocal source aperiodicities. We are thus able to simultaneously control fundamental frequency, vocal effort, tension, breathiness, structural aperiodicities, and laryngeal mechanisms. Particular attention was paid to the realization of the phonetogram, as well as to the perceptual mappings between model parameters and vocal dimensions. In the long term, our study allows us to envisage the joint, interactive, real-time modification of prosodic components and voice quality.
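Joint real-time modification of pitch and duration of recorded speech, as described above, is commonly realized with pitch-synchronous overlap-add (TD-PSOLA, named in another record in this list). The toy sketch below, which assumes pitch marks are already available, illustrates the idea but is not the thesis implementation:

```python
import math

def hann(n):
    """Hann window of length n (n >= 2)."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def psola_pitch_shift(x, marks, factor):
    """Toy TD-PSOLA pitch shift.

    Two-period Hann-windowed grains centred on analysis pitch marks are
    overlap-added at synthesis marks re-spaced by 1/factor, so factor > 1
    raises the pitch. `marks` holds one sample index per glottal cycle.
    """
    out = [0.0] * len(x)
    pos, k = float(marks[0]), 0
    while pos < marks[-1]:
        # find the analysis mark nearest the current synthesis position
        while k + 1 < len(marks) and abs(marks[k + 1] - pos) < abs(marks[k] - pos):
            k += 1
        period = (marks[k + 1] - marks[k]) if k + 1 < len(marks) else (marks[k] - marks[k - 1])
        start, end = marks[k] - period, marks[k] + period
        if start >= 0 and end <= len(x):          # skip grains at the edges
            w = hann(end - start)
            dest = int(round(pos)) - period
            for i in range(end - start):
                j = dest + i
                if 0 <= j < len(out):
                    out[j] += x[start + i] * w[i]
        pos += period / factor                    # closer spacing -> higher pitch
    return out
```

Duration modification works the same way: grains are repeated or skipped to stretch or compress the synthesis time axis independently of the grain spacing.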
Comparing time domain and spectral domain voice source models for gesture controlled vocal instruments
Three real-time gesture-controlled vocal instruments are presented. They are based on a time-domain (LF) and a spectral-domain (CALM) model of the glottal pulse signal. Gestural control adds expression to the synthetic voices, enabling the simulation of various vocal behaviors. Expressive vocal instruments are demonstrated for musical and research purposes.
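The LF and CALM models named above are too involved for a short sketch, but a simpler classical time-domain glottal-flow pulse, the Rosenberg model, illustrates the kind of waveform such source models generate (parameter names and values here are our assumptions, not those of the instruments):

```python
import math

def rosenberg_pulse(f0=110.0, sr=16000, open_q=0.6, speed_q=2.0):
    """One period of a Rosenberg glottal-flow pulse.

    open_q:  open quotient (open phase / period).
    speed_q: ratio of opening-phase to closing-phase duration.
    Returns flow samples rising during opening, falling during closing,
    and zero during the closed phase.
    """
    n = int(sr / f0)                         # samples per period
    t_open = open_q * n                      # total open phase
    t_p = t_open * speed_q / (1 + speed_q)   # end of opening phase
    t_n = t_open - t_p                       # closing-phase duration
    pulse = []
    for i in range(n):
        if i <= t_p:                         # opening: raised-cosine rise
            pulse.append(0.5 * (1 - math.cos(math.pi * i / t_p)))
        elif i <= t_p + t_n:                 # closing: quarter-cosine fall
            pulse.append(math.cos(math.pi * (i - t_p) / (2 * t_n)))
        else:                                # closed phase
            pulse.append(0.0)
    return pulse
```

Gestural control of such a source amounts to mapping hand parameters onto f0, the open quotient, and the asymmetry in real time; the LF model adds a spectrally richer return phase, and CALM works with the same shape parameters in the spectral domain.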
Gestural control of prosody and voice quality
ORSAY-PARIS 11-BU Sciences (914712101) / Sudoc, France
Issues and solutions related to real-time TD-PSOLA implementation
International audience
Computerized chironomy: evaluation of hand-controlled intonation reiteration
International audience. No abstract.
Chorus digitalis: experiment in chironomic choir singing
This paper reports on experiments in real-time gestural control of voice synthesis. The ability of handwriting gestures to control singing intonation (chironomic singing synthesis) is studied. In a first part, the singing synthesizer and its controller are described. The system is developed in an environment for multi-user music synthesis, allowing for synthetic choir singing. In a second part, performances of subjects playing with the system are analyzed. The results show that chironomic singers are able to control melody with accuracy, to perform vibrato, portamento, and other fine-grained intonation variations, and to give convincing musical performances.
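Chironomic control maps continuous stylus position to intonation. A generic sketch of such a mapping, converting normalised tablet x-position to fundamental frequency on a logarithmic (MIDI-style) scale; the three-octave range here is an assumption for illustration, not the system's actual mapping:

```python
def tablet_x_to_f0(x, x_min=0.0, x_max=1.0, midi_lo=48, midi_hi=84):
    """Map normalised stylus x-position to f0 on a logarithmic pitch scale.

    x in [x_min, x_max] is mapped linearly to a MIDI note number in
    [midi_lo, midi_hi] (here C3..C6, an assumed range), then converted to
    Hz; intermediate positions give continuous pitch, so vibrato and
    portamento come directly from the hand trajectory.
    """
    midi = midi_lo + (x - x_min) / (x_max - x_min) * (midi_hi - midi_lo)
    return 440.0 * 2 ** ((midi - 69) / 12)
```

Because the mapping is continuous rather than quantised to semitones, intonation precision is limited only by the gesture itself, which is what the evaluation papers in this list measure in cents.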