Unifying Amplitude and Phase Analysis: A Compositional Data Approach to Functional Multivariate Mixed-Effects Modeling of Mandarin Chinese
Mandarin Chinese is a tonal language: the pitch (or fundamental frequency)
of its utterances carries considerable linguistic information. However,
speech samples from different individuals are subject to changes in amplitude
and phase which must be accounted for in any analysis which attempts to provide
a linguistically meaningful description of the language. A joint model for
amplitude, phase and duration is presented which combines elements from
Functional Data Analysis, Compositional Data Analysis and Linear Mixed Effects
Models. By decomposing functions via a functional principal component analysis,
and connecting registration functions to compositional data analysis, a joint
multivariate mixed effect model can be formulated which gives insights into the
relationship between the different modes of variation as well as their
dependence on linguistic and non-linguistic covariates. The model is applied to
the COSPRO-1 data set, a comprehensive database of spoken Taiwanese Mandarin,
containing approximately 50 thousand phonetically diverse sample contours
(syllables), and reveals that phonetic information is jointly carried by both
amplitude and phase variation.
Comment: 49 pages, 13 figures, small changes to discussion
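The connection between phase variation and compositional data rests on treating relative durations, which are positive and sum to one, as compositions and mapping them into an unconstrained space before mixed-effects modelling. As an illustrative sketch only (the duration values below are made up, and the paper's actual transform may differ in detail), the centered log-ratio transform does exactly this:

```python
import math

def clr(composition):
    """Centered log-ratio (clr) transform: maps a composition (positive
    parts, e.g. relative segment durations summing to 1) to unconstrained
    real coordinates that sum to zero."""
    logs = [math.log(x) for x in composition]
    mean_log = sum(logs) / len(logs)
    return [lg - mean_log for lg in logs]

def clr_inverse(coords):
    """Inverse clr: exponentiate and renormalize back onto the simplex."""
    exps = [math.exp(c) for c in coords]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical relative durations of three aligned segments (a composition):
durations = [0.2, 0.5, 0.3]
coords = clr(durations)          # unconstrained coordinates, sum ~ 0
recovered = clr_inverse(coords)  # back on the simplex
```

Because the clr coordinates live in an ordinary vector space, they can enter a linear mixed-effects model alongside the functional principal component scores for amplitude.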
Efficient, end-to-end and self-supervised methods for speech processing and generation
Deep learning has affected the speech processing and generation fields in many directions. First, end-to-end architectures allow the direct injection and synthesis of waveform samples. Second, the exploration of efficient solutions allows these systems to be implemented in computationally restricted environments, such as smartphones. Finally, the latest trends exploit audio-visual data with minimal supervision. This thesis explores these three directions.
Firstly, we propose the use of recent pseudo-recurrent structures, such as self-attention models and quasi-recurrent networks, to build acoustic models for text-to-speech. The proposed system, QLAD, synthesizes faster on both CPU and GPU than its recurrent counterpart while preserving synthesis quality competitive with state-of-the-art vocoder-based models.
Then, a generative adversarial network named SEGAN is proposed for speech enhancement. This model works as a time-domain speech-to-speech conversion system in which a single inference pass through a fully convolutional structure processes all samples at once. This makes it more efficient than other existing models, which are auto-regressive while also working in the time domain. SEGAN achieves strong results in noise suppression and in preserving speech naturalness and intelligibility when compared with classic and deep regression-based systems. We also show that SEGAN transfers efficiently to new languages and noises: a SEGAN trained on English performs comparably on Catalan and Korean with only 24 seconds of adaptation data. Finally, we explore the generative capacity of the model to recover signals from several distortions, a task we call generalized speech enhancement. The model first proves effective at recovering voiced speech from whispered speech. It is then scaled up to handle other distortions that require recomposing damaged parts of the signal, such as extending the bandwidth or recovering lost temporal sections. The model improves when additional acoustic losses are included in a multi-task setup to impose a perceptually relevant weighting on the generated result, and a two-step training schedule is proposed to stabilize the adversarial training after these losses are added; both components boost SEGAN's performance across distortions.
Finally, we propose a problem-agnostic speech encoder, named PASE, together with the framework to train it. PASE is a fully convolutional network that yields compact representations from speech waveforms. These representations contain abstract information such as speaker identity, prosodic features, or spoken content.
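The adversarial training described above can be made concrete with the least-squares GAN objective plus an L1 regression term, as in the original SEGAN formulation; in this minimal sketch, the scalar lists are stand-ins for per-example discriminator outputs and waveform samples:

```python
def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push D(real) toward 1 and
    D(fake) toward 0 (scalar discriminator outputs per example)."""
    real_term = 0.5 * sum((dr - 1.0) ** 2 for dr in d_real) / len(d_real)
    fake_term = 0.5 * sum(df ** 2 for df in d_fake) / len(d_fake)
    return real_term + fake_term

def segan_g_loss(d_fake, enhanced, clean, l1_weight=100.0):
    """Generator loss: least-squares adversarial term plus an L1 term
    pulling the enhanced waveform toward the clean reference."""
    adv = 0.5 * sum((df - 1.0) ** 2 for df in d_fake) / len(d_fake)
    l1 = sum(abs(e - c) for e, c in zip(enhanced, clean)) / len(clean)
    return adv + l1_weight * l1
```

The L1 term gives the generator a stable regression signal even when the adversarial gradients are noisy, which is one reason this combination is common in speech-to-speech GANs.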
A self-supervised framework is also proposed to train this encoder, which constitutes a new step towards unsupervised learning for speech processing. Once the encoder is trained, it can be exported to solve different tasks that require speech as input. We first explore the performance of PASE codes on speaker recognition, emotion recognition, and speech recognition. PASE performs competitively with well-designed classic features on these tasks, especially after some supervised adaptation. Finally, PASE also provides good identity descriptors for multi-speaker modeling in text-to-speech, which makes it possible to model novel identities without retraining the model.
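The self-supervised multi-task training behind PASE can be sketched as a shared encoding feeding several small "worker" heads, each regressing a different target derived from the same signal, with all worker losses summed so that every task shapes the shared representation. The worker functions and targets below are toy stand-ins, not the actual PASE workers:

```python
def mse(pred, target):
    """Mean squared error between two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

def pase_total_loss(encoded, workers, targets):
    """Total self-supervised loss: each 'worker' head regresses a
    different target derived from the input signal; summing the losses
    lets every task contribute gradients to the shared encoder."""
    return sum(mse(worker(encoded), targets[name])
               for name, worker in workers.items())

# Toy usage with two hypothetical workers over a 4-dim "encoding":
encoded = [0.1, 0.2, 0.3, 0.4]
workers = {
    "waveform": lambda f: f,                   # reconstruct the input
    "energy": lambda f: [2.0 * x for x in f],  # predict a derived feature
}
targets = {"waveform": encoded, "energy": [0.2, 0.4, 0.6, 0.8]}
total = pase_total_loss(encoded, workers, targets)
```

Since the targets are computed from the signal itself, no manual labels are needed, which is what makes the framework self-supervised.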
L2 speech learning of European Portuguese /l/ and /ɾ/ by L1-Mandarin learners: experimental evidence and theoretical modelling
It has long been recognized that the poor distinction between /l/ and /ɾ/ is one
of the most perceptible characteristics in Chinese-accented Portuguese. Recent
empirical research revealed that this notorious L2 speech learning difficulty
goes beyond the confusion between two L2 categories, as L1-Mandarin learners’
acquisition of Portuguese /l/ and /ɾ/ seems to be subject to the interaction
among different prosodic positions, speech modalities and representational
levels. This thesis aims to deepen our current understanding of this L2 speech
learning process, by exploring what constrains the development of L2
phonological categories across syllable positions and how different modalities
interact during this process. To achieve this goal, both experimental tasks and
theoretical modelling were employed.
The first study of this thesis explores the role of cross-linguistic influence
and orthography on L2 category formation. In order to elicit cross-linguistic
influence directly, a delayed-imitation task was performed with L1-Mandarin
naïve listeners. This task examined how the Mandarin phonology parses the
Portuguese input ([l], [ɾ]) in intervocalic onset and in word-internal coda
position. Moreover, whether orthography plays a role during the construction
of L2 phonological representation was tested by manipulating the input types
that were given in the experiment (auditory input alone vs. auditory + written
input). Our study shows that naïve Mandarin listeners’ responses corroborated
those of L1-Mandarin learners, suggesting that cross-linguistic influence is
responsible for the observed L2 prosodic effects. Moreover, the Mandarin [ɻ] (a
repair strategy for /ɾ/) occurred almost exclusively when the written form was
given, providing evidence for the cross-linguistic interaction between
phonological categorization and orthography during the construction of L2
categories.
In the second study, we first investigate the interaction between speech
perception and production in L2 speech learning, by examining whether the L2
deviant productions stem from misperception and whether the order of
acquisition in L2 speech perception mirrors that in production. Secondly, we
test whether L2 phonological categories remain malleable at a mid-late stage of
L2 speech learning. Two perceptual experiments were performed to test L1-Mandarin learners on their discrimination ability between the target
Portuguese form and the deviant form employed in L2 production. Expanding
on prior research, in this study, the perceptual motivation for L2 speech
difficulties was assessed in different syllable constituents (onset and coda) and
at both segmental and suprasegmental levels (structural modification). The
results demonstrate that some deviant forms observed in L2 production indeed
have a perceptual motivation ([w] for the velarised lateral; [l] and [ɾə] for the
tap), while some others cannot be attributed to misperception (deletion of
syllable-final tap). Furthermore, learners confused the intervocalic /l/ and /ɾ/
bidirectionally in perception, while in production they never misproduced the
lateral (/ɾ/ → [l], */l/ → [ɾ]), revealing a mismatch between two speech
modalities. By contrast, the order of acquisition (/ɾ/ in coda > /ɾ/ in onset) was shown to
be consistent in L2 perception and production. The correspondence and
discrepancy between the two speech modalities signal a complex relationship
between L2 speech perception and production. To assess the plasticity of L2
categories /l/ and /ɾ/, two groups of L1-Mandarin learners who differ
substantially in terms of L2 experience were recruited in the perceptual tasks.
Our study shows that both groups behaved similarly in terms of the
discrimination performance. No evidence for a role of L2 experience was found.
The implication of this null result on L2 phonological development is discussed.
The third study of the thesis aims to contribute to bridging the gap between
the L2 experimental evidence and formal theories. Adopting the Bidirectional
Phonology and Phonetics Model, we formalise some of the experimental
findings that cannot be elucidated by current L2 speech theories, namely, the
between and within-subject variation in L2 phonological categorization; the
interaction between phonological categorization and orthography during L2
category construction; and the asymmetry between L2 perception and
production.
Overall, this thesis sheds light on the complex nature of L2 phonological
acquisition and provides a formal account of how different modalities interact
in shaping L2 speech learning. Moreover, it puts forward testable predictions
for future research and suggestions for improving foreign language
teaching/training methodologies.
Automatic detection of Parkinson's disease using modulating components of speech signals
Parkinson’s Disease (PD) is the second most common neurodegenerative disorder after Alzheimer’s disease. It mainly affects older adults, at a rate of about 2%, and about 89% of people diagnosed with PD also develop speech disorders. This has led the scientific community to investigate the information embedded in the speech signals of Parkinson’s patients, which has enabled not only diagnosis of the pathology but also follow-up of its evolution. In recent years, a large number of studies have focused on the automatic detection of voice-related pathologies in order to evaluate the voice objectively and non-invasively. In cases where the pathology primarily affects the vibratory patterns of the vocal folds, as in Parkinson’s, the analyses are typically performed on sustained vowel recordings. In this article, we propose to use information from the slow and rapid variations in speech signals, also known as modulating components, combined with an effective dimensionality-reduction approach whose output serves as input to the classification system. The proposed approach achieves classification rates above 88%, surpassing the classical approach based on Mel-Frequency Cepstral Coefficients (MFCC). The results show that the information extracted from slowly varying components is highly discriminative for the task at hand and could support assisted-diagnosis systems for PD.
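The idea of isolating a slow-varying amplitude component can be illustrated with a deliberately simple envelope extractor. The rectify-and-smooth scheme below is only a stand-in for the modulation-decomposition pipeline of the article (whose exact filters are not given here); Hilbert-envelope plus low-pass filtering would be the more standard choice:

```python
def moving_average(x, win=41):
    """Centered moving average, a crude low-pass filter."""
    half = win // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def slow_modulation(signal, win=41):
    """Slow-varying amplitude component: full-wave rectification followed
    by smoothing. A simple proxy for the analytic-signal (Hilbert)
    envelope plus low-pass filtering used in modulation analysis."""
    return moving_average([abs(s) for s in signal], win)
```

Features derived from such slow components (e.g. their statistics or a dimensionality-reduced projection) would then feed the classifier in place of, or alongside, MFCCs.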
Models and Analysis of Vocal Emissions for Biomedical Applications
The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) came into being in 1999 out of a strongly felt need to share know-how, objectives, and results between areas that until then had seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from newborns to adults and the elderly. Over the years, the initial topics have grown and spread into other fields of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy. This edition celebrates twenty-two years of uninterrupted and successful research in the field of voice analysis.