An extended two-dimensional vocal tract model for fast acoustic simulation of single-axis symmetric three-dimensional tubes
The simulation of two-dimensional (2D) wave propagation is an affordable
computational task and its use can potentially improve time performance in
vocal tracts' acoustic analysis. Several models have been designed that rely on
2D wave solvers and include 2D representations of three-dimensional (3D) vocal
tract-like geometries. However, until now, only the acoustics of straight 3D
tubes with circular cross-sections have been successfully replicated with this
approach. Furthermore, the simulation of the resulting 2D shapes requires
extremely high spatio-temporal resolutions, dramatically reducing the speed
boost deriving from the usage of a 2D wave solver. In this paper, we introduce
an in-progress novel vocal tract model that extends the 2D Finite-Difference
Time-Domain wave solver (2.5D FDTD) by adding tube depth, derived from the area
functions, to the acoustic solver. The model combines the speed of a lightweight 2D
numerical scheme with the ability to natively simulate 3D tubes that are
symmetric in one dimension, hence relaxing previous resolution requirements. An
implementation of the 2.5D FDTD is presented, along with evaluation of its
performance in the case of static vowel modeling. The paper discusses the
current features and limits of the approach, and the potential impact on
computational acoustics applications.
Comment: 5 pages, 2 figures, Interspeech 2019 submission
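The core of the 2.5D idea can be sketched as a 2D pressure/velocity leapfrog scheme in which each column of cells carries a depth derived from the area function, so the continuity update "sees" the full 3D tube volume. Everything below is an illustrative assumption rather than the paper's implementation: the function name `run_2p5d_fdtd`, the square-cross-section depth `w = sqrt(A)`, the grid sizes, the material constants, and the rigid outer walls are all choices made for the sketch.

```python
import numpy as np

def run_2p5d_fdtd(area, ny=9, steps=300, c=343.0, dx=2e-3):
    """Toy depth-extended 2D FDTD: a standard staggered-grid leapfrog scheme
    where the continuity equation is weighted by a per-column tube depth
    w(x) taken from the area function (assuming a square cross-section)."""
    nx = len(area)
    w = np.sqrt(np.asarray(area, dtype=float))   # tube depth per column (assumption)
    dt = 0.9 * dx / (c * np.sqrt(2.0))           # 2D Courant limit with safety factor
    rho = 1.2                                    # air density, kg/m^3
    p = np.zeros((nx, ny))                       # pressure at cell centres
    ux = np.zeros((nx + 1, ny))                  # x-velocity on vertical faces
    uy = np.zeros((nx, ny + 1))                  # y-velocity on horizontal faces
    wx = np.empty(nx + 1)                        # depth interpolated to x-faces
    wx[1:-1] = 0.5 * (w[1:] + w[:-1])
    wx[0], wx[-1] = w[0], w[-1]
    p[1, ny // 2] = 1.0                          # impulse excitation near one end
    for _ in range(steps):
        # momentum updates on interior faces (outer walls stay rigid: u = 0)
        ux[1:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
        uy[:, 1:-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
        # continuity update: divergence of the depth-weighted flux (w u) / w
        div = (wx[1:, None] * ux[1:, :] - wx[:-1, None] * ux[:-1, :]
               + w[:, None] * (uy[:, 1:] - uy[:, :-1])) / dx
        p -= dt * rho * c**2 * div / w[:, None]
    return p
```

With a uniform depth this reduces exactly to the plain 2D scheme; the depth weighting is what lets a coarse 2D grid stand in for a single-axis-symmetric 3D tube.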
Waveguide physical modeling of vocal tract acoustics: flexible formant bandwidth control from increased model dimensionality
Digital waveguide physical modeling is often used as an efficient representation of acoustical resonators such as the human vocal tract. Building on the basic one-dimensional (1-D) Kelly-Lochbaum tract model, various speech synthesis techniques demonstrate improvements to the wave scattering mechanisms in order to better approximate wave propagation in the complex vocal system. Some of these techniques are discussed in this paper, with particular reference to an alternative approach in the form of a two-dimensional waveguide mesh model. Emphasis is placed on its ability to produce vowel spectra similar to those present in natural speech, and on how it improves upon the 1-D model. The tract area function is accommodated as model width, rather than translated into acoustic impedance, and as such offers extra control as an additional bounding limit to the model. Results show that the two-dimensional (2-D) model introduces approximately linear control over formant bandwidths, leading to attainable realistic values across a range of vowels. Similarly, the 2-D model allows for the application of theoretical reflection values within the tract, which, when applied to the 1-D model, result in small formant bandwidths and, hence, unnatural-sounding synthesized vowels.
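The 1-D Kelly-Lochbaum model the abstract builds on is compact enough to sketch in full: each tube section is a pair of one-sample delay lines, and each junction scatters with a reflection coefficient computed directly from the adjacent cross-sectional areas. The boundary reflection values `r_glottis` and `r_lips` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kelly_lochbaum_ir(areas, n_samples=2048, r_glottis=0.97, r_lips=-0.9):
    """Minimal 1-D Kelly-Lochbaum ladder: unit-delay sections plus classic
    scattering junctions. Returns the pressure impulse response radiated
    at the lip end."""
    a = np.asarray(areas, dtype=float)
    k = (a[:-1] - a[1:]) / (a[:-1] + a[1:])   # junction reflection coefficients
    n = len(a)
    f = np.zeros(n)    # right-going wave stored in each section
    b = np.zeros(n)    # left-going wave stored in each section
    out = np.zeros(n_samples)
    for t in range(n_samples):
        lip_in = f[-1]
        out[t] = (1.0 + r_lips) * lip_in       # pressure transmitted at the lips
        # Kelly-Lochbaum scattering at every interior junction (vectorized)
        tf = (1.0 + k) * f[:-1] - k * b[1:]    # transmitted rightward
        tb = k * f[:-1] + (1.0 - k) * b[1:]    # transmitted leftward
        f[0] = r_glottis * b[0] + (1.0 if t == 0 else 0.0)  # glottal impulse in
        f[1:] = tf
        b[-1] = r_lips * lip_in                # partial reflection at the lips
        b[:-1] = tb
    return out
```

The resonances of the returned impulse response are the formants; because `|r_glottis|` and `|r_lips|` are below one, the response decays, which is exactly the boundary-loss mechanism the 1-D model relies on to set formant bandwidths.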
Real-time dynamic articulations in the 2-D waveguide mesh vocal tract model
Time-domain articulatory vocal tract modeling in one dimension (1-D) is well established. Previous studies into two-dimensional (2-D) simulation of wave propagation in the vocal tract have shown it to offer accurate static vowel synthesis. However, little has been done to demonstrate how such a model might accommodate the dynamic tract shape changes necessary in modeling speech. Two methods of applying the area function to the 2-D digital waveguide mesh vocal tract model are presented here. First, a method based on mapping the cross-sectional area onto the number of waveguides across the mesh, termed the widthwise mapping approach, is detailed. Discontinuity problems associated with the dynamic manipulation of the model are highlighted. Second, a new method is examined that uses a static-shaped rectangular mesh with the area function translated into an impedance map, which is then applied to each waveguide. Two approaches for constructing such a map are demonstrated: one using a linear impedance increase to model a constriction in the tract, and another using a raised cosine function. Recommendations are made towards the use of the cosine method, as it allows for a wider central propagation channel. It is also shown that this impedance mapping approach allows for stable dynamic shape changes and permits a reduction in sampling frequency, leading to real-time interaction with the model.
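The raised-cosine impedance map can be sketched in a few lines: rather than reshaping the mesh, a constriction becomes a smooth impedance bump laid over a fixed rectangular grid. The function name, the `z_floor`/`z_peak` values, and the bump parameterization below are hypothetical choices for illustration, not the paper's numbers.

```python
import numpy as np

def raised_cosine_impedance_map(n, centre, width, z_floor=1.0, z_peak=30.0):
    """Sketch of a raised-cosine impedance map: impedance rises smoothly
    from z_floor to z_peak over a bump of the given width centred on the
    constriction, avoiding the discontinuities of widthwise remapping."""
    x = np.arange(n)
    z = np.full(n, z_floor)
    inside = np.abs(x - centre) <= width / 2
    # raised cosine: peak at the centre, falling smoothly to the floor
    z[inside] += (z_peak - z_floor) * 0.5 * (1 + np.cos(2 * np.pi * (x[inside] - centre) / width))
    return z
```

Because the map changes values rather than mesh topology, animating `centre` and `width` over time gives the stable dynamic shape changes the abstract describes.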
Acoustic modeling using the digital waveguide mesh
The digital waveguide mesh has been an active area of music acoustics research for over ten years. Although founded in 1-D digital waveguide modeling, the principles on which it is based are not new to researchers grounded in numerical simulation, FDTD methods, electromagnetic simulation, etc. This article has attempted to provide a substantial review of how the DWM has been applied to acoustic modeling and sound synthesis problems, including new 2-D object synthesis and an overview of recent research activities in articulatory vocal tract modeling, RIR synthesis, and reverberation simulation. The extensive, although by no means exhaustive, list of references indicates that though the DWM may have parallels in other disciplines, it still offers something new in the field of acoustic simulation and sound synthesis.
A stabilized finite element method for the mixed wave equation in an ALE framework with application to diphthong production
© (2016) S. Hirzel Verlag/European Acoustics Association. The definitive publisher-authenticated version is available online at http://www.ingentaconnect.com/contentone/dav/aaua/2016/00000102/00000001/art00012

Working with the wave equation in mixed rather than irreducible form allows one to directly account for both the acoustic pressure field and the acoustic particle velocity field. Indeed, this becomes the natural option in many problems, such as those involving waves propagating in moving domains, because the equations can easily be set in an arbitrary Lagrangian-Eulerian (ALE) frame of reference. Yet, when attempting a standard Galerkin finite element (FEM) solution for them, it turns out that an inf-sup compatibility constraint has to be satisfied, which prevents the use of equal interpolations for the approximated acoustic pressure and velocity fields. In this work it is proposed to resort to a subgrid scale stabilization strategy to circumvent this condition and thus facilitate code implementation. As a possible application, we address the generation of diphthongs in voice production.
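Up to sign conventions, the mixed (first-order) wave system the abstract refers to couples the pressure p and particle velocity u; in an ALE frame moving with mesh velocity w, the convective terms appear explicitly. This is a generic sketch of that system, not necessarily the paper's exact formulation:

```latex
\frac{1}{\rho c^{2}}\left(\frac{\partial p}{\partial t} - \boldsymbol{w}\cdot\nabla p\right) + \nabla\cdot\boldsymbol{u} = 0,
\qquad
\rho\left(\frac{\partial \boldsymbol{u}}{\partial t} - (\boldsymbol{w}\cdot\nabla)\boldsymbol{u}\right) + \nabla p = \boldsymbol{0}.
```

A plain Galerkin FEM discretization of this pair requires inf-sup compatible pressure/velocity spaces; the subgrid-scale stabilization described in the abstract adds residual-based terms so that equal-order interpolations become admissible.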
Singing synthesis with an evolved physical model
A two-dimensional physical model of the human vocal tract is described. Such a system promises increased realism and control in the synthesis of both speech and singing. However, the parameters describing the shape of the vocal tract while in use are not easily obtained, even using medical imaging techniques, so instead a genetic algorithm (GA) is applied to the model to find an appropriate configuration. Realistic sounds are produced by this method. An analysis of these sounds, and of the reliability of the technique (its convergence properties), is provided.
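The GA loop itself is standard and can be sketched compactly. In the paper the fitness of a candidate tract shape is presumably scored on the sound the physical model produces; to keep this sketch self-contained, a stand-in fitness (distance to a target area profile) is used instead, and every name and parameter value here is an assumption.

```python
import numpy as np

def evolve_tract(target, pop_size=40, n_gen=60, sigma=0.05, rng=None):
    """Toy GA in the spirit of the paper: evolve an area-function vector
    with truncation selection, one-point crossover, Gaussian mutation,
    and elitism. Stand-in fitness: negative distance to a target profile."""
    rng = np.random.default_rng(rng)
    n = len(target)
    pop = rng.uniform(0.2, 2.0, size=(pop_size, n))       # random tract shapes
    for _ in range(n_gen):
        fitness = -np.linalg.norm(pop - target, axis=1)   # higher is better
        order = np.argsort(fitness)[::-1]
        parents = pop[order[: pop_size // 2]]             # truncation selection
        # one-point crossover between randomly paired parents
        i = rng.integers(0, len(parents), size=(pop_size, 2))
        cut = rng.integers(1, n, size=pop_size)
        mask = np.arange(n)[None, :] < cut[:, None]
        pop = np.where(mask, parents[i[:, 0]], parents[i[:, 1]])
        pop += rng.normal(0.0, sigma, pop.shape)          # Gaussian mutation
        pop = np.clip(pop, 0.05, 3.0)                     # keep areas physical
        pop[0] = parents[0]                               # elitism: keep the best
    return pop[np.argmin(np.linalg.norm(pop - target, axis=1))]
```

Swapping the stand-in fitness for an acoustic one (e.g., a spectral distance between synthesized and recorded vowels) recovers the setup the abstract describes, without changing the loop.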
Blind Normalization of Speech From Different Channels
We show how to construct a channel-independent representation of speech that
has propagated through a noisy reverberant channel. This is done by blindly
rescaling the cepstral time series by a non-linear function, with the form of
this scale function being determined by previously encountered cepstra from
that channel. The rescaled form of the time series is an invariant property of
it in the following sense: it is unaffected if the time series is transformed
by any time-independent invertible distortion. Because a linear channel with
stationary noise and impulse response transforms cepstra in this way, the new
technique can be used to remove the channel dependence of a cepstral time
series. In experiments, the method achieved greater channel-independence than
cepstral mean normalization, and it was comparable to the combination of
cepstral mean normalization and spectral subtraction, despite the fact that no
measurements of channel noise or reverberations were required (unlike spectral
subtraction).
Comment: 25 pages, 7 figures
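One plausible concrete reading of the invariance property described above: if each cepstral coefficient is mapped through its own empirical CDF (rank normalization), the result is unchanged by any strictly monotone, time-independent distortion of that coefficient, because such a distortion preserves the ordering of the samples. The paper's scale function is more general than this per-coefficient sketch; `blind_rescale` is a hypothetical toy, not the authors' method.

```python
import numpy as np

def blind_rescale(series):
    """Rank-normalize a cepstral time series, column by column: replace
    each value with its (shifted) rank divided by the number of frames,
    giving values in (0, 1) that are invariant under any strictly
    monotone, time-independent distortion of each coefficient."""
    x = np.asarray(series, dtype=float)              # shape: (frames, n_cepstra)
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # rank of each frame per column
    return (ranks + 0.5) / x.shape[0]
```

Applying a monotone distortion such as `y = x**3 + 2*x` to the series leaves `blind_rescale(y)` identical to `blind_rescale(x)`, which is the channel-independence idea in miniature.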