Prosody generation with a neural network
The use of neural networks in speech synthesis has been especially successful in the domain of prosody generation. The approach presented here differs from others in a) the transformation from a simple input to an output vector consisting of different parameters and b) the use of subcorpora that allow specialized networks. The network operates in a prominence-based synthesis system, where prominence is the most important parameter and is, consequently, the input parameter for the network. The output has not yet been formally evaluated, but the synthetic speech sounds natural and lively.
A Sub-optimal Algorithm to Synthesize Control Laws for a Network of Dynamic Agents
We study the synthesis problem of an LQR controller when the matrix describing the control law is constrained to lie in a particular vector space. Our motivation is the use of such control laws to stabilize networks of autonomous agents in a decentralized fashion, with the information flow dictated by the constraints of a pre-specified topology. In this paper, we consider the finite-horizon version of the problem and provide both a computationally intensive optimal solution and a sub-optimal solution that is computationally more tractable. We then apply the technique to the decentralized vehicle formation control problem and show that the loss in performance due to the sub-optimal solution is modest; the topology, however, can have a large effect on performance.
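The gap between the optimal and sub-optimal solutions can be illustrated with a toy numerical sketch (a hypothetical construction, not the paper's algorithm): compute the unconstrained finite-horizon LQR gains by the backward Riccati recursion, then obtain a simple topology-respecting sub-optimal gain by zeroing the entries forbidden by a structural mask, and compare closed-loop costs. All matrices and the truncation heuristic below are illustrative assumptions.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, T):
    """Backward Riccati recursion; returns time-ordered gains K_0..K_{T-1}."""
    P, gains = Q.copy(), []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

def closed_loop_cost(A, B, Q, R, gains, x0):
    """Quadratic cost of u_t = -K_t x_t over the horizon, terminal weight Q."""
    x, J = x0, 0.0
    for K in gains:
        u = -K @ x
        J += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return J + x @ Q @ x

# Two discretized double-integrator agents (position, velocity each),
# coupled only through a relative-position (formation) penalty in Q.
dt = 0.1
A = np.kron(np.eye(2), np.array([[1.0, dt], [0.0, 1.0]]))
B = np.kron(np.eye(2), np.array([[0.0], [1.0]]))
c = np.array([1.0, 0.0, -1.0, 0.0])        # relative position of the agents
Q = np.outer(c, c) + 0.01 * np.eye(4)      # formation-keeping cost
R = 0.1 * np.eye(2)

K_opt = finite_horizon_lqr(A, B, Q, R, T=20)

# Topology mask: each agent may feed back only on its own states.
mask = np.array([[1.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 1.0]])
K_dec = [K * mask for K in K_opt]          # naive truncation heuristic

x0 = np.array([1.0, 0.0, -1.0, 0.0])
J_opt = closed_loop_cost(A, B, Q, R, K_opt, x0)
J_dec = closed_loop_cost(A, B, Q, R, K_dec, x0)
```

Since the unconstrained gains minimize the cost over all control sequences, the truncated decentralized gains can only do as well or worse (J_dec >= J_opt); how much worse depends on the coupling in Q and on the mask, echoing the abstract's observation that the topology can have a large effect on performance.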
Sampling-based speech parameter generation using moment-matching networks
This paper presents sampling-based speech parameter generation using moment-matching networks for Deep Neural Network (DNN)-based speech synthesis. Although people never produce exactly the same speech twice, even when expressing the same linguistic and para-linguistic information, typical statistical speech synthesis produces exactly the same speech every time, i.e., there is no inter-utterance variation in synthetic speech. To give synthetic speech natural inter-utterance variation, this paper builds DNN acoustic models from which speech parameters can be randomly sampled. The DNNs are trained so that the moments of the generated speech parameters are close to those of natural speech parameters. Since the variation of the speech parameters is compressed into a low-dimensional prior noise vector, our algorithm has a lower computation cost than direct sampling of speech parameters. As a first step towards generating synthetic speech with natural inter-utterance variation, this paper investigates whether or not the proposed sampling-based generation degrades synthetic speech quality. In the evaluation, we compare the speech quality of conventional maximum-likelihood-based generation and the proposed sampling-based generation. The results demonstrate that the proposed generation causes no degradation in speech quality.
Comment: Submitted to INTERSPEECH 201
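The core idea, training a generator whose samples match the moments of natural data while compressing variation into a low-dimensional prior, can be sketched in a stripped-down form. The following is an illustrative toy, not the paper's DNN: a linear generator x = Wz + b with a 2-dimensional Gaussian prior z, fitted by gradient descent so that its implied mean b and covariance W Wᵀ match hypothetical target moments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "natural" parameter moments with low-rank structure:
# 3-dimensional data driven by a 2-dimensional latent factor.
mu = np.array([1.0, -2.0, 0.5])
M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Sigma = M @ M.T                      # target covariance (rank 2)

# Linear generator x = W z + b, z ~ N(0, I_2): mean b, covariance W W^T.
W = rng.standard_normal((3, 2))
b = np.zeros(3)

lr = 0.005
for _ in range(5000):
    G = W @ W.T - Sigma              # covariance mismatch
    W -= lr * 4.0 * G @ W            # gradient of ||W W^T - Sigma||_F^2
    b -= lr * 2.0 * (b - mu)         # gradient of ||b - mu||^2

# Sampling: all inter-sample variation is driven by the 2-dim prior z,
# so drawing new z gives distinct "utterances" cheaply.
z = rng.standard_normal((1000, 2))
samples = z @ W.T + b
```

Here the generator's moments are available in closed form, so the loss can be minimized directly; in the paper's setting a DNN plays the role of the generator and sample-based moment-matching losses replace the closed-form expressions.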
Theory and Practice of GVAR Modeling
The Global Vector Autoregressive (GVAR) approach has proven very useful for analyzing interactions in the global macroeconomy and in other data networks where both the cross-section and the time dimensions are large. This paper surveys the latest developments in GVAR modeling, examining both the theoretical foundations of the approach and its numerous empirical applications. We provide a synthesis of the existing literature and highlight areas for future research.
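The mechanics of the GVAR approach can be sketched with a minimal numerical example (illustrative numbers, not drawn from the paper): each country model relates its own variable to its own lag and to a trade-weighted foreign average, and stacking the country equations while solving out the contemporaneous foreign terms yields a single global VAR whose stability can be checked via a spectral radius.

```python
import numpy as np

# Three countries, one variable each; rows of W are trade weights
# (zero diagonal, rows summing to one), so ystar = W @ y.
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# Country i: y_i,t = phi_i y_i,t-1 + l0_i ystar_i,t + l1_i ystar_i,t-1 + e_i,t
Phi  = np.diag([0.5, 0.4, 0.3])
Lam0 = 0.2 * np.eye(3)
Lam1 = 0.1 * np.eye(3)

# Stack the country equations and solve out the contemporaneous terms:
# (I - Lam0 W) y_t = (Phi + Lam1 W) y_{t-1} + e_t  =>  y_t = G y_{t-1} + u_t
G = np.linalg.solve(np.eye(3) - Lam0 @ W, Phi + Lam1 @ W)

# The solved global model is stable iff the spectral radius of G is below 1.
rho = max(abs(np.linalg.eigvals(G)))

# Simulate the solved global model with illustrative reduced-form shocks.
rng = np.random.default_rng(1)
y = np.zeros(3)
path = []
for _ in range(200):
    y = G @ y + 0.1 * rng.standard_normal(3)
    path.append(y)
path = np.array(path)
```

The appeal of the construction is that each country model stays small even as the cross-section grows; the weight matrix W carries all the cross-sectional linkage, which is what makes the approach tractable when both dimensions are large.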