Exchange Rates, Country Preferences, and Gold
This paper provides indirect tests of the hypothesis that exchange rate movements may be largely coterminous with changes in preferences for holding claims on different countries. It is argued that changes in country preferences will be reflected systematically in the price of gold and, hence, that under the maintained hypothesis, gold price movements should have explanatory power with respect to exchange rate movements over and above the effects of monetary shocks. The paper applies multivariate vector autoregression and cointegration modeling techniques to test for the short- and long-run influence of gold prices on exchange rates, conditional on other monetary and real macroeconomic variables, and applies the resulting error-correction exchange rate equation in out-of-sample forecasting exercises.
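The error-correction setup the abstract describes can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual specification: the variable names, the simulated data, and the single-equation OLS estimation are all assumptions made for this sketch, which only shows the general form of an error-correction regression on a cointegrated pair.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Simulate a cointegrated pair: g_t (log gold price) is a random walk,
# and e_t (log exchange rate) tracks beta * g_t plus a stationary error,
# so the equilibrium error e - beta*g is mean-reverting.
beta = 0.8
g = np.cumsum(rng.normal(size=T))
e = beta * g + rng.normal(scale=0.5, size=T)

# Error-correction regression:
#   delta_e_t = alpha * (e_{t-1} - beta*g_{t-1}) + gamma * delta_g_t + eps_t
de = np.diff(e)
dg = np.diff(g)
ecm = (e - beta * g)[:-1]          # lagged equilibrium error
X = np.column_stack([ecm, dg])
coef, *_ = np.linalg.lstsq(X, de, rcond=None)
alpha, gamma = coef

# A negative alpha means deviations from the long-run relation are
# corrected over time, which is the cointegration property being tested.
print(f"alpha = {alpha:.2f}, gamma = {gamma:.2f}")
```

In the paper's full multivariate setting, the cointegrating vector and adjustment coefficients would be estimated jointly (e.g. in a VECM), rather than with the known `beta` and single equation used here for brevity.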
Dynamic Control Flow in Large-Scale Machine Learning
Many recent machine learning models rely on fine-grained dynamic control flow
for training and inference. In particular, models based on recurrent neural
networks and on reinforcement learning depend on recurrence relations,
data-dependent conditional execution, and other features that call for dynamic
control flow. These applications benefit from the ability to make rapid
control-flow decisions across a set of computing devices in a distributed
system. For performance, scalability, and expressiveness, a machine learning
system must support dynamic control flow in distributed and heterogeneous
environments.
This paper presents a programming model for distributed machine learning that
supports dynamic control flow. We describe the design of the programming model,
and its implementation in TensorFlow, a distributed machine learning system.
Our approach extends the use of dataflow graphs to represent machine learning
models, offering several distinctive features. First, the branches of
conditionals and bodies of loops can be partitioned across many machines to run
on a set of heterogeneous devices, including CPUs, GPUs, and custom ASICs.
Second, programs written in our model support automatic differentiation and
distributed gradient computations, which are necessary for training machine
learning models that use control flow. Third, our choice of non-strict
semantics enables multiple loop iterations to execute in parallel across
machines, and to overlap compute and I/O operations.
We have done our work in the context of TensorFlow, and it has been used
extensively in research and production. We evaluate it using several real-world
applications, and demonstrate its performance and scalability.
Comment: Appeared in EuroSys 2018. 14 pages, 16 figures.
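The conditional partitioning described above can be built from low-level dataflow routing primitives. The toy sketch below is not TensorFlow's code; it is a pure-Python illustration (all names invented for this sketch) of how an if/else can be expressed as a pair of routing operations, so that only the taken branch ever receives data and the branches can live on different devices.

```python
# Toy dataflow-style conditional: `switch` routes a value to one of two
# branch outputs based on a predicate (the other output is "dead"), and
# `merge` forwards whichever branch actually produced a value.

def switch(value, pred):
    """Route `value` to the true or false output; the other stays None."""
    return (value, None) if pred else (None, value)

def merge(true_out, false_out):
    """Forward the single input that carries a value."""
    return true_out if true_out is not None else false_out

def cond(pred, value, true_fn, false_fn):
    """Build if/else from switch + merge; only the live branch runs."""
    on_true, on_false = switch(value, pred)
    t = true_fn(on_true) if on_true is not None else None
    f = false_fn(on_false) if on_false is not None else None
    return merge(t, f)

result = cond(pred=5 > 0, value=5,
              true_fn=lambda x: x * 2,
              false_fn=lambda x: -x)
print(result)  # 10
```

Because each branch is an ordinary subgraph fed by a routing op rather than a host-side `if`, the two subgraphs can be placed on separate machines or devices, which is the property the paper's distributed conditionals rely on.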
A markup language for text-to-speech synthesis.
Text-to-speech synthesizers must process text, and therefore
require some knowledge of text structure. While
many TTS systems allow for user control by means of
ad hoc "escape sequences", there remains to date no adequate
and generally agreed upon system-independent
standard for marking up text for the purposes of synthesis.
The present paper is a collaborative effort between
two speech groups aimed at producing such a standard,
in the form of an SGML-based markup language that we
call STML, the Spoken Text Markup Language. The primary
purpose of this paper is not to present STML as a
fait accompli, but rather to interest other TTS research
groups to collaborate and contribute to the development
of this standard.
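The abstract does not reproduce STML's actual element set, so the fragment below is a hypothetical illustration only: the tag names are invented here to show the general shape of SGML-based markup for synthesis, in which document structure and speaking hints are expressed as elements rather than system-specific escape sequences.

```sgml
<!-- Hypothetical illustration: these tag names are invented for this
     sketch and are not the actual STML element set. -->
<speech>
  <emph level="moderate">Welcome</emph> to the evening news.
  <pause dur="400ms">
  The temperature is <sayas mode="number">21</sayas> degrees.
</speech>
```

The design point argued in the abstract is that such markup is system-independent: any conforming synthesizer can interpret the same annotated text, instead of each TTS engine defining its own escape-sequence syntax.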