Bootstrap score tests for fractional integration in heteroskedastic ARFIMA models, with an application to price dynamics in commodity spot and futures markets
Empirical evidence from time series methods which assume the usual I(0)/I(1) paradigm suggests that the efficient market hypothesis, stating that spot and futures prices of a commodity should co-integrate with a unit slope on futures prices, does not hold. However, these statistical methods are known to be unreliable if the data are fractionally integrated. Moreover, spot and futures price data tend to display clear patterns of time-varying volatility which also have the potential to invalidate the use of these methods. Using new tests constructed within a more general heteroskedastic fractionally integrated model, we are able to find a body of evidence in support of the efficient market hypothesis for a number of commodities. Our new tests are wild bootstrap implementations of score-based tests for the order of integration of a fractionally integrated time series. These tests are designed to be robust to both conditional and unconditional heteroskedasticity of a quite general and unknown form in the shocks. We show that the asymptotic tests do not admit pivotal asymptotic null distributions in the presence of heteroskedasticity, but that the corresponding tests based on the wild bootstrap principle do. A Monte Carlo simulation study demonstrates that very significant improvements in finite sample behaviour can be obtained by the bootstrap vis-à-vis the corresponding asymptotic tests in both heteroskedastic and homoskedastic environments.
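The wild bootstrap principle behind these tests can be sketched in a few lines: each bootstrap sample multiplies the residuals by i.i.d. Rademacher draws, which preserves the (un)conditional heteroskedasticity pattern while imposing the null of no serial dependence. A minimal illustration, using a toy normalized first-order autocorrelation statistic rather than the paper's actual score test:

```python
import numpy as np

rng = np.random.default_rng(0)

def wild_bootstrap_pvalue(resid, stat_fn, n_boot=999):
    """Wild-bootstrap p-value for a statistic computed from residuals.

    Each bootstrap replication multiplies the residuals by i.i.d.
    Rademacher draws (+1/-1), which keeps any heteroskedasticity
    pattern intact while breaking serial dependence.
    """
    stat0 = stat_fn(resid)
    exceed = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=resid.shape)
        if stat_fn(resid * w) >= stat0:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)

# Toy statistic: normalized first-order sample autocorrelation
def score_stat(e):
    n = len(e)
    return abs(np.sqrt(n) * np.sum(e[1:] * e[:-1]) / np.sum(e ** 2))

# Unconditionally heteroskedastic white noise: a volatility level shift
e = rng.standard_normal(500) * np.where(np.arange(500) < 250, 1.0, 3.0)
p = wild_bootstrap_pvalue(e, score_stat)
```

The key point mirrored here is that the bootstrap distribution is generated conditionally on the observed volatility pattern, which is what restores asymptotically pivotal inference.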
Adaptive Inference in Heteroskedastic Fractional Time Series Models
We consider estimation and inference in fractionally integrated time series models driven by shocks which can display conditional and unconditional heteroskedasticity of unknown form. Although the standard conditional sum-of-squares (CSS) estimator remains consistent and asymptotically normal in such cases, unconditional heteroskedasticity inflates its variance matrix by a scalar quantity, λ > 1, thereby inducing a loss in efficiency relative to the unconditionally homoskedastic case, λ = 1. We propose an adaptive version of the CSS estimator, based on non-parametric kernel-based estimation of the unconditional volatility process. We show that adaptive estimation eliminates the factor λ from the variance matrix, thereby delivering the same asymptotic efficiency as that attained by the standard CSS estimator in the unconditionally homoskedastic case and, hence, asymptotic efficiency under Gaussianity. Importantly, the asymptotic analysis is based on a novel proof strategy, which does not require consistent estimation (in the sup norm) of the volatility process. Consequently, we are able to work under a weaker set of assumptions than those employed in the extant literature. The asymptotic variance matrices of both the standard and adaptive CSS estimators depend on any weak parametric autocorrelation present in the fractional model and any conditional heteroskedasticity in the shocks. Consequently, asymptotically pivotal inference can be achieved through the development of confidence regions or hypothesis tests using heteroskedasticity-robust standard errors and/or a wild bootstrap. Monte Carlo simulations and empirical applications illustrate the practical usefulness of the methods proposed.
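The kernel step of the adaptive approach can be illustrated directly: estimate the unconditional volatility path non-parametrically from squared residuals, then rescale the shocks before re-estimation. This is a sketch only; the kernel, bandwidth, and data-generating process below are illustrative choices, not the paper's:

```python
import numpy as np

def kernel_volatility(e, bandwidth):
    """Nadaraya-Watson-style estimate of the unconditional variance
    path sigma^2(t/n), smoothing squared residuals over rescaled time
    with a Gaussian kernel."""
    n = len(e)
    t = np.arange(n) / n
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    return (w @ (e ** 2)) / w.sum(axis=1)

rng = np.random.default_rng(1)
n = 1000
sigma = 1.0 + 2.0 * (np.arange(n) / n)   # smoothly trending volatility
e = sigma * rng.standard_normal(n)

s2 = kernel_volatility(e, bandwidth=0.1)
e_rescaled = e / np.sqrt(s2)             # shocks fed to the adaptive CSS step
```

After rescaling, the shocks are approximately unconditionally homoskedastic, which is what removes the inflation factor λ from the limiting variance.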
Quantum Teleportation is a Universal Computational Primitive
We present a method to create a variety of interesting gates by teleporting
quantum bits through special entangled states. This allows, for instance, the
construction of a quantum computer based on just single qubit operations, Bell
measurements, and GHZ states. We also present straightforward constructions of
a wide variety of fault-tolerant quantum gates.
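The basic teleportation primitive underlying these constructions (not the paper's gate-teleportation through special entangled states, just the standard protocol it builds on) can be verified with a small statevector simulation:

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def kron_all(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

psi = np.array([0.6, 0.8])                   # input state a|0> + b|1>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
state = np.kron(psi, bell)                   # qubit 0 = input, qubits 1-2 = Bell pair

# Teleportation circuit: CNOT(0 -> 1), then H on qubit 0
cnot01 = kron_all(P0, I2, I2) + kron_all(P1, X, I2)
state = kron_all(H, I2, I2) @ (cnot01 @ state)

# Every measurement outcome (m0, m1) on qubits 0-1, followed by the
# Pauli correction Z^m0 X^m1 on qubit 2, recovers the input state
for m0 in (0, 1):
    for m1 in (0, 1):
        proj = kron_all(P1 if m0 else P0, P1 if m1 else P0, I2)
        corr = kron_all(I2, I2,
                        np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1))
        post = corr @ (proj @ state)
        out = post.reshape(2, 2, 2)[m0, m1, :]
        out = out / np.linalg.norm(out)
        assert np.allclose(out, psi)
```

The paper's observation is that replacing the Bell pair with a suitably chosen entangled resource state makes the teleported qubit emerge with a gate already applied.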
Quantum Computing with Very Noisy Devices
In theory, quantum computers can efficiently simulate quantum physics, factor
large numbers and estimate integrals, thus solving otherwise intractable
computational problems. In practice, quantum computers must operate with noisy
devices called "gates" that tend to destroy the fragile quantum states needed
for computation. The goal of fault-tolerant quantum computing is to compute
accurately even when gates have a high probability of error each time they are
used. Here we give evidence that accurate quantum computing is possible with
error probabilities above 3% per gate, which is significantly higher than what
was previously thought possible. However, the resources required for computing
at such high error probabilities are excessive. Fortunately, they decrease
rapidly with decreasing error probabilities. If we had quantum resources
comparable to the considerable resources available in today's digital
computers, we could implement non-trivial quantum computations at error
probabilities as high as 1% per gate.
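The trade-off described above, with overhead shrinking rapidly as the per-gate error probability falls, can be illustrated with the standard concatenated-code scaling heuristic (an order-of-magnitude formula, not the paper's actual resource counts; the threshold value is illustrative):

```python
# Heuristic concatenated-code scaling: at concatenation level k the
# logical error rate is roughly p_L ~ p_th * (p / p_th)**(2**k),
# valid only for physical error rate p below the threshold p_th.
def logical_error(p, p_th, k):
    return p_th * (p / p_th) ** (2 ** k)

P_TH = 0.03  # illustrative threshold, echoing the ~3% per gate above

def levels_needed(p, target=1e-12):
    """Smallest concatenation level pushing the logical error below target."""
    k = 0
    while logical_error(p, P_TH, k) > target:
        k += 1
    return k

# Overhead drops quickly as the physical error rate decreases
assert levels_needed(0.02) > levels_needed(0.01)
```

Because the exponent doubles with each level, halving the physical error rate relative to threshold saves entire levels of concatenation, which is the qualitative effect the abstract describes.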
From slow to fast faulting: recent challenges in earthquake fault mechanics
Faults—thin zones of highly localized shear deformation in the Earth—accommodate strain on a momentous range of dimensions (millimetres to hundreds of kilometres for major plate boundaries) and of time intervals (from fractions of seconds during earthquake slip, to years of slow, aseismic slip and millions of years of intermittent activity). Traditionally, brittle faults have been distinguished from shear zones which deform by crystal plasticity (e.g. mylonites). However such brittle/plastic distinction becomes blurred when considering (i) deep earthquakes that happen under conditions of pressure and temperature where minerals are clearly in the plastic deformation regime (a clue for seismologists over several decades) and (ii) the extreme dynamic stress drop occurring during seismic slip acceleration on faults, requiring efficient weakening mechanisms. High strain rates (more than 10⁴ s⁻¹) are accommodated within paper-thin layers (principal slip zone), where co-seismic frictional heating triggers non-brittle weakening mechanisms. In addition, (iii) pervasive off-fault damage is observed, introducing energy sinks which are not accounted for by traditional frictional models. These observations challenge our traditional understanding of friction (rate-and-state laws), anelastic deformation (creep and flow of crystalline materials) and the scientific consensus on fault operation. This article is part of the themed issue ‘Faulting, friction and weakening: from slow to fast motion’
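For context, the classical rate-and-state framework that the abstract says is being challenged predicts steady-state friction of the following form (parameter values below are typical laboratory-scale illustrations, not from this article):

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction (Dieterich-type law):
    mu_ss = mu0 + (a - b) * ln(v / v0).
    With a < b the fault is velocity-weakening: friction decreases
    as the slip rate increases."""
    return mu0 + (a - b) * math.log(v / v0)

slow = steady_state_friction(1e-9)  # ~nm/s, interseismic creep rates
fast = steady_state_friction(1.0)   # ~m/s, co-seismic slip rates
assert fast < slow                  # velocity weakening
```

The logarithmic weakening predicted here is far too gentle to explain the extreme dynamic stress drops at co-seismic slip rates, which is why thermally activated mechanisms in the principal slip zone are invoked.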
Effective String Theory and Nonlinear Lorentz Invariance
We study the low-energy effective action governing the transverse
fluctuations of a long string, such as a confining flux tube in QCD. We work in
the static gauge where this action contains only the transverse excitations of
the string. The static gauge action is strongly constrained by the requirement
that the Lorentz symmetry, which is spontaneously broken by the long string
vacuum, is nonlinearly realized on the Nambu-Goldstone bosons. One solution to
the constraints (at the classical level) is the Nambu-Goto action, and the
general solution contains higher derivative corrections to this. We show that
in 2+1 dimensions, the first allowed correction to the Nambu-Goto action is
proportional to the squared curvature of the induced metric on the worldsheet.
In higher dimensions, there is a more complicated allowed correction that
appears at lower order than the curvature squared. We argue that this leading
correction is similar to, but not identical to, the one-loop determinant
(\sqrt{-h} R \Box^{-1} R) computed by Polyakov for the bosonic fundamental
string.
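For orientation, the static-gauge setup described above can be written out explicitly (standard conventions assumed, with the string tension set to $\ell_s^{-2}$). With worldsheet coordinates $\sigma^\alpha$ and transverse fluctuations $X^i$, the induced metric and Nambu-Goto action are

```latex
h_{\alpha\beta} = \eta_{\alpha\beta}
   + \partial_\alpha X^i \, \partial_\beta X^i ,
\qquad
S_{\mathrm{NG}} = -\frac{1}{\ell_s^2} \int d^2\sigma \, \sqrt{-\det h}
 = -\frac{1}{\ell_s^2} \int d^2\sigma \,
   \Bigl[\, 1 + \tfrac{1}{2}\, \partial_\alpha X^i \, \partial^\alpha X^i
        + O\bigl((\partial X)^4\bigr) \Bigr] ,
```

and the nonlinearly realized Lorentz symmetry constrains which higher-derivative corrections to this expansion are allowed.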
Diagonal deformations of thin center vortices and their stability in Yang-Mills theories
The importance of center vortices for the understanding of the confining
properties of SU(N) Yang-Mills theories is well established on the lattice.
However, in the continuum, there is a problem concerning the relevance of
center vortex backgrounds. They display the so-called Savvidy-Nielsen-Olesen
instability, associated with the gyromagnetic ratio of the
off-diagonal gluons.
In this work, we initially consider the usual definition of a {\it thin}
center vortex and rewrite it in terms of a local color frame in SU(N)
Yang-Mills theories. Then, we define a thick center vortex as a diagonal
deformation of the thin object. Besides the usual thick background profile,
this deformation also contains a frame defect coupled with the gyromagnetic
ratio, originating from the charged sector. As a consequence, the
analysis of stability is modified. In particular, we point out that the defect
should stabilize a vortex configuration formed by a pair of straight components
separated by an appropriate finite distance.
Physical realization of coupled Hilbert-space mirrors for quantum-state engineering
Manipulation of superpositions of discrete quantum states has a mathematical
counterpart in the motion of a unit-length statevector in an N-dimensional
Hilbert space. Any such statevector motion can be regarded as a succession of
two-dimensional rotations. But the desired statevector change can also be
treated as a succession of reflections, the generalization of Householder
transformations. In multidimensional Hilbert space such reflection sequences
offer more efficient procedures for statevector manipulation than do sequences
of rotations. We here show how such reflections can be designed for a system
with two degenerate levels - a generalization of the traditional two-state atom
- that allows the construction of propagators for angular momentum states. We
use the Morris-Shore transformation to express the propagator in terms of
Morris-Shore basis states and Cayley-Klein parameters, which allows us to
connect properties of laser pulses to Hilbert-space motion. Under suitable
conditions on the couplings and the common detuning, the propagators within
each set of degenerate states represent products of generalized Householder
reflections, with orthogonal vectors. We propose physical realizations of this
novel geometrical object with resonant, near-resonant and far-off-resonant
laser pulses. We give several examples of implementations in real atoms or
molecules.
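The generalized Householder reflection at the heart of this construction is the unitary $M(v;\phi) = I + (e^{i\phi} - 1)\,|v\rangle\langle v|$, reducing to the ordinary reflection $I - 2|v\rangle\langle v|$ at $\phi = \pi$. A quick numerical check of its defining properties (the vector below is arbitrary):

```python
import numpy as np

def generalized_householder(v, phi):
    """Generalized Householder reflection
    M(v; phi) = I + (exp(i*phi) - 1) |v><v|.
    At phi = pi this is the ordinary reflection I - 2|v><v|."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) + (np.exp(1j * phi) - 1.0) * np.outer(v, v.conj())

# A reflection in a 4-dimensional Hilbert space
v = np.array([1.0, 1.0j, 0.0, 1.0])
vn = v / np.linalg.norm(v)
M = generalized_householder(v, np.pi)
```

Because one reflection mixes all components along $|v\rangle$ at once, a statevector transformation that would take $O(N^2)$ two-dimensional rotations can be built from $O(N)$ reflections, which is the efficiency advantage the abstract refers to.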
AI Researchers, Video Games Are Your Friends!
If you are an artificial intelligence researcher, you should look to video
games as ideal testbeds for the work you do. If you are a video game developer,
you should look to AI for the technology that makes completely new types of
games possible. This chapter lays out the case for both of these propositions.
It asks the question "what can video games do for AI", and discusses how in
particular general video game playing is the ideal testbed for artificial
general intelligence research. It then asks the question "what can AI do for
video games", and lays out a vision for what video games might look like if we
had significantly more advanced AI at our disposal. The chapter is based on my
keynote at IJCCI 2015, and is written in an attempt to be accessible to a broad
audience. Published in Studies in Computational Intelligence, Volume 669,
Springer, 2017.
Delegating Quantum Computation in the Quantum Random Oracle Model
A delegation scheme allows a computationally weak client to use a server's
resources to help it evaluate a complex circuit without leaking any information
about the input (other than its length) to the server. In this paper, we
consider delegation schemes for quantum circuits, where we try to minimize the
quantum operations needed by the client. We construct a new scheme for
delegating a large circuit family, which we call "C+P circuits": circuits
composed of Toffoli gates and diagonal gates. Our scheme is
non-interactive, requires very little quantum computation from the client
(proportional to input length but independent of the circuit size), and can be
proved secure in the quantum random oracle model, without relying on additional
assumptions, such as the existence of fully homomorphic encryption. In practice
the random oracle can be replaced by an appropriate hash function (for
example, SHA-3) or block cipher (for example, AES).
This protocol allows a client to delegate the most expensive part of some
quantum algorithms, for example, Shor's algorithm. The previous protocols that
are powerful enough to delegate Shor's algorithm require either many rounds of
interactions or the existence of FHE. The protocol requires asymptotically
fewer quantum gates on the client side compared to running Shor's algorithm
locally.
To hide the inputs, our scheme uses an encoding that maps one input qubit to
multiple qubits. We then provide a novel generalization of classical garbled
circuits ("reversible garbled circuits") to allow the computation of Toffoli
circuits on this encoding. We also give a technique that can support the
computation of phase gates on this encoding.
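As a concrete picture of the gate set (purely illustrative; the paper's qubit encoding and garbling construction are not reproduced here), a C+P circuit interleaves Toffoli gates, which permute computational basis states and can be tracked classically, with diagonal gates, which only attach phases to basis amplitudes:

```python
import numpy as np

def toffoli(bits, c1, c2, t):
    """Toffoli on a classical bit tuple: flip bit t iff bits c1, c2 are 1."""
    bits = list(bits)
    if bits[c1] and bits[c2]:
        bits[t] ^= 1
    return tuple(bits)

def diagonal_gate(state, phases):
    """Diagonal gate on a statevector: multiply each basis amplitude
    by a phase depending only on its bitstring index."""
    return state * np.exp(1j * np.asarray(phases))

# Toffoli acts classically on basis states...
out1 = toffoli((1, 1, 0), 0, 1, 2)   # controls set -> target flips
out2 = toffoli((1, 0, 0), 0, 1, 2)   # a control unset -> no change

# ...while a diagonal gate only rotates phases of amplitudes
state = np.zeros(8, dtype=complex)
state[0b110] = 1.0
state = diagonal_gate(state, [0.0] * 6 + [np.pi / 4, 0.0])  # phase on |110>
```

This classical-permutation-plus-phases structure is what makes the family amenable to a garbled-circuit treatment on the encoded inputs.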
To prove the security of this protocol, we study key-dependent-message (KDM)
security in the quantum random oracle model. KDM security was not previously
studied in quantum settings.