Inertial frames without the relativity principle
Ever since the work of von Ignatowsky circa 1910 it has been known (if not
always widely appreciated) that the relativity principle, combined with the
basic and fundamental physical assumptions of locality, linearity, and
isotropy, leads almost uniquely to either the Lorentz transformations of
special relativity or to Galileo's transformations of classical Newtonian
mechanics. Thus, if one wishes to entertain the possibility of Lorentz symmetry
breaking within the context of the class of local physical theories, then it
seems likely that one will have to abandon (or at the very least grossly
modify) the relativity principle. Working within the framework of local
physics, we reassess the notion of spacetime transformations between inertial
frames in the absence of the relativity principle, arguing that significant and
nontrivial physics can still be extracted as long as the transformations are at
least linear. An interesting technical aspect of the analysis is that the
transformations now form a groupoid/pseudo-group --- it is this technical point
that permits one to evade the von Ignatowsky argument. Even in the absence of a
relativity principle we can nevertheless deduce clear and compelling rules for
the transformation of space and time, rules for the composition of
3-velocities, and rules for the transformation of energy and momentum. As part
of the analysis we identify two particularly elegant and physically compelling
models implementing "minimalist" violations of Lorentz invariance --- in the
first of these minimalist models all Lorentz violations are confined to
carefully delineated particle physics sub-sectors, while the second minimalist
Lorentz-violating model depends on one free function of absolute velocity, but
otherwise preserves as much as possible of standard Lorentz invariant physics.
Comment: V1: 42 pages. V2: now 43 pages; added 8 references, added brief
discussions of Carroll kinematics and of the Robertson-Mansouri-Sexl
framework, plus various minor clarifications. V3: now 51 pages; added another
34 references; more discussion of DSR and relative locality; various
clarifications and extensions; this version accepted for publication in JHEP.
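For orientation, the von Ignatowsky result referred to above can be summarized
in one formula. Under the relativity principle plus locality, linearity, and
isotropy, a boost with velocity $v$ is forced into a one-parameter family (a
standard textbook reconstruction, not an equation quoted from the paper):

```latex
t' = \gamma\,(t - K v\, x), \qquad
x' = \gamma\,(x - v\, t), \qquad
\gamma = \frac{1}{\sqrt{1 - K v^{2}}},
```

with a single universal constant $K$: the choice $K = 1/c^{2}$ yields the
Lorentz transformations, while $K = 0$ yields the Galilean ones. Abandoning
the relativity principle removes the group structure that forces $K$ to be
observer-independent, which is how the groupoid/pseudo-group structure
mentioned in the abstract arises.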
A unified geometric framework for boundary charges and dressings: non-Abelian theory and matter
Boundaries in gauge theories are a delicate issue. Arbitrary boundary choices
enter the calculation of charges via Noether's second theorem, obstructing the
assignment of unambiguous physical charges to local gauge symmetries. It is
tempting to replace the arbitrary boundary choice with new degrees of freedom.
Concretely, however, such boundary degrees of freedom are spurious---i.e. they
are not part of the original field content of the theory---and have to
disappear upon gluing. How should we fit them into what we know about field
theory? We resolve these issues in a unified and geometric manner, by
introducing a connection 1-form, here denoted $\varpi$, in the field-space of
Yang-Mills theory. Using this geometric tool, a modified version of symplectic
geometry---here called `horizontal'---is
possible. Independently of boundary conditions, this formalism endows each
region with a physical notion of charge: the horizontal Noether charge.
horizontal gauge charges always vanish, while global charges still arise for
reducible configurations characterized by global symmetries. The field-content
itself is used as a reference frame to distinguish `gauge' and `physical'; no
new degrees of freedom, such as group-valued edge modes, are required.
Different choices of reference fields give different connection forms, which
are cousins of gauge fixings such as the Higgs-unitary and Coulomb gauges. But
the formalism extends well beyond gauge fixings, for instance by avoiding the
Gribov problem. For one choice of connection, would-be Goldstone modes arising
from the condensation of matter degrees of freedom play precisely the role of
the known group-valued edge modes, but here they arise as preferred coordinates
in field space, rather than as new fields. For another choice, in the Abelian
case, the connection recovers the Dirac dressing of the electron.
Comment: 71 pages, 3 appendices, 9 figures. Summary of the results at the
beginning of the paper. v2: numerous improvements in the presentation, and
introduction of new references, taking colleague feedback into account.
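For readers unfamiliar with field-space connection forms, the defining
properties can be sketched as follows. Writing $\varpi$ for the connection
1-form on field space, $\xi$ for a (field-independent) gauge parameter, and
$\xi^{\#}$ for the vector field it induces on field space, the generic
principal-connection axioms read (this is the standard structure such a
construction rests on, not formulas quoted from the paper):

```latex
\varpi(\xi^{\#}) = \xi, \qquad
\mathbb{L}_{\xi^{\#}}\,\varpi = [\varpi, \xi],
```

and the associated horizontal differential subtracts the pure-gauge part of
any field-space variation, schematically
$\mathrm{d}_{H}\phi = \mathrm{d}\phi - \varpi^{\#}(\phi)$, where
$\varpi^{\#}(\phi)$ denotes the infinitesimal gauge transformation of $\phi$
with parameter $\varpi$. The `horizontal' symplectic geometry of the abstract
is then built from $\mathrm{d}_{H}$ instead of $\mathrm{d}$.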
Zero Knowledge Protocols
In this day and age, it is commonplace to spend part of our day on the Internet. Whether to check e-mail, purchase goods, manage a bank account, or merely browse interesting sites, we rely on certain security measures to keep personal information safe from unwanted outsiders. Within the field of cryptography there are many techniques and algorithms that have provided top-notch security for our methods of communication today, yet as technology advances and as loopholes are found, we are constantly looking for novel ways to protect our information. Introduced approximately 25 years ago by Goldwasser, Micali, and Rackoff, zero-knowledge protocols seek to do just that. This paper will explore these protocols, their application to NP-complete problems (problems for which no efficient solution method is known), and their use in modern-day cryptosystems.
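The flavor of an interactive zero-knowledge protocol can be conveyed with a
toy simulation of the classic square-root-mod-$n$ identification scheme, in
the Goldwasser-Micali-Rackoff tradition (the modulus, secret, and round count
below are illustrative toy choices, not secure parameters):

```python
# Toy sketch of a zero-knowledge proof of knowledge of a square root
# modulo a composite n. All numbers are tiny illustrations, NOT secure.
import random

n = 101 * 103          # public modulus (toy size; real n would be ~2048 bits)
x = 4242               # prover's secret
y = pow(x, 2, n)       # public value: y = x^2 mod n

def prove_one_round():
    """One round: an honest prover always convinces the verifier; a
    cheater without x would survive the round with probability 1/2."""
    r = random.randrange(1, n)      # prover's fresh randomness
    a = pow(r, 2, n)                # commitment: a = r^2 mod n
    b = random.randrange(2)         # verifier's challenge bit
    z = (r * pow(x, b, n)) % n      # response: z = r * x^b mod n
    # verifier's check: z^2 == a * y^b (mod n)
    return pow(z, 2, n) == (a * pow(y, b, n)) % n

# Repeating k rounds drives a cheating prover's success probability to
# 2^-k, while each transcript (a, b, z) can be simulated without knowing
# x, which is the zero-knowledge property.
accepted = all(prove_one_round() for _ in range(20))
```

The soundness and zero-knowledge trade-off is visible in the single challenge
bit: a cheater can prepare for either value of `b` in advance, but not both.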
Quantum Error Correction with the Toric-GKP Code
We examine the performance of the single-mode GKP code and its concatenation
with the toric code for a noise model of Gaussian shifts, or displacement
errors. We show how one can optimize the tracking of errors in repeated noisy
error correction for the GKP code. We do this by examining the
maximum-likelihood problem for this setting and its mapping onto a 1D Euclidean
path-integral modeling a particle in a random cosine potential. We demonstrate
the efficiency of a minimum-energy decoding strategy as a proxy for the path
integral evaluation. In the second part of this paper, we analyze and
numerically assess the concatenation of the GKP code with the toric code. When
toric code measurements and GKP error correction measurements are perfect, we
find that using the GKP error information improves the toric code threshold.
When only the GKP error correction measurements are perfect, we still observe
a finite threshold. In the more realistic setting when all error
information is noisy, we show how to represent the maximum likelihood decoding
problem for the toric-GKP code as a 3D compact QED model in the presence of a
quenched random gauge field, an extension of the random-plaquette gauge model
for the toric code. We present a new decoder for this problem which shows the
existence of a noise threshold in the shift-error standard deviation for toric code measurements, data errors, and GKP ancilla errors.
If the errors only come from having imperfect GKP states, this corresponds to
states with just 4 photons or more. Our last result is a no-go result for
linear oscillator codes, encoding oscillators into oscillators. For the
Gaussian displacement error model, we prove that encoding corresponds to
squeezing the shift errors. This shows that linear oscillator codes are useless
for quantum information protection against Gaussian shift errors.
Comment: 50 pages, 14 figures.
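The single-mode GKP correction that the first part of the abstract builds on
can be illustrated with a toy Monte Carlo (my own sketch of the standard
textbook picture, not the paper's decoder): a shift error $u$ is measured
modulo $\sqrt{\pi}$, the decoder shifts back to the nearest multiple of
$\sqrt{\pi}$, and a logical error occurs when that multiple is odd.

```python
# Toy Monte Carlo for ideal single-mode GKP error correction against
# Gaussian shift errors (illustrative sketch, not the toric-GKP decoder).
import math
import random

def gkp_logical_error_rate(sigma, trials=200_000, seed=0):
    """Probability that correcting a Gaussian shift of std dev `sigma`
    back to the nearest multiple of sqrt(pi) lands on an odd multiple,
    i.e. applies an unwanted logical operator."""
    rng = random.Random(seed)
    root_pi = math.sqrt(math.pi)
    errors = 0
    for _ in range(trials):
        u = rng.gauss(0.0, sigma)     # physical shift error
        k = round(u / root_pi)        # decoder's guess, in units of sqrt(pi)
        if k % 2 != 0:                # odd multiple = residual logical shift
            errors += 1
    return errors / trials

# Small sigma: shifts almost never reach sqrt(pi)/2, so failures are rare;
# large sigma: the modular syndrome becomes uninformative.
low, high = gkp_logical_error_rate(0.2), gkp_logical_error_rate(0.6)
```

Tracking this analog syndrome information across rounds, rather than
discarding it, is what the maximum-likelihood analysis in the abstract
optimizes.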
Ab initio atomistic thermodynamics and statistical mechanics of surface properties and functions
Previous and present "academic" research aiming at atomic scale understanding
is mainly concerned with the study of individual molecular processes possibly
underlying materials science applications. Appealing properties of an
individual process are then frequently discussed in terms of their direct
importance for the envisioned material function; or, reciprocally, the
function of materials is somehow believed to be understandable in terms of
essentially one prominent elementary process. What is often overlooked in
this approach is
that in macroscopic systems of technological relevance typically a large number
of distinct atomic scale processes take place. Which of them are decisive for
observable system properties and functions is then not only determined by the
detailed individual properties of each process alone, but in many, if not most
cases also the interplay of all processes, i.e. how they act together, plays a
crucial role. For a "predictive materials science modeling with microscopic
understanding", a description that treats the statistical interplay of a large
number of microscopically well-described elementary processes must therefore be
applied. Modern electronic structure theory methods such as DFT have become a
standard tool for the accurate description of individual molecular processes.
Here, we discuss the present status of emerging methodologies which attempt to
achieve a (hopefully seamless) match of DFT with concepts from statistical
mechanics or thermodynamics, in order to also address the interplay of the
various molecular processes. The new quality of such techniques, and the novel
insights that can be gained from them, are illustrated by showing how they
allow the description of crystal surfaces in contact with realistic gas-phase
environments.
Comment: 24 pages including 17 figures; related publications can be found at
http://www.fhi-berlin.mpg.de/th/paper.htm
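The core construction behind ab initio atomistic thermodynamics can be
sketched in a few lines: DFT total energies of candidate surface terminations
are converted into a surface free energy as a function of the gas-phase
chemical potential, and the termination with the lowest free energy wins. All
energies, areas, and termination names below are hypothetical placeholders,
not DFT results.

```python
# Sketch: stability of surface terminations vs. oxygen chemical potential,
# in the spirit of ab initio atomistic thermodynamics.
# gamma(dmu) = (E_surf - N_O * dmu) / A, with HYPOTHETICAL numbers.

# (name, relative slab energy in eV, excess O atoms per surface cell)
terminations = [
    ("clean",         0.0, 0),
    ("O-adlayer",     1.5, 1),
    ("surface-oxide", 4.0, 3),
]
AREA = 10.0  # surface cell area in Angstrom^2 (placeholder)

def surface_free_energy(e_surf, n_o, dmu_o, area=AREA):
    """Surface free energy per unit area; dmu_o is the O chemical
    potential relative to a fixed reference such as 1/2 E(O2)."""
    return (e_surf - n_o * dmu_o) / area

def stable_termination(dmu_o):
    """Termination with the lowest surface free energy at this dmu_o."""
    return min(terminations,
               key=lambda t: surface_free_energy(t[1], t[2], dmu_o))[0]

# O-poor conditions favor the clean surface, O-rich conditions the oxide:
poor, rich = stable_termination(-2.0), stable_termination(2.0)
```

Scanning the chemical potential (equivalently, temperature and pressure via
the ideal-gas relation) then yields the kind of surface phase diagram the
abstract alludes to.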