3,251 research outputs found
Constraining dark energy
In this paper we propose a mechanism that protects theories violating a
holographic bound suggested in arXiv:1203.5476 from developing accelerated
expansion. The mechanism builds on work on transplanckian physics and a
non-trivial choice of vacuum states. If correct, it lends further support to
the possibility of detectable signatures of new physics in the CMBR.

Comment: 8 pages. arXiv admin note: text overlap with arXiv:astro-ph/0606474.
Minor misprints corrected.
On Thermalization in de Sitter Space
We discuss thermalization in de Sitter space and argue, from two different
points of view, that the typical time needed for thermalization is of order
$R_{dS}^{3}/\ell_{p}^{2}$, where $R_{dS}$ is the radius of the de Sitter space in question.
This time scale gives plenty of room for non-thermal deviations to survive
during long periods of inflation. We also speculate in more general terms on
the meaning of the time scale for finite quantum systems inside isolated boxes,
and comment on the relation to the Poincar\'{e} recurrence time.

Comment: 14 pages, 2 figures, latex, references added. Improved discussion in
section 3 added.
Using a bootstrap method to choose the sample fraction in tail index estimation
Tail index estimation depends for its accuracy on a precise choice of the sample fraction, i.e. the number of extreme order statistics on which the estimation is based. A complete solution to the sample fraction selection problem is given by means of a two-step subsample bootstrap method. This method adaptively determines the sample fraction that minimizes the asymptotic mean squared error. Unlike previous methods, prior knowledge of the second order parameter is not required. In addition, we are able to dispense with the need for a prior estimate of the tail index which already converges roughly at the optimal rate. The only arbitrary choice of parameters is the number of Monte Carlo replications.

Keywords: tail index; bias; bootstrap; mean squared error; optimal extreme sample fraction
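The two-step subsample bootstrap described above can be sketched as follows. This is a minimal illustration in the spirit of the method, not the paper's exact procedure: the Hill estimator, the criterion $Q(k) = (M(k) - 2\gamma(k)^2)^2$, the subsample sizes, the grid of candidate $k$ values, and the way the two subsample minimizers are combined are all simplified, assumed choices (in particular, the exact rate-correction factor is omitted).

```python
import numpy as np

def q_criterion(s, ks):
    # s: log-data sorted in descending order; ks: candidate numbers of order
    # statistics. Returns Q(k) = (M(k) - 2*gamma(k)^2)^2 for each k, where
    # gamma(k) is the Hill estimator and M(k) the second empirical log-moment;
    # Q vanishes asymptotically at the MSE-optimal sample fraction.
    c1, c2 = np.cumsum(s), np.cumsum(s * s)
    m1 = c1[ks - 1] / ks - s[ks]                                    # Hill estimator
    m2 = c2[ks - 1] / ks - 2 * s[ks] * c1[ks - 1] / ks + s[ks] ** 2  # 2nd log-moment
    return (m2 - 2 * m1 * m1) ** 2

def bootstrap_q(x, n_sub, ks, B, rng):
    # Average Q over B subsample-bootstrap resamples of size n_sub < len(x).
    q = np.zeros(len(ks))
    for _ in range(B):
        s = np.sort(np.log(rng.choice(x, size=n_sub, replace=True)))[::-1]
        q += q_criterion(s, ks)
    return q / B

def choose_k(x, B=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    n1 = int(n ** 0.9)      # first subsample size: an assumed heuristic choice
    n2 = n1 * n1 // n       # second subsample size, n2 = n1^2 / n
    ks1, ks2 = np.arange(2, n1 // 2), np.arange(2, n2 // 2)
    k1 = ks1[np.argmin(bootstrap_q(x, n1, ks1, B, rng))]
    k2 = ks2[np.argmin(bootstrap_q(x, n2, ks2, B, rng))]
    # Combine the two subsample minimizers; only the leading k1^2/k2 term of
    # the combination formula is kept here.
    return int(np.clip(k1 * k1 // k2, 2, n - 2))

# Usage on standard Pareto data with tail index gamma = 1/alpha = 0.5:
rng = np.random.default_rng(1)
x = rng.pareto(2.0, size=2000) + 1.0
k = choose_k(x)
s = np.sort(np.log(x))[::-1]
gamma_hat = np.mean(s[:k] - s[k])   # Hill estimate at the chosen sample fraction
```

On exact Pareto data the Hill estimator is unbiased at every sample fraction, so the chosen `k` mainly trades off variance; the second-order bias that the criterion is designed to balance only shows up for distributions that deviate from a pure power law in the tail.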
Transplanckian energy production and slow roll inflation
In this paper we investigate how the energy density due to a non-standard
choice of initial vacuum affects the expansion of the universe during
inflation. To do this we introduce source terms in the Friedmann equations,
making sure that we respect the relation between gravity and thermodynamics. We
find that the energy production automatically implies a slowly rolling
cosmological constant. Hence we also conclude that there is no well defined
value for the cosmological constant in the presence of sources. We speculate
that a non-standard vacuum can provide slow roll inflation on its own.

Comment: 16 pages, 2 figures, version 2: minor corrections to section 4 and
references added.
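Schematically, the setup described above amounts to a Friedmann constraint supplemented by a sourced continuity equation. This is a generic sketch with an assumed source term $\Gamma$; the paper's specific sources, fixed by the gravity–thermodynamics relation, are not reproduced here:

```latex
\begin{aligned}
  H^{2} &= \frac{8\pi G}{3}\,\rho ,\\
  \dot{\rho} + 3H(\rho + p) &= \Gamma ,
\end{aligned}
```

where $H$ is the Hubble rate and $\Gamma \neq 0$ stands in for the energy production due to the non-standard vacuum; setting $\Gamma = 0$ recovers the standard conservation law.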
Holographic Superconductors with Lifshitz Scaling
Black holes in asymptotically Lifshitz spacetime provide a window onto finite
temperature effects in strongly coupled Lifshitz models. We add a Maxwell gauge
field and charged matter to a recently proposed gravity dual of 2+1 dimensional
Lifshitz theory. This gives rise to charged black holes with scalar hair, which
correspond to the superconducting phase of holographic superconductors with z >
1 Lifshitz scaling. Along the way we analyze the global geometry of static,
asymptotically Lifshitz black holes at arbitrary critical exponent z > 1. In
all known exact solutions there is a null curvature singularity in the black
hole region, and, by a general argument, the same applies to generic Lifshitz
black holes.

Comment: 23 pages, 4 figures; v2: added references; v3: matches published
version.
Practical dependent type checking using twin types
People writing proofs or programs in dependently typed languages can omit some function arguments in order to decrease the code size and improve readability. Type checking such a program involves filling in each of these implicit arguments in a type-correct way. This is typically done using some form of unification.

One approach to unification, taken by Agda, involves sometimes starting to unify terms before their types are known to be equal: in some cases one can make progress on unifying the terms, and then use information gleaned in this way to unify the types. This flexibility allows Agda to solve implicit arguments that are not found by several other systems. However, Agda's implementation is buggy: sometimes the solutions chosen are ill-typed, which can cause the type checker to crash.

With Gundry and McBride's twin variable technique one can also start to unify terms before their types are known to be equal, and furthermore this technique is accompanied by correctness proofs. However, so far this technique has not been tested in practice as part of a full type checker.

We have reformulated Gundry and McBride's technique without twin variables, using only twin types, with the aim of making the technique easier to implement in existing type checkers (in particular Agda). We have also introduced a type-agnostic syntactic equality rule that seems to be useful in practice. The reformulated technique has been tested in a type checker for a tiny variant of Agda. This type checker handles at least one example that Coq, Idris, Lean and Matita cannot handle, and does so in time and space comparable to that used by Agda. This suggests that the reformulated technique is usable in practice.