Apparent horizons in D-dimensional Robinson-Trautman spacetime
We derive the higher-dimensional generalization of the Penrose-Tod equation
describing apparent horizons in Robinson-Trautman spacetimes. New results
concerning the existence and uniqueness of its solutions in four dimensions are
proven. Namely, previous results of Tod are generalized to a nonvanishing
cosmological constant.
Comment: 4 pages, 1 figure, to appear in the ERE 2008 conference proceedings, to
be published by AIP
The wave equation on axisymmetric stationary black hole backgrounds
Understanding the behaviour of linear waves on black hole backgrounds is a
central problem in general relativity, intimately connected with the nonlinear
stability of the black hole spacetimes themselves as solutions to the Einstein
equations--a major open question in the subject. Nonetheless, it is only very
recently that even the most basic boundedness and quantitative decay properties
of linear waves have been proven in a suitably general class of black hole
exterior spacetimes. This talk will review our current mathematical
understanding of waves on black hole backgrounds, beginning with the classical
boundedness theorem of Kay and Wald on exactly Schwarzschild exteriors and
ending with very recent boundedness and decay theorems (proven in collaboration
with Igor Rodnianski) on a wider class of spacetimes. This class of spacetimes
includes in particular slowly rotating Kerr spacetimes, but in the case of the
boundedness theorem is in fact much larger, encompassing general axisymmetric
stationary spacetimes whose geometry is sufficiently close to Schwarzschild and
whose Killing fields span the null generator of the horizon.
Comment: 20 pages, 6 figures, to appear in the proceedings of the Spanish
Relativity Meeting, Salamanca 2008
Accelerated expansion through interaction
Interactions between dark matter and dark energy with a given equation of
state are known to modify the cosmic dynamics. On the other hand, the strength
of these interactions is subject to strong observational constraints. Here we
discuss a model in which the transition from decelerated to accelerated
expansion of the Universe arises as a pure interaction phenomenon. Various
cosmological scenarios that describe a present stage of accelerated expansion,
like the LCDM model or a (generalized) Chaplygin gas, follow as special cases
for different interaction rates. This unifying view on the homogeneous and
isotropic background level is accompanied by a non-adiabatic perturbation
dynamics which can be seen as a consequence of a fluctuating interaction rate.
Comment: 4 pages, to appear in the Proceedings of the Spanish Relativity
Meeting ERE2008 in Salamanca, September 2008
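For context, such interactions are commonly parameterized through coupled energy-balance equations for dark matter and dark energy; the abstract does not fix any notation, so the symbols below are a standard choice rather than the authors' own:

```latex
\dot{\rho}_m + 3H\rho_m = Q, \qquad
\dot{\rho}_x + 3H(1+w)\rho_x = -Q ,
```

where $\rho_m$ and $\rho_x$ are the dark matter and dark energy densities, $w$ is the dark energy equation-of-state parameter, $H$ is the Hubble rate, and $Q$ is the interaction rate. The total energy density remains conserved, while a nonzero $Q$ transfers energy between the dark components and thereby modifies the deceleration-to-acceleration transition.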
The weak call-by-value λ-calculus is reasonable for both time and space
We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the
natural measures for time and space -- the number of beta-reduction steps and the size of the largest term
in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas
from 1984. More precisely, we show that, using those measures, Turing machines and the weak call-by-value
λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in
space for all computations terminating in (encodings of) 'true' or 'false'. The simulation yields that standard
complexity classes like P, NP, PSPACE, or EXP can be defined solely in terms of the λ-calculus, but does not
cover sublinear time or space.
Note that our measures still have the well-known size explosion property, where the space measure of
a computation can be exponentially bigger than its time measure. However, our result implies that this
exponential gap disappears once complexity classes are considered instead of concrete computations.
We consider this result a first step towards a solution for the long-standing open problem of whether the
natural measures for time and space of the λ-calculus are reasonable. Our proof for the weak call-by-value
λ-calculus is the first proof of reasonability (including both time and space) for a functional language based on
natural measures and enables the formal verification of complexity-theoretic proofs concerning complexity
classes, both on paper and in proof assistants.
The proof idea relies on a hybrid of two simulation strategies of reductions in the weak call-by-value
λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive
one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular,
all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the
overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing,
similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in
time, but the space consumption might require an additional logarithmic factor, which is essentially due to the
size of the pointers required for this strategy. Our main contribution is the construction and verification of a
space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and
a polynomial overhead in time.
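To make the contrast between the two strategies concrete, here is a minimal sketch (not the paper's construction; the term encoding and function names are invented for this illustration) of a weak call-by-value evaluator for closed λ-terms, written once with immediate substitution and once with environment-based sharing:

```python
# Closed λ-terms: ("var", x) | ("lam", x, body) | ("app", s, t).
# In weak call-by-value evaluation of closed terms, every substituted
# value is itself closed, so naive substitution cannot capture variables.

def subst(t, x, v):
    """Replace free occurrences of x in t by the closed value v."""
    tag = t[0]
    if tag == "var":
        return v if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
    return ("app", subst(t[1], x, v), subst(t[2], x, v))

def eval_subst(t):
    """Naive strategy: perform every substitution immediately.
    Constant space overhead, but time may blow up exponentially."""
    if t[0] != "app":
        return t                                  # values: var/lam
    f, a = eval_subst(t[1]), eval_subst(t[2])
    return eval_subst(subst(f[2], f[1], a))       # f is ("lam", x, body)

def eval_env(t, env=None):
    """Heap-like strategy: delay substitutions via shared environments
    (closures). Polynomial time, but pointer sizes cost extra space."""
    env = env or {}
    tag = t[0]
    if tag == "var":
        return env[t[1]]
    if tag == "lam":
        return ("clo", t[1], t[2], env)           # share env, copy nothing
    f = eval_env(t[1], env)
    a = eval_env(t[2], env)
    _, x, body, cenv = f
    return eval_env(body, {**cenv, x: a})

I = ("lam", "x", ("var", "x"))                    # identity combinator
assert eval_subst(("app", I, I)) == I
assert eval_env(("app", I, I))[0] == "clo"        # same value, as a closure
```

The paper's contribution is, roughly, a space-aware interleaving of these two regimes so that neither the exponential time of the first nor the logarithmic space factor of the second survives.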
The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space
We study the weak call-by-value λ-calculus as a model for
computational complexity theory and establish the natural measures for time and
space -- the number of beta-reductions and the size of the largest term in a
computation -- as reasonable measures with respect to the invariance thesis of
Slot and van Emde Boas [STOC~84]. More precisely, we show that, using those
measures, Turing machines and the weak call-by-value λ-calculus can
simulate each other within a polynomial overhead in time and a constant factor
overhead in space for all computations that terminate in (encodings of) 'true'
or 'false'. We consider this result as a solution to the long-standing open
problem, explicitly posed by Accattoli [ENTCS~18], of whether the natural
measures for time and space of the λ-calculus are reasonable, at least
in case of weak call-by-value evaluation.
Our proof relies on a hybrid of two simulation strategies of reductions in
the weak call-by-value λ-calculus by Turing machines, both of which are
insufficient if taken alone. The first strategy is the most naive one in the
sense that a reduction sequence is simulated precisely as given by the
reduction rules; in particular, all substitutions are executed immediately.
This simulation runs within a constant overhead in space, but the overhead in
time might be exponential. The second strategy is heap-based and relies on
structure sharing, similar to existing compilers of eager functional languages.
This strategy only has a polynomial overhead in time, but the space consumption
might require an additional logarithmic factor, which is essentially due to the
size of the pointers required for this strategy. Our main contribution is the
construction and verification of a space-aware interleaving of the two
strategies, which is shown to yield both a constant overhead in space and a
polynomial overhead in time.
Frequency-domain controller design by linear programming
In this thesis, a new framework to design controllers in the frequency domain is proposed. The method is based on shaping the open-loop transfer function in the Nyquist diagram. A line representing a lower approximation of the crossover frequency and a line representing a new linear robustness margin, which guarantees lower bounds for the classical robustness margins, are defined and used as constraints. A linear programming approach is proposed to tune fixed-order linearly parameterized controllers for stable single-input single-output linear time-invariant plants. Two optimization problems are proposed and solved by linear programming. In the first, the new robustness margin is maximized given a lower approximation of the crossover frequency; in the second, the closed-loop performance in terms of load disturbance rejection, output disturbance rejection and tracking is maximized subject to constraints on the new robustness margin. The method can directly handle multi-model systems. Moreover, the framework can be used directly with frequency-domain data, so it can also accommodate systems with frequency-domain uncertainties.

Using the same framework, an extension of the method is proposed to tune fixed-order linearly parameterized gain-scheduled controllers for stable single-input single-output linear parameter varying plants. This method directly computes a linear parameter varying controller from a linear parameter varying model, or from a set of frequency-domain data at different operating points, and no interpolation is needed. In terms of closed-loop performance, this approach leads to very good results. However, global stability cannot be guaranteed for fast parameter variations and should be analyzed a posteriori. Nevertheless, for certain classes of switched systems and linear parameter varying systems, it is also possible to guarantee stability within the design framework. This can be accomplished by adding constraints based on the phase difference of the characteristic polynomials of the closed-loop systems.

This frequency-domain methodology has been tested on numerous simulation examples and implemented experimentally on a high-precision double-axis positioning system. The results show the effectiveness and simplicity of the proposed methodologies.
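As a toy illustration of the linear-programming idea (a simplified sketch, not the thesis's exact formulation: the example plant, the PI structure, and the vertical margin line below are all assumptions), one can maximize a margin m such that the Nyquist curve of the open loop stays to the right of the line Re = -1 + m. Since the open loop is linear in the controller parameters, each frequency sample contributes one linear constraint:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical stable example plant G(s) = 1/((s + 1)(0.5 s + 1)),
# sampled on a frequency grid (the method works from such data directly).
w = np.logspace(-2, 2, 200)
G = 1.0 / ((1j * w + 1) * (0.5j * w + 1))

# A PI controller K(s) = kp + ki/s is linear in (kp, ki), hence so is
# the open loop L(jw) = K(jw) G(jw) = kp*phi1 + ki*phi2.
phi1 = G
phi2 = G / (1j * w)

# Keep the Nyquist curve right of the vertical line Re = -1 + m and
# maximize m (a crude stand-in for the thesis's sloped margin line):
#   Re{kp*phi1 + ki*phi2} >= -1 + m   at every grid frequency,
# which is linear in the decision variables (kp, ki, m).
A_ub = np.column_stack([-phi1.real, -phi2.real, np.ones_like(w)])
b_ub = np.ones_like(w)

# linprog minimizes, so use cost -m; the lower bound ki >= 1 stands in
# for a performance constraint (integral action), preventing the
# trivial solution kp = ki = 0.
res = linprog(c=[0.0, 0.0, -1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 10), (1, 10), (0, 1)])
kp, ki, m = res.x
```

For an open-loop stable plant, staying right of this line rules out encirclements of the critical point, so the closed loop is stable by the Nyquist criterion, and m lower-bounds the distance from the Nyquist curve to -1 along the real axis.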
The Interaction Rate in Holographic Models of Dark Energy
Observational data from supernovae type Ia, baryon acoustic oscillations, gas
mass fraction in galaxy clusters, and the growth factor are used to reconstruct
the interaction rate of the holographic dark energy model recently proposed
by Zimdahl and Pav\'{o}n [1] in the redshift interval . It shows a
reasonable behavior as it increases with expansion from a small or vanishing
value in the far past and begins decreasing at recent times. This suggests that
the equation of state parameter of dark energy does not cross the phantom
divide line.
Comment: 8 pages, 2 figures. Key words: cosmology, holography, late
accelerated expansion, dark energy. To appear in the Proceedings of the
Spanish Relativity Meeting held in Salamanca (Spain) in September 2008. Uses
AIP style
On the dark energy rest frame and the CMB
Dark energy is usually parametrized as a perfect fluid with negative pressure
and a certain equation of state. Besides, it is supposed to interact very
weakly with the rest of the components of the universe and, as a consequence,
there is no reason to expect it to have the same large-scale rest frame as
matter and radiation. Thus, apart from its equation of state and its energy
density one should also consider its velocity as a free parameter
to be determined by observations. This velocity defines a cosmological
preferred frame, so the universe becomes anisotropic and, therefore, the CMB
temperature fluctuations will be affected, modifying mainly the dipole and the
quadrupole.
Comment: 4 pages. Contribution to the proceedings of the Spanish Relativity
Meeting 2008, Salamanca, Spain, 15-19 September 2008
Exact -cosmological model coming from the request of the existence of a Noether symmetry
We present an -cosmological model with an exact analytic solution,
coming from the request of the existence of a Noether symmetry, which is able
to describe a dust-dominated decelerated phase before the current accelerated
phase of the universe.
Comment: 4 pages, 2 figures. Contribution to the proceedings of the Spanish
Relativity Meeting 2008, Salamanca, Spain, 15-19 September 2008
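For context, the "request of the existence of a Noether symmetry" conventionally means (in the Noether symmetry approach used in this literature; the symbols below are the standard ones, not taken from the abstract) that the point-like cosmological Lagrangian $\mathcal{L}(q^i,\dot{q}^i)$ admits a vector field $X$ along which its Lie derivative vanishes:

```latex
X = \alpha^i(q)\,\frac{\partial}{\partial q^i}
  + \dot{\alpha}^i(q)\,\frac{\partial}{\partial \dot{q}^i},
\qquad
L_X \mathcal{L}
  = \alpha^i \frac{\partial \mathcal{L}}{\partial q^i}
  + \dot{\alpha}^i \frac{\partial \mathcal{L}}{\partial \dot{q}^i}
  = 0 ,
```

which, when it holds, yields the conserved charge $\Sigma_0 = \alpha^i\,\partial\mathcal{L}/\partial\dot{q}^i$ that can be used to integrate the dynamics exactly.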
Application of initial data sequences to the study of Black Hole dynamical trapping horizons
Non-continuous "jumps" of Apparent Horizons occur generically in 3+1 (binary)
black hole evolutions. The dynamical trapping horizon framework suggests a
spacetime picture in which these "Apparent Horizon jumps" are understood as
spatial cuts of a single spacetime hypersurface foliated by (compact)
marginally outer trapped surfaces. We present here some work in progress which
makes use of uni-parametric sequences of (axisymmetric) binary black hole
initial data for exploring the plausibility of this spacetime picture. The
modelling of Einstein evolutions by sequences of initial data has proved to be
a successful methodological tool in other settings for the understanding of
certain qualitative features of evolutions in restricted physical regimes.
Comment: Contribution to the proceedings volume of the Spanish Relativity
Meeting 2008: Physics and Mathematics of Gravitation, Salamanca, Spain, 15-19
Sep 2008