Consistent Modeling of Velocity Statistics and Redshift-Space Distortions in One-Loop Perturbation Theory
The peculiar velocities of biased tracers of the cosmic density field contain
important information about the growth of large scale structure and generate
anisotropy in the observed clustering of galaxies. Using N-body data, we show
that velocity expansions for halo redshift-space power spectra are converged at
the percent-level at perturbative scales for most line-of-sight angles
when the first three pairwise velocity moments are included, and that the third
moment is well-approximated by a counterterm-like contribution. We compute
these pairwise-velocity statistics in Fourier space using both Eulerian and
Lagrangian one-loop perturbation theory, with a cubic bias scheme and a
complete set of counterterms and stochastic contributions. We compare the
models and show that our models fit both real-space velocity statistics and
redshift-space power spectra for both halos and a mock sample of galaxies at
sub-percent level on perturbative scales using consistent sets of parameters,
making them appealing choices for the upcoming era of spectroscopic,
peculiar-velocity and kSZ surveys.
Comment: 63 pages, 11 figures, updated to match version accepted by JCAP
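The moment expansion underlying the convergence statement above is standard in the redshift-space-distortion literature (it is not quoted in the abstract itself): the anisotropic power spectrum is a series in moments of the pairwise velocity, and the claim is that truncating after the first three moments, with the third modeled by a counterterm-like contribution, is accurate at the percent level on perturbative scales.

```latex
% Fourier-space moment expansion of the redshift-space power spectrum.
% mu is the cosine of the angle between the wavevector k and the line of
% sight, and \tilde{P}_n(k,\mu) is the spectrum of the n-th pairwise-velocity
% moment; \tilde{P}_0 is the real-space power spectrum.
P_s(k,\mu) \;=\; \sum_{n=0}^{\infty} \frac{(i k \mu)^n}{n!}\,
  \tilde{P}_n(k,\mu)
```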
Higher-order non-symmetric counterterms in pure Yang-Mills theory
We analyze the restoration of the Slavnov-Taylor (ST) identities for pure
massless Yang-Mills theory in the Landau gauge within the BPHZL renormalization
scheme with IR regulator. We obtain the most general form of the action-like
part of the symmetric regularized action, obeying the relevant ST identities
and all other relevant symmetries of the model, to all orders in the loop
expansion. We also give a cohomological characterization of the fulfillment of
BPHZL IR power-counting criterion, guaranteeing the existence of the limit
where the IR regulator goes to zero. The technique analyzed in this paper is
needed in the study of the restoration of the ST identities for those models,
like the MSSM, where massless particles are present and no invariant
regularization scheme is known to preserve the full set of ST identities of the
theory.
Comment: Final version published in the journal
An evolutionary developmental approach to cultural evolution
Evolutionary developmental theories in biology see the processes and organization of organisms as crucial for understanding the dynamic behavior of organic evolution. Darwinian forces are seen as necessary but not sufficient for explaining observed evolutionary patterns. We here propose that the same arguments apply with even greater force to culture vis-à-vis cultural evolution. In order not to argue entirely in the abstract, we demonstrate the proposed approach by combining a set of different models into a provisional synthetic theory, and by applying this theory to a number of short case studies. What emerges is a set of concepts and models that allow us to consider entirely new types of explanations for the evolution of cultures. For example, we see how feedback relations - both within societies and between societies and their ecological environment - have the power to shape evolutionary history in profound ways. The ambition here is not to produce a definite statement on what such a theory should look like but rather to propose a starting point along with an argumentation and demonstration of its potential.
Lazy Model Expansion: Interleaving Grounding with Search
Finding satisfying assignments for the variables involved in a set of
constraints can be cast as a (bounded) model generation problem: search for
(bounded) models of a theory in some logic. The state-of-the-art approach for
bounded model generation for rich knowledge representation languages, like ASP,
FO(.) and Zinc, is ground-and-solve: reduce the theory to a ground or
propositional one and apply a search algorithm to the resulting theory.
An important bottleneck is the blowup of the size of the theory caused by the
reduction phase. Lazily grounding the theory during search is a way to overcome
this bottleneck. We present a theoretical framework and an implementation in
the context of the FO(.) knowledge representation language. Instead of
grounding all parts of a theory, justifications are derived for some parts of
it. Given a partial assignment for the grounded part of the theory and valid
justifications for the formulas of the non-grounded part, the justifications
provide a recipe to construct a complete assignment that satisfies the
non-grounded part. When a justification for a particular formula becomes
invalid during search, a new one is derived; if that fails, the formula is
split into a part to be grounded and a part that can be justified.
The theoretical framework captures existing approaches for tackling the
grounding bottleneck such as lazy clause generation and grounding-on-the-fly,
and presents a generalization of the 2-watched literal scheme. We present an
algorithm for lazy model expansion and integrate it in a model generator for
FO(ID), a language extending first-order logic with inductive definitions. The
algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base
System IDP. Experimental results illustrate the power and generality of the
approach.
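The justification-maintenance loop described above can be sketched in a few lines of Python. This is a toy illustration, not the FO(.) machinery of IDP: every name is invented, a deferred formula is reduced to a plain disjunction of literals, and a "justification" is simply one literal of it that the search has not yet made false.

```python
# Toy sketch of lazy grounding with justifications.  Each deferred formula
# is a disjunction of DIMACS-style signed-integer literals; its current
# justification is one literal not assigned False.  When an assignment
# invalidates the justification we try to re-derive one; if that fails, the
# formula is removed from the deferred set and grounded (added to the clause
# store).  A real solver would then propagate or backtrack on that clause.

def find_justification(clause, assignment):
    """Return a literal of the clause not currently assigned False, if any."""
    for lit in clause:
        if assignment.get(abs(lit), None) != (lit < 0):
            return lit
    return None

def assign(var, value, deferred, grounded, assignment, justif):
    """Extend the assignment; repair or ground affected deferred formulas."""
    assignment[var] = value
    for clause in list(deferred):
        j = justif[id(clause)]
        if assignment.get(abs(j)) == (j < 0):        # justification invalidated
            new_j = find_justification(clause, assignment)
            if new_j is not None:
                justif[id(clause)] = new_j           # re-derived a justification
            else:
                deferred.remove(clause)              # give up: ground the formula
                grounded.append(clause)

# Deferred formula: x1 or x2 or x3, initially justified by x1.
deferred, grounded, assignment, justif = [[1, 2, 3]], [], {}, {}
justif[id(deferred[0])] = find_justification(deferred[0], assignment)

assign(1, False, deferred, grounded, assignment, justif)  # re-justified by x2
assign(2, False, deferred, grounded, assignment, justif)  # re-justified by x3
assign(3, False, deferred, grounded, assignment, justif)  # no justification left
print(grounded)  # [[1, 2, 3]]
```

The payoff in the real system is that formulas whose justifications survive the whole search are never grounded at all, which is what defeats the grounding blowup.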
A synthetic axiomatization of Map Theory
This paper presents a substantially simplified axiomatization of Map Theory and proves the consistency of this axiomatization in ZFC under the assumption that there exists an inaccessible ordinal. Map Theory axiomatizes lambda calculus plus Hilbert's epsilon operator. All theorems of ZFC set theory, including the axiom of foundation, are provable in Map Theory, and if one omits Hilbert's epsilon operator from Map Theory then one is left with a computer programming language. Map Theory fulfills Church's original aim of introducing lambda calculus. Map Theory is suited for reasoning about classical mathematics as well as computer programs. Furthermore, Map Theory is suited for eliminating the barrier between classical mathematics and computer science rather than just supporting the two fields side by side. Map Theory axiomatizes a universe of "maps", some of which are "wellfounded". The class of wellfounded maps in Map Theory corresponds to the universe of sets in ZFC. The first version MT0 of Map Theory had axioms which populated the class of wellfounded maps, much like the power set axiom et al. populates the universe of ZFC. The new axiomatization MT of Map Theory is "synthetic" in the sense that the class of wellfounded maps is defined inside Map Theory rather than being introduced through axioms. In the paper we define the notion of kappa- and kappasigma-expansions and prove that if sigma is the smallest strongly inaccessible cardinal then canonical kappasigma-expansions are models of MT (which proves the consistency). Furthermore, in the appendix, we prove that canonical omega-expansions are fully abstract models of the computational part of Map Theory.
Disentangling the f(R) - Duality
Motivated by UV realisations of Starobinsky-like inflation models, we study
generic exponential plateau-like potentials to understand whether an exact
f(R)-formulation may still be obtained when the asymptotic shift-symmetry of
the potential is broken for larger field values. Potentials which break the
shift symmetry with rising exponentials at large field values only allow for
corresponding f(R)-descriptions with a leading order term R^n with n ≠ 2,
regardless of whether the duality is exact or approximate. The R^2-term
survives as part of a series expansion of the function f(R) and thus cannot
maintain a plateau for all field values. We further find a lean and
instructive way to obtain a function f(R) describing inflation which breaks
the shift symmetry with a monomial, and corresponds to effectively
logarithmic corrections to an R^2 model. These examples emphasise that
higher order terms in f(R)-theory may not be neglected if they are present
at all. Additionally, we relate the f(R) function corresponding to chaotic
inflation to a more general Jordan frame set-up. In addition, we consider
f(R)-duals of two given UV examples, both from supergravity and string
theory. Finally, we outline the CMB phenomenology of these models which show
effects of power suppression at low-ℓ.
Comment: 30 pages, 2 figures; v2: added refs, 1 figure, and minor
clarifications; to appear in JCAP
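For background, the best-known instance of the duality this abstract studies is standard textbook material (not quoted from the paper): the Starobinsky model in the Jordan frame maps, under a conformal transformation to the Einstein frame, to an exponential plateau potential for the scalaron.

```latex
% Starobinsky model in the Jordan frame:
f(R) \;=\; R + \frac{R^2}{6M^2}
% Einstein-frame scalaron potential after the conformal transformation,
% exhibiting the exponential plateau as \varphi \to \infty:
V(\varphi) \;=\; \tfrac{3}{4}\, M^2 M_{\mathrm{Pl}}^2
  \left(1 - e^{-\sqrt{2/3}\,\varphi/M_{\mathrm{Pl}}}\right)^{2}
```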
Algorithmically Efficient Syntactic Characterization of Possibility Domains
We call domain any arbitrary subset of a Cartesian power of the set {0,1} when we think of it as reflecting abstract rationality restrictions on vectors of two-valued judgments on a number of issues. In Computational Social Choice Theory, and in particular in the theory of judgment aggregation, a domain is called a possibility domain if it admits a non-dictatorial aggregator, i.e. if for some k there exists a unanimous (idempotent) function F: D^k -> D which is not a projection function. We prove that a domain is a possibility domain if and only if there is a propositional formula of a certain syntactic form, sometimes called an integrity constraint, whose set of satisfying truth assignments, or models, comprises the domain. We call possibility integrity constraints the formulas of the specific syntactic type we define. Given a possibility domain D, we show how to construct a possibility integrity constraint for D efficiently, i.e., in polynomial time in the size of the domain. We also show how to distinguish formulas that are possibility integrity constraints in linear time in the size of the input formula. Finally, we prove the analogous results for local possibility domains, i.e. domains that admit an aggregator which is not a projection function, even when restricted to any given issue. Our result falls in the realm of classical results that give syntactic characterizations of logical relations that have certain closure properties, like e.g. the result that logical relations component-wise closed under logical AND are precisely the models of Horn formulas. However, our techniques draw from results in judgment aggregation theory as well as from results about propositional formulas and logical relations.
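The definitions above can be made concrete with a brute-force check for the binary (k = 2) case: an issue-wise aggregator assigns each issue one of the four idempotent Boolean functions of two arguments, and the domain must be closed under the resulting map. This is an illustrative sketch only; the example domains and all names are mine, and the paper's point is precisely that one can do much better than brute force, via syntactic possibility integrity constraints.

```python
# Brute-force search for a binary non-dictatorial aggregator on a small
# Boolean domain D, a subset of {0,1}^n.  Aggregators act issue-wise; each
# issue gets one of the four idempotent binary Boolean functions f(b,b)=b.
from itertools import product

BINARY_OPS = {
    "proj1": lambda x, y: x,
    "proj2": lambda x, y: y,
    "and":   lambda x, y: x & y,
    "or":    lambda x, y: x | y,
}

def find_binary_aggregator(domain):
    """Return per-issue operation names (f_1, ..., f_n) whose issue-wise
    application maps D x D into D and is not a projection, else None."""
    n = len(next(iter(domain)))
    for ops in product(BINARY_OPS, repeat=n):
        # Skip the two dictatorial aggregators (same projection everywhere).
        if all(o == "proj1" for o in ops) or all(o == "proj2" for o in ops):
            continue
        fs = [BINARY_OPS[o] for o in ops]
        closed = all(
            tuple(f(a, b) for f, a, b in zip(fs, x, y)) in domain
            for x in domain for y in domain
        )
        if closed:
            return ops
    return None

# A Horn-style domain, closed under issue-wise AND: a possibility domain.
horn = {(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 0)}
# The "exactly one of three issues" domain admits no binary
# non-dictatorial aggregator.
one_in_three = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}

print(find_binary_aggregator(horn))          # some non-projection tuple
print(find_binary_aggregator(one_in_three))  # None
```

Note that this only probes k = 2; a domain failing here may still be a possibility domain via a ternary aggregator, which is why a full characterization is needed.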
Shape Grammars for Architectural Heritage
Shape grammars were introduced in architectural theory some decades ago. They have been applied to architectural construction methods (e.g. Chinese traditional wooden buildings) or for analyzing the design patterns of well-known architects (e.g. Palladio, Frank Lloyd Wright). These examples demonstrated that complex geometrical shapes can be generated by a set of replacement rules out of a start symbol, usually a simple geometric shape. With the advent of powerful tools like the CityEngine, an interesting field for practical applications of these grammars arose, opening a whole range of new possibilities for architectural heritage. On the one hand, a description of ancient building principles in the formalized way of a shape grammar can aid the understanding and preservation of cultural heritage. With the possibility to actually construct digital 3D models out of shape grammars, they became even more interesting. Furthermore, this approach allows for the large-scale creation of 3D models of entire settlements and cities. On the other hand, shape grammars allow for structured approaches to virtual 3D reconstruction, as has been demonstrated for e.g. Mayan or Roman architecture. Besides that, the possibility to specify parameterized variations of the models proves to be an extremely helpful feature. In this paper we reconsider shape grammars in architecture and examine their influence on procedural modelling. We then argue for state-of-the-art tools like the CityEngine that apply shape grammars and procedural modelling in architectural contexts, and exemplify their power and potential by reconstructing traditional Balinese settlements.
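The core mechanism the abstract describes, replacement rules expanding a start symbol into a complex structure, can be sketched as a tiny string-rewriting grammar. The rules and symbols below are invented for illustration; production tools such as the CityEngine use parameterized rules operating on actual 3D shapes, not strings.

```python
# A minimal shape grammar as string rewriting: non-terminals are expanded
# by replacement rules until only terminal symbols remain.  Listing several
# alternatives per rule (as for "Floor") is where parameterized variation
# would enter; here we deterministically take the first alternative.
RULES = {
    "Building": ["Floor Roof"],
    "Floor":    ["Wall Window Wall", "Wall Door Wall"],
}

def derive(symbols, rules, depth=0, max_depth=10):
    """Recursively replace each non-terminal with its first rule alternative
    until only terminals (Wall, Window, Door, Roof) remain."""
    if depth >= max_depth:
        return symbols
    out = []
    for s in symbols:
        if s in rules:
            out.extend(derive(rules[s][0].split(), rules, depth + 1, max_depth))
        else:
            out.append(s)
    return out

print(derive(["Building"], RULES))
# ['Wall', 'Window', 'Wall', 'Roof']
```

Choosing rule alternatives randomly instead of always taking the first one is the simplest way to generate the parameterized variations the paper highlights, e.g. for populating an entire settlement with non-identical buildings.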