
    Multiscale approach to inhomogeneous cosmologies

    The backreaction of inhomogeneities on the global expansion history of the Universe suggests a possible link between the formation of structures and the recent accelerated expansion. In this paper, the origin of this conjecture is illustrated, and a model without Dark Energy that allows for a more explicit investigation of this link is discussed. In addition to this conceptually interesting feature, the model leads to a ΛCDM-like distance-redshift relation that is consistent with SN data. Comment: 5 pages, 4 figures, contributed talk at the Workshop: New Directions in Modern Cosmology, Leiden, The Netherlands, 27.9.-1.10.2010

    Relativistic Lagrangian displacement field and tensor perturbations

    We investigate the purely spatial Lagrangian coordinate transformation from the Lagrangian to the basic Eulerian frame. We demonstrate three techniques for extracting the relativistic displacement field from a given solution in the Lagrangian frame: (a) defining a local set of Eulerian coordinates embedded into the Lagrangian frame; (b) performing a specific gauge transformation; and (c) a fully non-perturbative approach based on the ADM split. The latter approach shows that this decomposition is not tied to a specific perturbative formulation for the solution of the Einstein equations. Rather, it can be defined at the level of the non-perturbative coordinate change from the Lagrangian to the Eulerian description. Studying such different techniques is useful because it allows us to compare and further develop the various approximation techniques available in the Lagrangian formulation. We find that one has to solve the gravitational wave equation in the relativistic analysis; otherwise the corresponding Newtonian limit will necessarily contain spurious non-propagating tensor artefacts at second order in the Eulerian frame. We also derive the magnetic part of the Weyl tensor in the Lagrangian frame, and find that it is excited not only by gravitational waves but also by tensor perturbations which are induced through the non-linear frame-dragging. We apply our findings to calculate, for the first time, the relativistic displacement field up to second order for a ΛCDM Universe in the presence of a local primordial non-Gaussian component. Finally, we also comment on recent claims about whether mass conservation in the Lagrangian frame is violated. Comment: 19 pages, two figures, improved discussion, matches published version

    Multiscale cosmology and structure-emerging Dark Energy: A plausibility analysis

    Cosmological backreaction suggests a link between structure formation and the expansion history of the Universe. In order to examine this connection quantitatively, we dynamically investigate a volume partition of the Universe into over- and underdense regions. This allows us to trace structure formation using the volume fraction of the overdense regions, λ_M, as its characterizing parameter. Employing results from cosmological perturbation theory and extrapolating the leading mode into the nonlinear regime, we construct a three-parameter model for the effective cosmic expansion history, involving λ_{M_0}, the matter density Ω_m^{D_0}, and the Hubble rate H_{D_0} of today's Universe. Taking standard values for Ω_m^{D_0} and H_{D_0}, as well as a reasonable value for λ_{M_0} that we derive from N-body simulations, we determine the corresponding amounts of backreaction and spatial curvature. We find that values sufficient to generate today's structure also lead to a ΛCDM-like behavior of the scale factor, parametrized by the same parameters Ω_m^{D_0} and H_{D_0}, but without a cosmological constant. However, the temporal behavior of λ_M does not faithfully reproduce the structure formation history. Surprisingly, however, the model does match structure formation under the assumption of a low matter content, Ω_m^{D_0} ≈ 3%, a result that hints at a different interpretation of part of the backreaction effect as kinematical Dark Matter. [truncated] Comment: 25 pages, 10 figures, includes calculation of luminosity distances, matches published version in Phys. Rev.

    Inhomogeneity-induced variance of cosmological parameters

    Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. So, how can local measurements (at the 100 Mpc scale) be used to determine global cosmological parameters (defined at the 10 Gpc scale)? We use Buchert's averaging formalism and determine a set of locally averaged cosmological parameters in the context of the flat Lambda cold dark matter model. We calculate their ensemble means (i.e. their global values) and variances (i.e. their cosmic variances). We apply our results to typical survey geometries and focus on the effects of local fluctuations of the curvature parameter. By this means we show that, in the linear regime, cosmological backreaction and averaging can be reformulated as the issue of cosmic variance. The cosmic variance is found to be largest for the curvature parameter, and we discuss some of its consequences. We further propose to use the observed variance of cosmological parameters to measure the growth factor. [abbreviated] Comment: 12 pages, 10 figures, references added, estimate of lightcone effects added, matches version published in A&
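    The idea that a locally measured parameter scatters around its global ensemble mean can be illustrated with a toy Monte Carlo. This is a sketch under simplified assumptions, not the paper's Buchert-averaging calculation: it models a locally averaged matter density parameter as the global value modulated by a small Gaussian density fluctuation, where both the global value (0.3) and the fluctuation amplitude (0.05) are hypothetical numbers chosen for illustration.

```python
import random
import statistics

# Toy model (illustration only, not the paper's formalism): in the linear
# regime, a locally averaged matter density parameter in a finite survey
# volume is modeled as Omega_local = Omega_global * (1 + delta), where
# delta is a zero-mean density fluctuation at the survey scale.
OMEGA_GLOBAL = 0.3   # assumed global matter density parameter (hypothetical)
SIGMA_DELTA = 0.05   # assumed rms fluctuation at the survey scale (hypothetical)

random.seed(42)
samples = [OMEGA_GLOBAL * (1.0 + random.gauss(0.0, SIGMA_DELTA))
           for _ in range(100_000)]

# Ensemble mean recovers the global value; the variance is the
# "cosmic variance" induced by inhomogeneity.
ensemble_mean = statistics.fmean(samples)
cosmic_variance = statistics.pvariance(samples)
```

The ensemble mean comes out close to the assumed global value, while the standard deviation is close to OMEGA_GLOBAL * SIGMA_DELTA, i.e. the scatter a local observer would see across survey realizations.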

    Lagrangian theory of structure formation in relativistic cosmology III: gravitoelectric perturbation and solution schemes at any order

    The relativistic generalization of the Newtonian Lagrangian perturbation theory is investigated. In previous works, the first-order trace solutions that are generated by the spatially projected gravitoelectric part of the Weyl tensor were given, together with extensions and applications for accessing the nonperturbative regime. We furnish here construction rules to obtain from Newtonian solutions the gravitoelectric class of relativistic solutions, for which we give the complete perturbation and solution schemes at any order of the perturbations. By construction, these schemes generalize the complete hierarchy of solutions of the Newtonian Lagrangian perturbation theory. Comment: 17 pages, a few minor extensions to match the published version in PR

    RascalC: A Jackknife Approach to Estimating Single and Multi-Tracer Galaxy Covariance Matrices

    To make use of clustering statistics from large cosmological surveys, accurate and precise covariance matrices are needed. We present a new code to estimate large-scale galaxy two-point correlation function (2PCF) covariances in arbitrary survey geometries that, due to new sampling techniques, runs ~10^4 times faster than previous codes, computing finely binned covariance matrices with negligible noise in less than 100 CPU-hours. As in previous works, non-Gaussianity is approximated via a small rescaling of shot noise in the theoretical model, calibrated by comparing jackknife survey covariances to an associated jackknife model. The flexible code, RascalC, has been publicly released, and automatically takes care of all necessary pre- and post-processing, requiring only a single input dataset (without a prior 2PCF model). Deviations between large-scale model covariances from a mock survey and those from a large suite of mocks are found to be indistinguishable from noise. In addition, the choice of input mock is shown to be irrelevant for desired noise levels below those of ~10^5 mocks. Coupled with its generalization to multi-tracer datasets, this shows the algorithm to be an excellent tool for analysis, reducing the need for large numbers of mock simulations to be computed. Comment: 29 pages, 8 figures. Accepted by MNRAS. Code is available at http://github.com/oliverphilcox/RascalC with documentation at http://rascalc.readthedocs.io
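    For readers unfamiliar with the jackknife covariances mentioned above, the basic delete-one estimator can be sketched as follows. This is a generic illustration, not RascalC's algorithm (RascalC achieves its speed with specialized sampling techniques rather than brute-force resampling); the function name and the fake per-region data are our own.

```python
import random

def jackknife_covariance(measurements):
    """Delete-one jackknife covariance of the mean of a binned statistic.

    measurements: list of per-region vectors, one per jackknife region
    (e.g. a 2PCF measured with each survey region excluded in turn is
    derived below from per-region values).
    """
    n = len(measurements)
    nbins = len(measurements[0])
    # Delete-one means: average over all regions except region k.
    totals = [sum(m[b] for m in measurements) for b in range(nbins)]
    deleted = [[(totals[b] - measurements[k][b]) / (n - 1) for b in range(nbins)]
               for k in range(n)]
    mean = [sum(d[b] for d in deleted) / n for b in range(nbins)]
    # Standard jackknife covariance with the (n - 1)/n prefactor.
    return [[(n - 1) / n
             * sum((d[i] - mean[i]) * (d[j] - mean[j]) for d in deleted)
             for j in range(nbins)] for i in range(nbins)]

# Usage with fake per-region clustering values in 2 separation bins.
random.seed(0)
regions = [[1.0 + random.gauss(0, 0.1), 0.5 + random.gauss(0, 0.05)]
           for _ in range(50)]
cov = jackknife_covariance(regions)
```

The resulting matrix is symmetric by construction, and its diagonal approximates the variance of the mean statistic across regions.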

    Measuring the Correctness of Double-Keying: Error Classification and Quality Control in a Large Corpus of TEI-Annotated Historical Text

    Among mass digitization methods, double-keying is considered to be the one with the lowest error rate. This method requires two independent transcriptions of a text by two different operators. It is particularly well suited to historical texts, which often exhibit deficiencies like poor master copies or other difficulties such as spelling variation or complex text structures. Providers of data entry services using the double-keying method generally advertise very high accuracy rates (around 99.95% to 99.98%). These advertised percentages are typically estimated on the basis of small samples, and little if anything is said about the actual amount of text, the text genres which have been proofread, the error types, the proofreaders, etc. In order to obtain significant data on this problem, it is necessary to analyze a large amount of text representing a balanced sample of different text types, to distinguish the structural XML/TEI level from the typographical level, and to differentiate between various types of errors which may originate from different sources and may not be equally severe. This paper presents an extensive and complex approach to the analysis and correction of double-keying errors which has been applied by the DFG-funded project "Deutsches Textarchiv" (German Text Archive, hereafter DTA) in order to evaluate and preferably to increase the transcription and annotation accuracy of double-keyed DTA texts. Statistical analyses of the results gained from proofreading a large quantity of text are presented, which verify the common accuracy rates for the double-keying method.
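    The core comparison step behind double-keying quality control can be sketched in a few lines. This is our illustration, not the DTA project's actual tooling: it aligns two independent transcriptions character by character and counts the positions where they disagree; the function name and the sample snippet (with one hypothetical keying error) are invented for the example.

```python
import difflib

def transcription_disagreements(key_a: str, key_b: str):
    """Align two independent keyings and count character-level disagreements.

    Returns (error_count, agreement_rate). A disagreement is any character
    that cannot be matched between the two transcriptions.
    """
    matcher = difflib.SequenceMatcher(a=key_a, b=key_b, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    total = max(len(key_a), len(key_b))
    return total - matched, matched / total

# Hypothetical example: operator B dropped "ch" in "Naturgeschichte".
a = "Allgemeine Naturgeschichte und Theorie des Himmels"
b = "Allgemeine Naturgesichte und Theorie des Himmels"
errors, agreement = transcription_disagreements(a, b)
```

Positions where the two keyings disagree are then flagged for a proofreader, which is what makes the method's advertised accuracy rates plausible: both operators must make the same error at the same position for it to slip through.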

    Tissue Sodium Content and Arterial Hypertension in Obese Adolescents

    Early-onset obesity is known to culminate in type 2 diabetes, arterial hypertension and subsequent cardiovascular disease. The role of sodium (Na+) homeostasis in this process is incompletely understood, yet correlations between Na+ accumulation and hypertension have been observed in adults. We aimed to investigate these associations in adolescents. A cohort of 32 adolescents (13-17 years), comprising 20 obese patients, of whom 11 were hypertensive, as well as 12 age-matched controls, underwent 23Na-MRI of the left lower leg with a standard clinical 3T scanner. Median triceps surae muscle Na+ content in the hypertensive obese (11.95 mmol/L [interquartile range 11.62-13.66]) was significantly lower than in the normotensive obese (13.63 mmol/L [12.97-17.64]; p = 0.043) or controls (15.37 mmol/L [14.12-16.08]; p = 0.012). No significant differences were found between the normotensive obese and controls. Skin Na+ content in the hypertensive obese (13.33 mmol/L [11.53-14.22]) did not differ from the normotensive obese (14.12 mmol/L [13.15-15.83]) or controls (11.48 mmol/L [10.48-12.80]), whereas the normotensive obese had higher values compared to controls (p = 0.004). Arterial hypertension in obese adolescents is associated with low muscle Na+ content. These findings suggest an early dysregulation of Na+ homeostasis in cardiometabolic disease. Further research is needed to determine whether this association is causal and how it evolves in the transition to adulthood.

    The DTA “Base Format”: A TEI Subset for the Compilation of a Large Reference Corpus of Printed Text from Multiple Sources

    In this article we describe the DTA "Base Format" (DTABf), a strict subset of the TEI P5 tag set. The purpose of the DTABf is to provide a balance between expressiveness and precision, as well as an interoperable annotation scheme, for a large variety of text types in historical corpora of printed text from multiple sources. The DTABf has been developed on the basis of a large amount of historical text data in the core corpus of the project Deutsches Textarchiv (DTA) and text collections from 15 cooperating projects, with a current total of 210 million tokens. The DTABf is a "living" TEI format which is continuously adjusted when new candidate texts for the DTA containing new structural phenomena are encountered. We also focus on other aspects of the DTABf, including consistency, interoperability with other TEI dialects, HTML and other presentations of the TEI texts, conversion into other formats, and linguistic analysis. We include some examples of best practices to illustrate how external corpora can be losslessly converted into the DTABf, thus enabling third parties to use the DTABf in their specific projects. The DTABf is comprehensively documented, and several software tools are available for working with it, making it a widely used format for the encoding of historical printed German text.

    Adaptive Conversational Agents: Exploring the Effect of Individualized Design on User Experience

    Conversational agents (CAs) offer a range of benefits to firms and users, yet user experiences are often unsatisfying. One explanation might be that users' individual differences are insufficiently addressed in today's CA design. Drawing on communication accommodation theory, we develop a research model and study design to investigate how adapting CA design to users' individual characteristics influences the user experience. In particular, we develop text-based CAs (i.e., chatbots) that are adapted to users' rational/intuitive cognitive style or need for interaction, and compare the user experience to non-adapted CAs. Initial results from our pilot study (n = 37) confirm that individualized CA design can enhance the user experience. We expect to contribute to the growing research field of adaptive CA design. Moreover, our results will provide guidance for developers on how to facilitate a pleasing user experience by adapting the CA design to users.