635 research outputs found

    Structure Formation With a Long-Range Scalar Dark Matter Interaction

    Numerical simulations show that a long-range scalar interaction in a single species of massive dark matter particles causes voids between the concentrations of large galaxies to be more nearly empty, suppresses accretion of intergalactic matter onto galaxies at low redshift, and produces an early generation of dense dark matter halos. These three effects, in moderation, seem to be improvements over the Lambda CDM model predictions for cosmic structure formation. Because the scalar interaction in this model has negligible effect on laboratory physics and the classical cosmological tests, it offers an observationally attractive example of cosmology with complicated physics in the dark sector, notably a large violation of the weak equivalence principle. Comment: 10 pages, 7 figures, revtex4. v2: minor improvements, refs added, version to appear in PR
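
    For orientation, long-range scalar interactions of this type are commonly parametrized as a screened, Yukawa-like attraction acting only between dark matter particles, in addition to ordinary gravity (the specific form and symbols below are illustrative rather than taken from the paper):

        F(r) = (G m_1 m_2 / r^2) [1 + beta exp(-r/r_s)],

    with beta of order unity setting the scalar-to-gravity strength ratio and r_s the screening length. Baryons feel only the 1/r^2 term, which is what allows a large dark-sector violation of the weak equivalence principle without affecting laboratory physics.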

    Galaxy Satellites and the Weak Equivalence Principle

    Numerical simulations of the effect of a long-range scalar interaction (LRSI) acting only on nonbaryonic dark matter, with strength comparable to gravity, show patterns of disruption of satellites that can agree with what is seen in the Milky Way. This includes the symmetric Sagittarius stellar stream. The exception presented here to the Kesden and Kamionkowski demonstration that an LRSI tends to produce distinctly asymmetric streams follows if the LRSI is strong enough to separate the stars from the dark matter before tidal disruption of the stellar component, and if stars dominate the mass in the luminous part of the satellite. It requires that the Sgr galaxy now contains little dark matter, which may be consistent with the Sgr stellar velocity dispersion, since in the simulation the dispersion at pericenter exceeds the virial value. We present other examples of simulations in which a strong LRSI produces satellites with large mass-to-light ratios, as in Draco, or free streams of stars, which might be compared to "orphan" streams. Comment: 14 pages, accepted for publication in PR

    Squeezing MOND into a Cosmological Scenario

    Explaining the effects of dark matter using modified gravitational dynamics (MOND) has for decades been both an intriguing and controversial possibility. By insisting that the gravitational interaction that accounts for the Newtonian force also drives cosmic expansion, one may kinematically identify which cosmologies are compatible with MOND, without explicit reference to the underlying theory so long as the theory obeys Birkhoff's law. Using this technique, we are able to self-consistently compute a number of quantities of cosmological interest. We find that the critical acceleration a_0 must have a slight source-mass dependence (a_0 ~ M^(1/3)) and that MOND cosmologies are naturally compatible with observed late-time expansion history and the contemporary cosmic acceleration. However, cosmologies that can produce enough density perturbations to account for structure formation are contrived and fine-tuned. Even then, they may be marginally ruled out by evidence of early (z ~ 20) reionization. Comment: 11 pages revtex, 2 figures
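
    As background for the kinematic construction, in the deep-MOND regime (g_N << a_0) the acceleration sourced by a mass M is

        g ~ sqrt(g_N a_0) = sqrt(G M a_0) / r,

    and if Birkhoff's law holds, the same force law governs the deceleration of a comoving sphere of radius r and enclosed mass M ~ rho r^3. Requiring all such spheres to decelerate self-similarly, so that a uniform Hubble flow is preserved, then heuristically forces a_0 to scale with the enclosed mass roughly as a_0 ~ M^(1/3), the source-mass dependence quoted above. (The notation here is generic and not taken from the paper.)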

    Maximum-Likelihood Comparisons of Tully-Fisher and Redshift Data: Constraints on Omega and Biasing

    We compare Tully-Fisher (TF) data for 838 galaxies within cz=3000 km/sec from the Mark III catalog to the peculiar velocity and density fields predicted from the 1.2 Jy IRAS redshift survey. Our goal is to test the relation between the galaxy density and velocity fields predicted by gravitational instability theory and linear biasing, and thereby to estimate \beta_I = \Omega^{0.6}/b_I, where b_I is the linear bias parameter for IRAS galaxies. Adopting the IRAS velocity and density fields as a prior model, we maximize the likelihood of the raw TF observables, taking into account the full range of selection effects and properly treating triple-valued zones in the redshift-distance relation. Extensive tests with realistic simulated galaxy catalogs demonstrate that the method produces unbiased estimates of \beta_I and its error. When we apply the method to the real data, we model the presence of a small but significant velocity quadrupole residual (~3.3% of Hubble flow), which we argue is due to density fluctuations incompletely sampled by IRAS. The method then yields a maximum likelihood estimate \beta_I = 0.49 \pm 0.07 (1-sigma error). We discuss the constraints on \Omega and biasing that follow if we assume a COBE-normalized CDM power spectrum. Our model also yields the 1-D noise in the velocity field, including IRAS prediction errors, which we find to be 125 +/- 20 km/sec. Comment: 53 pages, 20 encapsulated figures, two tables. Submitted to the Astrophysical Journal. Also available at http://astro.stanford.edu/jeff
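
    The parameter \beta_I enters through the standard linear-theory relation between the IRAS galaxy density contrast \delta_g and the predicted peculiar velocity field (quoted here for reference, not from the paper):

        v(r) = (H_0 \beta_I / 4\pi) \int d^3r' \, \delta_g(r') (r' - r) / |r' - r|^3,

    so the IRAS-predicted velocities scale essentially in proportion to \beta_I, and the likelihood comparison with the TF data fixes that overall scaling.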

    Reconstruction Analysis of Galaxy Redshift Surveys: A Hybrid Reconstruction Method

    In reconstruction analysis of galaxy redshift surveys, one works backwards from the observed galaxy distribution to the primordial density field in the same region, then evolves the primordial fluctuations forward in time with an N-body code. This incorporates assumptions about the cosmological parameters, the properties of primordial fluctuations, and the biasing relation between galaxies and mass. These can be tested by comparing the reconstruction to the observed galaxy distribution, and to peculiar velocity data. This paper presents a hybrid reconstruction method that combines the "Gaussianization" technique of Weinberg (1992) with the dynamical schemes of Nusser & Dekel (1992) and Gramann (1993). We test the method on N-body simulations and on N-body mock catalogs that mimic the depth and geometry of the Point Source Catalog Redshift Survey and the Optical Redshift Survey. This method is more accurate than Gaussianization or dynamical reconstruction alone. Matching the observed morphology of clustering can limit the bias factor b, independent of Omega. Matching the cluster velocity dispersions and z-space distortions of the correlation function xi(s,mu) constrains the parameter beta=Omega^{0.6}/b. Relative to linear or quasi-linear approximations, a fully non-linear reconstruction makes more accurate predictions of xi(s,mu) for a given beta, thus reducing the systematic biases of beta measurements and offering further scope for breaking the degeneracy between Omega and b. It also circumvents the cosmic variance noise that limits conventional analyses of xi(s,mu). It can also improve the determination of Omega and b from joint analyses of redshift & peculiar velocity surveys as it predicts the fully non-linear peculiar velocity distribution at each point in z-space. Comment: 72 pages including 33 figures, submitted to Ap
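
    The linear-theory benchmark that such fully non-linear reconstructions improve on is the Kaiser formula for the redshift-space power spectrum (quoted here for reference; the paper itself works with the fully non-linear xi(s,mu)):

        P_s(k, mu) = (1 + beta mu^2)^2 P_r(k),

    where mu is the cosine of the angle between the wavevector and the line of sight, so the anisotropy of the clustering pattern is what carries the information on beta.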

    Using Perturbative Least Action to Reconstruct Redshift Space Distortions

    In this paper, we present a redshift space reconstruction scheme which is analogous to and extends the Perturbative Least Action (PLA) method described by Goldberg & Spergel (2000). We first show that this scheme is effective in reconstructing even nonlinear observations. We then suggest that by varying the cosmology to minimize the quadrupole moment of a reconstructed density field, it may be possible to lower the error bars on the redshift distortion parameter, \beta, as well as to break the degeneracy between the linear bias parameter, b, and \Omega_M. Finally, we discuss how PLA might be applied to realistic redshift surveys. Comment: 34 pages LaTeX, including 10 postscript figures. Submitted to the Astrophysical Journal
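
    For context, in linear theory the quadrupole-to-monopole ratio of the redshift-space power spectrum is fixed by \beta alone (standard result, quoted here for reference rather than from the paper):

        P_2 / P_0 = (4\beta/3 + 4\beta^2/7) / (1 + 2\beta/3 + \beta^2/5),

    which is why the residual quadrupole of a reconstructed, nominally real-space density field is a sensitive diagnostic of the assumed \beta and cosmology.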

    Evidence for a Positive Cosmological Constant from Flows of Galaxies and Distant Supernovae

    Recent observations of high-redshift supernovae seem to suggest that the global geometry of the Universe may be affected by a `cosmological constant', which acts to accelerate the expansion rate with time. But these data by themselves still permit an open universe of low mass density and no cosmological constant. Here we derive an independent constraint on the lower bound to the mass density, based on deviations of galaxy velocities from a smooth universal expansion. This constraint rules out a low-density open universe with a vanishing cosmological constant, and together the two favour a nearly flat universe in which the contributions from mass density and the cosmological constant are comparable. This type of universe, however, seems to require a degree of fine tuning of the initial conditions that is in apparent conflict with `common wisdom'. Comment: 8 pages, 1 figure. Slightly revised version. Letter to Nature

    How accurately can 21 cm tomography constrain cosmology?

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe, as it has been argued to have a greater long-term potential than the cosmic microwave background. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions. We cover assumptions related to modeling of the ionization power spectrum and associated nonlinearity, experimental specifications like array layout and noise, cosmological assumptions about reionization history and inter-galactic medium (IGM) evolution, and assumptions about astrophysical foregrounds. We derive simple analytic approximations for how various assumptions affect the results, and find that ionization power modeling is most important, followed by array layout (crudely, the more compact, the better). We also present an accurate yet robust method for measuring cosmological parameters in practice, separating the physics from the astrophysics by exploiting both gravitationally induced clustering anisotropy and the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. For example, a future square kilometer array optimized for 21 cm tomography could improve the sensitivity of the Planck CMB satellite to spatial curvature and neutrino masses by up to two orders of magnitude, to Delta-Omega_k ~ 0.0002 and Delta m_nu ~ 0.007 eV, and give a 4 sigma detection of the spectral index running predicted by the simplest inflation models. Comment: 20 PRD pages, 9 figures, 13 tables, matches published PRD version, including new explanatory material
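
    Parameter forecasts of this kind are typically obtained from a Fisher-matrix calculation over the measured 21 cm power spectrum; schematically (generic form, quoted here for orientation rather than from the paper),

        F_ij = sum over (k, mu) bins of [ dP(k,mu)/dtheta_i * dP(k,mu)/dtheta_j ] / sigma_P(k,mu)^2,

    with the quoted sensitivities such as Delta-Omega_k ~ 0.0002 and Delta m_nu ~ 0.007 eV corresponding to marginalized errors (F^{-1})_{ii}^{1/2} under the stated modeling and array assumptions.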

    Attitudes and Self-Efficacy of Teachers Regarding School Inclusion in Germany - An Analysis Using Data from the German National Educational Panel Study (NEPS)

    In Germany, joint instruction of students with and without special educational needs is being expanded substantially within the school system. Research regards the attitudes of the participating teachers toward inclusion as an important prerequisite for successful implementation of joint instruction. This article reports on self-efficacy and general attitudes toward inclusion among 130 class teachers in the second wave of Starting Cohort 3 (grade 6) of the National Educational Panel Study (NEPS). Overall, the regular-school teachers hold a positive attitude toward inclusion, but their self-efficacy with respect to inclusion is rather low. The surveyed class teachers at special schools, in contrast, feel more capable of implementing joint instruction; however, they have reservations about inclusion and consider the special school to be the more suitable setting for students with special educational needs.