    Intrinsic galaxy shapes and alignments II: Modelling the intrinsic alignment contamination of weak lensing surveys

    Intrinsic galaxy alignments constitute the major astrophysical systematic of forthcoming weak gravitational lensing surveys, but they also yield unique insights into galaxy formation and evolution. We build analytic models for the distribution of galaxy shapes based on halo properties extracted from the Millennium Simulation, differentiating between early- and late-type galaxies as well as between central galaxies and satellites. The resulting ellipticity correlations are investigated for their physical properties and compared to a suite of current observations. The best-performing model is then used to predict the intrinsic alignment contamination of planned weak lensing surveys. We find that late-type galaxy models generally have weak intrinsic ellipticity correlations, increasing marginally towards smaller galaxy separation and higher redshift. The signal for early-type models at fixed halo mass increases by three orders of magnitude over two decades in galaxy separation, and by one order of magnitude from z=0 to z=2. The intrinsic alignment strength also depends strongly on halo mass, but not on galaxy luminosity at fixed mass or on galaxy number density in the environment. We identify models that are in good agreement with all observational data, except that all models over-predict the alignments of faint early-type galaxies. The best model yields an intrinsic alignment contamination of a Euclid-like survey between 0.5 and 10% at z>0.6 and on angular scales larger than a few arcminutes. Cutting 20% of red foreground galaxies using observer-frame colours can suppress this contamination by up to a factor of two. Comment: 23 pages, 14 figures; minor changes to match version published in MNRAS.
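
    The observable behind these results is a pair-binned correlation of intrinsic ellipticities. The following is a minimal illustrative sketch of such an estimator in 2D, assuming a hypothetical catalogue of positions and complex ellipticities; it is not the paper's pipeline, which works with 3D separations and simulation halo data.

```python
# A minimal sketch of a pair-binned intrinsic ellipticity correlation
# estimator; array names and the brute-force O(N^2) loop are illustrative,
# not the paper's actual pipeline.
import numpy as np

def ii_correlation(x, y, eps, r_bins):
    """Estimate xi_++(r) = <eps_t eps_t> for a 2D galaxy catalogue.

    x, y   : galaxy positions (arrays of shape (N,))
    eps    : complex intrinsic ellipticities eps_1 + 1j*eps_2
    r_bins : bin edges in separation
    """
    n = len(x)
    sums = np.zeros(len(r_bins) - 1)
    counts = np.zeros(len(r_bins) - 1)
    for i in range(n):
        dx, dy = x - x[i], y - y[i]
        r = np.hypot(dx, dy)
        phi = np.arctan2(dy, dx)             # position angle of each pair
        # tangential components of both ellipticities w.r.t. the pair axis
        et_i = -np.real(eps[i] * np.exp(-2j * phi))
        et_j = -np.real(eps * np.exp(-2j * phi))
        idx = np.digitize(r, r_bins) - 1
        good = (idx >= 0) & (idx < len(sums)) & (r > 0)
        np.add.at(sums, idx[good], (et_i * et_j)[good])
        np.add.at(counts, idx[good], 1.0)
    return sums / np.maximum(counts, 1)
```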

    Charmonium-Nucleon Dissociation Cross Sections in the Quark Model

    Charmonium dissociation cross sections due to flavor-exchange charmonium-baryon scattering are computed in the constituent quark model. We present results for inelastic $J/\psi\,N$ and $\eta_c\,N$ scattering amplitudes and cross sections into 46 final channels, including final states composed of various combinations of $D$, $D^*$, $\Sigma_c$, and $\Lambda_c$. These results are relevant to experimental searches for the deconfined phase of quark matter, and may be useful in identifying the contribution of initial $c\bar{c}$ production to the open-charm final states observed at RHIC through the characteristic flavor ratios of certain channels. These results are also of interest for studies of possible charmonium-nucleon bound states. Comment: 10 pages, 5 eps figures, revtex.

    Luminosity distance in Swiss cheese cosmology with randomized voids. II. Magnification probability distributions

    We study the fluctuations in luminosity distances due to gravitational lensing by large-scale (> 35 Mpc) structures, specifically voids and sheets. We use a simplified "Swiss cheese" model consisting of a $\Lambda$CDM Friedmann-Robertson-Walker background in which a number of randomly distributed, non-overlapping spherical regions are replaced by mass-compensating comoving voids, each with a uniform-density interior and a thin shell of matter on the surface. We compute the distribution of magnitude shifts using a variant of the method of Holz & Wald (1998), which includes the effect of lensing shear. The standard deviation of this distribution is ~ 0.027 magnitudes and the mean is ~ 0.003 magnitudes for voids of radius 35 Mpc and sources at redshift z_s = 1.0, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. If the shell walls are given a finite thickness of ~ 1 Mpc, the standard deviation is reduced to ~ 0.013 magnitudes. This standard deviation due to voids is a factor ~ 3 smaller than that due to galaxy-scale structures. We summarize our results in terms of a fitting formula that is accurate to ~ 20%, and also build a simplified analytic model that reproduces our results to within ~ 30%. Our model also allows us to explore the domain of validity of weak lensing theory for voids. We find that for 35 Mpc voids, corrections to the dispersion due to lens-lens coupling are of order ~ 4%, and corrections due to shear are ~ 3%. Finally, we estimate the bias due to source-lens clustering in our model to be negligible.
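
    The shape of such a magnitude-shift distribution is easy to illustrate with a toy Monte Carlo. The sketch below is not the Holz & Wald ray-tracing used in the paper, and every numerical input in it is a hypothetical placeholder: each line of sight crosses a random number of compensated voids, each contributing a small zero-mean convergence, from which the weak-lensing magnitude shift follows.

```python
# A toy Monte Carlo, not the paper's ray-tracing: each line of sight
# crosses a Poisson number of compensated voids, each adding a small
# zero-mean convergence kappa_i; in the weak-lensing limit the magnitude
# shift is dm = -2.5 log10(1 + 2*kappa_total). All numbers below
# (mean crossings, per-void kappa scale) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def magnitude_shifts(n_los=100_000, mean_crossings=20, kappa_scale=3e-3):
    n_voids = rng.poisson(mean_crossings, size=n_los)
    dm = np.empty(n_los)
    for k in range(n_los):
        # demagnified interior vs. magnified shell: zero-mean toy profile
        kappa = kappa_scale * (rng.random(n_voids[k]) - 0.5)
        dm[k] = -2.5 * np.log10(1.0 + 2.0 * kappa.sum())
    return dm

dm = magnitude_shifts()
print(f"mean = {dm.mean():.4f} mag, std = {dm.std():.4f} mag")
```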

    Persistence of a pinch in a pipe

    The response of low-dimensional solid objects combines geometry and physics in unusual ways, exemplified in structures of great utility such as the thin-walled tube, which is ubiquitous in nature and technology. Here we provide a particularly surprising consequence of this confluence of geometry and physics in tubular structures: the anomalously large persistence of a localized pinch in an elastic pipe, whose effect decays very slowly as an oscillatory exponential with a persistence length that diverges as the thickness of the tube vanishes, a result we confirm experimentally. The effect is more a consequence of geometry than of material properties, and is thus equally applicable to carbon nanotubes and to oil pipelines. Comment: 6 pages, 3 figures.
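
    The divergence of the persistence length can be made plausible with a back-of-envelope balance from shallow-shell (Donnell) theory; the scaling sketch below is our illustration, not the paper's derivation. Consider a deformation with circumferential wavenumber n (a pinch is dominated by n = 2) on a tube of radius R and wall thickness t:

```latex
% Donnell shallow-shell characteristic equation for a mode
% w ~ e^{ikx} cos(n\theta), with bending stiffness D:
\[
  D\left(k^{2} + \frac{n^{2}}{R^{2}}\right)^{4} + \frac{E\,t}{R^{2}}\,k^{4} = 0,
  \qquad
  D = \frac{E\,t^{3}}{12(1-\nu^{2})}.
\]
% For slowly varying modes (k << n/R) this forces k^4 < 0, so the roots
% are complex with |Re k| = |Im k| -- an oscillatory exponential -- and
\[
  \ell_{p} \sim \frac{1}{|\mathrm{Im}\,k|}
           \sim \frac{R}{n^{2}}\sqrt{\frac{R}{t}},
\]
% which diverges as the wall thickness t -> 0 at fixed radius R.
```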

    Effects of nitrogen supply on must quality and anthocyanin accumulation in berries of cv. Merlot

    Nitrogen supply to Merlot vines (Vitis vinifera L.), grown under controlled conditions, affected must quality and the anthocyanin content in berry skins irrespective of vegetative growth. High N supply delayed fruit maturation; berries had a higher arginine and a lower anthocyanin content, with relatively more abundant acylated anthocyanins, compared to berries of vines supplied with low N. During maturation the anthocyanin content in the skin of berries decreased; this decrease was more pronounced in high-N vines. It is concluded that high nitrogen supply affects the anthocyanin metabolic pathway in several ways: it delays biosynthesis both quantitatively and qualitatively and enhances anthocyanin degradation during the final steps of berry maturation.

    The Coagulation Box and a New Hemoglobin-Driven Algorithm for Bleeding Control in Patients with Severe Multiple Traumas

    Background: Extensive hemorrhage is the leading cause of death in the first few hours following multiple traumas. Therefore, early and aggressive treatment of clotting disorders could reduce mortality. Unfortunately, results from commonly performed blood coagulation studies are often delayed, whereas hemoglobin (Hb) levels are quickly available. Objectives: In this study, we evaluated the use of initial Hb levels as a guideline for the initial treatment of clotting disorders in multiple trauma patients. Patients and Methods: We developed an Hb-driven algorithm to guide initial clotting therapy. The algorithm contains three different steps for aggressive clotting therapy, depending on the first Hb value measured in the shock trauma room (SR), and utilizes fibrinogen, prothrombin complex concentrate (PCC), factor VIIa, tranexamic acid, and desmopressin. The above-mentioned drugs were stored in a special “coagulation box” in the hospital pharmacy, and this box could be brought to the SR or operating room (OR) immediately upon request. In addition to the clotting factors, transfusions of red blood cells (RBC) and fresh frozen plasma (FFP) were performed at an RBC-to-FFP ratio of 2:1 to 1:1. Results: Over a 12-month investigation period, 123 severe multiple trauma patients needing intensive care therapy were admitted to our trauma center (mean age 48 years, mean injury severity score (ISS) 30). Fourteen patients (11%) died; 25 of the 123 patients (mean age 51.5 years, mean ISS 53) were treated using the “coagulation box,” and 17 patients required massive transfusions. Patients treated with the “coagulation box” required an average dose of 16.3 RBC and 12.9 FFP units, and 17 of the 25 patients required an average dose of 3.6 platelet packs. According to the algorithm, 25 patients received fibrinogen (average dose 8.25 g), 24 (96%) received PCC (3,000 IU), 14 (56%) received desmopressin (36.6 µg), 13 (52%) received tranexamic acid (2.88 g), and 11 (44%) received factor VIIa (3.7 mg). The clotting parameters improved markedly between SR admission and ICU admission. Of the 25 patients, 16 (64%) survived. The revised injury severity classification (RISC) predicted a survival rate of 41%, corresponding to a standardized mortality ratio (SMR) of 0.62, i.e., a higher survival rate than predicted. Conclusions: An Hb-driven algorithm, in combination with the “coagulation box” and the early use of clotting factors, could be a simple and effective tool for improving coagulopathy in multiple trauma patients.
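
    The three-step structure of such an Hb-driven escalation can be sketched in a few lines of code. The thresholds and drug lists below are hypothetical placeholders (the paper defines the actual cut-offs and doses), and this sketch is for illustration only, not clinical guidance.

```python
# A schematic of a three-tier, Hb-driven escalation, mirroring the
# structure described in the abstract. Hb thresholds and drug lists are
# HYPOTHETICAL placeholders, not the paper's values or clinical advice.

def coagulation_box_tier(hb_g_dl: float) -> dict:
    """Map the first shock-room Hb value to an escalation tier."""
    if hb_g_dl >= 10.0:          # hypothetical threshold
        return {"tier": 1, "give": ["fibrinogen", "tranexamic acid"]}
    if hb_g_dl >= 8.0:           # hypothetical threshold
        return {"tier": 2, "give": ["fibrinogen", "PCC", "tranexamic acid"]}
    return {"tier": 3, "give": ["fibrinogen", "PCC", "tranexamic acid",
                                "desmopressin", "factor VIIa"]}

print(coagulation_box_tier(7.2))   # -> tier 3, full escalation
```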

    Noether symmetries, energy-momentum tensors and conformal invariance in classical field theory

    In the framework of classical field theory, we first review the Noether theory of symmetries, with simple rederivations of its essential results and special emphasis on the Noether identities for gauge theories. With this baggage on board, we next discuss in detail, for Poincaré-invariant theories in flat spacetime, the differences between the Belinfante energy-momentum tensor and a family of Hilbert energy-momentum tensors. All these tensors coincide on shell, but they split their duties in the following sense: Belinfante's tensor is the one to use in order to obtain the generators of Poincaré symmetries, and it is a basic ingredient of the generators of any other spacetime symmetries that may happen to exist; the Hilbert tensors, instead, are the means to test whether a theory contains spacetime symmetries beyond Poincaré. We discuss at length the case of scale and conformal symmetry, of which we give some examples. We show, for Poincaré-invariant Lagrangians, that the realization of scale invariance selects a unique Hilbert tensor which allows for an easy test as to whether conformal invariance is also realized. Finally, we make some basic remarks on metric generally covariant theories and classical field theory in a fixed curved background. Comment: 31 pages.
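
    For orientation, the Belinfante improvement of the canonical tensor can be written as below, in one common textbook convention (signs and index placements vary across texts; this form is quoted for context, not from the paper):

```latex
% Canonical tensor T_c and spin current S, antisymmetric in its last
% two indices, S^{\lambda\mu\nu} = -S^{\lambda\nu\mu}, with
% \partial_\lambda S^{\lambda\mu\nu} = T_c^{\nu\mu} - T_c^{\mu\nu}.
% The Belinfante tensor adds a superpotential term that is antisymmetric
% in its first two indices, so conservation is preserved while the
% result becomes symmetric on shell:
\[
  T_{B}^{\mu\nu}
  = T_{c}^{\mu\nu}
  + \tfrac{1}{2}\,\partial_{\lambda}
    \left(S^{\lambda\mu\nu} + S^{\mu\nu\lambda} - S^{\nu\lambda\mu}\right).
\]
```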

    An algorithm for the direct reconstruction of the dark matter correlation function from weak lensing and galaxy clustering

    The clustering of matter on cosmological scales is an essential probe for studying the physical origin and composition of our Universe. To date, most direct studies have focused on shear-shear weak lensing correlations, but it is also possible to extract the dark matter clustering by combining galaxy-clustering and galaxy-galaxy-lensing measurements. In this study we develop a method that can constrain the dark matter correlation function from galaxy-clustering and galaxy-galaxy-lensing measurements, by focusing on the correlation coefficient between the galaxy and matter overdensity fields. To generate a mock galaxy catalogue for testing purposes, we apply the Halo Occupation Distribution approach to a large ensemble of N-body simulations to model the pre-existing SDSS Luminous Red Galaxy sample observations. Using this mock catalogue, we show that a direct comparison between the excess surface mass density measured by lensing and its corresponding galaxy-clustering quantity is not optimal. We develop a new statistic that suppresses the small-scale contributions to these observations and show that it leads to a cross-correlation coefficient that is within a few percent of unity down to 5 Mpc/h. Furthermore, the residual incoherence between the galaxy and matter fields can be explained using a theoretical model for scale-dependent bias, giving us a final estimator that is unbiased to within 1%. We also perform a comprehensive study of other physical effects that can affect the analysis, such as redshift-space distortions and differences in radial windows between galaxy-clustering and weak lensing observations. We apply the method to a range of cosmological models and demonstrate the ability of our new statistic to distinguish between them. Comment: 23 pages, 14 figures, accepted by PRD; minor changes to V1, 1 new figure, more detailed discussion of the covariance of the new ADSD statistic.
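
    For reference, the two quantities the abstract revolves around can be written as follows; this is a sketch of standard definitions (the paper's exact conventions may differ). The cross-correlation coefficient compares galaxy and matter clustering, and an annular differential surface density (ADSD) statistic removes contributions from scales below a cutoff R_0:

```latex
\[
  r(r) \equiv \frac{\xi_{\rm gm}(r)}{\sqrt{\xi_{\rm gg}(r)\,\xi_{\rm mm}(r)}},
  \qquad
  \Upsilon(R; R_{0}) \equiv \Delta\Sigma(R)
    - \frac{R_{0}^{2}}{R^{2}}\,\Delta\Sigma(R_{0}).
\]
% The subtraction cancels the contribution of the mass distribution
% interior to R_0, which is what "suppresses the small-scale
% contributions" in the abstract above.
```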

    A bias in cosmic shear from galaxy selection: results from ray-tracing simulations

    We identify and study a previously unknown systematic effect on cosmic shear measurements, caused by the selection of galaxies used for shape measurement, in particular the rejection of close (blended) galaxy pairs. We use ray-tracing simulations based on the Millennium Simulation and a semi-analytical model of galaxy formation to create realistic galaxy catalogues. From these, we quantify the bias in the shear correlation functions by comparing measurements made from galaxy catalogues with and without the removal of close pairs; a likelihood analysis is then used to quantify the resulting shift in estimates of cosmological parameters. The filtering of objects with close neighbours (a) changes the redshift distribution of the galaxies used for correlation function measurements, and (b) correlates the number density of sources in the background with the density field in the foreground. This leads to a scale-dependent bias of the correlation function of several percent, translating into biases of cosmological parameters of similar amplitude, which makes this new systematic effect potentially harmful for upcoming and planned cosmic shear surveys. As a remedy, we propose and test a weighting scheme that can significantly reduce the bias. Comment: 9 pages, 9 figures, version accepted for publication in Astronomy & Astrophysics.
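
    The selection step under study is easy to make concrete. The sketch below illustrates close-pair rejection on a hypothetical catalogue (array names, the rejection radius, and the flat-sky shortcut are ours, not the paper's pipeline); the bias is then the ratio of shear correlation functions measured on the filtered and full catalogues.

```python
# A minimal sketch of close-pair rejection; catalogue arrays and the
# rejection radius are illustrative, not the paper's pipeline.
import numpy as np
from scipy.spatial import cKDTree

def reject_close_pairs(ra_deg, dec_deg, r_reject_arcsec=2.0):
    """Return a boolean mask keeping galaxies without a close neighbour.

    Uses a flat-sky approximation -- fine for illustrating the selection,
    not for a real curved-sky catalogue.
    """
    x = np.column_stack([ra_deg * np.cos(np.radians(dec_deg)), dec_deg])
    tree = cKDTree(x)
    r = r_reject_arcsec / 3600.0                 # arcsec -> degrees
    pairs = tree.query_pairs(r, output_type="ndarray")
    keep = np.ones(len(ra_deg), dtype=bool)
    keep[pairs.ravel()] = False                  # drop both pair members
    return keep

# The selection bias is then estimated by comparing the shear correlation
# measured on the full and filtered catalogues:
#   bias(theta) = xi_filtered(theta) / xi_full(theta) - 1
```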

    On the relation between the Feynman paradox and Aharonov-Bohm effects

    The magnetic Aharonov-Bohm (A-B) effect occurs when a point charge interacts with a line of magnetic flux, while its dual, the Aharonov-Casher (A-C) effect, occurs when a magnetic moment interacts with a line of charge. This paper discusses the equations of motion for the two interacting parts of these physical systems. The generally accepted claim is that neither part of these systems accelerates, while Boyer has claimed that both parts accelerate. Using the Euler-Lagrange equations, we predict that in the case of unconstrained motion only one part of each system accelerates, while momentum remains conserved. This prediction requires a time-dependent electromagnetic momentum. In our analysis of unconstrained motion, the A-B effects are then examples of the Feynman paradox. In the case of constrained motion, the Euler-Lagrange equations give no forces, in agreement with the generally accepted analysis. The quantum mechanical A-B and A-C phase shifts are independent of the treatment of constraint. Nevertheless, experimental tests of the above ideas may be possible, and would further the understanding of A-B effects, which are central to both quantum mechanics and electromagnetism. Comment: 21 pages, 5 figures, recently submitted to New Journal of Physics.
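
    For context, the two phase shifts mentioned above have the standard textbook forms below (SI units, up to sign conventions; quoted for orientation, not from the paper), for a charge q encircling a flux Φ and a magnetic moment μ encircling a line charge of linear density λ:

```latex
\[
  \varphi_{AB} = \frac{q}{\hbar}\oint \mathbf{A}\cdot d\boldsymbol{\ell}
               = \frac{q\,\Phi}{\hbar},
  \qquad
  \varphi_{AC} = \frac{1}{\hbar c^{2}}
                 \oint \left(\mathbf{E}\times\boldsymbol{\mu}\right)\cdot d\boldsymbol{\ell}
               = \frac{\mu_{0}\,\mu\,\lambda}{\hbar}.
\]
% Both phases are topological: they depend only on the enclosed flux or
% line charge, not on the detailed path, consistent with the abstract's
% remark that they are independent of the treatment of constraint.
```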