
    A Taste of Cosmology

    This is the summary of two lectures that aim to give an overview of cosmology. I will not try to be too rigorous in derivations, nor to give a full historical overview. The idea is to provide a "taste" of cosmology and some of the interesting topics it covers. The standard cosmological model is presented and I highlight the successes of cosmology over the past decade or so. Keys to the development of the standard cosmological model are observations of the cosmic microwave background and of large-scale structure, which are introduced. Inflation and dark energy and the outlook for the future are also discussed. Slides from the lectures are available from the school website: physicschool.web.cern.ch/PhysicSchool/CLASHEP/CLASHEP2011/.
    Comment: 16 pages, contribution to the 2011 CERN-Latin-American School of High-Energy Physics, Natal, Brazil, 23 March-5 April 2011, edited by C. Grojean, M. Mulders and M. Spiropulu.

    Large-scale bias in the Universe: bispectrum method

    Evidence that the Universe may be close to the critical density, required for its expansion eventually to be halted, comes principally from dynamical studies of large-scale structure. These studies either use the observed peculiar velocity field of galaxies directly, or indirectly by quantifying its anisotropic effect on galaxy clustering in redshift surveys. A potential difficulty with both such approaches is that the density parameter $\Omega_0$ is obtained only in the combination $\beta = \Omega_0^{0.6}/b$, if linear perturbation theory is used. The determination of the density parameter $\Omega_0$ is therefore compromised by the lack of a good measurement of the bias parameter $b$, which relates the clustering of sample galaxies to the clustering of mass. In this paper, we develop an idea of Fry (1994), using second-order perturbation theory to investigate how to measure the bias parameter on large scales. The use of higher-order statistics allows the degeneracy between $b$ and $\Omega_0$ to be lifted, and an unambiguous determination of $\Omega_0$ then becomes possible. We apply a likelihood approach to the bispectrum, the three-point function in Fourier space. This paper is the first step in turning the idea into a practical proposition for redshift surveys, and is principally concerned with the noise properties of the bispectrum, which are non-trivial. The calculation of the required bispectrum covariances involves the six-point function, including many noise terms, for which we have developed a generating functional approach which will be of value in calculating high-order statistics in general.
    Comment: 12 pages, LaTeX, 7 PostScript figures included. Accepted by MNRAS. (Minor numerical typesetting errors corrected: results unchanged.)
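    A minimal numerical sketch (not part of the paper; plain Python with an assumed measured value of $\beta$) of the degeneracy described above: in linear theory only the combination $\beta = \Omega_0^{0.6}/b$ is constrained, so very different pairs $(\Omega_0, b)$ are observationally equivalent until an independent handle on $b$, such as the bispectrum, is available.

        # Illustration only: linear theory constrains beta = Omega_0^0.6 / b,
        # so every (Omega_0, b) pair on the same beta contour is indistinguishable.
        beta_measured = 0.5  # hypothetical measured value

        for omega_0 in (0.3, 0.5, 1.0):
            b = omega_0**0.6 / beta_measured  # bias needed to reproduce the same beta
            print(f"Omega_0 = {omega_0:.1f} -> b = {b:.2f} (beta = {omega_0**0.6 / b:.2f})")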

    The nonlinear redshift-space power spectrum of galaxies

    We study the power spectrum of galaxies in redshift space, with third-order perturbation theory to include corrections that are absent in linear theory. We assume a local bias for the galaxies: i.e. the galaxy density is sampled from some local function of the underlying mass distribution. We find that the effect of the nonlinear bias in real space is to introduce two new features. First, there is a contribution to the power which is constant with wavenumber, whose nature we reveal as essentially a shot-noise term. In principle this contribution can mask the primordial power spectrum, and could limit the accuracy with which the latter might be measured on very large scales. Second, the effect of second- and third-order bias is to modify the effective bias (defined as the square root of the ratio of the galaxy power spectrum to the matter power spectrum). The effective bias is almost scale-independent over a wide range of scales. These general conclusions also hold in redshift space. In addition, we have investigated the distortion of the power spectrum by peculiar velocities, which may be used to constrain the density of the Universe. We look at the quadrupole-to-monopole ratio, and find that higher-order terms can mimic linear-theory bias, but the bias implied is neither the linear bias, nor the effective bias referred to above. We test the theory with biased N-body simulations, and find excellent agreement in both real and redshift space, provided the local biasing is applied on a scale whose fractional r.m.s. density fluctuations are $< 0.5$.
    Comment: 13 pages, 7 figures. Accepted by MNRAS.
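    As a brief aside (the standard linear-theory Kaiser result, not a formula from this paper), the quadrupole-to-monopole ratio mentioned above has a simple closed form in linear theory, which the higher-order bias and velocity terms studied in the paper then modify:

        def quadrupole_to_monopole(beta):
            """Linear-theory (Kaiser) ratio P2/P0 as a function of beta = Omega^0.6 / b.
            Higher-order corrections of the kind discussed above shift this prediction."""
            return (4 * beta / 3 + 4 * beta**2 / 7) / (1 + 2 * beta / 3 + beta**2 / 5)

        for beta in (0.3, 0.5, 1.0):
            print(f"beta = {beta:.1f} -> P2/P0 = {quadrupole_to_monopole(beta):.3f}")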

    Tests for primordial non-Gaussianity

    We investigate the relative sensitivities of several tests for deviations from Gaussianity in the primordial distribution of density perturbations. We consider models for non-Gaussianity that mimic that which comes from inflation as well as that which comes from topological defects. The tests we consider involve the cosmic microwave background (CMB), large-scale structure (LSS), high-redshift galaxies, and the abundances and properties of clusters. We find that the CMB is superior at finding non-Gaussianity in the primordial gravitational potential (as inflation would produce), while observations of high-redshift galaxies are much better suited to find non-Gaussianity that resembles that expected from topological defects. We derive a simple expression that relates the abundance of high-redshift objects in non-Gaussian models to the primordial skewness.
    Comment: 6 pages, 2 figures, MNRAS in press (minor changes to match the accepted version).

    Reducing sample variance: halo biasing, non-linearity and stochasticity

    Comparing the clustering of differently biased tracers of the dark matter distribution offers the opportunity to reduce the cosmic variance error in the measurement of certain cosmological parameters. We develop a formalism that includes bias non-linearities and stochasticity. Our formalism is general enough that it can be used to optimise survey design and tracer selection, and to optimally split (or combine) tracers to minimise the error on the cosmologically interesting quantities. Our approach generalises the one presented by McDonald & Seljak (2009) for circumventing sample variance in the measurement of $f \equiv d\ln D/d\ln a$. We analyse how the bias, the noise, the non-linearity and stochasticity affect the measurement of $Df$ and explore in which signal-to-noise regime it is significantly advantageous to split a galaxy sample into two differently biased tracers. We use N-body simulations to find realistic values for the parameters describing the bias properties of dark matter haloes of different masses and their number density. We find that, even if dark matter haloes could be used as tracers and selected in an idealised way, for realistic haloes the sample variance limit can be reduced only by up to a factor $\sigma_{2tr}/\sigma_{1tr} \simeq 0.6$. This would still correspond to the gain from a survey volume three times larger if the two tracers were not to be split. Before any practical application one should bear in mind that these findings apply to dark matter haloes as tracers, while realistic surveys would select galaxies: the galaxy-host halo relation is likely to introduce extra stochasticity, which may reduce the gain further.
    Comment: 21 pages, 13 figures. Published version in MNRAS.
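    A back-of-the-envelope check (not from the paper) of the volume equivalence quoted above: because sample-variance errors scale as $1/\sqrt{V}$, an error ratio of about 0.6 between the two-tracer and single-tracer analyses corresponds to roughly a threefold increase in effective survey volume.

        # sigma scales as 1/sqrt(V), so matching sigma_2tr/sigma_1tr ~ 0.6 with a single
        # tracer would require a volume larger by about (1/0.6)^2.
        ratio = 0.6
        equivalent_volume_factor = 1.0 / ratio**2
        print(f"equivalent volume factor ~ {equivalent_volume_factor:.1f}")  # ~2.8, close to 3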

    Exploring data and model poisoning attacks to deep learning-based NLP systems

    Natural Language Processing (NLP) has recently been explored for its application in supporting the detection of malicious activities and objects. At the same time, NLP and deep learning have themselves become targets of malicious attacks. Recent research has shown that adversarial attacks can affect NLP tasks as well, in addition to the better-known adversarial attacks on deep learning systems for image processing. More precisely, while small perturbations applied to the data sets used to train typical NLP tasks (e.g., Part-of-Speech Tagging, Named Entity Recognition, etc.) can be recognised fairly easily, model poisoning, performed by means of altered models typically supplied to a deep neural network in the transfer learning phase (e.g., poisoning attacks via word embeddings), is harder to detect. In this work, we carry out a preliminary exploration of the effectiveness of a poisoned word-embedding attack aimed at a deep neural network trained to perform a Named Entity Recognition (NER) task. Through the NER case study, we analyse how severely such an attack degrades the accuracy with which the correct classes are assigned to the given entities. Finally, this study is a preliminary step towards assessing the impact and vulnerabilities of some of the NLP systems we use in our research activities, and towards investigating potential mitigation strategies to make these systems more resilient to data and model poisoning attacks.
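    A toy sketch (hypothetical data and a generic scikit-learn classifier, not the authors' NER pipeline) of the mechanism described above: a downstream classifier is trained on pre-trained word embeddings, and corrupting a single embedding vector supplied at transfer-learning time is enough to change the prediction for the affected entity, without touching the training data or code.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Hypothetical 2-D "pre-trained" embeddings: person-like names cluster around (+1, +1),
        # organisation-like names around (-1, -1).  Labels: 1 = PERSON, 0 = ORG.
        vocab = ["alice", "bob", "carol", "acme", "globex", "initech"]
        embeddings = {w: rng.normal(loc=(1.0, 1.0), scale=0.2) for w in vocab[:3]}
        embeddings.update({w: rng.normal(loc=(-1.0, -1.0), scale=0.2) for w in vocab[3:]})

        X = np.array([embeddings[w] for w in vocab])
        y = np.array([1, 1, 1, 0, 0, 0])
        clf = LogisticRegression().fit(X, y)  # downstream NER-like classifier

        # Model poisoning: the attacker ships an altered embedding for "alice" in the
        # transfer-learning artefact; the training data and code are left untouched.
        poisoned_alice = np.array([-1.0, -1.0])

        print("clean embedding    ->", clf.predict([embeddings["alice"]]))  # expected: [1] (PERSON)
        print("poisoned embedding ->", clf.predict([poisoned_alice]))       # expected: [0] (ORG)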

    Assessment and validation of wildfire susceptibility and hazard in Portugal

    A comprehensive methodology to assess forest fire susceptibility, using variables with strong spatial correlation, is presented and applied to the Portuguese mainland. Our study is based on a thirty-year chronological series of burnt areas. The first twenty years (1975–1994) are used for statistical modelling, and the last ten (1995–2004) are used for the independent validation of results. The wildfire-affected areas are crossed with a set of independent layers that are assumed to be relevant wildfire conditioning factors: elevation, slope, land cover, rainfall and temperature. Moreover, the wildfire recurrence pattern is also considered, as a proxy variable expressing the influence of human action on wildfire occurrence. A sensitivity analysis is performed to evaluate the weight of each individual theme within the susceptibility model. Validation of the wildfire susceptibility models is made through the computation of success rate and prediction rate curves. The results show that it is possible to have a good compromise between the number of variables within the model and the model's predictive power. Additionally, it is shown that the integration of climatic variables does not produce any relevant increase in the prediction capacity of wildfire susceptibility models. Finally, the prediction rate curves produced by the independent cross-validation are used to assess probabilistic wildfire hazard on a scenario basis, for the complete mainland Portuguese territory.
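    A minimal sketch (illustrative Python, not the authors' code) of how a prediction rate curve of the kind used for the independent validation can be computed: map cells are ranked by modelled susceptibility, and the cumulative fraction of the 1995–2004 burnt area captured is tracked as progressively more of the territory is included.

        import numpy as np

        def prediction_rate_curve(susceptibility, burnt_validation):
            """Rank cells by susceptibility (high to low) and return the cumulative fraction
            of validation-period burnt area captured versus the fraction of territory."""
            order = np.argsort(susceptibility)[::-1]
            burnt_sorted = burnt_validation[order]
            frac_area = np.arange(1, len(order) + 1) / len(order)
            frac_burnt = np.cumsum(burnt_sorted) / burnt_sorted.sum()
            return frac_area, frac_burnt

        # Toy example: 1000 cells whose later burning is loosely correlated with susceptibility.
        rng = np.random.default_rng(1)
        susceptibility = rng.random(1000)
        burnt = (rng.random(1000) < 0.3 * susceptibility).astype(float)  # hypothetical burns
        area, captured = prediction_rate_curve(susceptibility, burnt)
        print(f"Top 20% most susceptible cells capture {captured[199]:.0%} of the burnt area")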

    Flea allergy dermatitis: a study of epidemiological factors in the urban area of Zaragoza

    We studied 101 cases selected from among those presented at the dermatology clinic over the course of one year, and analysed the results of an epidemiological questionnaire applied to each case. After statistical treatment of the data, four significant factors (p < 0.05) influencing the presentation of the disease were found. Three of these factors are considered risk factors: the age at which the first clinical signs appear, the season in which the pruritus manifests itself, and flea infestation. Flea control was found to be a protective factor.

    Evolution of the decay mechanisms in central collisions of Xe + Sn from E/A = 8 to 29 MeV

    Collisions of Xe + Sn at beam energies of E/A = 8 to 29 MeV and leading to fusion-like heavy residues are studied using the $4\pi$ INDRA multidetector. The fusion cross section was measured and shows a maximum at E/A = 18-20 MeV. A decomposition into four exit channels, defined by the number of heavy fragments produced in central collisions, has been made. Their relative yields are measured as a function of the incident beam energy. The energy spectra of light charged particles (LCPs) in coincidence with the fragments of each exit channel have been analysed. They reveal that a highly excited composite system is formed, which first decays by emitting light particles and then either breaks up into two or more fragments or survives as an evaporative residue. A quantitative estimation of this primary emission is given and compared to the secondary decay of the fragments. These analyses indicate that most of the evaporative LCPs precede not only fission but also breakup into several fragments.
    Comment: Invited talk given at the 11th International Conference on Nucleus-Nucleus Collisions (NN2012), San Antonio, Texas, USA, May 27-June 1, 2012. To appear in the NN2012 Proceedings in Journal of Physics: Conference Series (JPCS).

    Measuring carbon in cities and their buildings through reverse engineering of life cycle assessment

    According to the European Green Deal, excessive carbon emissions are the origin of global warming and must be drastically reduced. Given that the building sector is one of the major sources of carbon emissions, it is imperative to limit these emissions, especially in a city context where the density of buildings is commonly higher and rapidly increasing. All stages of the life cycle of a building (raw material harvesting, manufacturing of products, the use phase of the building, and end of life) generate or reduce carbon. The manufacture of construction materials accounts for 11% of all energy- and process-related emissions annually. Additionally, recent estimates indicate that over 80% of all product-related environmental impacts of a building are determined during its design phase. These indicators reflect the urgent need to explore a low-carbon measurement method for building design. This is done here using a linear regression reverse-engineering model and percentage calculations. One of the hypotheses formulated relates a Global Warming Potential (GWP) of −30.000 CO2eq or lower (around −165 CO2eq/m2) in 25% of a block of houses to further carbon reductions of 11%. This paper has identified barriers in terms of the databases needed to achieve this task.
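    A hedged sketch (hypothetical variables and data, not the paper's model) of the reverse-engineering step described above: a linear regression relating design-stage quantities to whole-life GWP per square metre, so that GWP can be estimated at design time without repeating the full life cycle assessment.

        import numpy as np

        # Hypothetical design-stage features for a handful of dwellings:
        # [timber fraction of structure, floor area in m2, insulation thickness in cm]
        X = np.array([[0.1,  90, 10],
                      [0.4, 120, 15],
                      [0.7, 100, 20],
                      [0.9, 140, 25]])
        # Hypothetical whole-life GWP per m2 (kg CO2eq/m2) from full LCA runs;
        # negative values indicate net carbon storage (e.g. timber-heavy designs).
        y = np.array([310.0, 120.0, -40.0, -170.0])

        # Reverse-engineering step: least-squares fit of GWP/m2 against the design features.
        A = np.column_stack([X, np.ones(len(X))])  # add an intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        candidate = np.array([0.8, 110, 22, 1.0])  # new design + intercept term
        print(f"Predicted GWP: {candidate @ coef:.0f} kg CO2eq/m2")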
