
    Climate change management: a resilience strategy for flood risk using Blockchain tools

    This work aims to contribute to the economic and financial analysis and management of flood risk, extended to hydrogeological risk, from the perspective of a public administration. As the main actor responsible for containing the phenomenon through maintenance of the territory, the public administration bears the cost of restoring the services damaged by events of this type. The assets whose restoration the public administration must ensure are all public infrastructures (i.e. transportation, energy and water supply systems, communication), together with damage suffered by private property where this affects services to be guaranteed to the population. In this work, the authors propose possible strategies that a public administration can put in place to deal with flood risk. Three main strategies are analysed: absolute passivity, in which damages are paid as they occur (the business-as-usual scenario); a classic insurance scheme; and a resilient, innovative insurance scheme. The economic and financial profiles of these strategies emphasise how the assumed time horizon can change the convenience of one strategy compared to the others. The study highlights the key role of quantifying flood risk mitigation measures from an engineering perspective, and the potential issues in pursuing these objectives within the regulatory framework of public administrations. This synergy is supported by the potential use of Blockchain-based tools: the paper highlights the key role that such an IT data management platform could have within risk analysis and management schemes, both as a data collection tool and as certification of the various steps needed to complete the process.
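    The horizon effect described in the abstract can be illustrated with a simple discounted-cost comparison. All figures below (expected annual loss, upfront mitigation cost, discount rate) are hypothetical assumptions for illustration, not values taken from the paper:

```python
# Minimal sketch of how the time horizon can flip the ranking of flood-risk
# strategies. All numbers are hypothetical illustrations, not paper values.
def npv(upfront, annual_cost, years, rate=0.03):
    """Discounted total cost: an upfront payment plus a constant annual cost."""
    annuity = (1 - (1 + rate) ** -years) / rate
    return upfront + annual_cost * annuity

def compare(years):
    business_as_usual = npv(0, 1_000_000, years)        # pay damages as they occur
    resilient_scheme = npv(10_000_000, 200_000, years)  # invest in mitigation up front
    return business_as_usual, resilient_scheme

for years in (5, 30):
    bau, res = compare(years)
    cheaper = "business-as-usual" if bau < res else "resilient scheme"
    print(f"{years:>2}-year horizon: cheaper strategy is {cheaper}")
```

    Under these assumed figures the passive strategy wins over a short horizon, while the upfront mitigation investment pays off over a long one, which is the qualitative point the abstract makes.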

    COVID-19 Effects on Cultural Heritage: The Case of Villa Adriana and Villa D'Este

    The paper aims to clarify the assessment of insurance risk related to an asset owned by a subject under public law and, more specifically, to an economic cultural asset. The study is aligned with key aspects proposed by the EU for the protection of cultural heritage from natural disasters. First, given the peculiar material nature of cultural heritage, the work investigates the correlation between cultural heritage and the community it serves. Second, it proved necessary to examine the differences, similarities and importance of the economic management of cultural heritage, in order to understand the social, economic, material and intangible importance of an asset managed economically within a social unit (the municipality). The third reason relates to the overall severity of the risk, and the subsequent damage, that a hazard such as a pandemic outbreak (COVID-19) can inflict on one or more cultural heritage assets. Finally, perhaps the most meaningful aspect is the analysis of the summation of losses generated by a hazard, which allows a prospect of the consequences of such a catastrophic scenario.

    Non-Incomes Risk Mitigation Mechanisms for Cultural Heritage: Role of Insurances Facing Covid-19 in the Italian Context

    Economic cultural heritage assets are exposed to several natural and, nowadays, biological hazards which, in addition to causing potential structural damage, can lead to severe losses deriving from financial non-incomes. The paper aims to highlight the role of insurance in mitigating financial damages and losses, and specifically its key role in mitigating biological hazards such as Covid-19. The paper is part of broader research by the authors and builds on the assumptions and results obtained previously for the case study of Villa Adriana and Villa D'Este.

    Search for supersymmetry in events with opposite-sign dileptons and missing transverse energy using an artificial neural network

    In this paper, a search for supersymmetry (SUSY) is presented in events with two opposite-sign isolated leptons in the final state, accompanied by hadronic jets and missing transverse energy. An artificial neural network is employed to discriminate possible SUSY signals from the standard model background. The analysis uses a data sample collected with the CMS detector during the 2011 LHC run, corresponding to an integrated luminosity of 4.98 fb-1 of proton-proton collisions at a center-of-mass energy of 7 TeV. Compared to other CMS analyses, this one uses relaxed criteria on missing transverse energy (E̸T > 40 GeV) and total hadronic transverse energy (HT > 120 GeV), thus probing different regions of parameter space. Agreement is found between the standard model expectation and observations, yielding limits in the context of the constrained minimal supersymmetric standard model and on a set of simplified models.

    Observing the Evolution of the Universe

    How did the universe evolve? The fine angular scale (l > 1000) temperature and polarization anisotropies in the CMB are a Rosetta stone for understanding the evolution of the universe. Through detailed measurements one may address everything from the physics of the birth of the universe to the history of star formation and the process by which galaxies formed. One may in addition track the evolution of the dark energy and discover the net neutrino mass. We are at the dawn of a new era in which hundreds of square degrees of sky can be mapped with arcminute resolution and sensitivities measured in microKelvin. Acquiring these data requires the use of special-purpose telescopes such as the Atacama Cosmology Telescope (ACT), located in Chile, and the South Pole Telescope (SPT). These new telescopes are outfitted with a new generation of custom mm-wave kilo-pixel arrays. Additional instruments are in the planning stages. Comment: Science White Paper submitted to the US Astro2010 Decadal Survey. Full list of 177 authors available at http://cmbpol.uchicago.ed

    A chemical survey of exoplanets with ARIEL

    Thousands of exoplanets have now been discovered with a huge range of masses, sizes and orbits: from rocky Earth-like planets to large gas giants grazing the surface of their host star. However, the essential nature of these exoplanets remains largely mysterious: there is no known, discernible pattern linking the presence, size, or orbital parameters of a planet to the nature of its parent star. We have little idea whether the chemistry of a planet is linked to its formation environment, or whether the type of host star drives the physics and chemistry of the planet's birth and evolution. ARIEL was conceived to observe a large number (~1000) of transiting planets for statistical understanding, including gas giants, Neptunes, super-Earths and Earth-size planets around a range of host star types, using transit spectroscopy in the 1.25–7.8 μm spectral range and multiple narrow-band photometry in the optical. ARIEL will focus on warm and hot planets to take advantage of their well-mixed atmospheres, which should show minimal condensation and sequestration of high-Z materials compared to their colder Solar System siblings. Such warm and hot atmospheres are expected to be more representative of the planetary bulk composition. Observations of these warm/hot exoplanets, and in particular of their elemental composition (especially C, O, N, S, Si), will allow the understanding of the early stages of planetary and atmospheric formation during the nebular phase and the following few million years. ARIEL will thus provide a representative picture of the chemical nature of the exoplanets and relate this directly to the type and chemical environment of the host star. ARIEL is designed as a dedicated survey mission for combined-light spectroscopy, capable of observing a large and well-defined planet sample within its 4-year mission lifetime.
    Transit, eclipse and phase-curve spectroscopy methods, whereby the signals from the star and planet are differentiated using knowledge of the planetary ephemerides, allow us to measure atmospheric signals from the planet at levels of 10–100 parts per million (ppm) relative to the star and, given the bright nature of the targets, also permit more sophisticated techniques, such as eclipse mapping, to give a deeper insight into the nature of the atmosphere. These types of observations require a stable payload and satellite platform with broad, instantaneous wavelength coverage to detect many molecular species, probe the thermal structure, identify clouds and monitor the stellar activity. The proposed wavelength range covers all the expected major atmospheric gases, e.g. H2O, CO2, CH4, NH3, HCN and H2S, through to the more exotic metallic compounds, such as TiO, VO, and condensed species. Simulations of ARIEL performance in conducting exoplanet surveys have been performed, using conservative estimates of mission performance, a full model of all significant noise sources in the measurement, and a list of potential ARIEL targets that incorporates the latest available exoplanet statistics. The conclusion at the end of the Phase A study is that ARIEL, in line with the stated mission objectives, will be able to observe about 1000 exoplanets, depending on the details of the adopted survey strategy, thus confirming the feasibility of the main science objectives.
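    The quoted 10–100 ppm signal level can be sanity-checked with the standard one-scale-height estimate for transmission spectroscopy, delta ~ 2*H*Rp/Rs^2 with H = kT/(mu*g). The planet and star parameters below are generic hot-Jupiter assumptions, not values for any specific ARIEL target:

```python
# Back-of-the-envelope check of the 10-100 ppm atmospheric signal level.
# All planet/star parameters are generic hot-Jupiter assumptions.
K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def transit_atmospheric_signal(r_planet, r_star, temp, mu, gravity):
    """Approximate spectral modulation from one atmospheric scale height:
    delta ~ 2 * H * Rp / Rs^2, with scale height H = k*T / (mu * g)."""
    scale_height = K_B * temp / (mu * AMU * gravity)
    return 2 * scale_height * r_planet / r_star ** 2

# Hot Jupiter around a Sun-like star (assumed values).
signal = transit_atmospheric_signal(
    r_planet=7.0e7,   # m, roughly a Jupiter radius
    r_star=6.96e8,    # m, roughly a solar radius
    temp=1500.0,      # K, assumed equilibrium temperature
    mu=2.3,           # mean molecular weight, H2-dominated atmosphere
    gravity=25.0,     # m/s^2, assumed surface gravity
)
print(f"atmospheric signal ~ {signal * 1e6:.0f} ppm")
```

    For these assumed parameters the modulation comes out at a few tens of ppm, consistent with the range the mission is designed to measure.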

    Search for quark contact interactions and extra spatial dimensions using dijet angular distributions in proton-proton collisions at sqrt(s) = 8 TeV

    A search is presented for quark contact interactions and extra spatial dimensions in proton-proton collisions at sqrt(s) = 8 TeV using dijet angular distributions. The search is based on a data set corresponding to an integrated luminosity of 19.7 inverse femtobarns collected by the CMS detector at the CERN LHC. Dijet angular distributions are found to be in agreement with the perturbative QCD predictions that include electroweak corrections. Limits on the contact interaction scale from a variety of models at next-to-leading order in QCD corrections are obtained. A benchmark model in which only left-handed quarks participate is excluded up to a scale of 9.0 (11.7) TeV for destructive (constructive) interference at 95% confidence level. Lower limits between 5.9 and 8.4 TeV on the scale of virtual graviton exchange are extracted for the Arkani-Hamed, Dimopoulos, and Dvali model of extra spatial dimensions.

    Probing color coherence effects in pp collisions at sqrt(s) = 7 TeV

    A study of color coherence effects in pp collisions at a center-of-mass energy of 7 TeV is presented. The data used in the analysis were collected in 2010 with the CMS detector at the LHC and correspond to an integrated luminosity of 36 inverse picobarns. Events are selected that contain at least three jets and where the two jets with the largest transverse momentum exhibit a back-to-back topology. The measured angular correlation between the second- and third-leading jet is shown to be sensitive to color coherence effects, and is compared to the predictions of Monte Carlo models with various implementations of color coherence. None of the models describes the data satisfactorily.

    Search for the associated production of the Higgs boson with a top-quark pair

    A search for the standard model Higgs boson produced in association with a top-quark pair (t tbar H) is presented, using data samples corresponding to integrated luminosities of up to 5.1 inverse femtobarns and 19.7 inverse femtobarns collected in pp collisions at center-of-mass energies of 7 and 8 TeV, respectively. The search is based on the following signatures of the Higgs boson decay: H to hadrons, H to photons, and H to leptons. The results are characterized by an observed t tbar H signal strength relative to the standard model cross section, mu = sigma/sigma[SM], under the assumption that the Higgs boson decays as expected in the standard model. The best fit value is mu = 2.8 +/- 1.0 for a Higgs boson mass of 125.6 GeV.
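    The quoted best fit mu = 2.8 +/- 1.0 can be read off in a simple Gaussian approximation: how many standard deviations it sits above the standard model expectation (mu = 1) and above the background-only hypothesis (mu = 0). This symmetric-error arithmetic is only an illustration, not the paper's full statistical treatment:

```python
# Gaussian reading of the quoted best fit mu = 2.8 +/- 1.0. Illustrative
# arithmetic only; the actual analysis uses a full likelihood treatment.
mu_hat, sigma = 2.8, 1.0

pull_sm = (mu_hat - 1.0) / sigma   # deviation from the standard model (mu = 1)
pull_bkg = (mu_hat - 0.0) / sigma  # deviation from no-ttH-signal (mu = 0)

print(f"{pull_sm:.1f} sigma above mu = 1, {pull_bkg:.1f} sigma above mu = 0")
```

    In this naive reading the measurement is about 1.8 sigma above the standard model prediction, i.e. an excess that is noted but not significant.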