
    Evaluating G2G for use in Rapid Response Catchments: Final Report

    Flood impacts can be severe for rapid response catchments (RRCs). Providing targeted flood warnings is challenging using existing methodologies, in part on account of the typical absence of river flow gauging. The Pitt Review of the Summer 2007 floods recognised the need for new alert procedures for RRCs able to exploit the new distributed flood forecasting capability being progressed from research into operations. Work on the G2G (Grid-to-Grid) distributed hydrological model was accelerated into operational practice to support 5-day countrywide flood outlooks, a major recommendation of the Pitt Review.

    The present study aims to explore the potential of G2G to support more frequent and detailed alerts relevant to flood warning in RRCs. Integral to this study is the use of emerging rainfall forecast products, in deterministic and ensemble form, which allow the lead-time of G2G flow forecasts to be extended and given an uncertainty context. This Report sets down the overall scope of the project, provides an introduction to G2G by way of background, and then reports on the outcomes of the R&D study. This includes extensive preparatory work on collating historical datasets to support G2G model assessment, relating both to hydrometry and to new rainfall forecast products.

    A framework is developed for assessing G2G in both simulation-mode and forecast-mode (as a function of lead-time), targeted at the RRC requirement. Relevant to the requirement are the RRC Register of points and areas of interest compiled by the Environment Agency, and the characteristics of RRCs (occurring in isolation or in combination): small catchment area, urban/sub-urban land-cover and steep slopes. The assessment framework is first applied assuming perfect knowledge of rainfall observations for past and future times, so as not to confound the analysis with errors from rainfall forecasts. Variability of performance measures across groups of sites is summarised through box and whisker plots, groups being differentiated by size of catchment area and nature of G2G run (simulation, and with the addition of state updating and flow insertion in turn).

    Skill scores judge how well the model performs in detecting a flood event exceeding a flow threshold, taken as the median annual flood (as an indicator of bankfull flow exceedance for natural channels) and fractional multipliers of it. The skill scores include POD (Probability of Detection) and FAR (False Alarm Ratio), defined in the sketch below. Performance maps of R² Efficiency, indicating the variability in the observations accounted for by the model, are used to portray the spatial variability of G2G accuracy across the country. G2G performance in small catchments, relevant to the RRC requirement, is best over the South West, North East and North West regions; median performance also appears robust from one year to the next. Larger catchments benefit most in forecast-mode from flow insertion, whilst smaller headwater catchments gain particularly from ARMA (AutoRegressive Moving Average) error-prediction.

    An assessment is made of using deterministic rainfall forecasts from NWP UKV - the Numerical Weather Prediction UK Variable Resolution form of the Met Office Unified Model - in a full emulation of G2G in real-time, using foreknowledge of rainfall observations as a reference baseline. Forecast quality can deteriorate strongly beyond 12 hours, especially for smaller catchments, whilst for some locations good performance is maintained even at long lead-times.
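    The report quotes POD, FAR and R² Efficiency without giving their formulas. As a point of reference, the sketch below uses their conventional definitions; the function names and the use of boolean exceedance series are illustrative assumptions, not taken from the report.

        import numpy as np

        def pod_far(obs_exceed, fcst_exceed):
            # Categorical skill scores for flow-threshold exceedance.
            # obs_exceed, fcst_exceed: boolean arrays marking timesteps where
            # the observed / forecast flow exceeds the chosen threshold
            # (e.g. a fractional multiplier of the median annual flood).
            hits = np.sum(obs_exceed & fcst_exceed)           # forecast and observed
            misses = np.sum(obs_exceed & ~fcst_exceed)        # observed, not forecast
            false_alarms = np.sum(~obs_exceed & fcst_exceed)  # forecast, not observed
            pod = hits / (hits + misses)                # Probability of Detection
            far = false_alarms / (hits + false_alarms)  # False Alarm Ratio
            return pod, far

        def r2_efficiency(q_obs, q_sim):
            # R² Efficiency (Nash-Sutcliffe form): fraction of the variability
            # in the observations accounted for by the model; 1 is perfect,
            # values at or below 0 show no more skill than the observed mean.
            q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
            return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)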
    Diagnostic analysis reveals that the UKV rainfall forecasts have patterns of overestimation in some lowland areas (e.g. over London) and leeward of high-elevation areas (e.g. the north and south Pennines). Overall performance is better in Scotland, although there is evidence of UKV overestimating rainfall near the coast at Edinburgh and at Elgin in the north.

    The assessment framework is extended to include rainfall forecast ensembles and probabilistic flood forecasting, using a combination of case-study and longer-term analyses. Blended Ensemble rainfall forecasts are assessed in two forms: forecasts out to 24 hours updated four times a day, and nowcasts out to 7 hours updated every 15 minutes. The 24-hour forecasts generally perform well as input to G2G in the case studies, the G2G flow forecasts typically signalling a flood peak 12 to 18 hours in advance and ahead of any observed response for small catchments. New regional summary map displays of the probability of flow-threshold exceedances over a forecast horizon, and for increasing levels of severity, are developed to highlight evolving hotspots of flood risk over time.

    The first ever continuous assessment of G2G probability flow forecasts is reported, using national maps of probabilistic skill scores - the Relative Operating Characteristic (ROC) Skill Score and the Brier Skill Score (BSS, sketched after this summary) - to assess their performance spatially. The short periods available for assessment - a 7½ month period over England & Wales and 4½ months over Scotland - limit the analyses to low return-period flow thresholds. Half the median (2-year) flood is used, although a regional pooled analysis allows some assessment up to the 5-year level. The G2G probability forecast assessed is the probability of the chosen flow threshold being exceeded at any time over the forecast horizon (taken to be 24 hours). Comparison of these scores when applied to deterministic and probabilistic forecasts from G2G provides strong evidence of the value of G2G ensemble forecasts as an indicator of flood risk over Britain. Noticeably poorer performance indicated by the BSS across Scotland is in part attributed to the short, summer-dominated assessment period.

    Operational tools available to the FFC and the SFFS for using G2G flow ensembles are reviewed, and options for improvement are identified drawing on the experience and findings of the study. This leads to identifying some work of an operational nature for consideration in Phase 3 of the project. The report closes with a summary of project achievements grouped thematically, a set of recommendations both of a general nature and specific to FFC and SFFS needs, and finally some proposals for consideration under Phase 3 of the G2G for Rapid Response Catchments project. Some key benefits arising from the project are summarised below.

    • Evidence has been produced showing that G2G has good skill in providing strategic forecasts for RRCs. The evidence is stratified by catchment type (area, urbanisation, headwater), form of forecast (simulation or forecast mode) and nature of rainfall input (raingauge, deterministic forecast, ensemble forecast).
    • Strong evidence has been presented on the advantage of using an ensemble rainfall forecast as input to G2G to obtain a probabilistic flood forecast for an RRC, relative to an approach where only a single deterministic rainfall and flood forecast is obtained. This indicates that better guidance can be given on forecast flood risk for RRCs, improving the level of service for such catchments, which are currently not well served.
    • An improved G2G model configuration, exploiting gauged flows from 912 sites and including new locally calibrated parameters, has been delivered and made operational for the FFC with England & Wales coverage. The benefit is improved operational flood forecast accuracy. For Scotland, an enhanced configuration will be delivered to the SFFS in Spring 2014.
    • Detailed recommendations on how the visual presentation of G2G ensemble results could be improved are set down in this report. When further developed and implemented, these will benefit the preparation of Flood Guidance Statements issued by the FFC and the SFFS across Britain.
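    The Brier Skill Score used in the probabilistic assessment compares the Brier Score of the forecast exceedance probabilities against that of a reference forecast. A minimal sketch under the standard definitions follows; the climatological reference and all names are assumptions made for illustration, not taken from the report.

        import numpy as np

        def brier_skill_score(p_fcst, obs_exceed, p_ref=None):
            # p_fcst:     forecast probabilities that the flow threshold is
            #             exceeded within the forecast horizon (e.g. 24 hours)
            # obs_exceed: 1 if the threshold was actually exceeded, else 0
            # p_ref:      reference probability; defaults here to the observed
            #             climatological base rate (an assumption)
            p = np.asarray(p_fcst, float)
            o = np.asarray(obs_exceed, float)
            bs = np.mean((p - o) ** 2)          # Brier Score of the forecast
            if p_ref is None:
                p_ref = o.mean()                # climatological base rate
            bs_ref = np.mean((p_ref - o) ** 2)  # Brier Score of the reference
            return 1.0 - bs / bs_ref            # 1 = perfect, 0 = no skill

    A deterministic forecast can be scored with the same function by supplying probabilities of 0 or 1, which is one way of reading the report's comparison of deterministic and probabilistic G2G forecasts.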

    The Intentional Use of Service Recovery Strategies to Influence Consumer Emotion, Cognition and Behaviour

    Service recovery strategies have been identified as a critical factor in the success of service organizations. This study develops a conceptual framework to investigate how specific service recovery strategies influence the emotional, cognitive and negative behavioural responses of consumers, as well as how emotion and cognition influence negative behaviour. Understanding the impact of specific service recovery strategies will allow service providers to engage more deliberately and intentionally in strategies that result in positive organizational outcomes. This study was conducted using a 2 x 2 between-subjects quasi-experimental design. The results suggest that service recovery has a significant impact on emotion, cognition and negative behaviour. Similarly, satisfaction, negative emotion and positive emotion all influence negative behaviour, but distributive justice has no effect.

    Hoelder Inequalities and Isospin Splitting of the Quark Scalar Mesons

    A Hoelder inequality analysis of the QCD Laplace sum-rule which probes the non-strange (n\bar n) components of the I={0,1} (light-quark) scalar mesons supports the methodological consistency of an effective continuum contribution from instanton effects. This revised formulation enhances the magnitude of the instanton contributions which split the degeneracy between the I=0 and I=1 channels. Despite this enhanced isospin-splitting effect, analysis of the Laplace and finite-energy sum-rules seems to preclude identification of a_0(980) and a light broad sigma-resonance state as the lightest isovector and isoscalar spin-zero n\bar n mesons. This apparent decoupling of sigma [\equiv f_0(400-1200)] and a_0(980) from the quark n\bar n scalar currents suggests either a non-q\bar q or a dominantly s\bar s interpretation of these resonances, and further suggests the possible identification of the f_0(980) and a_0(1450) as the lightest I={0,1} scalar mesons containing a substantial n\bar n component.
    Comment: 28 pages, latex2e, 10 embedded eps figures. Analysis extended
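    For context, the Hoelder inequality constrains a Laplace sum-rule because the hadronic spectral function entering it is non-negative; in schematic LaTeX form (notation assumed here rather than drawn from the paper):

        % Laplace sum-rule of a non-negative spectral function \rho(t)
        \mathcal{L}(\tau) = \int_{t_0}^{\infty} e^{-t\tau}\, \rho(t)\, dt ,
        \qquad \rho(t) \ge 0 ,
        % Hoelder inequality (log-convexity in the Borel parameter \tau)
        \mathcal{L}\big(\omega\tau_1 + (1-\omega)\tau_2\big)
          \le \mathcal{L}(\tau_1)^{\omega}\, \mathcal{L}(\tau_2)^{1-\omega} ,
        \qquad 0 \le \omega \le 1 .

    A QCD-side (e.g. instanton-corrected) expression that violated this log-convexity could not match any physical spectral function, which is the sense in which such an analysis tests methodological consistency.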

    Female chromosome X mosaicism is age-related and preferentially affects the inactivated X chromosome

    To investigate large structural clonal mosaicism of chromosome X, we analysed the SNP microarray intensity data of 38,303 women from cancer genome-wide association studies (20,878 cases and 17,425 controls) and detected 124 mosaic X events >2 Mb in 97 (0.25%) women. Here we show that rates of X-chromosome mosaicism are four times higher than mean autosomal rates; X mosaic events more often include the entire chromosome, and participants with X events are more likely to harbour autosomal mosaic events. X mosaicism frequency increases with age (0.11% in 50-year-olds; 0.45% in 75-year-olds), as reported for Y and the autosomes. Methylation array analyses of 33 women with X mosaicism indicate that events preferentially involve the inactive X chromosome. Our results provide further evidence that the sex chromosomes undergo mosaic events more frequently than autosomes, which could have implications for understanding the underlying mechanisms of mosaic events and their possible contribution to risk for chronic diseases.
    Mitchell J. Machiela, Weiyin Zhou, Eric Karlins, Joshua N. Sampson, Neal D. Freedman ... Luis Perez-Jurado ... et al

    Measurement of the tt̄ production cross-section using eΌ events with b-tagged jets in pp collisions at √s = 13 TeV with the ATLAS detector

    This paper describes a measurement of the inclusive top quark pair production cross-section (σtt̄) with a data sample of 3.2 fb−1 of proton–proton collisions at a centre-of-mass energy of √s = 13 TeV, collected in 2015 by the ATLAS detector at the LHC. This measurement uses events with an opposite-charge electron–muon pair in the final state. Jets containing b-quarks are tagged using an algorithm based on track impact parameters and reconstructed secondary vertices. The numbers of events with exactly one and exactly two b-tagged jets are counted and used to determine simultaneously σtt̄ and the efficiency to reconstruct and b-tag a jet from a top quark decay, thereby minimising the associated systematic uncertainties. The cross-section is measured to be: σtt̄ = 818 ± 8 (stat) ± 27 (syst) ± 19 (lumi) ± 12 (beam) pb, where the four uncertainties arise from data statistics, experimental and theoretical systematic effects, the integrated luminosity and the LHC beam energy, giving a total relative uncertainty of 4.4%. The result is consistent with theoretical QCD calculations at next-to-next-to-leading order. A fiducial measurement corresponding to the experimental acceptance of the leptons is also presented
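    The simultaneous determination works because the one-tag and two-tag event counts constrain the cross-section and the b-tagging efficiency together. The sketch below solves the simplified tag-counting equations with backgrounds neglected; the input numbers and the correlation factor C_b are illustrative assumptions, and the paper's actual extraction is more complete.

        def tag_counting(n1, n2, c_b, lumi, eff_emu):
            # Simplified tag-counting equations for e-mu + b-jets events:
            #   N1 = L * sigma * eps_emu * 2 * eps_b * (1 - C_b * eps_b)
            #   N2 = L * sigma * eps_emu * C_b * eps_b**2
            # solved for sigma and the b-tagging efficiency eps_b.
            # C_b is a tagging correlation factor close to 1; backgrounds
            # are neglected here for clarity (an assumption).
            eps_b = 2.0 * n2 / (c_b * (n1 + 2.0 * n2))
            sigma = n2 / (c_b * eps_b ** 2 * lumi * eff_emu)
            return sigma, eps_b

        # Illustrative inputs only (not the paper's event counts):
        sigma, eps_b = tag_counting(n1=10400, n2=6330, c_b=1.0,
                                    lumi=3.2e3, eff_emu=0.008)  # lumi in pb^-1
        print(f"sigma_ttbar ~ {sigma:.0f} pb, eps_b ~ {eps_b:.2f}")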

    Search for TeV-scale gravity signatures in high-mass final states with leptons and jets with the ATLAS detector at √s = 13 TeV

    A search for physics beyond the Standard Model, in final states with at least one high transverse momentum charged lepton (electron or muon) and two additional high transverse momentum leptons or jets, is performed using 3.2 fb−1 of proton–proton collision data recorded by the ATLAS detector at the Large Hadron Collider in 2015 at √s = 13 TeV. The upper end of the distribution of the scalar sum of the transverse momenta of leptons and jets is sensitive to the production of high-mass objects. No excess of events beyond Standard Model predictions is observed. Exclusion limits are set for models of microscopic black holes with two to six extra dimensions
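    The discriminating variable is simply the scalar sum of the transverse momenta of the selected leptons and jets,

        \sum p_T \;=\; \sum_{\text{leptons}} p_T^{\ell} \;+\; \sum_{\text{jets}} p_T^{j} ,

    with heavy states such as microscopic black holes, which decay to many energetic objects, populating the upper tail of this distribution. (The symbol is an assumption here; searches of this kind often denote it H_T or S_T.)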

    The performance of the jet trigger for the ATLAS detector during 2011 data taking

    The performance of the jet trigger for the ATLAS detector at the LHC during the 2011 data-taking period is described. During 2011 the LHC provided proton–proton collisions with a centre-of-mass energy of 7 TeV and heavy-ion collisions with a 2.76 TeV per nucleon–nucleon collision energy. The ATLAS trigger is a three-level system designed to reduce the rate of events from the 40 MHz nominal maximum bunch-crossing rate to the approximately 400 Hz which can be written to offline storage. The ATLAS jet trigger is the primary means for the online selection of events containing jets. Events are accepted by the trigger if they contain one or more jets above some transverse energy threshold. During 2011 data taking the jet trigger was fully efficient for jets with transverse energy above 25 GeV for triggers seeded randomly at Level 1. For triggers which require a jet to be identified at each of the three trigger levels, full efficiency is reached for offline jets with transverse energy above 60 GeV. Jets reconstructed at the final trigger level and corresponding to offline jets with transverse energy greater than 60 GeV are reconstructed with a transverse energy resolution, relative to offline jets, of better than 4% in the central region and better than 2.5% in the forward direction
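    Trigger efficiencies of the kind quoted above ("fully efficient above 60 GeV") are typically presented as a turn-on curve: the fraction of offline jets, binned in transverse energy, that also passed the trigger. A minimal sketch of that bookkeeping follows; the function and variable names are illustrative, not taken from the paper.

        import numpy as np

        def turn_on_curve(offline_et, fired, bins):
            # offline_et: offline jet transverse energies (GeV)
            # fired:      boolean per jet, True if the jet trigger accepted it
            # bins:       bin edges in E_T (GeV)
            # Returns bin centres and per-bin efficiency = passed / total.
            offline_et = np.asarray(offline_et, float)
            fired = np.asarray(fired, bool)
            edges = np.asarray(bins, float)
            total, _ = np.histogram(offline_et, bins=edges)
            passed, _ = np.histogram(offline_et[fired], bins=edges)
            centres = 0.5 * (edges[:-1] + edges[1:])
            eff = np.divide(passed, total,
                            out=np.zeros(total.shape, float), where=total > 0)
            return centres, eff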
