
    Appropriate model use for predicting elevations and inundation extent for extreme flood events

    Flood risk assessment is generally studied using flood simulation models; however, flood risk managers often simplify the computational process; this is called a “simplification strategy”. This study investigates the appropriateness of the “simplification strategy” when used as a flood risk assessment tool for areas prone to flash flooding. The 2004 Boscastle, UK, flash flood was selected as a case study. Three different model structures were considered in this study: (1) a shock-capturing model, (2) a regular ADI-type flood model and (3) a diffusion wave model, i.e. a zero-inertia approach. The key findings from this paper strongly suggest that applying the “simplification strategy” is only appropriate for flood simulations with a mild slope and over relatively smooth terrain, whereas in areas susceptible to flash flooding (i.e. steep catchments), following this strategy can lead to significantly erroneous predictions of the main parameters, particularly the peak water levels and the inundation extent. For flood risk assessment of urban areas where flash flooding is possible, it is shown to be necessary to incorporate shock-capturing algorithms in the solution procedure, since these algorithms prevent the formation of spurious oscillations and provide a more realistic simulation of the flood levels.
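    To make concrete what the “simplification strategy” gives up, the sketch below contrasts the full 1D momentum balance with the zero-inertia (diffusion wave) form. The notation (flow depth h, velocity u, bed slope S_0, friction slope S_f) is standard open-channel shorthand assumed here, not quoted from the paper.

        % Full dynamic (Saint-Venant) momentum equation, per unit width:
        \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + g\,\frac{\partial h}{\partial x} = g\,(S_0 - S_f)
        % Zero-inertia (diffusion wave) approximation: drop both acceleration terms,
        % \partial u / \partial t and u\,\partial u / \partial x, leaving only
        \frac{\partial h}{\partial x} = S_0 - S_f

    On steep, rapidly varying flash-flood flows the neglected acceleration terms are no longer small, which is consistent with the erroneous peak water levels and inundation extents the study reports for the simplified models.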

    Diagnosis of obstructive coronary artery disease using computed tomography angiography in patients with stable chest pain depending on clinical probability and in clinically important subgroups: meta-analysis of individual patient data

    OBJECTIVE: To determine whether coronary computed tomography angiography (CTA) should be performed in patients with any clinical probability of coronary artery disease (CAD), and whether the diagnostic performance differs between subgroups of patients. DESIGN: Prospectively designed meta-analysis of individual patient data from prospective diagnostic accuracy studies. DATA SOURCES: Medline, Embase, and Web of Science for published studies. Unpublished studies were identified via direct contact with participating investigators. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: Prospective diagnostic accuracy studies that compared coronary CTA with coronary angiography as the reference standard, using at least a 50% diameter reduction as a cutoff value for obstructive CAD. All patients needed to have a clinical indication for coronary angiography due to suspected CAD, and both tests had to be performed in all patients. Results had to be provided using 2×2 or 3×2 cross tabulations for the comparison of CTA with coronary angiography. Primary outcomes were the positive and negative predictive values of CTA as a function of clinical pretest probability of obstructive CAD, analysed by a generalised linear mixed model; calculations were performed including and excluding non-diagnostic CTA results. The no-treat/treat threshold model was used to determine the range of appropriate pretest probabilities for CTA. The threshold model was based on obtained post-test probabilities of less than 15% in case of negative CTA and above 50% in case of positive CTA. Sex, angina pectoris type, age, and number of computed tomography detector rows were used as clinical variables to analyse the diagnostic performance in relevant subgroups. RESULTS: Individual patient data from 5332 patients in 65 prospective diagnostic accuracy studies were retrieved. For a pretest probability range of 7-67%, the treat threshold of more than 50% and the no-treat threshold of less than 15% post-test probability were obtained using CTA. At a pretest probability of 7%, the positive predictive value of CTA was 50.9% (95% confidence interval 43.3% to 57.7%) and the negative predictive value of CTA was 97.8% (96.4% to 98.7%); corresponding values at a pretest probability of 67% were 82.7% (78.3% to 86.2%) and 85.0% (80.2% to 88.9%), respectively. The overall sensitivity of CTA was 95.2% (92.6% to 96.9%) and the specificity was 79.2% (74.9% to 82.9%). CTA using more than 64 detector rows was associated with higher empirical sensitivity (93.4% v 86.5%, P=0.002) and specificity (84.4% v 72.6%, P<0.001) than CTA using up to 64 rows. The area under the receiver-operating-characteristic curve for CTA was 0.897 (0.889 to 0.906), and the diagnostic performance of CTA was slightly lower in women than in men (area under the curve 0.874 (0.858 to 0.890) v 0.907 (0.897 to 0.916), P<0.001). The diagnostic performance of CTA was slightly lower in patients older than 75 (0.864 (0.834 to 0.894), P=0.018 v all other age groups) and was not significantly influenced by angina pectoris type (typical angina 0.895 (0.873 to 0.917), atypical angina 0.898 (0.884 to 0.913), non-anginal chest pain 0.884 (0.870 to 0.899), other chest discomfort 0.915 (0.897 to 0.934)). CONCLUSIONS: In a no-treat/treat threshold model, the diagnosis of obstructive CAD using coronary CTA in patients with stable chest pain was most accurate when the clinical pretest probability was between 7% and 67%. Performance of CTA was not influenced by angina pectoris type and was slightly higher in men and lower in older patients. SYSTEMATIC REVIEW REGISTRATION: PROSPERO CRD42012002780.
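    The no-treat/treat logic above can be illustrated with a simple Bayes calculation. The Python sketch below is conceptual only: the study fitted a generalised linear mixed model to individual patient data, so its reported predictive values and the 7-67% range are not reproduced by plugging the pooled sensitivity and specificity into this formula.

        # Illustrative only: post-test probabilities from a pretest probability and
        # pooled sensitivity/specificity, plus the no-treat/treat threshold check.
        def post_test_probabilities(pretest, sensitivity, specificity):
            """Probability of obstructive CAD after a positive and after a negative CTA."""
            p_pos = (pretest * sensitivity) / (
                pretest * sensitivity + (1 - pretest) * (1 - specificity))
            p_neg = (pretest * (1 - sensitivity)) / (
                pretest * (1 - sensitivity) + (1 - pretest) * specificity)
            return p_pos, p_neg

        def cta_informative(pretest, sensitivity=0.952, specificity=0.792,
                            no_treat=0.15, treat=0.50):
            """True if a positive CTA crosses the treat threshold and a negative CTA the no-treat threshold."""
            p_pos, p_neg = post_test_probabilities(pretest, sensitivity, specificity)
            return p_pos > treat and p_neg < no_treat

    In the paper's mixed model, the 7-67% pretest range is exactly the interval over which both threshold conditions hold simultaneously.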

    The Hubbard model within the equations of motion approach

    The Hubbard model has a special role in Condensed Matter Theory as it is considered the simplest Hamiltonian model one can write in order to describe the anomalous physical properties of some classes of real materials. Unfortunately, this model has not been exactly solved except in some limits, and therefore one has to resort to analytical methods, like the Equations of Motion Approach, or to numerical techniques in order to attain a description of its relevant features in the whole range of physical parameters (interaction, filling and temperature). In this manuscript, the Composite Operator Method, which exploits the above-mentioned analytical technique, is presented and systematically applied in order to get information about the behavior of all relevant properties of the model (local, thermodynamic, single- and two-particle ones) in comparison with many other analytical techniques, the above-cited known limits and numerical simulations. Within this approach, the Hubbard model is also shown to be capable of describing some anomalous behaviors of the cuprate superconductors. Comment: 232 pages, more than 300 figures, more than 500 references.
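    For reference, the single-band Hubbard Hamiltonian the manuscript analyses is conventionally written as below (standard notation, not quoted from the paper): t is the nearest-neighbour hopping, U the on-site Coulomb repulsion, and μ the chemical potential that fixes the filling.

        H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
            + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
            - \mu \sum_{i,\sigma} n_{i\sigma}

    The "interaction, filling and temperature" mentioned in the abstract correspond to U/t, the electron density set by μ, and T.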

    LSST Science Book, Version 2.0

    A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with a field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen-second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects at both low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy. Comment: 596 pages. Also available at full resolution at http://www.lsst.org/lsst/sciboo
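    A rough consistency check of the quoted cadence is sketched below in Python. The only inputs are the 20,000 deg^2 area, 9.6 deg^2 field of view, 2000 visits per pointing and 15 s exposures stated above; everything else (no field overlaps, no readout or slew overheads, 365 usable nights per year) is an assumption for the back-of-envelope estimate.

        # Back-of-envelope open-shutter budget implied by the abstract's numbers.
        survey_area_deg2 = 20_000
        field_of_view_deg2 = 9.6
        visits_per_pointing = 2_000
        exposure_s = 15

        pointings = survey_area_deg2 / field_of_view_deg2     # ~2,083 non-overlapping pointings
        open_shutter_h = pointings * visits_per_pointing * exposure_s / 3600
        print(f"{pointings:.0f} pointings, {open_shutter_h:,.0f} h open shutter over ten years")
        print(f"~{open_shutter_h / (10 * 365):.1f} h of open-shutter time per night on average")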

    Designing the climate observing system of the future

    © The Author(s), 2018. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Earth's Future 6 (2018): 80–102, doi:10.1002/2017EF000627. Climate observations are needed to address a large range of important societal issues, including sea level rise, droughts, floods, extreme heat events, food security, and freshwater availability in the coming decades. Past, targeted investments in specific climate questions have resulted in tremendous improvements in issues important to human health, security, and infrastructure. However, the current climate observing system was not planned in the comprehensive, focused manner required to adequately address the full range of climate needs. A potential approach to planning the observing system of the future is presented in this article. First, this article proposes that priority be given to the most critical needs as identified within the World Climate Research Programme as Grand Challenges. These currently include seven important topics: melting ice and global consequences; clouds, circulation and climate sensitivity; carbon feedbacks in the climate system; understanding and predicting weather and climate extremes; water for the food baskets of the world; regional sea-level change and coastal impacts; and near-term climate prediction. For each Grand Challenge, observations are needed for long-term monitoring, process studies and forecasting capabilities. Second, objective evaluations of proposed observing systems, including satellites, ground-based and in situ observations as well as potentially new, unidentified observational approaches, can quantify the ability to address these climate priorities. And third, investments in effective climate observations will be economically important, as they will offer a magnified return on investment that justifies a far greater development of observations to serve society's needs.

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg^2 field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ~24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg^2 with δ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320–1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg^2 region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ~ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world. Comment: 57 pages, 32 color figures, version with high-resolution figures available from https://www.lsst.org/overvie
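    The single-visit depth of r ~ 24.5 and the ten-year coadded depth of r ~ 27.5 quoted above can be related by the usual background-limited scaling, in which coadding N comparable visits deepens the limit by 2.5 log10(√N) = 1.25 log10(N) mag. The Python sketch below simply inverts that relation; the √N assumption is an illustration rather than anything claimed in the paper, and the ~800 visits quoted are summed over all six bands rather than r alone.

        # Equivalent number of r-band visits implied by sqrt(N) noise averaging.
        single_visit_r = 24.5   # ~5σ point-source depth of one r-band visit (from the abstract)
        coadd_r = 27.5          # quoted ten-year coadded depth
        gain_mag = coadd_r - single_visit_r          # 3.0 mag of extra depth
        n_equivalent = 10 ** (gain_mag / 1.25)       # ~250 equivalent visits
        print(f"depth gain {gain_mag:.1f} mag -> ~{n_equivalent:.0f} equivalent r-band visits")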

    Parent-of-origin-specific allelic associations among 106 genomic loci for age at menarche.

    Age at menarche is a marker of timing of puberty in females. It varies widely between individuals, is a heritable trait and is associated with risks for obesity, type 2 diabetes, cardiovascular disease, breast cancer and all-cause mortality. Studies of rare human disorders of puberty and animal models point to a complex hypothalamic-pituitary-hormonal regulation, but the mechanisms that determine pubertal timing and underlie its links to disease risk remain unclear. Here, using genome-wide and custom-genotyping arrays in up to 182,416 women of European descent from 57 studies, we found robust evidence (P < 5 × 10^-8) for 123 signals at 106 genomic loci associated with age at menarche. Many loci were associated with other pubertal traits in both sexes, and there was substantial overlap with genes implicated in body mass index and various diseases, including rare disorders of puberty. Menarche signals were enriched in imprinted regions, with three loci (DLK1-WDR25, MKRN3-MAGEL2 and KCNK9) demonstrating parent-of-origin-specific associations concordant with known parental expression patterns. Pathway analyses implicated nuclear hormone receptors, particularly retinoic acid and γ-aminobutyric acid-B2 receptor signalling, among novel mechanisms that regulate pubertal timing in humans. Our findings suggest a genetic architecture involving at least hundreds of common variants in the coordinated timing of the pubertal transition.
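    The P < 5 × 10^-8 cut-off used above is the conventional genome-wide significance level. A minimal sketch of where it comes from, assuming (as is conventional, not stated in the paper) a Bonferroni correction for roughly one million independent common variants:

        # Conventional genome-wide significance threshold as a Bonferroni correction.
        alpha = 0.05                     # target family-wise error rate
        independent_tests = 1_000_000    # conventional estimate of independent common variants
        threshold = alpha / independent_tests
        print(threshold)                 # 5e-08, the cut-off applied to the 123 menarche signals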