72 research outputs found

    Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thereby altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m-3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ∼2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine (<0.063 mm) ash (3-59 %), atmospheric temperature, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
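    The φ sizes quoted above use the standard sedimentological grain-size scale, φ = -log2(d / 1 mm), so the best-fit medians convert directly to millimetres. A minimal check of the conversion (a sketch, not code from the study):

```python
# The phi grain-size scale maps diameter d (mm) to phi = -log2(d), so d = 2**(-phi).
def phi_to_mm(phi: float) -> float:
    """Diameter in millimetres for a grain size given on the phi scale."""
    return 2.0 ** (-phi)

# Check the best-fit median aggregate size range quoted in the abstract:
for phi in (2.3, 2.7):
    print(f"{phi} phi = {phi_to_mm(phi):.2f} mm")  # prints 0.20 mm, then 0.15 mm
```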

    Ice nucleation and overseeding of ice in volcanic clouds

    Water is the dominant component of volcanic gas emissions, and water phase transformations, including the formation of ice, can be significant in the dynamics of volcanic clouds. The effectiveness of volcanic ash particles as ice-forming nuclei (IN) is poorly understood, and the sparse data that exist for volcanic ash IN have been interpreted in the context of meteorological, rather than volcanic, clouds. In this study, single-particle freezing experiments were carried out to investigate the effect of ash particle composition and surface area on water drop freezing temperature. Measured freezing temperatures show only weak correlations with ash IN composition and surface area. Our measurements, together with a review of previous volcanic ash IN measurements, suggest that fine-ash particles (equivalent diameters between approximately 1 and 1000 μm) from the majority of volcanoes will exhibit an onset of freezing between ∼250–260 K. In the context of explosive eruptions where super-micron particles are plentiful, this result implies that volcanic clouds are IN-rich relative to meteorological clouds, which typically are IN-limited, and therefore should exhibit distinct microphysics. We can expect that such “overseeded” volcanic clouds will exhibit enhanced ice crystal concentrations and smaller average ice crystal size, relative to dynamically similar meteorological clouds, and that glaciation will tend to occur over a relatively narrow altitude range.
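    The expectation of smaller average crystal size in an overseeded cloud follows from mass conservation: a fixed condensed ice mass shared among N crystals gives a mean radius proportional to N^(-1/3). A sketch of that scaling with hypothetical numbers (none taken from the study):

```python
import math

RHO_ICE = 917.0  # kg/m^3, density of ice

def mean_crystal_radius(ice_mass_kg: float, n_crystals: float) -> float:
    """Radius (m) of identical spheres sharing a fixed condensed ice mass."""
    volume_each = ice_mass_kg / (RHO_ICE * n_crystals)
    return (3.0 * volume_each / (4.0 * math.pi)) ** (1.0 / 3.0)

# Hypothetical numbers purely for illustration (not from the study):
m = 1e-3  # 1 g of condensed ice per unit volume of cloud
r_limited = mean_crystal_radius(m, 1e6)  # IN-limited meteorological cloud
r_rich = mean_crystal_radius(m, 1e8)     # IN-rich "overseeded" volcanic cloud
print(f"IN-limited: {r_limited * 1e6:.0f} um, IN-rich: {r_rich * 1e6:.0f} um")
```

A hundredfold increase in nuclei shrinks the mean crystal radius by a factor of 100^(1/3) ≈ 4.6, consistent with the qualitative prediction above.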

    The size range of bubbles that produce ash during explosive volcanic eruptions

    Volcanic eruptions can produce ash particles with a range of sizes and morphologies. Here we morphologically distinguish two textural types: simple (generally smaller) ash particles, where the observable surface displays a single measurable bubble because there is at most one vesicle imprint preserved on each facet of the particle; and complex ash particles, which display multiple vesicle imprints on their surfaces for measurement and may contain complete, unfragmented vesicles in their interiors. Digital elevation models from stereo-scanning electron microscopic images of complex ash particles from the 14 October 1974 sub-Plinian eruption of Volcán Fuego, Guatemala and the 18 May 1980 Plinian eruption of Mount St. Helens, Washington, U.S.A. reveal size distributions of bubbles that burst during magma fragmentation. Results were compared between these two well-characterized eruptions of different explosivities and magma compositions and indicate that bubble size distributions (BSDs) are bimodal, suggesting a minimum of two nucleation events during both eruptions. The larger size mode has a much lower bubble number density (BND) than the smaller size mode, yet these few larger bubbles represent the bulk of the total bubble volume. We infer that the larger bubbles reflect an earlier nucleation event (at depth within the conduit) with subsequent diffusive and decompressive bubble growth and possible coalescence during magma ascent, while the smaller bubbles reflect a relatively later nucleation event occurring closer in time to the point of fragmentation. Bubbles in the Mount St. Helens complex ash particles are generally smaller, but have a total number density roughly one order of magnitude higher, compared to the Fuego samples.
Results demonstrate that because ash from explosive eruptions preserves the size of bubbles that nucleated in the magma, grew, and then burst during fragmentation, the analysis of the ash-sized component of tephra can provide insights into the spatial distribution of bubbles in the magma prior to fragmentation, enabling better parameterization of numerical eruption models and improved understanding of ash transport phenomena that result in pyroclastic volcanic hazards. Additionally, the fact that the ash-sized component of tephra preserves BSDs and BNDs consistent with those preserved in larger pyroclasts indicates that these values can be obtained in cases where only distal ash samples from particular eruptions are obtainable.
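    The observation that the sparse large-bubble mode nonetheless dominates total bubble volume follows from volume scaling with the cube of diameter. A sketch with hypothetical mode values (not measurements from the paper):

```python
import math

def total_bubble_volume(number_density: float, diameter_um: float) -> float:
    """Total volume (um^3 per unit sample volume) of a monodisperse bubble mode."""
    return number_density * (math.pi / 6.0) * diameter_um ** 3

# Hypothetical bimodal BSD, for illustration only (values not from the paper):
n_small, d_small = 1e5, 5.0   # many small bubbles, 5 um diameter
n_large, d_large = 1e3, 50.0  # 100x fewer large bubbles, 50 um diameter

v_small = total_bubble_volume(n_small, d_small)
v_large = total_bubble_volume(n_large, d_large)
print(f"large-mode share of total volume: {v_large / (v_small + v_large):.0%}")
```

With these numbers the large mode has a 100× lower number density yet holds 10× the volume of the small mode, because a tenfold increase in diameter raises per-bubble volume a thousandfold.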

    Hampton Roads Intergovernmental Pilot Project: Memo and Legal Primer

    The Hampton Roads area is experiencing the highest rates of sea-level rise along the U.S. East Coast. It is second only to New Orleans, Louisiana, as the largest population center at risk from sea level rise in the country, and Virginia is anticipated to experience between 2.3 and 5.2 feet of sea level rise by the end of the century. This unprecedented challenge requires a comprehensive and effective planning response. The mission of the Hampton Roads Sea Level Rise Pilot Project (“Pilot Project”) is to develop a regional whole-of-government and whole-of-community approach to sea level rise preparedness and resilience planning for the Hampton Roads community. This is a two-year project with the goal of establishing arrangements and procedures that can effectively coordinate the sea level rise preparedness and resilience planning of federal, state, and local government agencies, citizens groups, and the private sector. Ideally, this Pilot Project will generate a template for use by other regions of the United States working with similar issues of sea level rise preparedness, and this Legal Primer is an important part of that effort. It provides an overview of the myriad legal and policy concerns that the Pilot Project will face in developing practical and whole-of-government solutions. This abstract has been taken from the authors' executive summary.

    The mysteries of mammatus clouds: Observations and formation mechanisms

    Mammatus clouds are an intriguing enigma of atmospheric fluid dynamics and cloud physics. Most commonly observed on the underside of cumulonimbus anvils, mammatus also occur on the underside of cirrus, cirrocumulus, altocumulus, altostratus, and stratocumulus, as well as in contrails from jet aircraft and pyrocumulus ash clouds from volcanic eruptions. Despite their aesthetic appearance, mammatus have been the subject of few quantitative research studies. Observations of mammatus have been obtained largely through serendipitous opportunities with a single observing system (e.g., aircraft penetrations, visual observations, lidar, radar) or tangential observations from field programs with other objectives. Theories describing mammatus remain untested, as adequate measurements for validation do not exist because of the small distance scales and short time scales of mammatus. Modeling studies of mammatus are virtually nonexistent. As a result, relatively little is known about the environment, formation mechanisms, properties, microphysics, and dynamics of mammatus. This paper presents a review of mammatus clouds that addresses these mysteries. Previous observations of mammatus and proposed formation mechanisms are discussed. These hypothesized mechanisms are anvil subsidence, subcloud evaporation/sublimation, melting, hydrometeor fallout, cloud-base detrainment instability, radiative effects, gravity waves, Kelvin-Helmholtz instability, Rayleigh-Taylor instability, and Rayleigh-Bénard-like convection. Other issues addressed in this paper include whether mammatus are composed of ice or liquid water hydrometeors, why mammatus are smooth, what controls the temporal and spatial scales and organization of individual mammatus lobes, and what properties of volcanic ash clouds produce mammatus. The similarities and differences between mammatus, virga, stalactites, and reticular clouds are also discussed.
Finally, because much still remains to be learned, research opportunities are described for using mammatus as a window into the microphysical, turbulent, and dynamical processes occurring on the underside of clouds. © 2006 American Meteorological Society

    Circulating microRNAs in sera correlate with soluble biomarkers of immune activation but do not predict mortality in ART treated individuals with HIV-1 infection: A case control study

    Introduction: The use of anti-retroviral therapy (ART) has dramatically reduced HIV-1 associated morbidity and mortality. However, HIV-1 infected individuals have increased rates of morbidity and mortality compared to the non-HIV-1 infected population, and this appears to be related to end-organ diseases collectively referred to as Serious Non-AIDS Events (SNAEs). Circulating miRNAs are reported as promising biomarkers for a number of human disease conditions, including those that constitute SNAEs. Our study sought to investigate the potential of selected miRNAs in predicting mortality in HIV-1 infected ART-treated individuals. Materials and Methods: A set of miRNAs was chosen based on published associations with human disease conditions that constitute SNAEs. This case-control study compared 126 cases (individuals who died whilst on therapy) and 247 matched controls (individuals who remained alive). Cases and controls were ART-treated participants of two pivotal HIV-1 trials. The relative abundance of each miRNA in serum was measured by RT-qPCR. Associations with mortality (all-cause, cardiovascular and malignancy) were assessed by logistic regression analysis. Correlations between miRNAs and CD4+ T cell count, hs-CRP, IL-6 and D-dimer were also assessed. Results: None of the selected miRNAs was associated with all-cause, cardiovascular or malignancy mortality. The levels of three miRNAs (miR-21, miR-122 and miR-200a) correlated with IL-6, while miR-21 also correlated with D-dimer. Additionally, the abundance of miR-31, miR-150 and miR-223 correlated with baseline CD4+ T cell count, while the same three miRNAs plus miR-145 correlated with nadir CD4+ T cell count. Discussion: No associations with mortality were found with any circulating miRNA studied. These results cast doubt on the effectiveness of circulating miRNAs as early predictors of mortality or of the major underlying diseases that contribute to mortality in participants treated for HIV-1 infection.

    Global patient outcomes after elective surgery: prospective cohort study in 27 low-, middle- and high-income countries.

    BACKGROUND: As global initiatives increase patient access to surgical treatments, there remains a need to understand the adverse effects of surgery and define appropriate levels of perioperative care. METHODS: We designed a prospective international 7-day cohort study of outcomes following elective adult inpatient surgery in 27 countries. The primary outcome was in-hospital complications. Secondary outcomes were death following a complication (failure to rescue) and death in hospital. Process measures were admission to critical care immediately after surgery or to treat a complication and duration of hospital stay. A single definition of critical care was used for all countries. RESULTS: A total of 474 hospitals in 19 high-, 7 middle- and 1 low-income country were included in the primary analysis. Data included 44 814 patients with a median hospital stay of 4 (range 2-7) days. A total of 7508 patients (16.8%) developed one or more postoperative complication and 207 died (0.5%). The overall mortality among patients who developed complications was 2.8%. Mortality following complications ranged from 2.4% for pulmonary embolism to 43.9% for cardiac arrest. A total of 4360 (9.7%) patients were admitted to a critical care unit as routine immediately after surgery, of whom 2198 (50.4%) developed a complication, with 105 (2.4%) deaths. A total of 1233 patients (16.4%) were admitted to a critical care unit to treat complications, with 119 (9.7%) deaths. Despite lower baseline risk, outcomes were similar in low- and middle-income compared with high-income countries. CONCLUSIONS: Poor patient outcomes are common after inpatient surgery. Global initiatives to increase access to surgical treatments should also address the need for safe perioperative care. STUDY REGISTRATION: ISRCTN5181700

    Iron Behaving Badly: Inappropriate Iron Chelation as a Major Contributor to the Aetiology of Vascular and Other Progressive Inflammatory and Degenerative Diseases

    The production of peroxide and superoxide is an inevitable consequence of aerobic metabolism, and while these particular "reactive oxygen species" (ROSs) can exhibit a number of biological effects, they are not of themselves excessively reactive and thus they are not especially damaging at physiological concentrations. However, their reactions with poorly liganded iron species can lead to the catalytic production of the very reactive and dangerous hydroxyl radical, which is exceptionally damaging, and a major cause of chronic inflammation. We review the considerable and wide-ranging evidence for the involvement of this combination of (su)peroxide and poorly liganded iron in a large number of physiological and indeed pathological processes and inflammatory disorders, especially those involving the progressive degradation of cellular and organismal performance. These diseases share a great many similarities and thus might be considered to have a common cause (i.e. iron-catalysed free radical and especially hydroxyl radical generation). The studies reviewed include those focused on a series of cardiovascular, metabolic and neurological diseases, where iron can be found at the sites of plaques and lesions, as well as studies showing the significance of iron to aging and longevity. The effective chelation of iron by natural or synthetic ligands is thus of major physiological (and potentially therapeutic) importance. As systems properties, we need to recognise that physiological observables have multiple molecular causes, and studying them in isolation leads to inconsistent patterns of apparent causality when it is the simultaneous combination of multiple factors that is responsible. This explains, for instance, the decidedly mixed effects of antioxidants that have been observed. Comment: 159 pages, including 9 figures and 2184 references.

    Development and Validation of a Risk Score for Chronic Kidney Disease in HIV Infection Using Prospective Cohort Data from the D:A:D Study

    Ristola M. is a member of the working groups D:A:D Study Group, Royal Free Hospital Clinic Cohort, INSIGHT Study Group, SMART Study Group, and ESPRIT Study Group. Background: Chronic kidney disease (CKD) is a major health issue for HIV-positive individuals, associated with increased morbidity and mortality. Development and implementation of a risk score model for CKD would allow comparison of the risks and benefits of adding potentially nephrotoxic antiretrovirals to a treatment regimen and would identify those at greatest risk of CKD. The aims of this study were to develop a simple, externally validated, and widely applicable long-term risk score model for CKD in HIV-positive individuals that can guide decision making in clinical practice. Methods and Findings: A total of 17,954 HIV-positive individuals from the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study with >= 3 estimated glomerular filtration rate (eGFR) values after 1 January 2004 were included. Baseline was defined as the first eGFR > 60 ml/min/1.73 m2 after 1 January 2004; individuals with exposure to tenofovir, atazanavir, atazanavir/ritonavir, lopinavir/ritonavir, or other boosted protease inhibitors before baseline were excluded. CKD was defined as confirmed (>3 mo apart) eGFR <= 60 ml/min/1.73 m2. In the D:A:D study, 641 individuals developed CKD during 103,185 person-years of follow-up (PYFU; incidence 6.2/1,000 PYFU, 95% CI 5.7-6.7; median follow-up 6.1 y, range 0.3-9.1 y). Older age, intravenous drug use, hepatitis C coinfection, lower baseline eGFR, female gender, lower CD4 count nadir, hypertension, diabetes, and cardiovascular disease (CVD) predicted CKD. The adjusted incidence rate ratios of these nine categorical variables were scaled and summed to create the risk score. The median risk score at baseline was -2 (interquartile range -4 to 2). There was a 1:393 chance of developing CKD in the next 5 y in the low risk group, with progressively higher chances in the medium and high risk groups (risk score >= 5, 505 events).
Number needed to harm (NNTH) at 5 y when starting unboosted atazanavir or lopinavir/ritonavir among those with a low risk score was 1,702 (95% CI 1,166-3,367); NNTH was 202 (95% CI 159-278) and 21 (95% CI 19-23), respectively, for those with a medium and high risk score. NNTH was 739 (95% CI 506-1,462), 88 (95% CI 69-121), and 9 (95% CI 8-10) for those with a low, medium, and high risk score, respectively, starting tenofovir, atazanavir/ritonavir, or another boosted protease inhibitor. The Royal Free Hospital Clinic Cohort included 2,548 individuals, of whom 94 developed CKD (3.7%) during 18,376 PYFU (median follow-up 7.4 y, range 0.3-12.7 y). Of 2,013 individuals included from the SMART/ESPRIT control arms, 32 developed CKD (1.6%) during 8,452 PYFU (median follow-up 4.1 y, range 0.6-8.1 y). External validation showed that the risk score predicted well in these cohorts. Limitations of this study included limited data on race and no information on proteinuria. Conclusions: Both traditional and HIV-related risk factors were predictive of CKD. These factors were used to develop an externally validated risk score for CKD in HIV infection that has direct clinical relevance, allowing patients and clinicians to weigh the benefits of certain antiretrovirals against the risk of CKD and to identify those at greatest risk of CKD. Peer reviewed.
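    The NNTH values above are, by definition, the reciprocal of the absolute increase in 5-y CKD risk when a drug is started. A minimal illustration of that arithmetic with hypothetical risks (not the study's values):

```python
def number_needed_to_harm(risk_exposed: float, risk_unexposed: float) -> float:
    """Number needed to harm: reciprocal of the absolute risk increase."""
    return 1.0 / (risk_exposed - risk_unexposed)

# Hypothetical 5-y CKD risks, for illustration only: 1.2% on the drug vs 0.2% off it,
# i.e. one extra CKD case for every 100 patients started on the drug.
print(round(number_needed_to_harm(0.012, 0.002)))  # prints 100
```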

    Effectiveness of a national quality improvement programme to improve survival after emergency abdominal surgery (EPOCH): a stepped-wedge cluster-randomised trial

    Background: Emergency abdominal surgery is associated with poor patient outcomes. We studied the effectiveness of a national quality improvement (QI) programme to implement a care pathway to improve survival for these patients. Methods: We did a stepped-wedge cluster-randomised trial of patients aged 40 years or older undergoing emergency open major abdominal surgery. Eligible UK National Health Service (NHS) hospitals (those that had an emergency general surgical service, a substantial volume of emergency abdominal surgery cases, and contributed data to the National Emergency Laparotomy Audit) were organised into 15 geographical clusters and commenced the QI programme in a random order, based on a computer-generated random sequence, over an 85-week period with one geographical cluster commencing the intervention every 5 weeks from the second to the 16th time period. Patients were masked to the study group, but it was not possible to mask hospital staff or investigators. The primary outcome measure was mortality within 90 days of surgery. Analyses were done on an intention-to-treat basis. This study is registered with the ISRCTN registry, number ISRCTN80682973. Findings: Treatment took place between March 3, 2014, and Oct 19, 2015. 22 754 patients were assessed for eligibility. Of 15 873 eligible patients from 93 NHS hospitals, primary outcome data were analysed for 8482 patients in the usual care group and 7374 in the QI group. Eight patients in the usual care group and nine patients in the QI group were not included in the analysis because of missing primary outcome data. The primary outcome of 90-day mortality occurred in 1210 (16%) patients in the QI group compared with 1393 (16%) patients in the usual care group (HR 1·11, 95% CI 0·96–1·28). Interpretation: No survival benefit was observed from this QI programme to implement a care pathway for patients undergoing emergency abdominal surgery.
Future QI programmes should ensure that teams have both the time and resources needed to improve patient care. Funding: National Institute for Health Research Health Services and Delivery Research Programme.
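    The stepped-wedge allocation described above (15 geographical clusters activated in random order, one crossing from usual care to the intervention every 5 weeks) can be sketched as follows; the seed and week numbering are illustrative assumptions, not trial details:

```python
import random

def stepped_wedge_schedule(n_clusters: int = 15, step_weeks: int = 5,
                           first_step_week: int = 5, seed: int = 0) -> dict:
    """Assign each cluster a randomly ordered crossover week.

    Cluster order is randomised; the k-th cluster in the shuffled order
    crosses to the intervention at week first_step_week + k * step_weeks,
    so every cluster eventually receives the intervention at a distinct step.
    """
    rng = random.Random(seed)  # fixed seed stands in for the trial's sequence
    clusters = list(range(1, n_clusters + 1))
    rng.shuffle(clusters)
    return {c: first_step_week + i * step_weeks for i, c in enumerate(clusters)}

schedule = stepped_wedge_schedule()
# All 15 clusters cross over, one every 5 weeks, at weeks 5, 10, ..., 75:
assert sorted(schedule.values()) == list(range(5, 80, 5))
```

This structure is what lets each cluster serve as its own control: outcomes before and after its crossover week are compared while calendar-time effects are shared across clusters.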