100 research outputs found

    Diesel fuel and Diesel fuel with Water Emulsions Spray and Combustion Characterization

    The legislative demand to simultaneously reduce nitrogen oxide emissions and particulate matter emissions from compression ignition engines is proving difficult to achieve in the real world. One promising strategy is the use of Diesel fuel emulsified with water. There is little work concerning the effect of emulsification on fuel injection sprays. This work details an experimental campaign to characterize non-vaporizing sprays of Diesel fuel and Diesel emulsions with 10% and 20% water by mass. Characterization of the fuel sprays has been carried out optically, using high-speed photography with focused shadowgraphy and a diffused back-lighting technique, and hydraulically, using a force transducer placed 0.5 mm from the injector nozzle to measure spray momentum flux. All measurements have been made in an optically accessible high-pressure chamber filled with nitrogen at ambient gas densities of 22.6 kg/m3 and 34.5 kg/m3, with injection pressures of 500, 700 and 1000 bar. The images collected have been used to determine the spray cone angle, the tip penetration and the tip velocity. The signal from the force transducer has been used to determine spray momentum flux, instantaneous mass flow rate, dimensionless nozzle coefficients and injection velocity. The injection pressure had no discernible influence on the spray cone angle for the Diesel fuel sprays, but it did for the emulsified fuels. Increasing the ambient density increased the spray cone angle for Diesel fuel; this was not always the case for the emulsified fuel sprays. The spray tip emerged from the nozzle and accelerated for a very short period after the start of injection until a maximum velocity was reached. The momentum flux for each fuel was almost the same at corresponding conditions. Increasing the chamber gas density reduced the measured spray momentum. 
The total mass of fuel injected was larger for Diesel fuel than for the emulsions at equivalent conditions and duration, even though the emulsions have a larger density and viscosity. The emulsions had a higher injection velocity, and the nozzle discharge coefficient for Diesel was higher than for the emulsions. The velocities measured hydraulically are much higher than the maximum tip velocities measured optically. The study has been completed by some preliminary combustion studies of the fuels in an optically accessible combustion chamber. The emulsion tested exhibited much lower natural flame luminosity; the spatially integrated natural luminosity of the flame was determined, which may be useful as a soot indicator. There was no evidence that the microexplosion phenomenon was present in these tests. Advanced Engine Research (AER), Basildon, Essex, UK
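The dimensionless nozzle coefficients mentioned above follow from standard spray relations: the effective injection velocity is the ratio of momentum flux to mass flow rate, and the discharge coefficient compares the measured mass flow to the ideal Bernoulli value. A minimal sketch of this reduction (all numerical values below are illustrative, not the study's data):

```python
import math

def spray_coefficients(momentum_flux, mass_flow, nozzle_diam, fuel_density, delta_p):
    """Derive injection velocity and dimensionless nozzle coefficients
    from a measured spray momentum flux [N] and mass flow rate [kg/s]."""
    area = math.pi * nozzle_diam**2 / 4.0                  # geometric orifice area [m^2]
    u_bernoulli = math.sqrt(2.0 * delta_p / fuel_density)  # ideal (lossless) velocity [m/s]
    u_eff = momentum_flux / mass_flow                      # effective injection velocity [m/s]
    cd = mass_flow / (fuel_density * area * u_bernoulli)   # discharge coefficient
    cv = u_eff / u_bernoulli                               # velocity coefficient
    ca = cd / cv                                           # area (contraction) coefficient
    return u_eff, cd, cv, ca

# Illustrative values: 0.1 mm orifice, Diesel at 830 kg/m^3, 1000 bar pressure drop
u_eff, cd, cv, ca = spray_coefficients(
    momentum_flux=1.20, mass_flow=0.00272,
    nozzle_diam=1e-4, fuel_density=830.0, delta_p=1000e5)
```

With these inputs the ideal Bernoulli velocity is roughly 490 m/s, so the derived effective velocity sits below it and all three coefficients come out below unity, as expected for a real nozzle.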

    2022 Joint ESC/EACTS review of the 2018 guideline recommendations on the revascularization of left main coronary artery disease in patients at low surgical risk and anatomy suitable for PCI or CABG

    Task Force structure and summary of clinical evidence of 2022 ESC/EACTS review of the 2018 guideline recommendations on the revascularization of left main coronary artery disease. CABG, coronary artery bypass grafting; PCI, percutaneous coronary intervention; LM, left main; SYNTAX, Synergy Between Percutaneous Coronary Intervention with TAXUS and Cardiac Surgery. a: 'Event' refers to the composite of death, myocardial infarction (according to Universal Definition of Myocardial Infarction if available, otherwise protocol defined) or stroke. In October 2021, the European Society of Cardiology (ESC) and the European Association for Cardio-Thoracic Surgery (EACTS) jointly agreed to establish a Task Force (TF) to review recommendations of the 2018 ESC/EACTS Guidelines on myocardial revascularization as they apply to patients with left main (LM) disease with low-to-intermediate SYNTAX score (0-32). This followed the withdrawal of support by the EACTS in 2019 for the recommendations about the management of LM disease of the previous guideline. The TF was asked to review all new relevant data since the 2018 guidelines including updated aggregated data from the four randomized trials comparing percutaneous coronary intervention (PCI) with drug-eluting stents vs. coronary artery bypass grafting (CABG) in patients with LM disease. This document represents a summary of the work of the TF; suggested updated recommendations for the choice of revascularization modality in patients undergoing myocardial revascularization for LM disease are included. In stable patients with an indication for revascularization for LM disease, with coronary anatomy suitable for both procedures and a low predicted surgical mortality, the TF concludes that both treatment options are clinically reasonable based on patient preference, available expertise, and local operator volumes. The suggested recommendations for revascularization with CABG are Class I, Level of Evidence A. 
The recommendations for PCI are Class IIa, Level of Evidence A. The TF recognized several important gaps in knowledge related to revascularization in patients with LM disease and recognizes that aggregated data from the four randomized trials were still only large enough to exclude large differences in mortality.

    Effects of statin therapy on diagnoses of new-onset diabetes and worsening glycaemia in large-scale randomised blinded statin trials: an individual participant data meta-analysis

    Background Previous meta-analyses of summary data from randomised controlled trials have shown that statin therapy increases the risk of diabetes, but less is known about the size or timing of this effect, or who is at greatest risk. We aimed to address these gaps in knowledge through analysis of individual participant data from large, long-term, randomised, double-blind trials of statin therapy. Methods We conducted a meta-analysis of individual participant data from randomised controlled trials of statin therapy that participated in the CTT Collaboration. All double-blind randomised controlled trials of statin therapy of at least 2 years’ scheduled duration and with at least 1000 participants were eligible for inclusion in this meta-analysis. All recorded diabetes-related adverse events, treatments, and measures of glycaemia were sought from eligible trials. Meta-analyses assessed the effects of allocation to statin therapy on new-onset diabetes (defined by diabetes-related adverse events, use of new glucose-lowering medications, glucose concentrations, or HbA1c values) and on worsening glycaemia in people with diabetes (defined by complications of glucose control, increased use of glucose-lowering medication, or HbA1c increase of ≥0·5%). Standard inverse-variance-weighted meta-analyses of the effects on these outcomes were conducted according to a prespecified protocol. Findings Of the trials participating in the CTT Collaboration, 19 trials compared statin versus placebo (123 940 participants, 25 701 [21%] with diabetes; median follow-up of 4·3 years), and four trials compared more versus less intensive statin therapy (30 724 participants, 5340 [17%] with diabetes, median follow-up of 4·9 years). 
Compared with placebo, allocation to low-intensity or moderate-intensity statin therapy resulted in a 10% proportional increase in new-onset diabetes (2420 of 39 179 participants assigned to receive a statin [1·3% per year] vs 2214 of 39 266 participants assigned to receive placebo [1·2% per year]; rate ratio [RR] 1·10, 95% CI 1·04–1·16), and allocation to high-intensity statin therapy resulted in a 36% proportional increase (1221 of 9935 participants assigned to receive a statin [4·8% per year] vs 905 of 9859 participants assigned to receive placebo [3·5% per year]; 1·36, 1·25–1·48). For each trial, the rate of new-onset diabetes among participants allocated to receive placebo depended mostly on the proportion of participants who had at least one follow-up HbA1c measurement; this proportion was much higher in the high-intensity than the low-intensity or moderate-intensity trials. Consequently, the main determinant of the magnitude of the absolute excesses in the two types of trial was the extent of HbA1c measurement rather than the proportional increase in risk associated with statin therapy. In participants without baseline diabetes, mean glucose increased by 0·04 mmol/L with both low-intensity or moderate-intensity (95% CI 0·03–0·05) and high-intensity statins (0·02–0·06), and mean HbA1c increased by 0·06% (0·00–0·12) with low-intensity or moderate-intensity statins and 0·08% (0·07–0·09) with high-intensity statins. Among those with a baseline measure of glycaemia, approximately 62% of new-onset diabetes cases were among participants who were already in the top quarter of the baseline distribution. The relative effects of statin therapy on new-onset diabetes were similar among different types of participants and over time. Among participants with baseline diabetes, the RRs for worsening glycaemia were 1·10 (1·06–1·14) for low-intensity or moderate-intensity statin therapy and 1·24 (1·06–1·44) for high-intensity statin therapy compared with placebo. 
Interpretation Statins cause a moderate dose-dependent increase in new diagnoses of diabetes that is consistent with a small upwards shift in glycaemia, with the majority of new diagnoses of diabetes occurring in people with baseline glycaemic markers that are close to the diagnostic threshold for diabetes. Importantly, however, any theoretical adverse effects of statins on cardiovascular risk that might arise from these small increases in glycaemia (or, indeed, from any other mechanism) are already accounted for in the overall reduction in cardiovascular risk that is seen with statin therapy in these trials. These findings should further inform clinical guidelines regarding clinical management of people taking statin therapy. Funding British Heart Foundation, UK Medical Research Council, and Australian National Health and Medical Research Council
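The standard inverse-variance-weighted meta-analysis described in the methods can be sketched in a few lines: each trial contributes a log rate ratio weighted by the reciprocal of its variance, and the pooled estimate is exponentiated back to the ratio scale. The trial values below are invented for illustration, not the CTT data:

```python
import math

def inverse_variance_meta(log_rrs, variances):
    """Fixed-effect inverse-variance-weighted pooling of per-trial
    log rate ratios; returns the pooled RR with a 95% CI."""
    weights = [1.0 / v for v in variances]
    pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))          # standard error of the pooled log RR
    rr = math.exp(pooled_log)
    ci = (math.exp(pooled_log - 1.96 * se), math.exp(pooled_log + 1.96 * se))
    return rr, ci

# Three hypothetical trials: RRs of 1.08, 1.12, 1.10 with differing precision
log_rrs = [math.log(1.08), math.log(1.12), math.log(1.10)]
variances = [0.002, 0.004, 0.003]
rr, (lo, hi) = inverse_variance_meta(log_rrs, variances)
```

The more precise trials (smaller variance) pull the pooled estimate towards their own, which is why large trials dominate results such as the RR of 1·10 reported above.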

    Earth observations into action: the systemic integration of earth observation applications into national risk reduction decision structures

    Purpose - As stated in the United Nations Global Assessment Report 2022 Concept Note, decision-makers everywhere need data and statistics that are accurate, timely, sufficiently disaggregated, relevant, accessible and easy to use. The purpose of this paper is to demonstrate scalable and replicable methods to advance and integrate the use of earth observation (EO), specifically ongoing efforts within the Group on Earth Observations (GEO) Work Programme and the Committee on Earth Observation Satellites (CEOS) Work Plan, to support risk-informed decision-making, based on documented national and subnational needs and requirements. Design/methodology/approach - Promotion of open data sharing and geospatial technology solutions at national and subnational scales encourages the accelerated implementation of successful EO applications. These solutions may also be linked to specific Sendai Framework for Disaster Risk Reduction (DRR) 2015–2030 Global Targets that provide trusted answers to risk-oriented decision frameworks, as well as critical synergies between the Sendai Framework and the 2030 Agenda for Sustainable Development. This paper provides examples of these efforts in the form of platforms and knowledge hubs that leverage the latest developments in analysis-ready data and support evidence-based DRR measures. Findings - The climate crisis is forcing countries to face unprecedented frequency and severity of disasters. At the same time, there are growing demands to respond to policy at the national and international level. EOs offer insights and intelligence for evidence-based policy development and decision-making to support key aspects of the Sendai Framework. The GEO DRR Working Group and CEOS Working Group Disasters are ideally placed to help national government agencies, particularly national Sendai focal points, to learn more about EOs and understand their role in supporting DRR. 
Originality/value - The unique perspective of EOs provides unrealized value to decision-makers addressing DRR. This paper highlights tangible methods and practices that leverage free and open source EO insights that can benefit all DRR practitioners

    Review article: Natural hazard risk assessments at the global scale

    Since 1990, natural hazards have led to over 1.6 million fatalities globally, and economic losses are estimated at an average of around $260–310 billion per year. The scientific and policy communities recognise the need to reduce these risks. As a result, the last decade has seen a rapid development of global models for assessing risk from natural hazards at the global scale. In this paper, we review the scientific literature on natural hazard risk assessments at the global scale, and specifically examine whether and how they have examined future projections of hazard, exposure, and/or vulnerability. In doing so, we examine similarities and differences between the approaches taken across the different hazards, and identify potential ways in which different hazard communities can learn from each other. For example, we show that global risk studies focusing on hydrological, climatological, and meteorological hazards have included future projections and disaster risk reduction measures (in the case of floods), whilst these are missing in global studies related to geological hazards. The methods used for projecting future exposure in the former could be applied to the geological studies. On the other hand, studies of earthquake and tsunami risk are now using stochastic modelling approaches to allow for a fully probabilistic assessment of risk, which could benefit the modelling of risk from other hazards. Finally, we discuss opportunities for learning from methods and approaches being developed and applied to assess natural hazard risks at more continental or regional scales. Through this paper, we hope to encourage dialogue on knowledge sharing between scientists and communities working on different hazards and at different spatial scales

    Natural hazard risk assessments at the global scale

    Since 1990, natural hazards have led to over 1.6 million fatalities globally, and economic losses are estimated at an average of around USD 260–310 billion per year. The scientific and policy communities recognise the need to reduce these risks. As a result, the last decade has seen a rapid development of global models for assessing risk from natural hazards at the global scale. In this paper, we review the scientific literature on natural hazard risk assessments at the global scale, and we specifically examine whether and how they have examined future projections of hazard, exposure, and/or vulnerability. In doing so, we examine similarities and differences between the approaches taken across the different hazards, and we identify potential ways in which different hazard communities can learn from each other. For example, there are a number of global risk studies focusing on hydrological, climatological, and meteorological hazards that have included future projections and disaster risk reduction measures (in the case of floods), whereas fewer exist in the peer-reviewed literature for global studies related to geological hazards. On the other hand, studies of earthquake and tsunami risk are now using stochastic modelling approaches to allow for a fully probabilistic assessment of risk, which could benefit the modelling of risk from other hazards. Finally, we discuss opportunities for learning from methods and approaches being developed and applied to assess natural hazard risks at more continental or regional scales. Through this paper, we hope to encourage further dialogue on knowledge sharing between disciplines and communities working on different hazards and risks and at different spatial scales
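The fully probabilistic, stochastic approach mentioned for earthquake and tsunami risk can be illustrated with a toy event-set calculation: simulate many years of hazard occurrences, apply a vulnerability function to the exposed value, and average the losses. Everything below (event rates, intensities, the vulnerability curve, the exposure value) is invented purely for illustration:

```python
import random

def expected_annual_loss(event_rates, event_intensities, exposure_value,
                         vulnerability, n_years=100_000, seed=42):
    """Monte Carlo estimate of expected annual loss from a stochastic
    event set; vulnerability maps hazard intensity -> damage fraction."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_years):
        year_loss = 0.0
        for rate, intensity in zip(event_rates, event_intensities):
            # Occurrence approximated by a per-year Bernoulli draw
            # (reasonable when annual rates are small)
            if rng.random() < rate:
                year_loss += exposure_value * vulnerability(intensity)
        total += year_loss
    return total / n_years

# Toy vulnerability curve: damage fraction grows with intensity, capped at 1
vuln = lambda i: min(1.0, 0.02 * i**2)

eal = expected_annual_loss(
    event_rates=[0.10, 0.01],        # frequent-moderate vs. rare-severe event
    event_intensities=[3.0, 7.0],    # arbitrary intensity units
    exposure_value=1_000_000.0,
    vulnerability=vuln)
```

The analytic expectation here is rate x exposure x damage summed over events (18,000 + 9,800 = 27,800 per year), and the simulation converges to that value; real global models replace each invented ingredient with hazard, exposure, and vulnerability data.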

    Experiences of the Data Monitoring Committee for the RECOVERY trial, a large-scale adaptive platform randomised trial of treatments for patients hospitalised with COVID-19

    Aim: To inform the oversight of future clinical trials during a pandemic, we summarise the experiences of the Data Monitoring Committee (DMC) for the Randomised Evaluation of COVID therapy trial (RECOVERY), a large-scale randomised adaptive platform clinical trial of treatments for hospitalised patients with COVID-19. Methods and findings: During the first 24 months of the trial (March 2020 to February 2022), the DMC oversaw accumulating data for 14 treatments in adults (plus 10 in children) involving > 45,000 randomised patients. Five trial aspects key for the DMC in performing its role were: a large committee of members, including some with extensive DMC experience and others who had broad clinical expertise; clear strategic planning, communication, and responsiveness by the trial principal investigators; data collection and analysis systems able to cope with phases of very rapid recruitment and link to electronic health records; an ability to work constructively with regulators (and other DMCs) to address emerging concerns without the need to release unblinded mortality results; and the use of videoconferencing systems that enabled national and international members to meet at short notice and from home during the pandemic when physical meetings were impossible. Challenges included that the first four treatments introduced were effectively ‘competing’ for patients (increasing pressure to make rapid decisions on each one); balancing the global health imperative to report on findings with the need to maintain confidentiality until the results were sufficiently certain to appropriately inform treatment decisions; and reliably assessing safety, especially for newer agents introduced after the initial wave and in the small numbers of pregnant women and children included. 
We present a series of case vignettes to illustrate some of the issues and the DMC decision-making related to hydroxychloroquine, dexamethasone, casirivimab + imdevimab, and tocilizumab. Conclusions: RECOVERY’s streamlined adaptive platform design, linked to hospital-level and population-level health data, enabled the rapid and reliable assessment of multiple treatments for hospitalised patients with COVID-19. The later introduction of factorial assessments increased the trial’s efficiency, without compromising the DMC’s ability to assess safety and efficacy. Requests for the release of unblinded primary outcome data to regulators at points when data were not mature required significant efforts in communication with the regulators by the DMC to avoid inappropriate early trial termination

    Effects of antiplatelet therapy on stroke risk by brain imaging features of intracerebral haemorrhage and cerebral small vessel diseases: subgroup analyses of the RESTART randomised, open-label trial

    Background Findings from the RESTART trial suggest that starting antiplatelet therapy might reduce the risk of recurrent symptomatic intracerebral haemorrhage compared with avoiding antiplatelet therapy. Brain imaging features of intracerebral haemorrhage and cerebral small vessel diseases (such as cerebral microbleeds) are associated with greater risks of recurrent intracerebral haemorrhage. We did subgroup analyses of the RESTART trial to explore whether these brain imaging features modify the effects of antiplatelet therapy
    • …