
    Characterisation and airborne deployment of a new counterflow virtual impactor inlet

    A new counterflow virtual impactor (CVI) inlet is introduced with details of its design, laboratory characterisation tests and deployment on an aircraft during the 2011 Eastern Pacific Emitted Aerosol Cloud Experiment (E-PEACE). The CVI inlet addresses three key issues in previous designs; in particular, the inlet operates with: (i) negligible organic contamination; (ii) a significant sample flow rate to downstream instruments (∼15 l min^(−1)) that reduces the need for dilution; and (iii) a high level of accessibility to the probe interior for cleaning. Wind tunnel experiments characterised the cut size of sampled droplets and the particle size-dependent transmission efficiency in various parts of the probe. For a range of counterflow rates and air velocities, the measured cut size was between 8.7 and 13.1 μm. The mean percentage error between cut size measurements and predictions from aerodynamic drag theory is 1.7%. The CVI was deployed on the Center for Interdisciplinary Remotely Piloted Aircraft Studies (CIRPAS) Twin Otter for thirty flights during E-PEACE to study aerosol–cloud–radiation interactions off the central coast of California in July and August 2011. Results are reported to assess the performance of the inlet, including comparisons of particle number concentration downstream of the CVI with cloud drop number concentration measured by two independent aircraft probes. Measurements downstream of the CVI are also examined for one representative case flight, coordinated with shipboard-emitted smoke that was intercepted in cloud by the Twin Otter.
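    The agreement metric quoted above (mean percentage error between measured cut sizes and drag-theory predictions) can be sketched in a few lines; the droplet sizes below are hypothetical illustrations, not values from the study:

```python
# Hypothetical measured vs. drag-theory-predicted cut sizes (micrometres);
# the campaign's measured cut sizes spanned roughly 8.7-13.1 um.
measured = [8.7, 10.2, 11.5, 13.1]
predicted = [8.9, 10.0, 11.7, 12.9]

# Mean percentage error: average of |measured - predicted| / measured.
errors = [abs(m - p) / m * 100.0 for m, p in zip(measured, predicted)]
mean_pct_error = sum(errors) / len(errors)
print(f"mean percentage error: {mean_pct_error:.1f}%")
```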

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse patient outcomes in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by surgeons of multiple grades to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be used in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
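    The AUROC values quoted above can be computed as a Mann–Whitney rank statistic. A minimal sketch with hypothetical grade/outcome pairs (not study data):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case has a higher
    score than a randomly chosen negative case (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical difficulty grades (1-5) and conversion-to-open outcomes.
grades  = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
convert = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1]
print(f"AUROC = {auroc(grades, convert):.3f}")
```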

    Evaluating WRF-GC v2.0 predictions of boundary layer height and vertical ozone profile during the 2021 TRACER-AQ campaign in Houston, Texas

    The TRacking Aerosol Convection ExpeRiment – Air Quality (TRACER-AQ) campaign probed Houston air quality with a comprehensive suite of ground-based and airborne remote sensing measurements during the intensive operating period in September 2021. Two post-frontal high-ozone episodes (6–11 and 23–26 September) were recorded during this period. In this study, we evaluated the simulation of the planetary boundary layer (PBL) height and the vertical ozone profile by a high-resolution (1.33 km) 3-D photochemical model, the Weather Research and Forecasting (WRF)-driven GEOS-Chem (WRF-GC). We evaluated the PBL heights against a ceilometer at the coastal site La Porte and the airborne High Spectral Resolution Lidar 2 (HSRL-2) flying over urban Houston and adjacent waters. Compared with the ceilometer at La Porte, the model captures the diurnal variations in the PBL heights with a very strong temporal correlation (R > 0.7) and ±20 % biases. Compared with the airborne HSRL-2, the model exhibits a moderate to strong spatial correlation (R = 0.26–0.68), with ±20 % biases during the noon and afternoon hours of the ozone episodes. PBL heights are shallower over water than over land. The model predicts larger land–water differences than the observations because it consistently underestimates the PBL heights over land relative to water. We evaluated vertical ozone distributions by comparing the model against vertical measurements from the TROPospheric OZone lidar (TROPOZ), the HSRL-2, and ozonesondes, as well as surface measurements at La Porte from a model 49i ozone analyzer and one Continuous Ambient Monitoring Station (CAMS). The model underestimates free-tropospheric ozone (2–3 km aloft) by 9 %–22 % but overestimates near-ground ozone (< 50 m aloft) by 6 %–39 % during the two ozone episodes. Boundary layer ozone (0.5–1 km aloft) is underestimated by 1 %–11 % during 8–11 September but overestimated by 0 %–7 % during 23–26 September. Based on these evaluations, we identified two model limitations, namely the single-layer PBL representation and the free-tropospheric ozone underestimation. These limitations have implications for how well ozone's vertical mixing and distribution can be predicted in other models.
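    The correlation and bias metrics used in evaluations of this kind can be sketched as follows; the PBL heights below are hypothetical illustrations, not campaign data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_bias_pct(model, obs):
    """Mean bias of the model relative to observations, in percent."""
    return 100.0 * sum(m - o for m, o in zip(model, obs)) / sum(obs)

# Hypothetical hourly PBL heights (m): ceilometer vs. model.
observed = [400, 700, 1100, 1500, 1700, 1600, 1200]
modelled = [350, 650, 1000, 1450, 1800, 1500, 1100]
print(f"R = {pearson_r(modelled, observed):.2f}, "
      f"bias = {mean_bias_pct(modelled, observed):+.1f}%")
```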

    Reconciling Assumptions in Bottom-Up and Top-Down Approaches for Estimating Aerosol Emission Rates From Wildland Fires Using Observations From FIREX-AQ

    Accurate fire emissions inventories are crucial to predict the impacts of wildland fires on air quality and atmospheric composition. Two traditional approaches are widely used to calculate fire emissions: a satellite-based top-down approach and a fuels-based bottom-up approach. However, these methods often considerably disagree on the amount of particulate mass emitted from fires. Previously available observational datasets tended to be sparse and lacked the statistics needed to resolve these methodological discrepancies. Here, we leverage the extensive and comprehensive airborne in situ and remote sensing measurements of smoke plumes from the recent Fire Influence on Regional to Global Environments and Air Quality (FIREX-AQ) campaign to statistically assess the skill of the two traditional approaches. We use detailed campaign observations to calculate and compare emission rates at exceptionally high resolution using three separate approaches: top-down, bottom-up, and a novel approach based entirely on integrated airborne in situ measurements. We then compute the daily average of these high-resolution estimates and compare it with estimates from lower-resolution, global top-down and bottom-up inventories. We uncover strong, linear relationships between all of the high-resolution emission rate estimates in aggregate; however, no single approach captures the emission characteristics of every fire. Global inventory emission rate estimates exhibited weaker correlations with the high-resolution approaches and displayed evidence of systematic bias. The disparity between the low-resolution global inventories and the high-resolution approaches is likely caused by high levels of uncertainty in essential variables used in bottom-up inventories and by imperfect assumptions in top-down inventories.
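    A strong, linear relationship between two emission rate estimates can be quantified with an ordinary least-squares fit; a minimal sketch with hypothetical per-fire rates (not FIREX-AQ data):

```python
def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ slope*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical per-fire aerosol emission rates (kg/s) from two approaches;
# a slope near 1 with a small intercept indicates close agreement.
bottom_up = [0.5, 1.2, 2.0, 3.1, 4.4]
top_down  = [0.6, 1.0, 2.3, 2.9, 4.8]
slope, intercept = ols_fit(bottom_up, top_down)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```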

    Atmospheric oxidation in the presence of clouds during the Deep Convective Clouds and Chemistry (DC3) study

    Deep convective clouds are critically important to the distribution of atmospheric constituents throughout the troposphere but are difficult environments to study. The Deep Convective Clouds and Chemistry (DC3) study in 2012 provided the environment, platforms, and instrumentation to test oxidation chemistry around deep convective clouds and their impacts downwind. Measurements on the NASA DC-8 aircraft included those of the radicals hydroxyl (OH) and hydroperoxyl (HO2), OH reactivity, and more than 100 other chemical species and atmospheric properties. OH, HO2, and OH reactivity were compared to photochemical models, some with and some without simplified heterogeneous chemistry, to test the understanding of atmospheric oxidation as encoded in the model. In general, the agreement between the observed and modeled OH, HO2, and OH reactivity was within the combined uncertainties for the model without heterogeneous chemistry and the model including heterogeneous chemistry with small OH and HO2 uptake consistent with laboratory studies. This agreement is generally independent of the altitude, ozone photolysis rate, nitric oxide and ozone abundances, modeled OH reactivity, and aerosol and ice surface area. For a sunrise to midday flight downwind of a nighttime mesoscale convective system, the observed ozone increase is consistent with the calculated ozone production rate. Even with some observed-to-modeled discrepancies, these results provide evidence that a current measurement-constrained photochemical model can simulate observed atmospheric oxidation processes to within combined uncertainties, even around convective clouds. For this DC3 study, reduction in the combined uncertainties would be needed to confidently unmask errors or omissions in the model chemical mechanism.
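    "Agreement within combined uncertainties" amounts to checking whether the observed-to-modeled difference is smaller than the root-sum-square of the two uncertainties. A minimal sketch, with hypothetical concentrations and assumed (illustrative) uncertainty fractions:

```python
import math

def agree_within_uncertainty(obs, mod, u_obs_frac, u_mod_frac):
    """True if |obs - mod| is within the combined (root-sum-square)
    1-sigma uncertainty of the observation and the model."""
    combined = math.hypot(u_obs_frac * obs, u_mod_frac * mod)
    return abs(obs - mod) <= combined

# Hypothetical OH concentrations (molecules cm^-3) with assumed 32%
# measurement and 20% model uncertainties -- illustrative values only.
print(agree_within_uncertainty(4.0e6, 3.2e6, 0.32, 0.20))
```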

    Simple Nudges for Better Password Creation

    Recent security breaches have highlighted the consequences of reusing passwords across online accounts. Recent guidance on password policies by the UK government recommends an emphasis on password length over an extended character set for generating secure but memorable passwords without cognitive overload. This paper explores the role of three nudges in creating website-specific passwords: financial incentive (present vs absent), length instruction (long password vs no instruction) and stimulus (picture present vs absent). Mechanical Turk workers were asked to create a password under one of these conditions, and the resulting passwords were evaluated on character length, resistance to automated guessing attacks, and time taken to create the password. We found that users created longer passwords when asked to do so or when given a financial incentive, and these longer passwords were harder to guess than passwords created with no instruction. Using a picture nudge to support password creation did not lead to passwords that were either longer or more resistant to attacks, but did lead to account-specific passwords.
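    The preference for length over an extended character set can be illustrated with a back-of-the-envelope entropy estimate (assuming uniformly random passwords, which real user-chosen passwords are not):

```python
import math

def guessing_entropy_bits(charset_size, length):
    """Upper-bound entropy (bits) of a uniformly random password
    drawn from `charset_size` symbols at the given length."""
    return length * math.log2(charset_size)

# A 16-character lowercase passphrase vs. an 8-character password
# drawn from ~95 printable ASCII symbols: length wins.
long_simple   = guessing_entropy_bits(26, 16)
short_complex = guessing_entropy_bits(95, 8)
print(f"{long_simple:.0f} bits vs {short_complex:.0f} bits")
```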

    The Enterovirus 71 A-particle Forms a Gateway to Allow Genome Release: A CryoEM Study of Picornavirus Uncoating

    Since its discovery in 1969, enterovirus 71 (EV71) has emerged as a serious worldwide health threat. This human pathogen of the picornavirus family causes hand, foot, and mouth disease, and also has the capacity to invade the central nervous system to cause severe disease and death. Upon binding to a host receptor on the cell surface, the virus begins a two-step uncoating process, first forming an expanded, altered "A-particle", which is primed for genome release. In a second step after endocytosis, an unknown trigger leads to RNA expulsion, generating an intact, empty capsid. Cryo-electron microscopy reconstructions of these two capsid states provide insight into the mechanics of genome release. The EV71 A-particle capsid interacts with the genome near the icosahedral two-fold axis of symmetry, which opens to the external environment via a channel ~10 Å in diameter that is lined with patches of negatively charged residues. After the EV71 genome has been released, the two-fold channel shrinks, though the overall capsid dimensions are conserved. These structural characteristics identify the two-fold channel as the site where a gateway forms and regulates the process of genome release. © 2013 Shingler et al.

    Overview and statistical analysis of boundary layer clouds and precipitation over the western North Atlantic Ocean

    Due to their fast evolution and large natural variability in macro- and microphysical properties, the accurate representation of boundary layer clouds in current climate models remains a challenge. One of the regions with large intermodel spread in the Coupled Model Intercomparison Project Phase 6 ensemble is the western North Atlantic Ocean. Here, statistically representative in situ measurements can help to develop and constrain the parameterization of clouds in global models. To this end, we performed comprehensive measurements of boundary layer clouds, aerosol, trace gases, and radiation over the western North Atlantic Ocean during the NASA Aerosol Cloud meTeorology Interactions oVer the western ATlantic Experiment (ACTIVATE) mission. In total, 174 research flights with 574 flight hours of cloud and precipitation measurements were performed with the HU-25 Falcon during three winter (February–March 2020, January–April 2021, and November 2021–March 2022) and three summer seasons (August–September 2020, May–June 2021, and May–June 2022). Here we present a statistical evaluation of 16 140 individual cloud events probed by the fast cloud droplet probe and the two-dimensional stereo cloud probe during 155 research flights flown in a representative and repetitive strategy that allows for robust statistical data analysis. We show that the liquid water content and the cloud droplet effective diameter (ED) increase with altitude in the marine boundary layer. Due to higher updraft speeds, higher cloud droplet number concentrations (Nliquid) were measured in winter than in summer, despite lower cloud condensation nucleus abundance. Flight cloud cover derived from statistical analysis of the in situ data is reduced in summer and shows large variability. This seasonal contrast in cloud coverage is consistent with the winter dominance of a synoptic pattern that favors the formation of stratiform clouds at the western edge of cyclones (post-cyclonic). In contrast, a dominant summer anticyclone is concomitant with the occurrence of shallow cumulus clouds and lower cloud coverage. The evaluation of boundary layer clouds and precipitation in the Nliquid–ED phase space sheds light on liquid, mixed-phase, and ice cloud properties and helps to categorize the cloud data. Ice and liquid precipitation, often masked in cloud statistics by the high abundance of liquid clouds, is frequently observed throughout the cloud. The ACTIVATE in situ cloud measurements provide a wealth of cloud information useful for assessing airborne and satellite remote-sensing products, for global climate and weather model evaluations, and for dedicated process studies that address precipitation and aerosol–cloud interactions.
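    A statistical evaluation of this kind typically bins individual cloud samples by altitude before profiling; a minimal sketch with hypothetical (altitude, effective diameter) samples, not ACTIVATE data:

```python
from collections import defaultdict

def binned_mean(samples, bin_width_m=200):
    """Group (altitude_m, value) samples into altitude bins and
    return the mean value per bin, keyed by bin floor, sorted."""
    bins = defaultdict(list)
    for alt, val in samples:
        bins[int(alt // bin_width_m) * bin_width_m].append(val)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}

# Hypothetical (altitude [m], effective diameter [um]) cloud samples;
# the resulting profile shows ED increasing with altitude.
samples = [(150, 8.0), (180, 9.0), (350, 11.0), (420, 12.0),
           (550, 14.0), (590, 15.0)]
profile = binned_mean(samples)
print(profile)
```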