Sex difference and intra-operative tidal volume: Insights from the LAS VEGAS study
BACKGROUND: One key element of lung-protective ventilation is the use of a low tidal volume (VT). A sex difference in the use of low tidal volume ventilation (LTVV) has been described in critically ill ICU patients.
OBJECTIVES: The aim of this study was to determine whether a sex difference in the use of LTVV also exists in operating room patients and, if present, which factors drive this difference.
DESIGN, PATIENTS AND SETTING: This is a post hoc analysis of LAS VEGAS, a 1-week worldwide observational study in adults requiring intra-operative ventilation during general anaesthesia for surgery in 146 hospitals in 29 countries.
MAIN OUTCOME MEASURES: Women and men were compared with respect to the use of LTVV, defined as a VT of 8 ml kg⁻¹ predicted body weight (PBW) or less. A VT was deemed 'default' if the set VT was a round number. A mediation analysis assessed which factors may explain the sex difference in the use of LTVV during intra-operative ventilation.
RESULTS: This analysis includes 9864 patients, of whom 5425 (55%) were women. A default VT was often set, in both women and men; the mode VT was 500 ml. Median [IQR] VT was higher in women than in men (8.6 [7.7 to 9.6] vs. 7.6 [6.8 to 8.4] ml kg⁻¹ PBW, P < 0.001). Compared with men, women were twice as likely not to receive LTVV [68.8 vs. 36.0%; relative risk ratio 2.1 (95% CI 1.9 to 2.1), P < 0.001]. In the mediation analysis, patients' height and actual body weight (ABW) explained 81% and 18% of the sex difference in the use of LTVV, respectively; the difference was not explained by the use of a default VT.
CONCLUSION: In this worldwide cohort of patients receiving intra-operative ventilation during general anaesthesia for surgery, women received a higher VT than men. The risk of not receiving LTVV during surgery was twice as high for women as for men. Height and ABW were the two mediators of the sex difference in the use of LTVV.
TRIAL REGISTRATION: The study was registered at ClinicalTrials.gov, NCT01601223.
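As a minimal illustration of the classification described above, a set tidal volume can be scaled to predicted body weight and compared with the 8 ml kg⁻¹ PBW cut-off. The sketch below assumes a Devine/ARDSNet-style PBW formula (the study's exact formula is not restated in the abstract); the function names and example heights are illustrative only.

```python
# Illustrative sketch: classify a set tidal volume as lung-protective (LTVV)
# using predicted body weight (PBW). Assumes a common Devine/ARDSNet-style
# PBW formula; the study's exact formula may differ.

def predicted_body_weight(height_cm: float, sex: str) -> float:
    """PBW in kg from height and sex (Devine-style formula, an assumption)."""
    base = 50.0 if sex == "male" else 45.5
    return base + 0.91 * (height_cm - 152.4)

def is_ltvv(vt_ml: float, height_cm: float, sex: str, cutoff: float = 8.0) -> bool:
    """True if the tidal volume is <= cutoff ml per kg PBW (8 ml/kg in the study)."""
    return vt_ml / predicted_body_weight(height_cm, sex) <= cutoff

# Example: a 'default' 500 ml tidal volume yields a higher VT per kg PBW in a
# shorter patient, which is the mechanism behind the reported sex difference.
for sex, height in [("female", 162.0), ("male", 176.0)]:
    pbw = predicted_body_weight(height, sex)
    print(sex, round(500.0 / pbw, 1), "ml/kg PBW, LTVV:", is_ltvv(500.0, height, sex))
```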
Design and baseline characteristics of the Finerenone in Reducing Cardiovascular Mortality and Morbidity in Diabetic Kidney Disease trial
Background: Among people with diabetes, those with kidney disease have exceptionally high rates of cardiovascular (CV) morbidity and mortality and progression of their underlying kidney disease. Finerenone is a novel, nonsteroidal, selective mineralocorticoid receptor antagonist that has been shown to reduce albuminuria in patients with type 2 diabetes (T2D) and chronic kidney disease (CKD), with only a low risk of hyperkalemia. However, the effect of finerenone on CV and renal outcomes has not yet been investigated in long-term trials.
Patients and Methods: The Finerenone in Reducing CV Mortality and Morbidity in Diabetic Kidney Disease (FIGARO-DKD) trial aims to assess the efficacy and safety of finerenone compared with placebo in reducing clinically important CV and renal outcomes in T2D patients with CKD. FIGARO-DKD is a randomized, double-blind, placebo-controlled, parallel-group, event-driven trial running in 47 countries, with an expected duration of approximately 6 years. FIGARO-DKD randomized 7,437 patients with an estimated glomerular filtration rate ≥ 25 mL/min/1.73 m² and albuminuria (urinary albumin-to-creatinine ratio ≥ 30 to ≤ 5,000 mg/g). The study has at least 90% power to detect a 20% reduction in the risk of the primary outcome, the composite of time to first occurrence of CV death, nonfatal myocardial infarction, nonfatal stroke, or hospitalization for heart failure (overall two-sided significance level α = 0.05).
Conclusions: FIGARO-DKD will determine whether an optimally treated cohort of T2D patients with CKD at high risk of CV and renal events will experience cardiorenal benefits with the addition of finerenone to their treatment regimen.
Trial Registration: EudraCT number: 2015-000950-39; ClinicalTrials.gov identifier: NCT02545049
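For orientation only, the power statement in the Methods above can be turned into an approximate number of primary-outcome events using Schoenfeld's formula for the log-rank test. This is a generic back-of-the-envelope sketch under the stated design assumptions (1:1 allocation, hazard ratio 0.8, 90% power, two-sided α = 0.05), not the event target from the trial protocol.

```python
# Illustrative sketch (not taken from the trial protocol): approximate number of
# primary-outcome events for an event-driven trial with 1:1 allocation, 90% power
# to detect a 20% risk reduction (hazard ratio 0.8), and two-sided alpha = 0.05,
# using Schoenfeld's formula for the log-rank test.
from math import log
from scipy.stats import norm

alpha, power, hazard_ratio = 0.05, 0.90, 0.80
z_alpha = norm.ppf(1 - alpha / 2)   # two-sided
z_beta = norm.ppf(power)

# Schoenfeld: D = (z_alpha + z_beta)^2 / (p1 * p2 * ln(HR)^2), with p1 = p2 = 0.5
events = (z_alpha + z_beta) ** 2 / (0.5 * 0.5 * log(hazard_ratio) ** 2)
print(round(events))  # ~844 events under these assumptions
```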
Føre-var, etter-snar eller på-stedet-hvil? (Acting in advance, acting after the fact, or standing still?)
The report discusses to what extent, and in what way, public authorities weigh bearing the costs of rebuilding public physical infrastructure (water and sewerage, roads, buildings, and port facilities) after weather-related natural damage events against implementing preventive measures. A general experience is that municipalities, to a far lesser degree than the state, feel they have the financial scope to invest in preventive measures after natural damage events have hit public infrastructure. Often the choice is simply to rebuild to the condition that existed before the event occurred. The report discusses a number of factors that help explain this situation and points to measures for strengthening the work of municipalities and the state in adapting public infrastructure to the consequences of expected climate change. The most important barrier is a lack of willingness to set aside sufficient resources for prevention. The report also presents a method for weighing the costs of prevention against continuing to bear reconstruction costs.
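The report's own method is not reproduced in the abstract above; purely as an illustration of the kind of comparison it points to, a one-off preventive investment can be weighed against the expected present value of repeated reconstruction costs. All function names, parameter values, and the currency in the sketch below are assumptions.

```python
# Generic illustration (not the report's method): compare a one-off preventive
# investment against the expected present value of repeated reconstruction
# costs over a planning horizon. All parameter values are assumptions.

def present_value_of_reconstruction(annual_event_prob: float,
                                    cost_per_event: float,
                                    discount_rate: float,
                                    horizon_years: int) -> float:
    """Expected discounted cost of rebuilding after weather-related damage."""
    return sum(annual_event_prob * cost_per_event / (1 + discount_rate) ** t
               for t in range(1, horizon_years + 1))

prevention_cost = 12e6         # one-off investment (assumed, NOK)
expected_rebuild = present_value_of_reconstruction(
    annual_event_prob=0.10,    # assumed 10% chance of a damaging event per year
    cost_per_event=20e6,       # assumed rebuild cost per event
    discount_rate=0.04,
    horizon_years=40,
)
print(f"Expected PV of reconstruction: {expected_rebuild / 1e6:.1f} MNOK")
print("Prevention pays off" if prevention_cost < expected_rebuild else "Rebuild-only cheaper")
```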
Water Adaptive Limber Locomotive Effector (WALL-E)
There are many celestial bodies in the Solar System that have the potential for harboring life, such as the moons Europa and Enceladus; these worlds hide vast oceans under thick layers of ice. Their potential to contain other lifeforms has made them destinations of interest for future missions by organizations such as the National Aeronautics and Space Administration (NASA). Because of the distances and relatively harsh conditions involved, Remotely Operated Vehicles (ROVs) would be sent on the initial missions to explore these worlds. The NASA Jet Propulsion Laboratory (JPL) has developed a remotely operated Mini-Arm for use on an ROV; this mini arm would be used to explore the oceans of these distant worlds. However, it needs an end effector capable of manipulating objects of interest, and providing one was the task of the Boise State University Microgravity Team. During the 2018-2019 school year, the team designed and fabricated WALL-E as a flexible and dexterous solution to subsurface gripping. The design, degrees of freedom, and simple user interface allow the operator to easily manipulate samples of varying dimensions and geometries, akin to those potentially found on the aforementioned ocean worlds.
Biological geography of the European seas: results from the MacroBen database
This study examines whether or not biogeographical and/or managerial divisions across the European seas can be validated using soft-bottom macrobenthic community data. The faunal groups used were: all macrobenthos groups, polychaetes, molluscs, crustaceans, echinoderms, sipunculans, and the last 5 groups combined. In order to test the discriminating power of these groups, 3 criteria were used: (1) proximity, which refers to the expected closer faunal resemblance of adjacent areas relative to more distant ones; (2) randomness, which in the present context is a measure of the degree to which the inventories of the various sectors, provinces or regions may in each case be considered a random sample of the inventory of the next largest province or region in a hierarchy of geographic scales; and (3) differentiation, which provides a measure of the uniqueness of the pattern. Results show that only polychaetes fulfill all 3 criteria and that the only marine biogeographic system supported by the analyses is the one proposed by Longhurst (1998). Energy fluxes and other interactions between the planktonic and benthic domains, acting over evolutionary time scales, can be associated with the multivariate pattern derived from the macrobenthos datasets. Third-stage multidimensional scaling (MDS) ordination reveals that polychaetes produce a unique pattern when all systems are under consideration. Average island distance from the nearest coast, number of islands, and island surface area were the geographic variables best correlated with the community patterns produced by polychaetes. Biogeographic patterns suggest a vicariance model dominating over the founder-dispersal model, except for the semi-enclosed regional seas, where a substantially modified version of the founder-dispersal model could be supported.
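As a hedged illustration of the kind of community-level ordination summarised above (not the authors' exact workflow, which goes on to second- and third-stage MDS across datasets), a first-stage non-metric MDS of sectors from an abundance table might look like the sketch below. The random abundance matrix, the square-root transformation, and the choice of Bray-Curtis dissimilarity are placeholders and assumptions, not details taken from the study.

```python
# Illustrative sketch (not the authors' exact pipeline): ordinate geographic
# sectors from soft-bottom macrobenthos abundance data using Bray-Curtis
# dissimilarities and non-metric multidimensional scaling (MDS).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Rows = sectors/provinces, columns = taxa; replace with real abundance data.
abundance = rng.poisson(lam=3.0, size=(8, 40)).astype(float)
abundance = np.sqrt(abundance)                     # mild transformation, an assumption

dissim = squareform(pdist(abundance, metric="braycurtis"))
nmds = MDS(n_components=2, dissimilarity="precomputed",
           metric=False, normalized_stress="auto", random_state=0)
coords = nmds.fit_transform(dissim)                # 2-D ordination of sectors
print(coords.round(2))
```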
An Integrated Assessment of changes in the thermohaline circulation
This paper discusses the risks of a shutdown of the thermohaline circulation (THC) for the climate system, for ecosystems in and around the North Atlantic, and for fisheries and agriculture by way of an Integrated Assessment. The climate model simulations are based on greenhouse gas scenarios for the 21st century and beyond. A shutdown of the THC, complete by 2150, is triggered if increased freshwater input from inland ice melt or enhanced runoff is assumed. The shutdown retards the greenhouse gas-induced atmospheric warming trend in the Northern Hemisphere, but does not lead to a persistent net cooling. Due to the simulated THC shutdown, the sea level at the North Atlantic shores rises by up to 80 cm by 2150, in addition to the global sea level rise. This could potentially be a serious impact that requires expensive coastal protection measures. A reduction of marine net primary productivity is associated with the impacts of warming rather than with a THC shutdown. Regional shifts in the currents in the Nordic Seas could strongly deteriorate survival chances for cod larvae and juveniles, which could make cod fisheries unprofitable by the end of the 21st century. While regional socioeconomic impacts might be large, damages would probably be small in relation to the respective gross national products. Terrestrial ecosystem productivity is affected much more by fertilization from the increasing CO2 concentration than by a THC shutdown. In addition, the level of warming in the 22nd to 24th centuries considerably favours crop production in northern Europe, no matter whether the THC shuts down or not. CO2 emissions corridors aimed at limiting the risk of a THC breakdown to 10% or less are narrow, requiring departure from business-as-usual within the next few decades. The uncertainty about THC risks is still high; this is seen in model analyses as well as in the elicited expert views. The overview of results presented here is the outcome of the Integrated Assessment project INTEGRATION.
Planck 2018 results: III. High Frequency Instrument data processing and frequency maps
This paper presents the High Frequency Instrument (HFI) data processing procedures for the Planck 2018 release. Major improvements in mapmaking have been achieved since the previous Planck 2015 release, many of which were already used and described in an intermediate paper dedicated to the Planck polarized data at low multipoles. These improvements enabled the first significant measurement of the reionization optical depth parameter using Planck-HFI data. This paper presents an extensive analysis of systematic effects, including the use of end-to-end simulations to facilitate their removal and characterize the residuals. The polarized data, which presented a number of known problems in the 2015 Planck release, are very significantly improved, especially the leakage from intensity to polarization. Calibration, based on the cosmic microwave background (CMB) dipole, is now extremely accurate and, in the frequency range 100–353 GHz, reduces intensity-to-polarization leakage caused by calibration mismatch. The Solar dipole direction has been determined in the three lowest HFI frequency channels to within one arcminute, and its amplitude has an absolute uncertainty smaller than 0.35 μK, an accuracy of order 10⁻⁴. This is a major legacy from the Planck HFI for future CMB experiments. The removal of bandpass leakage has been improved for the main high-frequency foregrounds by extracting the bandpass-mismatch coefficients for each detector as part of the mapmaking process; these values in turn improve the intensity maps. This is a major change in the philosophy of “frequency maps”, which are now computed from single-detector data, all adjusted to the same average bandpass response for the main foregrounds. End-to-end simulations have been shown to reproduce very well the relative gain calibration of detectors, as well as drifts within a frequency induced by the residuals of the main systematic effect (analogue-to-digital converter non-linearity residuals). Using these simulations, we have been able to measure and correct the small frequency calibration bias induced by this systematic effect at the 10⁻⁴ level. There is no detectable sign of a residual calibration bias between the first and second acoustic peaks in the CMB channels, at the 10⁻³ level.
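Calibration on the CMB dipole is, at its core, a template fit of each detector's data against a known dipole signal. The sketch below shows only that core idea with made-up numbers; it is not the HFI pipeline, which works on time-ordered data and handles ADC non-linearity, bandpass mismatch, and other systematics not represented here.

```python
# Toy sketch of dipole-based gain calibration as a linear template fit.
# This is an illustration only, not the Planck HFI mapmaking/calibration pipeline.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
dipole_template = 3.36e-3 * rng.uniform(-1.0, 1.0, n)   # K_CMB, crude dipole proxy
true_gain, offset = 1.02e3, 5.0                          # arbitrary units per K_CMB
data = true_gain * dipole_template + offset + rng.normal(0.0, 0.5, n)

# Least-squares fit of data = gain * dipole + offset
design = np.column_stack([dipole_template, np.ones(n)])
(gain_est, offset_est), *_ = np.linalg.lstsq(design, data, rcond=None)
print(f"recovered gain = {gain_est:.2f}, offset = {offset_est:.2f}")
```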
Planck 2018 results: VIII. Gravitational lensing
We present measurements of the cosmic microwave background (CMB) lensing potential using the final Planck 2018 temperature and polarization data. Using polarization maps filtered to account for the noise anisotropy, we increase the significance of the detection of lensing in the polarization maps from 5σ to 9σ. Combined with temperature, lensing is detected at 40σ. We present an extensive set of tests of the robustness of the lensing-potential power spectrum, and construct a minimum-variance estimator likelihood over lensing multipoles 8 ≤ L ≤ 400 (extending the range to lower L compared to 2015), which we use to constrain cosmological parameters. We find good consistency between lensing constraints and the results from the Planck CMB power spectra within the ΛCDM model. Combined with baryon density and other weak priors, the lensing analysis alone constrains σ₈Ωₘ^0.25 = 0.589 ± 0.020 (1σ errors). Also combining with baryon acoustic oscillation data, we find tight individual parameter constraints, σ₈ = 0.811 ± 0.019, H₀ = 67.9 +1.2/−1.3 km s⁻¹ Mpc⁻¹, and Ωₘ = 0.303 +0.016/−0.018. Combining with Planck CMB power spectrum data, we measure σ₈ to better than 1% precision, finding σ₈ = 0.811 ± 0.006. CMB lensing reconstruction data are complementary to galaxy lensing data at lower redshift, having a different degeneracy direction in σ₈-Ωₘ space; we find consistency with the lensing results from the Dark Energy Survey, and give combined lensing-only parameter constraints that are tighter than joint results using galaxy clustering. Using the Planck cosmic infrared background (CIB) maps as an additional tracer of high-redshift matter, we make a combined Planck-only estimate of the lensing potential over 60% of the sky with considerably more small-scale signal. We additionally demonstrate delensing of the Planck power spectra using the joint and individual lensing potential estimates, detecting a maximum removal of 40% of the lensing-induced power in all spectra. The improvement in the sharpening of the acoustic peaks by including both the CIB and the quadratic lensing reconstruction is detected at high significance.
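To make the degeneracy direction mentioned above concrete, the lensing-only constraint σ₈Ωₘ^0.25 = 0.589 ± 0.020 can be rendered as a band in the σ₈-Ωₘ plane. The short sketch below is a visualization aid only, with an arbitrary grid of Ωₘ values; it is not part of the Planck likelihood analysis.

```python
# Illustrative sketch: the lensing-only constraint sigma_8 * Omega_m^0.25 =
# 0.589 +/- 0.020 defines a band (degeneracy direction) in the sigma_8-Omega_m
# plane. Purely a visualization aid.
import numpy as np

central, sigma = 0.589, 0.020
omega_m = np.linspace(0.2, 0.4, 5)            # arbitrary grid of Omega_m values
sigma8_lo = (central - sigma) / omega_m ** 0.25
sigma8_hi = (central + sigma) / omega_m ** 0.25
for om, lo, hi in zip(omega_m, sigma8_lo, sigma8_hi):
    print(f"Omega_m = {om:.2f}:  sigma_8 in [{lo:.3f}, {hi:.3f}]  (1 sigma)")
```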
Planck 2018 results: V. CMB power spectra and likelihoods
We describe the legacy Planck cosmic microwave background (CMB) likelihoods derived from the 2018 data release. The overall approach is similar in spirit to the one retained for the 2013 and 2015 data releases, with a hybrid method using different approximations at low (ℓ < 30) and high (ℓ ≥ 30) multipoles, implementing several methodological and data-analysis refinements compared to previous releases. With more realistic simulations, and better correction and modelling of systematic effects, we can now make full use of the CMB polarization observed in the High Frequency Instrument (HFI) channels. The low-multipole EE cross-spectra from the 100 GHz and 143 GHz data give a constraint on the ΛCDM reionization optical-depth parameter τ to better than 15% (in combination with the TT low-ℓ data and the high-ℓ temperature and polarization data), tightening constraints on all parameters with posterior distributions correlated with τ. We also update the weaker constraint on τ from the joint TEB likelihood using the Low Frequency Instrument (LFI) channels, which was used in 2015 as part of our baseline analysis. At higher multipoles, the CMB temperature spectrum and likelihood are very similar to previous releases. A better model of the temperature-to-polarization leakage and corrections for the effective calibrations of the polarization channels (i.e., the polarization efficiencies) allow us to make full use of the polarization spectra, improving the ΛCDM constraints on the parameters θ_MC, ω_c, ω_b, and H₀ by more than 30%, and on n_s by more than 20%, compared to TT-only constraints. Extensive tests of the robustness of the modelling of the polarization data demonstrate good consistency, with some residual modelling uncertainties. At high multipoles, we are now limited mainly by the accuracy of the polarization efficiency modelling. Using our various tests, simulations, and comparisons between different high-multipole likelihood implementations, we estimate the consistency of the results to be better than the 0.5σ level on the ΛCDM parameters, as well as on classical single-parameter extensions, for the joint likelihood (to be compared to the 0.3σ level we achieved in 2015 for the temperature data alone on ΛCDM only). Minor curiosities already present in the previous releases remain, such as the differences between the best-fit ΛCDM parameters for the ℓ < 800 and ℓ > 800 ranges of the power spectrum, or the preference for more smoothing of the power-spectrum peaks than predicted in ΛCDM fits. These are shown to be driven by the temperature power spectrum and are not significantly modified by the inclusion of the polarization data. Overall, the legacy Planck CMB likelihoods provide a robust tool for constraining the cosmological model and represent a reference for future CMB observations.
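At high multipoles the likelihood is, at its core, Gaussian in the band powers. The sketch below shows only that core, with placeholder numbers; the real Planck likelihoods add foreground, calibration, and nuisance-parameter modelling, and a non-Gaussian treatment at low multipoles, none of which is represented here.

```python
# Toy sketch of a Gaussian high-ell band-power likelihood:
# -2 ln L = (C_hat - C_theory)^T Cov^{-1} (C_hat - C_theory) + const.
import numpy as np

def gaussian_loglike(c_hat: np.ndarray, c_theory: np.ndarray, cov: np.ndarray) -> float:
    """Return ln L up to a constant for binned spectra and their covariance."""
    resid = c_hat - c_theory
    chi2 = resid @ np.linalg.solve(cov, resid)
    return -0.5 * chi2

# Tiny fake example with 3 band powers (all numbers are placeholders only).
c_hat = np.array([5750.0, 2250.0, 2500.0])
c_theory = np.array([5700.0, 2300.0, 2480.0])
cov = np.diag([80.0, 40.0, 45.0]) ** 2
print(gaussian_loglike(c_hat, c_theory, cov))
```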
Planck 2018 results: XII. Galactic astrophysics using polarized dust emission
Observations of the submillimetre emission from Galactic dust, in both total intensity I and polarization, have received tremendous interest thanks to the Planck full-sky maps. In this paper we make use of such full-sky maps of dust polarized emission produced from the third public release of Planck data. As the basis for expanding on astrophysical studies of the polarized thermal emission from Galactic dust, we present full-sky maps of the dust polarization fraction p, polarization angle ψ, and dispersion function of polarization angles 𝒮. The joint distribution (one-point statistics) of p and N_H confirms that the mean and maximum polarization fractions decrease with increasing N_H. The uncertainty on the maximum observed polarization fraction, p_max = 22.0 +3.5/−1.4 % at 353 GHz and 80′ resolution, is dominated by the uncertainty on the Galactic emission zero level in total intensity, in particular towards diffuse lines of sight at high Galactic latitudes. Furthermore, the inverse behaviour between p and 𝒮 found earlier is seen to be present at high latitudes. This follows the 𝒮 ∝ p⁻¹ relationship expected from models of the polarized sky (including numerical simulations of magnetohydrodynamical turbulence) that include effects from only the topology of the turbulent magnetic field, but otherwise have uniform alignment and dust properties. Thus, the statistical properties of p, ψ, and 𝒮 for the most part reflect the structure of the Galactic magnetic field. Nevertheless, we search for potential signatures of varying grain alignment and dust properties. First, we analyse the product map 𝒮 × p, looking for residual trends. While the polarization fraction p decreases by a factor of 3–4 between N_H = 10²⁰ cm⁻² and N_H = 2 × 10²² cm⁻², out of the Galactic plane, the product 𝒮 × p only decreases by about 25%. Because 𝒮 is independent of the grain alignment efficiency, this demonstrates that the systematic decrease in p with N_H is determined mostly by the magnetic-field structure and not by a drop in grain alignment. This systematic trend is observed both in the diffuse interstellar medium (ISM) and in molecular clouds of the Gould Belt. Second, we look for a dependence of polarization properties on the dust temperature, as we would expect from the radiative alignment torque (RAT) theory. We find no systematic trend of 𝒮 × p with the dust temperature T_d, whether in the diffuse ISM or in the molecular clouds of the Gould Belt. In the diffuse ISM, lines of sight with high polarization fraction p and low polarization angle dispersion 𝒮 tend, on the contrary, to have colder dust than lines of sight with low p and high 𝒮. We also compare the Planck thermal dust polarization with starlight polarization data in the visible at high Galactic latitudes. The agreement in polarization angles is remarkable, and is consistent with what we expect from the noise and the observed dispersion of polarization angles in the visible on the scale of the Planck beam. The two polarization emission-to-extinction ratios, R_P/p and R_S/V, which primarily characterize dust optical properties, have only a weak dependence on the column density, and converge towards the values previously determined for translucent lines of sight. We also determine an upper limit for the polarization fraction in extinction, p_V/E(B−V), of 13% at high Galactic latitude, compatible with the polarization fraction p ≈ 20% observed at 353 GHz.
Taken together, these results provide strong constraints for models of Galactic dust in diffuse gas.
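The dispersion function of polarization angles 𝒮 used above is commonly defined as the root mean square of the differences between the polarization angle at a sky position and the angles at neighbouring positions at a given lag, with differences wrapped into ±90°. The sketch below is a simplified flat-sky toy version of that definition on a synthetic angle map; it is not the Planck HEALPix implementation, and the lag geometry and test map are assumptions.

```python
# Minimal flat-sky sketch of the polarization-angle dispersion function S:
# S(x, delta) = sqrt( mean_i [ psi(x) - psi(x + delta_i) ]^2 ),
# with angle differences wrapped into [-90, 90] degrees (180-degree ambiguity).
import numpy as np

def angle_difference_deg(psi1: float, psi2: float) -> float:
    """Difference of polarization angles, wrapped into [-90, 90] degrees."""
    return (psi1 - psi2 + 90.0) % 180.0 - 90.0

def dispersion_function(psi_map: np.ndarray, lag: int) -> np.ndarray:
    """S for each interior pixel of a 2-D angle map (degrees) at an integer lag."""
    ny, nx = psi_map.shape
    s = np.full((ny, nx), np.nan)
    offsets = [(lag, 0), (-lag, 0), (0, lag), (0, -lag)]   # simple 4-point "annulus"
    for j in range(lag, ny - lag):
        for i in range(lag, nx - lag):
            diffs = [angle_difference_deg(psi_map[j, i], psi_map[j + dj, i + di])
                     for dj, di in offsets]
            s[j, i] = np.sqrt(np.mean(np.square(diffs)))
    return s

# Example on a smooth synthetic angle map: S stays small where angles are ordered.
x = np.tile(np.arange(32, dtype=float), (32, 1))
psi = (20.0 * np.sin(x / 10.0)) % 180.0
print(np.nanmean(dispersion_function(psi, lag=2)).round(2))
```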