
    To Sleep, Perchance to Dream: Acute and Chronic Sleep Deprivation in Acute Care Surgeons

    Background Acute and chronic sleep deprivation are significantly associated with depressive symptoms and are felt to contribute to the development of burnout. In-house call (IHC) inherently includes frequent periods of disrupted sleep and is common amongst acute care surgeons (ACS). The relationship between IHC and sleep deprivation (SD) amongst ACS has not been previously studied. The goal of this study was to determine the prevalence and patterns of SD in ACS. Study Design A prospective study of ACS with IHC responsibilities from two Level I trauma centers was performed. Participants wore a sleep tracking device continuously over a 3-month period. Data collected included age, gender, schedule of IHC, hours and pattern of each sleep stage (light, slow wave (SWS), and REM), and total hours of sleep. Sleep patterns were analyzed for each night excluding IHC and categorized as normal (N), acute sleep deprivation (ASD), or chronic sleep deprivation (CSD). Results 1421 nights were recorded amongst 17 ACS (35.3% female; ages 37-65, mean 45.5 years). Excluding IHC, the average amount of sleep was 6.54 hours, with 64.8% of sleep patterns categorized as ASD or CSD. The average amount of sleep was significantly higher on post-call day 1 (6.96 hours, p=0.0016) but decreased significantly on post-call day 2 (6.33 hours, p=0.0006). Sleep patterns with ASD and CSD peaked on post-call day 2 and returned to baseline on post-call day 3 (p=0.046). Conclusion Sleep patterns consistent with ASD and CSD are common amongst ACS and worsen on post-call day 2. Baseline sleep patterns were not recovered until post-call day 3. Future study is needed to identify factors which impact physiologic recovery after IHC and to further elucidate the relationship between SD and burnout.
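The night-by-night categorization described above can be sketched as a small classifier. The study does not publish its exact N/ASD/CSD criteria, so the 7-hour threshold and 7-night window below are illustrative assumptions only:

```python
# Sketch of categorizing nightly sleep as normal (N), acute sleep deprivation
# (ASD), or chronic sleep deprivation (CSD). The thresholds and window are
# hypothetical, not the study's published criteria.

def classify_night(hours_tonight, recent_hours, normal_threshold=7.0, window=7):
    """Classify one night given tonight's sleep and a list of recent nightly hours."""
    if hours_tonight >= normal_threshold:
        return "N"
    # Chronic: a sustained run of short nights over the recent window (assumption).
    short_nights = sum(1 for h in recent_hours[-window:] if h < normal_threshold)
    if short_nights >= window:
        return "CSD"
    return "ASD"
```

One design note: chronicity is defined here over a trailing window rather than a single night, which mirrors how the abstract distinguishes acute from chronic patterns.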

    Can Bangladesh produce enough cereals to meet future demand?

    Bangladesh faces huge challenges in achieving food security due to its high population, diet changes, and limited room for expanding cropland and cropping intensity. The objective of this study is to assess the degree to which Bangladesh can be self-sufficient in domestic maize, rice, and wheat production by the years 2030 and 2050 by closing the existing gap (Yg) between yield potential (Yp) and actual farm yield (Ya), accounting for possible changes in cropland area. Yield potential and yield gaps were calculated for the three crops using well-validated crop models and site-specific weather, management, and soil data, and upscaled to the whole country. We assessed potential grain production in the years 2030 and 2050 for six land use change scenarios (general decrease in arable land; declining ground water tables in the north; cropping of fallow areas in the south; effect of sea level rise; increased cropping intensity; and larger share of cash crops) and three levels of Yg closure (1: no yield increase; 2: Yg closure at a level equivalent to 50% (50% Yg closure); 3: Yg closure to a level of 85% of Yp (irrigated crops) and 80% of water-limited yield potential or Yw (rainfed crops) (full Yg closure)). In addition, changes in demand with low and high population growth rates, and substitution of rice by maize in future diets, were also examined. Total aggregated demand of the three cereals (in milled rice equivalents) in 2030 and 2050, based on the UN median population variant, is projected to be 21 and 24% higher than in 2010. Current Yg represent 50% (irrigated rice), 48–63% (rainfed rice), 49% (irrigated wheat), 40% (rainfed wheat), 46% (irrigated maize), and 44% (rainfed maize) of their Yp or Yw. With 50% Yg closure and for various land use changes, the self-sufficiency ratio will be above one for rice in 2030 and about one in 2050, but well below one for maize and wheat in both 2030 and 2050.
With full Yg closure, self-sufficiency ratios will be well above one for rice and for all three cereals jointly, but below one for maize and wheat for all scenarios, except for the scenario with a drastic decrease in boro rice area to allow for area expansion for cash crops. Full Yg closure of all cereals is needed to compensate for area decreases and demand increases, and even then some maize and large amounts of wheat imports will be required to satisfy demand in future. The results of this analysis have important implications for Bangladesh and other countries with high population growth rates, shrinking arable land due to rapid urbanization, and high vulnerability to climate change.
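The yield-gap arithmetic underlying these scenarios can be made concrete with a short sketch: production under partial Yg closure is the actual yield plus a fraction of the gap to potential yield, and the self-sufficiency ratio is projected production over projected demand. All numbers below are hypothetical, not the study's site-specific estimates:

```python
# Illustrative yield-gap-closure arithmetic (hypothetical numbers).

def production_with_gap_closure(area_ha, ya_t_ha, yp_t_ha, closure_frac):
    """Projected production (t) when a fraction of the gap between actual
    yield (Ya) and potential yield (Yp) is closed."""
    projected_yield = ya_t_ha + closure_frac * (yp_t_ha - ya_t_ha)
    return area_ha * projected_yield

def self_sufficiency_ratio(production_t, demand_t):
    """Above one: domestic production covers demand; below one: imports needed."""
    return production_t / demand_t

# Example: 50% Yg closure on a hypothetical 1 Mha of rice (Ya 4 t/ha, Yp 8 t/ha).
prod = production_with_gap_closure(1e6, ya_t_ha=4.0, yp_t_ha=8.0, closure_frac=0.5)
ratio = self_sufficiency_ratio(prod, demand_t=5.5e6)
```

For rainfed crops the same arithmetic applies with Yw (water-limited yield potential) in place of Yp.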

    Financial Toxicity Is Associated With Worse Physical and Emotional Long-term Outcomes After Traumatic Injury

    Background Increasing healthcare costs and high-deductible insurance plans have shifted more responsibility for medical costs to patients. After serious illnesses, financial responsibilities may result in lost wages, forced unemployment, and other financial burdens, collectively described as financial toxicity. Following cancer treatments, financial toxicity is associated with worse long-term health-related quality of life (HRQOL) outcomes. The purpose of this study was to determine the incidence of financial toxicity following injury, factors associated with financial toxicity, and the impact of financial toxicity on long-term HRQOL. Methods Adult patients with an injury severity score of 10 or greater and without head or spinal cord injury were prospectively followed for 1 year. The Short Form-36 was used to determine overall quality of life at 1, 2, 4, and 12 months. Screens for depression and post-traumatic stress disorder (PTSD) were administered. The primary outcome was any financial toxicity. A multivariable generalized estimating equation was used to account for variability over time. Results 500 patients were enrolled, and 88% suffered financial toxicity during the year following injury (64% reduced income, 58% unemployment, 85% experienced stress due to financial burden). Financial toxicity remained stable over follow-up (80–85%). Factors independently associated with financial toxicity were lower age (OR 0.96 [0.94–0.98]), lack of health insurance (OR 0.28 [0.14–0.56]), and larger household size (OR 1.37 [1.06–1.77]). After risk adjustment, patients with financial toxicity had worse HRQOL and more depression and PTSD in a step-wise fashion based on severity of financial toxicity. Conclusions Financial toxicity following injury is extremely common and is associated with worse psychological and physical outcomes. Age, lack of insurance, and large household size are associated with financial toxicity.
Patients at risk for financial toxicity can be identified, and interventions to counteract its negative effects should be developed to improve long-term outcomes. Level of Evidence Prognostic/epidemiologic study, level II.
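The adjusted odds ratios reported above are per-unit effects, so they compound over larger differences on the same scale. A one-line helper makes the arithmetic explicit (illustrative only, not a re-analysis of the study's data):

```python
# Per-unit odds ratios multiply across units on the same scale.

def compound_or(per_unit_or, units):
    """Odds ratio implied over `units` by a per-unit odds ratio."""
    return per_unit_or ** units

# An age OR of 0.96 per year compounds to roughly 0.66 over a 10-year age
# difference, i.e. older patients had noticeably lower odds of financial toxicity.
or_10yr = compound_or(0.96, 10)
```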

    Self-interrupted synthesis of sterically hindered aliphatic polyamide dendrimers

    Hydrolytically and enzymatically stable nanoscale synthetic constructs, with well-defined structures that exhibit antimicrobial activity, offer exciting possibilities for diverse applications in the emerging field of nanomedicine. Herein, we demonstrate that it is the core conformation, rather than periodicity, that ultimately controls the synthesis of sterically hindered aliphatic polyamide dendrimers. The latter self-interrupt at a predictable low generation number due to backfolding of their peripheral groups, which in turn leads to well-defined nanoarchitectures.

    Large multi-ethnic genetic analyses of amyloid imaging identify new genes for Alzheimer disease

    Amyloid PET imaging has been crucial for detecting the accumulation of amyloid beta (Aβ) deposits in the brain and for studying Alzheimer's disease (AD). We performed a genome-wide association study on the largest collection of amyloid imaging data (N = 13,409) to date, across multiple ethnicities from multicenter cohorts, to identify variants associated with brain amyloidosis and AD risk. We found a strong APOE signal on chr19q13.32 (top SNP: APOE ɛ4; rs429358; β = 0.35, SE = 0.01, P = 6.2 × 1

    A Competency-based Laparoscopic Cholecystectomy Curriculum Significantly Improves General Surgery Residents’ Operative Performance and Decreases Skill Variability: Cohort Study

    Objective: To demonstrate the feasibility of implementing a competency-based education (CBE) curriculum within a general surgery residency program and to evaluate its effectiveness in improving resident skill. Summary of Background Data: Operative skill variability affects residents and practicing surgeons and directly impacts patient outcomes. CBE can decrease this variability by ensuring uniform skill acquisition. We implemented a CBE laparoscopic cholecystectomy (LC) curriculum to improve resident performance and decrease skill variability. Methods: PGY-2 residents completed the curriculum during monthly rotations starting in July 2017. Once simulator proficiency was reached, residents performed elective LCs with a select group of faculty at 3 hospitals. Performance at curriculum completion was assessed using LC simulation metrics and intraoperative operative performance rating system scores and compared to both baseline and historical controls, comprised of rising PGY-3s, using a 2-sample Wilcoxon rank-sum test. The PGY-2 group's performance variability was compared with that of the PGY-3s using Levene's robust test of equality of variances; P < 0.05 was considered significant. Results: Twenty-one residents each performed 17.52 ± 4.15 consecutive LCs during the monthly rotation. Resident simulated and operative performance increased significantly with dedicated training and reached that of the more experienced rising PGY-3s (n = 7), but with significantly decreased variability in performance (P = 0.04). Conclusions: Completion of a CBE rotation led to significant improvements in PGY-2 residents' LC performance that reached that of PGY-3s and decreased performance variability. These results support wider implementation of CBE in resident training.
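The two comparisons described above can be sketched with standard-library code: a two-sample Wilcoxon rank-sum test (normal approximation, tie-free data) for location, and a simple variance comparison for skill variability. The scores below are hypothetical placeholders, not the study's rating data:

```python
# Minimal stdlib sketch: rank-sum test for location, variance for spread.
import math
import statistics

def rank_sum_z(x, y):
    """z statistic of the two-sample Wilcoxon rank-sum test
    (normal approximation; assumes no tied values)."""
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks
    w = sum(rank[v] for v in x)                      # rank sum of first sample
    mean_w = n1 * (n1 + n2 + 1) / 2
    var_w = n1 * n2 * (n1 + n2 + 1) / 12
    return (w - mean_w) / math.sqrt(var_w)

pgy2_scores = [3.8, 4.0, 4.1, 4.25, 4.3]  # hypothetical post-curriculum PGY-2 scores
pgy3_scores = [3.2, 3.9, 4.4, 4.6, 5.0]   # hypothetical historical PGY-3 scores

z = rank_sum_z(pgy2_scores, pgy3_scores)  # similar locations -> small |z|
tighter = statistics.variance(pgy2_scores) < statistics.variance(pgy3_scores)
```

In this synthetic example the two groups have similar central performance but the PGY-2 group is less variable, mirroring the pattern the study reports; the actual analysis used Levene's robust test rather than a raw variance comparison.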

    Clinical and Laboratory characteristics of patients with COVID-19 Infection and Deep Venous Thrombosis

    Objective: Early reports suggest that patients with novel coronavirus disease 2019 (COVID-19) infection carry a significant risk of altered coagulation, with an increased risk for venous thromboembolic events. This report investigates the relationship between significant COVID-19 infection and deep venous thrombosis (DVT) as reflected in patient clinical and laboratory characteristics. Methods: We reviewed the demographics, clinical presentation, laboratory and radiologic evaluations, results of venous duplex imaging, and mortality of COVID-19-positive patients (18-89 years) admitted to the Indiana University Academic Health Center. Using oxygen saturation, radiologic findings, and need for advanced respiratory therapies, patients were classified into mild, moderate, or severe categories of COVID-19 infection. A descriptive analysis was performed using univariate and bivariate Fisher's exact and Wilcoxon rank-sum tests to examine the distribution of patient characteristics and compare the DVT outcomes. A multivariable logistic regression model was used to estimate the adjusted odds ratio of experiencing DVT, and a receiver operating characteristic (ROC) curve analysis was used to identify the optimal d-dimer cutoff to predict DVT in this COVID-19 cohort. Time to the diagnosis of DVT from admission was analyzed using the log-rank test and Kaplan-Meier plots. Results: Our study included 71 unique COVID-19-positive patients (mean age, 61 years), categorized as having mild (3%), moderate (14%), or severe (83%) infection and evaluated with 107 venous duplex studies. DVT was identified in 47.8% of patients (37% of examinations) at an average of 5.9 days after admission. Patients with DVT were predominantly male (67%; P = .032) with proximal venous involvement (29% in the upper and 39% in the lower extremities, with 55% of the latter demonstrating bilateral involvement).
Patients with DVT had a significantly higher mean d-dimer of 5447 ± 7032 ng/mL (P = .0101) and alkaline phosphatase of 110 IU/L (P = .0095) than those without DVT. On multivariable analysis, elevated d-dimer (P = .038) and alkaline phosphatase (P = .021) were associated with risk for DVT, whereas age, sex, elevated C-reactive protein, and ferritin levels were not. ROC curve analysis suggests an optimal d-dimer cutoff of 2450 ng/mL, with 70% sensitivity, 59.5% specificity, 61% positive predictive value, and 68.8% negative predictive value. Conclusions: This study suggests that males with severe COVID-19 infection requiring hospitalization are at highest risk for developing DVT. Elevated d-dimer and alkaline phosphatase, along with our multivariable model, can alert the clinician to the increased risk of DVT, requiring early evaluation and aggressive treatment.
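Cutoff selection of the kind described above is commonly done by maximizing Youden's J (sensitivity + specificity - 1) over candidate thresholds; the sketch below shows that procedure on synthetic d-dimer values, which are placeholders, not the study's patient data:

```python
# Pick a diagnostic cutoff by maximizing Youden's J over observed values.

def youden_cutoff(values_pos, values_neg):
    """Return (best_cutoff, sensitivity, specificity) for a 'value >= cutoff
    predicts positive' rule, maximizing J = sensitivity + specificity - 1."""
    best = (None, 0.0, 0.0, -1.0)
    for c in sorted(set(values_pos + values_neg)):
        sens = sum(v >= c for v in values_pos) / len(values_pos)
        spec = sum(v < c for v in values_neg) / len(values_neg)
        j = sens + spec - 1
        if j > best[3]:
            best = (c, sens, spec, j)
    return best[:3]

# Hypothetical d-dimer values (ng/mL); not the study's cohort.
dvt = [2600, 3100, 4800, 9000, 1200]
no_dvt = [400, 800, 1500, 2200, 3000]
cutoff, sens, spec = youden_cutoff(dvt, no_dvt)
```

Scanning only the observed values is sufficient because J can change only at a data point; a full ROC analysis additionally reports the curve and its area.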

    Linking Structural Racism and Discrimination and Breast Cancer Outcomes: A Social Genomics Approach

    We live in a society where individuals and communities are marginalized because of their race or ethnicity. This structural inequity extracts enormous health and societal costs, decreasing access to cancer care and increasing health disparities, especially among the most vulnerable. In an effort to identify causes of disparities, we have incorporated into our research individual sociodemographic characteristics (eg, income and education), other social determinants of health (eg, access to care, insurance, and transportation needs), and biologic markers (eg, genetic predisposition to disease) that can serve as therapeutic targets.

    Slip pulse and resonance of the Kathmandu basin during the 2015 Gorkha earthquake, Nepal.

    This is the author accepted manuscript. The final version is available from AAAS via http://dx.doi.org/10.1126/science.aac6383. Detailed geodetic imaging of earthquake ruptures enhances our understanding of earthquake physics and associated ground shaking. The 25 April 2015 moment magnitude 7.8 earthquake in Gorkha, Nepal was the first large continental megathrust rupture to have occurred beneath a high-rate (5-hertz) Global Positioning System (GPS) network. We used GPS and interferometric synthetic aperture radar data to model the earthquake rupture as a slip pulse ~20 kilometers in width, ~6 seconds in duration, and with a peak sliding velocity of 1.1 meters per second, which propagated toward the Kathmandu basin at ~3.3 kilometers per second over ~140 kilometers. The smooth slip onset, indicating a large (~5-meter) slip-weakening distance, caused moderate ground shaking at high frequencies (>1 hertz; peak ground acceleration, ~16% of Earth's gravity) and minimized damage to vernacular dwellings. Whole-basin resonance at a period of 4 to 5 seconds caused the collapse of tall structures, including cultural artifacts. The Nepal Geodetic Array was funded by internal funding to JPA from Caltech and DASE and by the Gordon and Betty Moore Foundation, through Grant GBMF 423.01 to the Caltech Tectonics Observatory, and was maintained thanks to NSF Grant EAR 13-5136. Andrew Miner and the Pacific Northwest Geodetic Array (PANGA) at Central Washington University are thanked for technical assistance with the construction and operation of the Tribhuvan University-CWU network. Additional funding for the TU-CWU network came from the United Nations Development Programme and the Nepal Academy for Science and Technology. The high-rate data were recovered thanks to a rapid intervention funded by NASA (US) and the Department for International Development (UK). We thank Trimble Navigation Ltd and the Vaidya family for supporting the rapid response as well.
The accelerometer record at KATNP was provided by USGS. Research at UC Berkeley was funded by the Gordon and Betty Moore Foundation through grant GBMF 3024. A portion of this work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. The GPS data were processed by ARIA (JPL) and the Scripps Orbit and Permanent Array Center. The effort at the Scripps Institution of Oceanography was funded by NASA grants NNX14AQ53G and NNX14AT33G. ALOS-2 data were provided under JAXA (Japan) PI Investigations 1148 and 1413. JPA thanks the Royal Society for support. We thank Susan Hough, Doug Given, Irving Flores, and Jim Luetgert for their contribution to the installation of this station.
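The quoted rupture parameters are internally consistent, which a two-line check makes visible: pulse width divided by propagation speed reproduces the ~6-second local slip duration, and rupture length divided by the same speed gives the total propagation time (simple arithmetic on the abstract's figures):

```python
# Arithmetic check on the rupture kinematics quoted in the abstract.
rupture_length_km = 140    # ~140 km of propagation toward the Kathmandu basin
rupture_speed_km_s = 3.3   # ~3.3 km/s rupture propagation speed
pulse_width_km = 20        # ~20 km slip-pulse width

propagation_time_s = rupture_length_km / rupture_speed_km_s   # total rupture duration
local_slip_duration_s = pulse_width_km / rupture_speed_km_s   # matches the ~6 s quoted
```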

    The impacts of increased heat stress events on wheat yield under climate change in China

    Get PDF
    China is the largest wheat producing country in the world. Wheat is one of the two major staple cereals consumed in the country, and about 60% of the Chinese population eats the grain daily. To safeguard the production of this important crop, about 85% of wheat areas in the country are under irrigation or high rainfall conditions. However, wheat production in the future will be challenged by the increasing occurrence and magnitude of adverse and extreme weather events. In this paper, we present an analysis that combines outputs from a wide range of General Circulation Models (GCMs) with observational data to produce more detailed projections of local climate suitable for assessing the impact of increasing heat stress events on wheat yield. We ran the assessment at 36 representative sites in China using the crop growth model CSM-CropSim Wheat of DSSAT 4.5. The simulations based on historical data show that this model is suitable for quantifying yield damage caused by heat stress. In comparison with the observations of the 1996-2005 baseline, our simulations for the future indicate that by 2100, the projected increases in heat stress would lead to an ensemble-mean yield reduction of 7.1% (with a probability of 80%) for winter wheat and 17.5% (with a probability of 96%) for spring wheat under the irrigated condition. Although such losses can be fully compensated by the CO2 fertilization effect as parameterized in DSSAT 4.5, great caution is needed in interpreting this fertilization effect because existing dynamic crop models are unable to incorporate the effect of CO2 acclimation (the growth-enhancing effect decreases over time) and other offsetting forces.
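The ensemble-mean figures above are averages of per-GCM relative yield changes, with the quoted probability reflecting cross-model agreement; a minimal sketch of that aggregation, using hypothetical per-model changes rather than the study's outputs:

```python
# Ensemble aggregation of per-GCM relative yield changes (hypothetical values).
per_gcm_changes = [-0.05, -0.08, -0.06, -0.09, -0.07]

ensemble_mean = sum(per_gcm_changes) / len(per_gcm_changes)
prob_of_loss = sum(c < 0 for c in per_gcm_changes) / len(per_gcm_changes)
```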