Predictors and Temporal Trends of Withdrawal of Life-Sustaining Therapy After Acute Stroke in the Florida Stroke Registry
OBJECTIVES: Temporal trends and factors associated with the withdrawal of life-sustaining therapy (WLST) after acute stroke are not well determined.
DESIGN: Observational study (2008-2021).
SETTING: Florida Stroke Registry (152 hospitals).
PATIENTS: Acute ischemic stroke (AIS), intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH) patients.
INTERVENTIONS: None.
MEASUREMENTS AND MAIN RESULTS: Variable importance plots were generated to identify the factors most predictive of WLST. Area under the receiver operating characteristic curve (AUC) was calculated to assess the performance of logistic regression (LR) and random forest (RF) models. Regression analysis was applied to evaluate temporal trends. Among 309,393 AIS patients, 47,485 ICH patients, and 16,694 SAH patients, 9%, 28%, and 19%, respectively, subsequently had WLST. Patients who had WLST were older (77 vs 70 yr), more often women (57% vs 49%) and White (76% vs 67%), had greater stroke severity (National Institutes of Health Stroke Scale score ≥ 5: 29% vs 19%), and were more likely to be hospitalized in comprehensive stroke centers (52% vs 44%), to have Medicare insurance (53% vs 44%), and to have impaired level of consciousness (38% vs 12%). The predictors most associated with the decision to WLST in AIS were age, stroke severity, region, insurance status, center type, race, and level of consciousness (RF AUC 0.93; LR AUC 0.85). Predictors in ICH included age, impaired level of consciousness, region, race, insurance status, center type, and prestroke ambulation status (RF AUC 0.76; LR AUC 0.71). Predictors in SAH included age, impaired level of consciousness, region, insurance status, race, and stroke center type (RF AUC 0.82; LR AUC 0.72). Despite a decrease in the rates of early WLST (< 2 d) and in mortality, the overall rates of WLST remained stable.
CONCLUSIONS: In hospitalized acute stroke patients in Florida, factors other than brain injury alone contribute to the decision to WLST. Potential predictors not measured in this study include education, culture, faith and beliefs, and patient/family and physician preferences. The overall rates of WLST have not changed in the last 2 decades.
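Model performance above is reported as area under the ROC curve. As a minimal illustrative sketch (not the registry study's actual pipeline), AUC can be computed directly from predicted scores via its Mann-Whitney interpretation: the probability that a randomly chosen positive case (WLST) receives a higher score than a randomly chosen negative case.

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example (hypothetical data): 1 = WLST, 0 = no WLST,
# scores are model-predicted probabilities of WLST.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used; the point of the sketch is only that AUC is a ranking measure, independent of any single decision threshold.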
ZikaPLAN: addressing the knowledge gaps and working towards a research preparedness network in the Americas.
Zika Preparedness Latin American Network (ZikaPLAN) is a research consortium funded by the European Commission to address the research gaps in combating Zika and to establish a sustainable network with research capacity building in the Americas. Here we report on ZikaPLAN's mid-term achievements from its initiation in October 2016 to June 2019, illustrating the research objectives of the 15 work packages, which range from virology, diagnostics, entomology and vector control, and modelling to clinical cohort studies in pregnant women and neonates, as well as studies on the neurological complications of Zika infections in adolescents and adults. For example, the Neuroviruses Emerging in the Americas Study (NEAS) has set up more than 10 clinical sites in Colombia. Through the Butantan Phase 3 dengue vaccine trial, we have access to samples of 17,000 subjects in 14 different geographic locations in Brazil. To address the lack of access to clinical samples for diagnostic evaluation, ZikaPLAN set up a network of quality sites with access to well-characterized clinical specimens and capacity for independent evaluations. The International Committee for Congenital Anomaly Surveillance Tools was formed with global representation from regional networks conducting birth defects surveillance. We have collated a comprehensive inventory of resources and tools for birth defects surveillance and developed an app for low-resource regions that facilitates the coding and description of all major externally visible congenital anomalies, including congenital Zika syndrome. The Research Capacity Network (REDe) is a shared and open resource centre where researchers and health workers can access tools, resources, and support, enabling better and more research in the region. Addressing the gap in research capacity in LMICs is pivotal to ensuring that broad-based systems are prepared for the next outbreak.
Our shared and open research space through REDe will be used to maximize the transfer of research into practice by summarizing the research output and by hosting the tools, resources, guidance, and recommendations generated by these studies. Leveraging the research from this consortium, we are working towards a research preparedness network.
TRY plant trait database – enhanced coverage and open access
Plant traits - the morphological, anatomical, physiological, biochemical and phenological characteristics of plants - determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait-based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits - almost complete coverage for 'plant growth form'. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait-environment relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We therefore conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.
Effects of hospital facilities on patient outcomes after cancer surgery: an international, prospective, observational study
Background: Early death after cancer surgery is higher in low-income and middle-income countries (LMICs) than in high-income countries, yet the impact of facility characteristics on early postoperative outcomes is unknown. The aim of this study was to examine the association of hospital infrastructure, resource availability, and processes with early outcomes after cancer surgery worldwide. Methods: A multimethods analysis was performed as part of the GlobalSurg 3 study, a multicentre, international, prospective cohort study of patients who had surgery for breast, colorectal, or gastric cancer. The primary outcomes were 30-day mortality and 30-day major complication rates. Potentially beneficial hospital facilities associated with 30-day mortality were identified by variable selection. Adjusted outcomes were determined using generalised estimating equations to account for patient characteristics and country income group, with population stratification by hospital. Findings: Between April 1, 2018, and April 23, 2019, facility-level data were collected for 9,685 patients across 238 hospitals in 66 countries (91 hospitals in 20 high-income countries; 57 hospitals in 19 upper-middle-income countries; and 90 hospitals in 27 low-income to lower-middle-income countries). The availability of five hospital facilities was inversely associated with mortality: ultrasound, CT scanner, critical care unit, opioid analgesia, and oncologist. After adjustment for case mix and country income group, hospitals with three or fewer of these facilities (62 hospitals, 1,294 patients) had higher mortality than those with four or five (adjusted odds ratio [OR] 3.85 [95% CI 2.58-5.75]; p<0.0001), with the excess mortality predominantly explained by a limited capacity to rescue patients after the development of major complications (63.0% vs 82.7%; OR 0.35 [0.23-0.53]; p<0.0001).
Across LMICs, improvements in hospital facilities would prevent one to three deaths for every 100 patients undergoing surgery for cancer. Interpretation: Hospitals with higher levels of infrastructure and resources have better outcomes after cancer surgery, independent of country income. Without urgent strengthening of hospital infrastructure and resources, the reductions in cancer-associated mortality associated with improved access will not be realised.
Hot Cognition: Effects of Emotion on Interference Resolution in Working Memory
Brain, Behavior, and Cognitive Sciences
http://deepblue.lib.umich.edu/bitstream/2027.42/85300/1/nmassad.pd
New-onset super-refractory status epilepticus: A case series of 26 patients
Objective
To better understand the heterogeneous population of patients with new-onset refractory status epilepticus (NORSE), we studied the most severe cases in patients who presented with new-onset super-refractory status epilepticus (NOSRSE).
Methods
We report a retrospective case series of 26 adults admitted to the Columbia University Irving Medical Center neurologic intensive care unit (NICU) from February 2009 to February 2016 with NOSRSE. We evaluated demographics, diagnostic studies, and treatment course. Outcomes were modified Rankin Scale score (mRS) at hospital discharge and at the most recent follow-up visit (minimum of 2 months after discharge), NICU and hospital length of stay, and long-term antiepileptic drug use.
Results
Of the 252 patients with refractory status epilepticus, 27 had NORSE, and 26 of those had NOSRSE. Age was bimodally distributed, with peaks at 27 and 63 years. The majority (96%) had an infectious or psychiatric prodrome. Etiology was cryptogenic in 73%, autoimmune in 19%, and infectious in 8%. Seven patients (27%) underwent brain biopsy, autopsy, or both; 3 (12%) were diagnostic (herpes simplex encephalitis, Candida encephalitis, and acute demyelinating encephalomyelitis). On discharge, 6 patients (23%) had a good or fair outcome (mRS 0-3). Of the patients with long-term follow-up data (median 9 months, interquartile range 2-22 months), 12 (71%) had mRS 0-3.
Conclusion
Among our cohort, nearly all patients with NORSE had NOSRSE. The majority of cases were cryptogenic, with few antibody-positive cases identified. Neuropathology was diagnostic in 12% of cases. Although only 23% of patients had a good or fair outcome on discharge, 71% met these criteria at follow-up.
Implementing a Bedside Percutaneous Tracheostomy and Ultrasound Gastrostomy Team Reduces Length of Stay and Hospital Costs Across Multiple Critical Care Units in a 1500 Bed Tertiary Care Center
Background: Thousands of critically ill patients every year in the United States receive tracheostomy and gastrostomy procedures. Recent research has investigated the benefits of a combined team approach to these procedures, with associated decreases in length of stay (LOS) and hospital costs. This study's objective was to determine if implementing a bedside percutaneous tracheostomy and percutaneous ultrasound gastrostomy (PUG) team would reduce LOS and hospital costs. Design and Methods: This retrospective chart review compares the impact of implementing an ICU bedside percutaneous tracheostomy and PUG service team with the hospital's previous workflow (ie, pre-implementation). Inclusion criteria were adult patients with ventilator-dependent respiratory failure (VDRF) who had a clinical indication for both procedures while admitted to the ICU and received both tracheostomy and gastrostomy procedures during the hospitalization. Pre- and post-implementation groups were compared across patient demographics, clinical characteristics, and outcomes. ICU LOS, hospital LOS, and total hospital costs were the primary outcome measures. Results: A total of 101 adult critically ill patients were included in the analysis: 49 patients in the pre-implementation group and 52 patients in the post-implementation group (ie, PUG group). Patients in the PUG group had significantly shorter mean ICU LOS and hospital LOS, with 10.9- and 14.7-day reductions, respectively (p = 0.010, p = 0.006). PUG group patients also had a significant reduction in total hospital costs, a per-patient cost savings of $34,778 (p = 0.043). Conclusions: This study supports implementing a bedside percutaneous tracheostomy and PUG team to reduce LOS and total hospital costs in patients with VDRF.
Confidence in a Crisis: A Pilot Multi-disciplinary Stroke Alert Simulation for First-year Neurology Residents (P8-7.006)
Abstract only
Percutaneous Ultrasound-guided Placement of Gastrostomy Tube in Neurocritically Ill Patient (P2-2.004)
Abstract only
Quantitative EEG-Based Seizure Estimation in Super-Refractory Status Epilepticus
The objective of this study was to evaluate the accuracy of seizure burden estimation in patients with super-refractory status epilepticus (SRSE) using quantitative electroencephalography (qEEG).
EEG recordings from 69 patients with SRSE (2009-2019) were reviewed and annotated for seizures by three groups of reviewers: two board-certified neurophysiologists using only raw EEG (gold standard), two neurocritical care providers with substantial experience in qEEG analysis (qEEG experts), and two inexperienced qEEG readers (qEEG novices) using only a qEEG trend panel.
Raw EEG experts identified 35 (51%) patients with seizures, accounting for 2,950 seizures (3,126 min). qEEG experts had a sensitivity of 93%, a specificity of 61%, a false positive rate of 6.5 per day, and good agreement (κ = 0.64) between the two qEEG experts. qEEG novices had a sensitivity of 98.5%, a specificity of 13%, a false positive rate of 15 per day, and fair agreement (κ = 0.4) between the two qEEG novices. Seizure burden was not different between the qEEG experts and the gold standard (3,257 vs 3,126 min), whereas qEEG novices reported a higher burden (6,066 vs 3,126 min).
Both qEEG experts and novices had high sensitivity but low specificity for seizure detection in patients with SRSE. qEEG could be a useful tool for qEEG experts to estimate seizure burden in patients with SRSE.
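The sensitivity, specificity, and κ values above all derive from standard 2×2 confusion-matrix arithmetic. A minimal sketch of those calculations, using made-up counts rather than this study's data:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 confusion
    matrix (tp/fp = true/false positives, fn/tn = false/true negatives)."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)   # detected events / all true events
    specificity = tn / (tn + fp)   # correct negatives / all true negatives
    observed = (tp + tn) / n       # raw agreement between the two ratings
    # agreement expected by chance, from the row/column marginals
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Hypothetical counts for illustration only:
sens, spec, kappa = confusion_metrics(tp=40, fp=10, fn=10, tn=40)
# here sensitivity and specificity are both 0.8, and kappa is 0.6
```

Note that κ discounts the agreement expected by chance, which is why the novices' high sensitivity can coexist with only fair agreement (κ = 0.4).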