The outcome of extubation failure in a community hospital intensive care unit: a cohort study
INTRODUCTION: Extubation failure has been associated with poor intensive care unit (ICU) and hospital outcomes in tertiary care medical centers. Given the large proportion of critical care delivered in the community setting, our purpose was to determine the impact of extubation failure on patient outcomes in a community hospital ICU. METHODS: A retrospective cohort study was performed using data gathered in a 16-bed medical/surgical ICU in a community hospital. During 30 months, all patients with acute respiratory failure admitted to the ICU were included in the source population if they were mechanically ventilated by endotracheal tube for more than 12 hours. Extubation failure was defined as reinstitution of mechanical ventilation within 72 hours (n = 60), and the control cohort included patients who were successfully extubated at 72 hours (n = 93). RESULTS: The primary outcome was total ICU length of stay after the initial extubation. Secondary outcomes were total hospital length of stay after the initial extubation, ICU mortality, hospital mortality, and total hospital cost. Patient groups were similar in terms of age, sex, and severity of illness, as assessed using admission Acute Physiology and Chronic Health Evaluation II score (P > 0.05). Both ICU (1.0 versus 10 days; P < 0.01) and hospital length of stay (6.0 versus 17 days; P < 0.01) after initial extubation were significantly longer in reintubated patients. ICU mortality was significantly higher in patients who failed extubation (odds ratio = 12.2, 95% confidence interval [CI] = 1.5–101; P < 0.05), but there was no significant difference in hospital mortality (odds ratio = 2.1, 95% CI = 0.8–5.4; P = 0.15). Total hospital costs (estimated from direct and indirect charges) were significantly increased in patients who failed extubation (95% CI of the mean increase = US$22,573–45,280; P < 0.01). CONCLUSION: Extubation failure in a community hospital is univariately associated with prolonged inpatient care and significantly increased cost. Corroborating data from tertiary care centers, these adverse outcomes highlight the importance of accurate predictors of extubation outcome.
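Illustration (not part of the abstract): the ICU-mortality result above is an odds ratio with a confidence interval; a minimal Python sketch of a Wald-style calculation, using hypothetical counts rather than the study's data, is:

```python
# Odds ratio with a Wald 95% CI from a 2x2 table.
# The counts passed below are hypothetical, not the study's data.
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """a, b: deaths, survivors in the failed-extubation group;
    c, d: deaths, survivors in the successfully extubated group."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical example: 8/52 ICU deaths among reintubated patients vs 1/92 among controls.
print(odds_ratio_wald_ci(8, 52, 1, 92))
```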
Do Viruses Require the Cytoskeleton?
Background: It is generally thought that viruses require the cytoskeleton during their replication cycle. However, recent experiments in our laboratory with rubella virus, a member of the family Togaviridae (genus Rubivirus), revealed that replication proceeded in the presence of drugs that inhibit microtubules. This study was done to expand on this observation. Findings: The replication of three diverse viruses, Sindbis virus (SINV; family Togaviridae), vesicular stomatitis virus (VSV; family Rhabdoviridae), and herpes simplex virus (family Herpesviridae), was quantified by the titer (plaque-forming units/ml; pfu/ml) produced in cells treated with one of three anti-microtubule drugs (colchicine, noscapine, or paclitaxel) or the anti-actin filament drug, cytochalasin D. None of these drugs affected the replication of these viruses. Specific steps in the SINV infection cycle were examined during drug treatment to determine whether alterations in the replication cycle could be detected in the absence of a functional cytoskeletal system, i.e. redistribution of viral proteins and replication complexes or increases/decreases in their abundance. These investigations revealed that the observable impacts were a colchicine-mediated fragmentation of the Golgi apparatus and concomitant intracellular redistribution of the virion structural proteins, along with a reduction in viral genome and subgenomic RNA levels, but not double-stranded RNA or protein levels. Conclusions: The failure of poisons affecting the cytoskeleton to inhibit the replication of a diverse set of viruses strongly suggests that viruses do not require a functional cytoskeletal system for replication, either because they do not utilize it or because they are able to utilize alternate pathways when it is not available.
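A brief aside (not from the study): titers of the kind used above are derived from plaque counts by simple arithmetic; a minimal sketch with hypothetical assay values:

```python
# Computing a virus titer (pfu/ml) from a plaque assay.
# The plaque count, dilution, and plated volume below are hypothetical values.
def titer_pfu_per_ml(plaque_count, dilution_factor, volume_plated_ml):
    """Titer = plaques / (dilution plated * volume plated)."""
    return plaque_count / (dilution_factor * volume_plated_ml)

# e.g. 42 plaques on a plate inoculated with 0.1 ml of a 1e-6 dilution
print(f"{titer_pfu_per_ml(42, 1e-6, 0.1):.2e} pfu/ml")  # 4.20e+08 pfu/ml
```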
Personalized Depression Prevention: A Randomized Controlled Trial to Optimize Effects Through Risk-Informed Personalization
Objective: To evaluate whether evidence-based depression prevention programs can be optimized by matching youths to interventions that address their psychosocial vulnerabilities. Method: This randomized controlled trial included 204 adolescents (mean [SD] age = 14.26 [1.65] years; 56.4% female). Youths were categorized as high or low on cognitive and interpersonal risks for depression and randomly assigned to Coping With Stress (CWS), a cognitive-behavioral program, or Interpersonal Psychotherapy–Adolescent Skills Training (IPT-AST), an interpersonal program. Some participants received a match between risk and prevention (eg, high cognitive–low interpersonal risk teen in CWS, low cognitive–high interpersonal risk teen in IPT-AST), others received a mismatch (eg, low cognitive–high interpersonal risk teen in CWS). Outcomes were depression diagnoses and symptoms through 18 months postintervention (21 months total). Results: Matched adolescents showed significantly greater decreases in depressive symptoms than mismatched adolescents from postintervention through 18-month follow-up and across the entire 21-month study period (effect size [d] = 0.44, 95% CI = 0.02, 0.86). There was no significant difference in rates of depressive disorders among matched adolescents compared with mismatched adolescents (12.0% versus 18.3%, t(193) = .78, p = .44). Conclusion: This study illustrates one approach to personalizing depression prevention as a form of precision mental health. Findings suggest that risk-informed personalization may enhance effects beyond a one-size-fits-all approach. Clinical trial registration information: Bending Adolescent Depression Trajectories Through Personalized Prevention; https://www.clinicaltrials.gov/; NCT01948167
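For orientation only (not the trial's analysis code): the effect size reported above is a standardized mean difference (Cohen's d); a minimal sketch of d with an approximate 95% CI, using hypothetical group summaries:

```python
# Cohen's d for a between-group difference in symptom change, with an approximate
# 95% CI. The means, SDs, and group sizes below are hypothetical.
import math

def cohens_d_with_ci(m1, m2, sd1, sd2, n1, n2, z=1.96):
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))  # pooled SD
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))  # approximate SE of d
    return d, (d - z * se, d + z * se)

# Hypothetical: symptom-change scores for matched vs mismatched adolescents
print(cohens_d_with_ci(m1=-6.0, m2=-3.5, sd1=5.5, sd2=5.8, n1=100, n2=104))
```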
Plasma sTNFR1 and IL8 for prognostic enrichment in sepsis trials: a prospective cohort study.
Background: Enrichment strategies improve therapeutic targeting and trial efficiency, but enrichment factors for sepsis trials are lacking. We determined whether concentrations of soluble tumor necrosis factor receptor-1 (sTNFR1), interleukin-8 (IL8), and angiopoietin-2 (Ang2) could identify sepsis patients at higher mortality risk and serve as prognostic enrichment factors. Methods: In a multicenter prospective cohort study of 400 critically ill septic patients, we derived and validated thresholds for each marker and expressed prognostic enrichment using risk differences (RD) of 30-day mortality as predictive values. We then used decision curve analysis to simulate the prognostic enrichment of each marker and compare different prognostic enrichment strategies. Measurements and main results: An admission sTNFR1 concentration > 8861 pg/ml identified patients with increased mortality in both the derivation (RD 21.6%) and validation (RD 17.8%) populations. Among immunocompetent patients, an IL8 concentration > 94 pg/ml identified patients with increased mortality in both the derivation (RD 17.7%) and validation (RD 27.0%) populations. An Ang2 level > 9761 pg/ml identified patients at 21.3% and 12.3% increased risk of mortality in the derivation and validation populations, respectively. Using sTNFR1 or IL8 to select high-risk patients improved clinical trial power and efficiency compared to selecting patients with septic shock. Ang2 did not outperform septic shock as an enrichment factor. Conclusions: Thresholds for sTNFR1 and IL8 consistently identified sepsis patients with higher mortality risk and may have utility for prognostic enrichment in sepsis trials.
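A sketch to make the power argument concrete (not the study's method): prognostic enrichment helps because the control-arm event rate is higher among biomarker-selected patients, so fewer patients are needed to detect the same relative effect. The event rates and effect size below are hypothetical.

```python
# Per-arm sample size for a two-proportion comparison, showing how a higher
# control-arm mortality (an "enriched" population) shrinks the required n.
from scipy.stats import norm

def n_per_arm(p_control, rrr, alpha=0.05, power=0.8):
    p_treat = p_control * (1 - rrr)          # hypothetical relative risk reduction
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_control + p_treat) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_control * (1 - p_control) + p_treat * (1 - p_treat)) ** 0.5) ** 2
    return num / (p_control - p_treat) ** 2

# Hypothetical: 20% 30-day mortality unenriched vs 38% after biomarker selection,
# both powered for a 25% relative reduction.
print(round(n_per_arm(0.20, 0.25)), round(n_per_arm(0.38, 0.25)))
```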
Hospital-Based Acute Care Use in Survivors of Septic Shock
OBJECTIVES: Septic shock is associated with increased long-term morbidity and mortality. However, little is known about the use of hospital-based acute care in survivors after hospital discharge. The objectives of the study were to examine the frequency, timing, causes, and risk factors associated with emergency department visits and hospital readmissions within 30 days of discharge.
DESIGN: Retrospective cohort study.
SETTING: Tertiary, academic hospital in the United States.
PATIENTS: Patients admitted with septic shock (serum lactate ≥ 4 mmol/L or refractory hypotension) and discharged alive to a nonhospice setting between 2007 and 2010.
INTERVENTIONS: None.
MEASUREMENTS AND MAIN RESULTS: The coprimary outcomes were all-cause hospital readmission and emergency department visits (treat-and-release encounters) within 30 days to any of the three health system hospitals. Of 269 at-risk survivors, 63 (23.4%; 95% CI, 18.2-28.5) were readmitted within 30 days of discharge and another 12 (4.5%; 95% CI, 2.3-7.7) returned to the emergency department for a treat-and-release visit. Readmissions occurred within 15 days of discharge in 75% of cases and were more likely in oncology patients (p=0.001) and patients with a longer hospital length of stay (p=0.04). Readmissions were frequently due to another life-threatening condition and resulted in death or discharge to hospice in 16% of cases. The reasons for readmission were deemed potentially related to the index septic shock hospitalization in 78% (49 of 63) of cases. The most common cause was infection related, accounting for 46% of all 30-day readmissions, followed by cardiovascular or thromboembolic events (18%).
CONCLUSIONS: The use of hospital-based acute care appeared to be common in septic shock survivors. Encounters often led to readmission within 15 days of discharge, were frequently due to another acute condition, and appeared to result in substantial morbidity and mortality. Given the potential public health implications of these findings, validation studies are needed.
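Side note (not from the study): the readmission estimate above pairs a proportion (63 of 269) with a 95% CI; the study's interval method is not stated, so the sketch below shows two common choices.

```python
# 95% CI for a readmission proportion such as 63/269.
# Normal-approximation and Wilson intervals shown; the study's exact method is unknown.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 63, 269
for method in ("normal", "wilson"):
    lo, hi = proportion_confint(count, nobs, alpha=0.05, method=method)
    print(f"{method}: {count / nobs:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```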
The Structure of Critical Care Transfer Networks
Rationale: Moving patients from low-performing hospitals to high-performing hospitals may improve patient outcomes. These transfers may be particularly important in critical care, where small relative improvements can yield substantial absolute changes in survival.
Objective: To characterize the existing critical care network in terms of the pattern of transfers.
Methods: In a retrospective cohort study, the nationwide 2005 Medicare fee-for-service claims were used to identify the interhospital transfer of critically ill patients, defined as instances where patients used critical care services in 2 temporally adjacent hospitalizations.
Measurements: We measured the characteristics of the interhospital transfer network and the extent to which intensive care unit patients are referred to each hospital in that network, a continuous hospital-level quantitative measure known as centrality. We evaluated associations between hospital centrality and organizational, medical, surgical, and radiologic capabilities.
Results: There were 47,820 transfers of critically ill patients among 3308 hospitals. Overall, 4.5% of all critical care stays of any length involved an interhospital critical care transfer. Hospitals transferred out to a mean of 4.4 other hospitals. More central hospital positions were associated with multiple indicators of increased capability. Hospital characteristics explained 40.7% of the variance in hospitals' centrality.
Conclusions: Critical care transfers are common, and traverse an informal but structured network. The centrality of a hospital is associated with increased capability in delivery of services, suggesting that existing transfers generally direct patients toward better resourced hospitals. Studies of this network promise further improvements in patient outcomes and efficiency of care.
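By way of illustration (the abstract does not specify how centrality was computed): the sketch below scores hospitals on a hypothetical directed transfer graph using two candidate measures.

```python
# Hospital "centrality" from interhospital transfer records (hypothetical data).
# Weighted in-degree counts transfers received; PageRank additionally credits
# transfers received from well-connected senders.
import networkx as nx

transfers = [  # (sending_hospital, receiving_hospital, n_transfers), hypothetical
    ("A", "C", 12), ("B", "C", 7), ("A", "D", 3), ("D", "C", 5), ("B", "D", 2),
]
G = nx.DiGraph()
G.add_weighted_edges_from(transfers)

in_strength = dict(G.in_degree(weight="weight"))
pagerank = nx.pagerank(G, weight="weight")
print(in_strength)
print(sorted(pagerank.items(), key=lambda kv: -kv[1]))
```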
* Note: if you find this of interest, you may also be interested in the follow-up manuscript exploring the determinants and efficiency of the network in targeting transfers for patients with acute myocardial infarction (AMI, aka heart attacks) at http://hdl.handle.net/2027.42/78005.
Supported in part by NIH grants HL07891-09 and K08 HL091249 and an ATS Fellows Career Development Award.
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/63027/1/09.Medical.Care.Network.Structure.pd
Interstitial lung disease in the elderly
Background
Despite the relationship between idiopathic pulmonary fibrosis (IPF) and advancing age, little is known about the epidemiology of interstitial lung disease (ILD) in the elderly. We describe the diagnoses, clinical characteristics, and outcomes of patients who were elderly at the time of ILD diagnosis.
Methods
Among subjects from a prospective cohort study of ILD, elderly was defined as age ≥ 70 years. Diagnoses were derived from a multidisciplinary review. Differences between elderly and nonelderly groups were determined using the χ2 test and analysis of variance.
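As an aside (not the study's code), a minimal sketch of the χ2 and analysis-of-variance comparisons named above, run on hypothetical data:

```python
# Chi-square test of diagnosis distribution by age group, and one-way ANOVA on a
# continuous measure across groups. All values below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, f_oneway

# Rows: nonelderly, elderly; columns: four hypothetical diagnosis categories
table = np.array([[90, 85, 30, 20],
                  [36, 27, 9, 6]])
chi2, p_chi2, dof, _ = chi2_contingency(table)

# One-way ANOVA, e.g. FVC % predicted across three hypothetical groups
g1, g2, g3 = [72, 68, 80, 75], [61, 70, 66, 58], [77, 83, 69, 74]
f_stat, p_anova = f_oneway(g1, g2, g3)

print(f"chi2={chi2:.2f} (p={p_chi2:.3f}), F={f_stat:.2f} (p={p_anova:.3f})")
```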
Results
Of the 327 subjects enrolled, 80 (24%) were elderly. The majority of elderly subjects were white men. The most common diagnoses were unclassifiable ILD (45%), IPF (34%), connective tissue disease (CTD)-ILD (11%), and hypersensitivity pneumonitis (8%). Most elderly subjects (74%) with unclassifiable ILD had an imaging pattern inconsistent with usual interstitial pneumonia (UIP). There were no significant differences in pulmonary function or 3-year mortality between nonelderly and elderly subjects combined or in a subgroup analysis of those with IPF.
Conclusions
Although IPF was the single most common diagnosis, the majority of elderly subjects had non-IPF ILD. Our findings highlight the need for every patient with new-onset ILD, regardless of age, to be surveyed for exposures and findings of CTD. Unclassifiable ILD was common among the elderly, but for most, the radiographic pattern was inconsistent with UIP. Although the effect of ILD may be more pronounced in the elderly due to reduced global functionality, ILD was not more severe or aggressive in this group.
Chest Fat Quantification via CT Based on Standardized Anatomy Space in Adult Lung Transplant Candidates
Purpose
Overweight and underweight conditions are considered relative contraindications to lung transplantation due to their association with excess mortality. Yet, recent work suggests that body mass index (BMI) does not accurately reflect adipose tissue mass in adults with advanced lung diseases. Alternative and more accurate measures of adiposity are needed. Chest fat estimation by routine computed tomography (CT) imaging may therefore be important for identifying high-risk lung transplant candidates. In this paper, an approach to chest fat quantification and quality assessment based on a recently formulated concept of standardized anatomic space (SAS) is presented. The goal of the paper is to address several key questions, not previously addressed in the literature, about chest fat quantity and quality assessment based on a single-slice CT (whether in the chest, abdomen, or thigh) versus a volumetric CT.
Methods
Unenhanced chest CT image data sets from 40 adult lung transplant candidates (age 58 ± 12 yrs and BMI 26.4 ± 4.3 kg/m2), 16 with chronic obstructive pulmonary disease (COPD), 16 with idiopathic pulmonary fibrosis (IPF), and the remainder with other conditions were analyzed together with single slices acquired for each patient at the L5 vertebral level and at the mid-thigh level. The thoracic body region and the interface between subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in the chest were consistently defined in all patients and delineated using Live Wire tools. The SAT and VAT components of the chest were then segmented, guided by this interface. The SAS approach was used to identify the corresponding anatomic slices in each chest CT study, and SAT and VAT areas in each slice as well as their whole volumes were quantified. Similarly, the SAT and VAT components were segmented in the abdomen and thigh slices. Key parameters of the attenuation (Hounsfield unit, HU) distributions were determined from each chest slice and from the whole chest volume separately for SAT and VAT components. The same parameters were also computed from the single abdominal and thigh slices. The ability of the slice at each anatomic location in the chest (and abdomen and thigh) to act as a marker of the measures derived from the whole chest volume was assessed via Pearson correlation coefficient (PCC) analysis.
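To illustrate the PCC analysis described above (not the authors' pipeline): it correlates single-slice fat areas against whole-chest fat volumes across subjects; a minimal sketch on simulated, hypothetical data:

```python
# Pearson correlation between a single-slice fat area and the whole-chest fat volume
# across 40 simulated subjects. All values are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
chest_sat_volume = rng.normal(2500, 600, size=40)                    # cm^3, whole chest
t8_slice_sat_area = 0.04 * chest_sat_volume + rng.normal(0, 8, 40)   # cm^2, single slice

r, p = pearsonr(t8_slice_sat_area, chest_sat_volume)
print(f"PCC = {r:.2f} (p = {p:.3g})")
```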
Results
The SAS approach correctly identified slice locations in different subjects in terms of vertebral levels. PCC between chest fat volume and chest slice fat area was maximal at the T8 level for SAT (0.97) and at the T7 level for VAT (0.86), and was modest between chest fat volume and abdominal slice fat area for SAT and VAT (0.73 and 0.75, respectively). However, correlation was weak between chest fat volume and thigh slice fat area for SAT and VAT (0.52 and 0.37, respectively), and between chest fat volume (SAT and VAT) and BMI (0.65 and 0.28, respectively). These same single-slice locations with maximal PCC were found for SAT and VAT within both COPD and IPF groups. Most of the attenuation properties derived from the whole chest volume and single best chest slice for VAT (but not for SAT) were significantly different between COPD and IPF groups.
Conclusions
This study demonstrates a new way of optimally selecting slices whose measurements may be used as markers of similar measurements made on the whole chest volume. The results suggest that one or two slices imaged at the T7 and T8 vertebral levels may be enough to reliably estimate the total SAT and VAT components of chest fat, as well as the quality of chest fat as determined by attenuation distributions in the entire chest volume.
Plasma Intercellular Adhesion Molecule-1 and von Willebrand Factor in Primary Graft Dysfunction After Lung Transplantation
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/75029/1/j.1600-6143.2007.01981.x.pd