Selective Use of Pericardial Window and Drainage as Sole Treatment for Hemopericardium from Penetrating Chest Trauma
Background
Penetrating cardiac injuries (PCIs) are highly lethal, and sternotomy is considered mandatory for suspected PCI. Recent literature suggests that a pericardial window (PCW) may be sufficient for superficial cardiac injuries, allowing drainage of hemopericardium and assessment for continued bleeding and instability. The objective of this study was to review patients with PCI managed with sternotomy or PCW and to compare outcomes.
Methods
All patients with penetrating chest trauma from 2000 to 2016 requiring PCW or sternotomy were reviewed. Data were collected for patients who had PCW for hemopericardium managed with a pericardial drain alone, or who underwent sternotomy for cardiac injuries of grade 1–3 according to the American Association for the Surgery of Trauma (AAST) Cardiac Organ Injury Scale (OIS). The PCW+drain group was compared with the Sternotomy group using Fisher's exact test and the Wilcoxon rank-sum test, with P<0.05 considered statistically significant.
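As a minimal illustration of the comparison described above, the sketch below runs the two named tests with SciPy. The group labels, counts, and length-of-stay values are invented for demonstration and are not data from the study.

```python
# Illustrative sketch (not the authors' analysis code): comparing the PCW+drain
# and Sternotomy groups with the tests named in the Methods, using SciPy.
from scipy.stats import fisher_exact, ranksums

# Hypothetical 2x2 table for a categorical outcome (e.g., exploratory laparotomy)
# rows: PCW+drain vs. Sternotomy; columns: yes vs. no (counts are made up)
table = [[1, 4],
         [2, 5]]
_, p_categorical = fisher_exact(table)

# Hypothetical continuous outcome, e.g., hospital length of stay in days
pcw_drain_los = [4, 6, 7, 9, 12]
sternotomy_los = [5, 6, 8, 10, 11, 14, 15]
_, p_continuous = ranksums(pcw_drain_los, sternotomy_los)

alpha = 0.05  # significance threshold used in the study
print(f"Fisher's exact p={p_categorical:.3f}, Wilcoxon rank-sum p={p_continuous:.3f}")
print("significant" if min(p_categorical, p_continuous) < alpha else "not significant")
```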
Results
Sternotomy was performed in 57 patients for suspected PCI, including 7 with AAST OIS grade 1–3 injuries (Sternotomy group). Four patients had pericardial injuries and three had partial-thickness cardiac injuries, two of which were suture-repaired. Average blood drained was 285 mL (range 100–500 mL). PCW was performed in 37 patients, and 21 had hemopericardium; 16 of these patients proceeded to sternotomy and 5 were treated with pericardial drainage (PCW+drain group). All PCW+drain patients had suction evacuation of hemopericardium, pericardial lavage, and verified cessation of bleeding, followed by pericardial drain placement and admission to the intensive care unit (ICU). Average blood drained was 240 mL (range 40–600 mL), and pericardial drains were removed on average on postoperative day 3.6 (range 2–5). There was no significant difference in demographics, injury mechanism, Revised Trauma Score, exploratory laparotomies, hospital or ICU length of stay, or ventilator days. No in-hospital mortality occurred in either group.
Conclusions
Hemodynamically stable patients with penetrating chest trauma and hemopericardium may be safely managed with PCW, lavage and drainage with documented cessation of bleeding, and postoperative ICU monitoring.
Level of evidence
Therapeutic study, level IV
Seasonal Impact in Admissions and Burn Profiles in a Desert Burn Unit
In much of the world, patterns of burn injury vary with the season. This study examines the seasonal impact on admissions, acuity, mortality, and resource utilization at an accredited burn center located in a desert climate. It is a retrospective analysis of patients admitted from March 1, 2014 to February 28, 2019 for acute burns. Patients were categorized into four seasonal groups based on their date of admission: Spring (March, April, May), Summer (June, July, August), Fall (September, October, November), and Winter (December, January, February).
A total of 1519 patients were included; 1016 (66.9%) were male, with an average age of 39.6 years. Most admissions occurred during the Summer (35%), followed by Winter (23%), Spring (21%), and Fall (21%). The most common mechanisms were flame/flash (677, 44.6%), scald (414, 27.3%), and pavement (194, 12.8%). Of the 194 pavement burn admissions, 169 (87.1%) occurred during the Summer. Average hospital length of stay (LOS) was 18.7 days and was longest for pavement burns at 25.8 days, followed by flash/flame at 21.9 days and electrical at 11.8 days. A log-rank test showed longer LOS for pavement burns compared with all other etiologies combined. Average daily census was highest in Summer (14.1), followed by Winter (9.8), Spring (8.7), and Fall (7.1).
Summer was the peak season for burn admissions and average daily census, driven in large part by pavement burns. Because these burns carry a longer LOS, they increase resource utilization compared with other mechanisms. The volume of non-pavement burns appears steady throughout the year.
Skilled Maneuvering: Evaluation of a Young Driver Advanced Training Program
BACKGROUND: Young drivers (YDs) are disproportionately injured and killed in motor vehicle crashes throughout the United States. Nationally, YDs aged 16 to 20 years constituted nearly 9% of all traffic-related fatalities in 2018. A Nevada Advanced Driver Training (ADT) program for YDs aims to reduce YD traffic injuries and fatalities through four modules taught by professional drivers. The program modules include classroom-based didactic lessons and hands-on driving exercises intended to improve safe driving knowledge and behaviors. The overarching purpose of this study was to determine whether the Nevada ADT program achieved its objectives for improving safe driving knowledge and behaviors, based on program-provided data. A secondary purpose was to provide recommendations to improve program efficiency, delivery, and evaluation. The findings of this study would serve as a basis to develop and evaluate future ADT interventions. METHODS: This exploratory mixed-methods outcome evaluation used secondary data collected during three weekend events in December 2018 and March 2019. The study population consisted of high school students with a driver's license or learner's permit. Pretests/posttests and pre-event questionnaires on student driving history were matched and linked via personal identifiers. The pretests/posttests measured changes in knowledge of safe driving behaviors. This study used descriptive statistics, dependent-samples t tests, Pearson's r correlation coefficient, and χ2 (McNemar's test), with significance set at p = 0.05 and a 95% confidence interval. Statistical analysis was conducted using IBM SPSS version 24 (Armonk, NY). Qualitative data analysis consisted of content and thematic analysis. RESULTS: Responses from YD participants (N = 649) were provided for analysis. Aggregate YD participant knowledge of safe driving behaviors increased from a mean of 43.9% (pretest) to 74.9% (posttest). CONCLUSION: The program achieved its intended outcomes of improving safe driving knowledge and behaviors among its target population.
Evaluating Long-Term Outcomes of a High School-Based Impaired and Distracted Driving Prevention Program
Motor vehicle crashes are one of the leading causes of death among teenagers. Many of these deaths are due to preventable causes, including impaired and distracted driving. You Drink, You Drive, You Lose (YDYDYL) is a prevention program to educate high school students about the consequences of impaired and distracted driving. YDYDYL was conducted at a public high school in Southern Nevada in March 2020. A secondary data analysis was conducted to compare knowledge and attitudes of previous participants with those of first-time participants. Independent-samples t tests and χ2 tests/Fisher's exact tests with post-contingency analysis were used to compare pre-event responses between students who had attended the program one year prior and students who had not. Significance was set at p < 0.05. A total of 349 students participated in the survey and were included for analysis; 177 had attended the program previously (50.7%) and 172 had not (49.3%). The mean age of previous participants and first-time participants was 16.2 years (SD ± 1.06) and 14.9 years (SD ± 0.92), respectively. Statistically significant differences in several self-reported baseline behaviors and attitudinal responses were found between the two groups; for example, 47.4% of previous participants compared with 29.4% of first-time participants disagreed that reading text messages only at a stop light was acceptable. Students were also asked how likely they were to intervene if a friend or family member was practicing unsafe driving behaviors; responses were similar between the two groups. Baseline behaviors and attitudes regarding impaired and distracted driving were more protective among previous participants than among first-time participants, suggesting the program results in long-term positive changes in behaviors and attitudes. The results of this secondary retrospective study may be useful for informing the implementation of future impaired and distracted driving prevention programs.
Outcomes in patients with gunshot wounds to the brain.
Introduction: Gunshot wounds to the brain (GSWB) confer high lethality and uncertain recovery. It is unclear which patients benefit from aggressive resuscitation, and furthermore whether patients with GSWB undergoing cardiopulmonary resuscitation (CPR) have potential for survival or organ donation. Therefore, we sought to determine the rates of survival and organ donation, as well as to identify factors associated with both outcomes, in patients with GSWB undergoing CPR. Methods: We performed a retrospective, multicenter study at 25 US trauma centers covering June 1, 2011 through December 31, 2017. Patients were included if they suffered isolated GSWB and required CPR at a referring hospital, in the field, or in the trauma resuscitation room. Patients were excluded for significant torso or extremity injuries, or if pregnant. Binomial regression models were used to determine predictors of survival and organ donation. Results: 825 patients met study criteria; the majority were male (87.6%), with a mean age of 36.5 years. Most (67%) underwent CPR in the field, and 2.1% (n=17) survived to discharge. Of the non-survivors, 17.5% (n=141) were considered eligible donors, with a donation rate of 58.9% (n=83) in this group. Regression models identified several predictors of survival; hormone replacement was predictive of both survival and organ donation. Conclusion: We found that GSWB requiring CPR during trauma resuscitation was associated with a 2.1% survival rate and an overall organ donation rate of 10.3%. Several factors appear to be favorably associated with survival, although predictions are uncertain due to the low number of survivors in this patient population. These results are a starting point for determining appropriate treatment algorithms for this devastating clinical condition. Level of evidence: Level II
The pulmonary contusion score: Development of a simple scoring system for blunt lung injury
Background: Pulmonary contusions (PC) are common after blunt chest trauma and can be identified with computed tomography (CT). Complex scoring systems for grading PC exist; however, recent scoring systems rely on computer-generated algorithms that are not readily available at all hospitals. We developed a scoring system for grading PC to predict the need for prolonged mechanical ventilation and initial hospital admission location. Methods: A retrospective review was performed of adult blunt trauma patients with PC identified on initial chest CT during 2020. Data elements related to demographics, injury characteristics, disposition, and healthcare utilization were extracted. The primary outcome was the need for mechanical ventilation for greater than 48 h. A novel scoring system, the Pulmonary Contusion Score (PCS), was developed. The maximum score was 10, with each lobe contributing up to 2 points: a score of 0 was given for no contusion in the lobe, 1 for less than 50% contusion, and 2 for greater than 50% contusion. A PCS of 4 was hypothesized to correlate with the need for mechanical ventilation for over 48 h. A confusion matrix of the scoring algorithm was created, and inter-rater concordance was calculated from a randomly selected 125 patients. Results: A total of 217 patients were identified; 118 (54%) were admitted to the ICU, but only 23 patients (19%) were intubated, and only 17 patients (8%) required mechanical ventilation for more than 48 h. Sensitivity of the scoring system was 20%, while specificity was 93%. Negative predictive value was 93%. Inter-rater agreement was 77%. Conclusion: The PCS is a scoring system with high specificity and negative predictive value that can be used to evaluate the need for mechanical ventilation after blunt PC and can help properly allocate hospital resources. Level of evidence: IV, diagnostic criteria
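For concreteness, the sketch below encodes the PCS as described in the Methods: five lobes, each contributing 0–2 points, for a maximum of 10. The function names, the dictionary input format, and the reading of the cutoff as "4 or greater" are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the Pulmonary Contusion Score (PCS) described above.
# Per lobe: 0 = no contusion, 1 = <50% of the lobe contused, 2 = >50% contused.
# Assumption: a total score of 4 or more flags likely ventilation > 48 h.

LOBES = ("right_upper", "right_middle", "right_lower", "left_upper", "left_lower")

def pulmonary_contusion_score(lobe_scores: dict[str, int]) -> int:
    """Sum per-lobe contusion scores (each 0-2) into a total PCS (0-10)."""
    total = 0
    for lobe in LOBES:
        score = lobe_scores.get(lobe, 0)  # lobes not reported count as 0
        if score not in (0, 1, 2):
            raise ValueError(f"{lobe}: per-lobe score must be 0, 1, or 2")
        total += score
    return total

def predicts_prolonged_ventilation(pcs: int, threshold: int = 4) -> bool:
    """Apply the hypothesized cutoff: PCS >= 4 suggests ventilation > 48 h."""
    return pcs >= threshold

# Example: bilateral lower-lobe contusions, one of them extensive
example = {"right_lower": 2, "left_lower": 1, "right_middle": 1}
pcs = pulmonary_contusion_score(example)
print(pcs, predicts_prolonged_ventilation(pcs))  # 4 True
```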
Severe hypernatremia in deceased liver donors does not impact early transplant outcome
There may be an increased risk of primary nonfunction in livers procured from donors with hypernatremia. The purported mechanism for this effect is undefined. This study analyzes early graft function for donor livers procured from patients with severe hypernatremia.
The organ procurement records for 1013 consecutive deceased liver donors between 2001 and 2008 were reviewed. Both peak and terminal serum sodium levels were categorized as (1) severe for a level of 170 mEq/L or higher, (2) moderate for 160 to 169 mEq/L, and (3) normal for less than 160 mEq/L. Outcomes included 30-day posttransplant alanine aminotransferase and total bilirubin, primary nonfunction, and 30-day and 1-year graft survival.
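A minimal sketch of the sodium categorization described above, applied to either the peak or the terminal value; the function name and the example values are illustrative only.

```python
# Donor serum sodium categories as defined in the study:
# severe >= 170 mEq/L, moderate 160-169 mEq/L, normal < 160 mEq/L.

def categorize_sodium(sodium_meq_per_l: float) -> str:
    """Map a serum sodium value (mEq/L) to the study's severity category."""
    if sodium_meq_per_l >= 170:
        return "severe"
    if sodium_meq_per_l >= 160:
        return "moderate"
    return "normal"

# Example (hypothetical values): a donor with peak sodium 172 and terminal
# sodium 158 mEq/L falls in the severe group by peak and the normal group by
# terminal value.
print(categorize_sodium(172), categorize_sodium(158))  # severe normal
```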
Within the severe hypernatremia group there were 142 donors by peak and 50 by terminal serum sodium, whereas the moderate group had 233 donors by peak and 162 by terminal serum sodium. The study groups did not differ in recipient age, model for end-stage liver disease score, steatosis, or ischemia times, whether grouped by peak or terminal serum sodium. The severity groups, defined by either peak or terminal serum sodium, also did not differ meaningfully in posttransplant alanine aminotransferase or total bilirubin, or in the risk of intraoperative death and primary nonfunction. Thirty-day and 1-year graft survival did not demonstrate a negative impact from donor hypernatremia.
Posttransplant measures of early liver function and risk of failure, up to 1 year posttransplant, did not differ significantly based on peak or terminal donor serum sodium levels. These results suggest that donor serum sodium level likely has little clinical impact on posttransplant liver function.
Comparison of histidine‐tryptophan‐ketoglutarate solution and University of Wisconsin solution in extended criteria liver donors
Liver, pancreas, and kidney allografts preserved in histidine‐tryptophan‐ketoglutarate (HTK) and University of Wisconsin (UW) solutions have similar clinical outcomes. This study compares HTK and UW in a large number of standard criteria donor (SCD) and extended criteria donor (ECD) livers at a single center over 5 years. All adult, cadaveric liver and liver‐kidney transplants performed between July 1, 2001 and June 30, 2006 were reviewed (n = 698). There were 435 livers (62%) categorized as ECD for severe physiologic stress and 70 (10%) because of old age. Recipient outcomes included perioperative death or graft loss and overall survival. Liver enzymes were analyzed for the first month post‐transplant. Biliary complications were assessed through chart review. Overall, 371 donor livers were preserved in HTK (53%), and 327 were preserved in UW (47%). There were no statistically significant differences in any of the primary outcome measures comparing HTK and UW. The HTK group overall had a higher day 1 median aspartate aminotransferase and alanine aminotransferase, but the two groups were similar in function thereafter. HTK was superior to UW in protection against biliary complications. Kaplan‐Meier graft survival curves failed to demonstrate a significant difference in SCD or ECD livers. In conclusion, HTK and UW are not clinically distinguishable in this large sample of liver transplants, although HTK may be protective against biliary complications when compared to UW. These findings persisted for both SCD and ECD livers. Given the lower cost per donor for HTK, this preservation solution may be preferable for general use. Liver Transpl 14:365–373, 2008. © 2008 AASLD
No difference in clinical transplant outcomes for local and imported liver allografts
In the United States, liver allograft allocation is strictly regulated. Local centers have the first option to accept a donor liver; this is followed by regional allocation for those donor livers not used locally and then by national allocation for those donor livers not accepted regionally. This study reviews the outcomes of all liver allografts used over 6 years (2001‐2007) and evaluates initial and long‐term function stratified by the geographic source of the donor liver allograft. The records for 845 consecutive deceased donor liver transplants at a single center were reviewed. The geographic origin of the allograft was recorded along with donor and graft characteristics to determine the probable reason for graft refusal. Within our local organ procurement organization, there is 1 liver transplant center, and within the region, there are 8 active centers. Early graft failure included any graft loss within 7 days of transplant, and initial function was measured with liver enzymes 30 days post‐transplant. Graft survival and patient survival were evaluated with Kaplan‐Meier and Cox survival modeling. Median follow‐up was 43 months. The geographic distribution of organs included local organs (562, 66%), regionally imported organs (126, 15%), and nationally imported organs (157, 19%). There were no differences between the 3 groups in initial graft function, intraoperative death, or early graft loss. Survival curves for the 3 study groups demonstrated no difference in survival up to 5 years post‐transplant. In conclusion, liver allografts rejected for use by a large number of transplant centers can still be successfully used without early graft function or long‐term survival being affected. Liver Transpl 15:640–647, 2009. © 2009 AASLD