
    Intracorporeal heat distribution from fully implantable energy sources for mechanical circulatory support: a computational proof-of-concept study

    Mechanical circulatory support devices, such as total artificial hearts and left ventricular assist devices, rely on external energy sources for their continuous operation. Clinically approved power supplies rely on percutaneous cables connecting an external energy source to the implanted device, with the associated risk of infections. One alternative, investigated in the 1970s and 1980s, employs a fully implanted nuclear power source. The heat generated by the nuclear decay can be converted into electricity to power circulatory support devices. Due to the low conversion efficiencies, substantial levels of waste heat are generated and must be dissipated to avoid tissue damage, heat stroke, and death. The present work computationally evaluates the ability of the blood flow in the descending aorta to remove the locally generated waste heat for subsequent full-body distribution and dissipation, with the specific aim of investigating methods for containing local peak temperatures within physiologically acceptable limits. To this end, coupled fluid-solid heat transfer computational models of the blood flow in the human aorta and different heat exchanger architectures are developed. Particle tracking is used to evaluate temperature histories of cells passing through the heat exchanger region. The use of the blood flow in the descending aorta as a heat sink proves to be a viable approach for the removal of waste heat loads. With the basic heat exchanger design, blood thermal boundary layer temperatures exceed 50°C, possibly damaging blood cells and proteins. Improved designs of the heat exchanger, with the addition of fins and heat guides, allow for drastically lower blood temperatures, possibly leading to a more biocompatible implant. The ability to maintain blood temperatures at biologically compatible levels will ultimately allow for the body-wide distribution, and subsequent dissipation, of heat loads with minimal effects on human physiology.
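    The scale of the problem can be illustrated with a bulk energy balance: the temperature rise of the aortic blood stream absorbing a waste heat load Q at mass flow rate m_dot is ΔT = Q / (m_dot · cp). The sketch below evaluates this balance; the waste heat load, flow rate, and blood properties are illustrative assumptions, not values taken from the study.

        # Bulk energy balance for blood absorbing a waste heat load.
        # All parameter values are assumptions for illustration, not values from the study.

        def blood_temperature_rise(waste_heat_w, flow_l_per_min, rho=1060.0, cp=3617.0):
            """Bulk temperature rise (°C) of blood absorbing waste_heat_w watts.

            rho: blood density [kg/m^3]; cp: blood specific heat [J/(kg*K)].
            """
            mass_flow = rho * (flow_l_per_min / 1000.0) / 60.0  # kg/s
            return waste_heat_w / (mass_flow * cp)              # dT = Q / (m_dot * cp)

        # Hypothetical 30 W waste heat load rejected into ~3 L/min of descending-aortic flow.
        print(f"Bulk temperature rise: {blood_temperature_rise(30.0, 3.0):.2f} °C")

    Under such assumptions the bulk rise is only a fraction of a degree; the design challenge addressed by the improved heat exchangers is the much higher local temperature in the thermal boundary layer.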

    Role of senescence marker p16INK4a measured in peripheral blood T-lymphocytes in predicting length of hospital stay after coronary artery bypass surgery in older adults

    Adults older than 65 years undergo more than 120,000 coronary artery bypass (CAB) procedures each year in the United States. Chronological age, though commonly used in prediction models of outcomes after CAB, does not alone reflect variability in the aging process and, thus, the risk of complications in older adults. We performed a prospective study to evaluate the relationship of senescence marker p16INK4a expression in peripheral blood T-lymphocytes (p16 levels in PBTLs) with aging and with perioperative outcomes in older CAB patients. We included 55 patients aged 55 and older who underwent CAB at the Johns Hopkins Hospital between September 1st, 2010 and March 25th, 2013. Demographic, clinical, and laboratory data were collected following the outline of the Society of Thoracic Surgeons data collection form, and p16 mRNA levels in PBTLs were measured using Taqman® qRT-PCR. Associations of p16 mRNA levels in PBTLs with length of hospital stay, frailty status, p16 protein levels in aortic and left internal mammary artery tissue, cerebral oxygen saturation, and augmentation index as a measure of vascular stiffness were assessed using regression analyses. Length of hospital stay was the primary outcome of interest; major organ morbidity, mortality, and discharge to a skilled nursing facility were secondary outcomes. In a secondary analysis, we evaluated associations between p16 mRNA levels in PBTLs and interleukin-6 levels using regression analyses. The median age of enrolled patients was 63.5 years (range 56-81 years); they were predominantly male (74.55%) and of Caucasian descent (85.45%). Median log2(p16 levels in PBTLs) was 4.71 (range 1.10-6.82). p16 levels in PBTLs were significantly associated with chronological age (mean difference 0.06 for each year increase in age, 95% CI 0.01-0.11) and interleukin-6 levels (mean difference 0.09 for each pg/ml increase in IL-6 levels, 95% CI 0.01-0.18). There were no significant associations with frailty status, augmentation index, cerebral oxygenation, or p16 protein levels in blood vessels. Increasing p16 levels in PBTLs did not predict length of stay in the hospital (HR 1.10, 95% CI 0.87-1.40) or intensive care unit (HR 1.02, 95% CI 0.79-1.32). Additional evaluation of p16 levels in PBTLs as a predictor of perioperative outcomes is required and should include additional markers of immune system aging as well as outcomes after CAB beyond length of hospital stay.
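    As an illustration of the kind of regression behind the reported age association (a mean difference in log2 p16 per year of age with a 95% CI), the sketch below fits an ordinary least-squares model to synthetic data; the simulated slope and noise are assumptions and do not reproduce the study's dataset.

        # Linear regression of log2(p16) on age, on synthetic data (illustrative only).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        age = rng.uniform(56, 81, size=55)                     # ages spanning the study's range
        log2_p16 = 1.5 + 0.06 * age + rng.normal(0, 0.8, 55)   # assumed slope of 0.06 per year

        fit = sm.OLS(log2_p16, sm.add_constant(age)).fit()
        slope = fit.params[1]
        lo, hi = fit.conf_int()[1]
        print(f"Change in log2(p16) per year of age: {slope:.3f} (95% CI {lo:.3f} to {hi:.3f})")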

    National study on the distribution, causes, and consequences of voluntarily reported medication errors between the ICU and non-ICU settings

    Objective: To compare the distribution, causes, and consequences of medication errors in the ICU with those in non-ICU settings. Design: A cross-sectional study of all hospital ICU and non-ICU medication errors reported to the MEDMARX system between 1999 and 2005. Adjusted odds ratios are presented. Setting: Hospitals participating in the MEDMARX reporting system. Interventions: None. Measurements and main results: MEDMARX is an anonymous, self-reported, confidential, deidentified, internet-accessible medication error reporting program that allows hospitals to report, track, and share medication error data. There were 839,553 errors reported from 537 hospitals. ICUs accounted for 55,767 (6.6%) errors, of which 2,045 (3.7%) were considered harmful. Non-ICUs accounted for 783,800 (93.4%) errors, of which 14,471 (1.9%) were harmful. Errors most often originated in the administration phase (ICU 44% vs. non-ICU 33%; odds ratio 1.63 [1.43-1.86]). The most common error type was omission (ICU 26% vs. non-ICU 28%; odds ratio 1.00 [0.91-1.10]). Among harmful errors, dispensing devices (ICU 14% vs. non-ICU 7.1%; odds ratio 2.09 [1.69-2.59]) and calculation mistakes (ICU 9.8% vs. non-ICU 5.3%; odds ratio 1.82 [1.48-2.24]) were more commonly identified as the cause in the ICU than in the non-ICU setting. ICU errors were more likely to be associated with any harm (odds ratio 1.89 [1.62-2.17]), permanent harm (odds ratio 2.45 [1.17-5.13]), harm requiring a life-sustaining intervention (odds ratio 2.91 [1.86-4.56]), or death (odds ratio 2.48 [1.18-5.19]). When an error did occur, patients and their caregivers were rarely informed (ICU 1.5% vs. non-ICU 2.1%; odds ratio 0.63 [0.48-0.84]) by the time of reporting. Conclusions: More harmful errors are reported in ICU than in non-ICU settings. Medication errors occur frequently in the administration phase in the ICU. When errors occur, patients and their caregivers are rarely informed. Consideration should be given to developing additional safeguards against ICU errors, particularly during drug administration, and to eliminating barriers to error disclosure.
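    The comparisons above are reported as adjusted odds ratios with 95% confidence intervals. For orientation, the sketch below computes a crude (unadjusted) odds ratio and Wald interval from the harmful-error counts quoted in the abstract; it illustrates only the underlying calculation, not the adjusted analysis.

        # Crude odds ratio of harmful errors, ICU vs. non-ICU, with a Wald 95% CI.
        import math

        def odds_ratio(a, b, c, d):
            """2x2 table: a/b harmful/non-harmful in the ICU, c/d harmful/non-harmful elsewhere."""
            or_ = (a / b) / (c / d)
            se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)      # standard error of log(OR)
            return or_, math.exp(math.log(or_) - 1.96 * se), math.exp(math.log(or_) + 1.96 * se)

        icu_harmful, icu_total = 2_045, 55_767
        non_icu_harmful, non_icu_total = 14_471, 783_800
        print(odds_ratio(icu_harmful, icu_total - icu_harmful,
                         non_icu_harmful, non_icu_total - non_icu_harmful))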

    P-217 Laboratory predictors of massive transfusion in liver transplantation: Single-center experience

    Purpose: We evaluated whether routinely performed laboratory tests can predict massive transfusion, defined as transfusion of 10 or more units of packed red blood cells (pRBCs), during liver transplantation. Methods: We extracted laboratory values and blood utilization data from the electronic hospital databases for all adult deceased-donor liver transplant recipients during the 04/10-12/13 period. Regression analyses were performed to identify predictors of massive transfusion. Results: We identified 167 recipients; 28.7% received massive transfusion. Preoperative laboratory values are summarized below (Table 1). Recipients received a median of 5 (interquartile range (IQR) 2-12) units of pRBCs and a median of 6 (IQR 2-12) units of fresh frozen plasma; 59.3% received platelets, 40.7% received cryoprecipitate, and 16.8% received salvaged blood. In univariate logistic regression, the following laboratory tests predicted massive transfusion: hemoglobin (OR 0.65 for each 1 g/dL increase in hemoglobin; 95% CI 0.51-0.81), platelets (OR 0.91 for each 10,000/ml increase in platelet count; 95% CI 0.84-0.98), and thromboelastography R-time (OR 1.25 for each 1 second increase in R-time; 95% CI 1.04-1.50). In multivariable logistic regression, hemoglobin (OR 0.61, 95% CI 0.46-0.80), platelets (OR 0.89, 95% CI 0.81-0.99), R-time (OR 1.37, 95% CI 1.08-1.78), and K-time (OR 0.66, 95% CI 0.46-0.97) were significant, with a final model AUC of 0.79. Conclusions: Preoperative hemoglobin, platelet level, R-time, and K-time are predictors of massive blood transfusion; however, they do not account for the entire variability in the data.
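    For illustration, the sketch below shows how a univariate logistic regression yields an odds ratio per unit change in a predictor (here hemoglobin, reported above as OR 0.65 per 1 g/dL). The data are synthetic and the assumed coefficients are not the study's estimates.

        # Univariate logistic regression: OR per 1 g/dL increase in hemoglobin (synthetic data).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        hemoglobin = rng.normal(10.0, 2.0, size=167)           # g/dL, assumed distribution
        logit = 3.0 - 0.45 * hemoglobin                        # assumed true relationship
        massive_transfusion = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        fit = sm.Logit(massive_transfusion, sm.add_constant(hemoglobin)).fit(disp=0)
        or_hb = np.exp(fit.params[1])
        ci_low, ci_high = np.exp(fit.conf_int()[1])
        print(f"OR per 1 g/dL increase in hemoglobin: {or_hb:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")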

    Evaluation of EEG β2/θ-ratio and channel locations in measuring anesthesia depth

    In this paper, the ratio of powers in the frequency bands of β2 and θ waves in EEG signals (termed the β2/θ-ratio) was introduced as a potential enhancement in measuring anesthesia depth. The β2/θ-ratio was compared to the relative β-ratio, which has been used commercially in the BIS monitor. The sensitivity and reliability of the β2/θ-ratio and EEG measurement locations were analyzed for their effectiveness in measuring anesthesia depth during different stages of propofol-induced anesthesia (awake, induction, maintenance, and emergence). The analysis indicates that 1) the relative β-ratio and β2/θ-ratio derived from the prefrontal, frontal, and central cortex EEG signals were of substantial sensitivity for capturing anesthesia depth changes; 2) certain channel positions in the frontal part of the cortex, such as F4, had the combined benefits of substantial sensitivity and noise resistance; 3) the β2/θ-ratio captured the initial excitation, while the relative β-ratio did not; 4) in the maintenance and emergence stages, the β2/θ-ratio showed improved reliability. Implications: The ratio of powers in the EEG frequency bands β2 and θ derived from frontal cortex EEG channels has the combined benefits of substantial sensitivity and noise resistance in measuring anesthesia depth.
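    A minimal sketch of the underlying computation is given below: the power spectral density of a single EEG channel is estimated with Welch's method, and the β2 and θ band powers are summed and divided. The band edges (θ: 4-8 Hz, β2: 20-30 Hz), sampling rate, and synthetic signal are assumptions for illustration; the paper's exact definitions may differ.

        # Band-power ratio (beta2/theta) from one EEG channel via Welch's method.
        import numpy as np
        from scipy.signal import welch

        def band_power(freqs, psd, lo, hi):
            """Sum PSD bins in [lo, hi) Hz; the bin width cancels in the ratio."""
            mask = (freqs >= lo) & (freqs < hi)
            return psd[mask].sum()

        def beta2_theta_ratio(eeg, fs):
            freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 4-second segments
            return band_power(freqs, psd, 20.0, 30.0) / band_power(freqs, psd, 4.0, 8.0)

        fs = 250.0                                                # assumed sampling rate [Hz]
        t = np.arange(0, 30, 1 / fs)
        rng = np.random.default_rng(4)
        # Synthetic channel: theta-dominated activity plus weaker beta2 activity and noise.
        eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 25 * t) + 0.1 * rng.standard_normal(t.size)
        print(f"beta2/theta ratio: {beta2_theta_ratio(eeg, fs):.3f}")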

    Predictive modeling of massive transfusion requirements during liver transplantation and its potential to reduce utilization of blood bank resources

    Background: Patients undergoing liver transplantation frequently but inconsistently require massive blood transfusion. The ability to predict massive transfusion (MT) could reduce the impact on blood bank resources through customization of the blood order schedule. Current predictive models of MT for blood product utilization during liver transplantation are not generally applicable to individual institutions owing to variability in patient population, intraoperative management, and definitions of MT. Moreover, existing models may be limited by not incorporating cirrhosis stage or thromboelastography (TEG) parameters. Methods: This retrospective cohort study included all patients who underwent deceased-donor liver transplantation at the Johns Hopkins Hospital between 2010 and 2014. We defined MT as intraoperative transfusion of > 10 units of packed red blood cells (pRBCs) and developed a multivariable predictive model of MT that incorporated cirrhosis stage and TEG parameters. The accuracy of the model was assessed with the goodness-of-fit test, receiver operating characteristic analysis, and bootstrap resampling. The distribution of correct patient classification was then determined as we varied the model threshold for classifying MT. Finally, the potential impact of these predictions on blood bank resources was examined. Results: Two hundred three patients were included in the study. Sixty (29.6%) patients met the definition for MT and received a median (interquartile range) of 19.0 (14.0-27.0) pRBC units intraoperatively, compared with 4.0 (1.0-6.0) units for those who did not satisfy the criterion for MT. The multivariable model for predicting MT included the Model for End-stage Liver Disease score, whether a simultaneous liver and kidney transplant was performed, cirrhosis stage, hemoglobin concentration, platelet concentration, and TEG R interval and angle. This model demonstrated good calibration (Hosmer-Lemeshow goodness-of-fit test P = .45) and good discrimination (c statistic: 0.835; 95% confidence interval, 0.781-0.888). A probability cutoff threshold of 0.25 was found to misclassify only 4 of 100 patients as unlikely to experience MT, with the majority of such misclassifications within 4 units of the working definition of MT. For this threshold, a preoperative blood ordering schedule that allocated 6 units of pRBCs for those unlikely to experience MT and 15 for those likely to experience MT would prevent unnecessary crossmatching of 338 units per 100 transplants. Conclusions: When clinical and laboratory parameters are included, a model predicting intraoperative MT in patients undergoing liver transplantation is sufficiently accurate that its predictions could guide the blood order schedule for individual patients based on institutional data, thereby reducing the impact on blood bank resources. Ongoing evaluation of model accuracy and transfusion practices is required to ensure continuing performance of the predictive model.
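    The blood ordering logic described above reduces to applying a probability threshold to the model output and tallying the crossmatch savings. The sketch below illustrates that arithmetic; the predicted probabilities are simulated, and the baseline of otherwise ordering the larger allocation for every patient is an assumption for illustration rather than the study's comparison.

        # Threshold-based blood order schedule and crossmatch savings (illustrative).
        import numpy as np

        def crossmatch_savings(predicted_prob, threshold=0.25, low_order=6, high_order=15):
            """pRBC units not crossmatched when low-risk patients get low_order instead of high_order."""
            orders = np.where(predicted_prob >= threshold, high_order, low_order)
            return int(np.sum(high_order - orders))

        rng = np.random.default_rng(2)
        probs = rng.beta(1.5, 3.5, size=100)        # hypothetical predicted MT probabilities
        print(f"Units saved per 100 transplants: {crossmatch_savings(probs)}")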

    Validation of predictive models identifying patients at risk for massive transfusion during liver transplantation and their potential impact on blood bank resource utilization

    Background: Intraoperative massive transfusion (MT) is common during liver transplantation (LT). A predictive model of MT has the potential to improve use of blood bank resources. Study design and methods: Development and validation cohorts were identified among deceased-donor LT recipients from 2010 to 2016. A multivariable model of MT generated from the development cohort was validated with the validation cohort and refined using both cohorts. The combined cohort also validated the previously reported McCluskey risk index (McRI). A simple modified risk index (ModRI) was then created from the combined cohort. Finally, a method to translate model predictions into a population-specific blood allocation strategy was described and demonstrated for the study population. Results: Of the 403 patients, 60 (29.6%) in the development and 51 (25.5%) in the validation cohort met the definition for MT. The ModRI, derived from the variables incorporated into the multivariable model, ranged from 0 to 5, with 1 point each assigned for hemoglobin level of less than 10 g/dL, platelet count of less than 100 × 10⁹/L, thromboelastography R interval of more than 6 minutes, simultaneous liver and kidney transplant, and retransplantation; a ModRI of more than 2 defined recipients at risk for MT. The multivariable model, McRI, and ModRI demonstrated good discrimination (c statistic [95% CI], 0.77 [0.70-0.84]; 0.69 [0.62-0.76]; and 0.72 [0.65-0.79], respectively, after correction for optimism). For blood allocation of 6 or 15 units of red blood cells (RBCs) based on risk of MT, the ModRI would prevent unnecessary crossmatching of 300 units of RBCs per 100 transplants. Conclusions: Risk indices of MT in LT can be effective for risk stratification and for reducing unnecessary blood bank resource utilization.
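    Because the ModRI is defined explicitly above (one point per criterion, at risk if the score exceeds 2), it can be encoded directly; the sketch below does so. The function and variable names are illustrative, not taken from the paper.

        # ModRI score as described in the abstract: 1 point per criterion, 0-5 total.
        def modri(hemoglobin_g_dl, platelets_e9_per_l, teg_r_min,
                  simultaneous_liver_kidney, retransplantation):
            """Return the ModRI score; a score above 2 flags risk of massive transfusion."""
            return (int(hemoglobin_g_dl < 10.0)
                    + int(platelets_e9_per_l < 100.0)
                    + int(teg_r_min > 6.0)
                    + int(simultaneous_liver_kidney)
                    + int(retransplantation))

        score = modri(9.2, 85.0, 7.5, False, False)
        print(score, score > 2)   # 3 True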

    Prophylactic recombinant factor VIIa for preventing massive transfusion during orthotopic liver transplantation

    Objectives: Recombinant human activated factor VIIa has been used prophylactically to mitigate transfusion requirements in liver transplant. We explored its effectiveness and risks among liver transplant recipients at high risk for massive transfusion. Materials and Methods: We performed a retrospective study of recipients who underwent liver transplant from 2012 to 2015. Patients considered at risk for massive transfusion received up to two 20 μg/kg doses of recombinant human activated factor VIIa, with rescue use permitted for other patients. We used propensity matching to determine the average treatment effect on patients who received recombinant human activated factor VIIa prophylactically to prevent massive transfusion. We determined thromboembolic events from medical record review. Results: Of 234 liver transplant recipients, 38 received prophylactic and 2 received rescue recombinant human activated factor VIIa. We used a prediction model to readily identify those who would receive prophylactic recombinant human activated factor VIIa (C statistic = 0.885; 95% CI, 0.835-0.935). Propensity matching achieved balance, particularly for massive transfusion. Twenty-three of 38 patients (60.5%) who received recombinant human activated factor VIIa and 47 of 76 matched controls (61.8%) experienced massive transfusion. The coefficient for the average treatment effect of prophylactic administration was -0.013 (95% CI, -0.260 to 0.233; P = .92). The cohorts exhibited no difference in the number of thromboembolic events (P > .99), although fatal events occurred in 1 patient who had prophylactic and 1 patient who had rescue recombinant human activated factor VIIa. Conclusions: Prophylactic use of recombinant human activated factor VIIa in patients at elevated risk of massive transfusion did not affect the incidence of massive transfusion and was not associated with an increase in thromboembolic events overall. The lack of clinical benefit and the potential for fatal thromboembolic events observed with recombinant human activated factor VIIa precluded its prophylactic use in liver transplant recipients.
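    As a rough illustration of the propensity-matched comparison described above, the sketch below estimates propensity scores with a logistic model and performs 1:2 nearest-neighbor matching on synthetic data. The covariates, outcome, and matching details (no caliper, matching with replacement, no matched-pair variance estimate) are simplifying assumptions, not the study's protocol.

        # 1:2 propensity-score matching and a simple treated-vs-matched-control comparison (synthetic data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n = 234
        covariates = rng.normal(size=(n, 3))                             # synthetic preoperative covariates
        treated = rng.binomial(1, 1 / (1 + np.exp(-covariates[:, 0])))   # treatment depends on covariates
        outcome = rng.binomial(1, 0.6, size=n)                           # synthetic massive-transfusion outcome

        ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]
        treated_idx = np.flatnonzero(treated == 1)
        control_idx = np.flatnonzero(treated == 0)

        matched_controls = []
        for i in treated_idx:                                            # nearest neighbors, with replacement
            order = np.argsort(np.abs(ps[control_idx] - ps[i]))
            matched_controls.extend(control_idx[order[:2]])

        att = outcome[treated_idx].mean() - outcome[matched_controls].mean()
        print(f"Estimated effect of treatment on the treated: {att:+.3f}")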