159 research outputs found
Effects of Storage Time on Glycolysis in Donated Human Blood Units
Background: Donated blood is typically stored before transfusion. During storage, the metabolism of red blood cells changes, possibly causing storage lesions. These changes are storage-time dependent and exhibit donor-specific variation. It is necessary to uncover and characterize, qualitatively and quantitatively, the molecular mechanisms responsible for such biochemical changes; Study Design and Methods: Based on the integration of metabolic time series data, kinetic models, and a stoichiometric model of the glycolytic pathway, a customized inference method was developed and used to quantify the dynamic changes in glycolytic fluxes during the storage of donated blood units. The method provides a proof of principle for the feasibility of inferring flux characteristics from metabolomics data; Results: Several glycolytic reaction steps change substantially during storage, and the extent of these changes varies among fluxes and donors. The quantification of these storage time effects, which are possibly irreversible, allows for predictions of the transfusion outcome of individual blood units; Conclusion: The improved mechanistic understanding of blood storage obtained from this computational study may aid the identification of blood units that age more quickly or more slowly during storage, and may ultimately improve transfusion management in the clinic.
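The authors' customized inference method is not spelled out in the abstract; as a rough, hypothetical illustration of the general idea behind such flux inference — estimating reaction fluxes v from measured metabolite time courses through the mass-balance relation dC/dt = S·v — a minimal Python sketch might look as follows (the stoichiometric matrix, metabolites, and measurements are invented placeholders, not the study's model or data):

```python
# Rough, hypothetical illustration of flux inference from metabolite time
# series via the mass balance dC/dt = S @ v. The stoichiometric matrix,
# metabolites, and measurements below are invented placeholders, not the
# study's customized model or data.
import numpy as np

# Toy stoichiometry (rows: metabolites M1, M2; columns: reactions v1, v2, v3):
# v1 produces M1, v2 converts M1 -> M2, v3 consumes M2.
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

t = np.array([0.0, 7.0, 14.0, 21.0, 28.0])       # storage time, days
C = np.array([[1.00, 1.10, 1.25, 1.45, 1.70],    # M1 concentration (a.u.)
              [0.50, 0.48, 0.44, 0.38, 0.30]])   # M2 concentration (a.u.)

# Approximate the time derivatives of the measured concentrations.
dCdt = np.gradient(C, t, axis=1)

# At each time point, estimate the flux vector v by least squares,
# i.e. minimize ||S @ v - dC/dt||.
fluxes = np.stack([np.linalg.lstsq(S, dCdt[:, k], rcond=None)[0]
                   for k in range(t.size)])

for day, v in zip(t, fluxes):
    print(f"day {day:4.0f}: estimated fluxes {np.round(v, 3)}")
```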
Posttransplant Thrombopoiesis Predicts Survival in Patients Undergoing Autologous Hematopoietic Progenitor Cell Transplantation
The frequency and clinical significance of secondary thrombocytopenia following initial engraftment in autologous hematopoietic progenitor cell transplantation (HPCT) is unknown. An institutional review board approved retrospective study of thrombopoiesis was performed in 359 patients transplanted with autologous blood (97%) or marrow (3%) who achieved platelet engraftment to >50,000/μL. Idiopathic secondary posttransplant thrombocytopenia (ISPT) was defined as a >50% decline in blood platelets to <100,000/μL in the absence of relapse or sepsis. ISPT occurred at a median of day +35 posttransplant in 17% of patients. Patients with ISPT had similar initial platelet engraftment (median 17 days) versus non-ISPT patients (18 days; P = NS) and recovered platelet counts (median 123,000/μL) by day 110 posttransplant. Four factors were independently associated with posttransplant death in a multivariate model: disease status at transplant, the number of prior chemotherapy regimens, failure to achieve a platelet count of >150,000/μL posttransplant, and the occurrence of ISPT. A prognostic score was developed based upon the occurrence of ISPT and posttransplant platelet counts of <150,000/μL. Survival of patients with both factors (n = 25) was poor (15% alive at 5 years); patients with 1 factor (n = 145) had 49% 5-year survival; patients with 0 factors (n = 189) had 72% 5-year survival. Patients who failed to achieve a platelet count of >150,000/μL received significantly fewer CD34+ cells/kg (P < .001), whereas patients with ISPT received fewer CD34+CD38− cells/kg (P = .0006). The kinetics of posttransplant thrombopoiesis is an independent prognostic factor for long-term survival following autologous HPCT. ISPT and lower initial posttransplant platelet counts reflect poor engraftment with long-term and short-term repopulating CD34+ hematopoietic stem cells, respectively, and are associated with an increased risk of death from disease relapse.
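As an aside, the two-factor prognostic score described above is simple enough to express directly. The short Python sketch below (with hypothetical function and argument names, not taken from the study) assigns one point for ISPT and one point for failure to reach a posttransplant platelet count above 150,000/μL, and looks up the 5-year survival reported for each stratum:

```python
# Illustrative sketch of the two-factor prognostic score described in the
# abstract; the function and argument names are hypothetical, not the study's.

# 5-year overall survival reported for each score stratum (0, 1, or 2 factors).
FIVE_YEAR_SURVIVAL = {0: 0.72, 1: 0.49, 2: 0.15}

def prognostic_score(had_ispt: bool, peak_platelets_per_ul: int) -> int:
    """One point for ISPT, one point for never exceeding 150,000 platelets/uL."""
    score = 0
    if had_ispt:
        score += 1
    if peak_platelets_per_ul <= 150_000:
        score += 1
    return score

if __name__ == "__main__":
    s = prognostic_score(had_ispt=True, peak_platelets_per_ul=120_000)
    print(f"score = {s}, reported 5-year survival = {FIVE_YEAR_SURVIVAL[s]:.0%}")
```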
Robust Meta-Model for Predicting the Need for Blood Transfusion in Non-traumatic ICU Patients
Objective: Blood transfusions, crucial in managing anemia and coagulopathy in ICU settings, require accurate prediction for effective resource allocation and patient risk assessment. However, existing clinical decision support systems have primarily targeted a particular patient demographic with unique medical conditions and focused on a single type of blood transfusion. This study aims to develop an advanced machine learning-based model to predict the probability of transfusion necessity over the next 24 hours for a diverse range of non-traumatic ICU patients.
Methods: We conducted a retrospective cohort study on 72,072 adult non-traumatic ICU patients admitted to a high-volume US metropolitan academic hospital between 2016 and 2020. We developed a meta-learner and various machine learning models to serve as predictors, training them annually with four-year data and evaluating on the fifth, unseen year, iteratively over five years.
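A minimal sketch of this year-by-year development scheme, assuming a scikit-learn-style stacked meta-learner and an illustrative patient dataframe (the dataframe, column names, and base models are placeholders, not the study's actual pipeline), might look like this:

```python
# Illustrative sketch of the rolling, year-by-year development scheme: train a
# stacked meta-learner on four admission years, evaluate on the next unseen
# year. The dataframe, column names, and base models are assumptions, not the
# study's actual pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def rolling_year_evaluation(df: pd.DataFrame, feature_cols, label_col="transfused_24h"):
    """Train on four consecutive admission years, test on the following year."""
    years = sorted(df["admission_year"].unique())
    aurocs = {}
    for i in range(len(years) - 4):
        train = df[df["admission_year"].isin(years[i:i + 4])]
        test = df[df["admission_year"] == years[i + 4]]

        meta_model = StackingClassifier(
            estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                        ("lr", LogisticRegression(max_iter=1000))],
            final_estimator=LogisticRegression(max_iter=1000),
        )
        meta_model.fit(train[feature_cols], train[label_col])
        probs = meta_model.predict_proba(test[feature_cols])[:, 1]
        aurocs[years[i + 4]] = roc_auc_score(test[label_col], probs)
    return aurocs
```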
Results: The experimental results revealed that the meta-model surpasses the other models in different development scenarios. It achieved notable performance metrics, including an Area Under the Receiver Operating Characteristic (AUROC) curve of 0.97, an accuracy rate of 0.93, and an F1-score of 0.89 in the best scenario.
Conclusion: This study pioneers the use of machine learning models for predicting blood transfusion needs in a diverse cohort of critically ill patients. The findings of this evaluation confirm that our model not only predicts transfusion requirements effectively but also identifies key biomarkers for making transfusion decisions.
Monitor unit calculations for external photon and electron beams: Report of the AAPM Therapy Physics Committee Task Group No. 71
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/134882/1/mp4244.pd
Interfacing and Verifying ALHAT Safe Precision Landing Systems with the Morpheus Vehicle
The NASA Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project developed a suite of prototype sensors to enable autonomous and safe precision landing of robotic or crewed vehicles under any terrain lighting conditions. Development of the ALHAT sensor suite was a cross-NASA effort, culminating in integration and testing onboard a variety of terrestrial vehicles toward infusion into future spaceflight applications. Terrestrial tests were conducted on specialized test gantries, moving trucks, helicopter flights, and a flight test onboard the NASA Morpheus free-flying, rocket-propulsive flight-test vehicle. To accomplish these tests, a tedious integration process was developed and followed, which included both command and telemetry interfacing, as well as sensor alignment and calibration verification to ensure valid test data for analyzing ALHAT and Guidance, Navigation and Control (GNC) performance. This was especially true for the flight test campaign of ALHAT onboard Morpheus. For interfacing of ALHAT sensors to the Morpheus flight system, an adaptable command and telemetry architecture was developed to allow for the evolution of per-sensor Interface Control Design/Documents (ICDs). Additionally, individual-sensor and on-vehicle verification testing was developed to ensure functional operation of the ALHAT sensors onboard the vehicle, as well as precision-measurement validity for each ALHAT sensor when integrated within the Morpheus GNC system. This paper provides some insight into the interface development and the integrated-systems verification that were part of the build-up toward the success of the ALHAT and Morpheus flight test campaigns in 2014. These campaigns provided valuable performance data that are refining the path toward spaceflight infusion of the ALHAT sensor suite.
Preparation and Integration of ALHAT Precision Landing Technology for Morpheus Flight Testing
The Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project has developed a suite of prototype sensors for enabling autonomous and safe precision landing of robotic or crewed vehicles on solid solar bodies under varying terrain lighting conditions. The sensors include a Lidar-based Hazard Detection System (HDS), a multipurpose Navigation Doppler Lidar (NDL), and a long-range Laser Altimeter (LAlt). Preparation for terrestrial flight testing of ALHAT onboard the Morpheus free-flying, rocket-propelled flight test vehicle has been in progress since 2012, with flight tests over a lunar-like terrain field occurring in Spring 2014. Significant work efforts within both the ALHAT and Morpheus projects have been required in the preparation of the sensors, vehicle, and test facilities for interfacing, integrating, and verifying overall system performance to ensure readiness for flight testing. The ALHAT sensors have undergone numerous stand-alone sensor tests, simulations, and calibrations, along with integrated-system tests in specialized gantries, trucks, helicopters, and fixed-wing aircraft. A lunar-like terrain environment was constructed for ALHAT system testing during Morpheus flights, and vibration and thermal testing of the ALHAT sensors was performed based on Morpheus flights prior to ALHAT integration. High-fidelity simulations were implemented to gain insight into integrated ALHAT sensors and Morpheus GN&C system performance, and command and telemetry interfacing and functional testing was conducted once the ALHAT sensors and electronics were integrated onto Morpheus. This paper captures some of the details and lessons learned in the planning, preparation, and integration of the individual ALHAT sensors, the vehicle, and the test environment that led up to the joint flight tests.
Impact of quality of evidence on the strength of recommendations: an empirical study
Background: Evidence is necessary but not sufficient for decision-making, such as making recommendations by clinical practice guideline panels. However, the fundamental premise of evidence-based medicine (EBM) rests on the assumed link between the quality of evidence and the "truth" and/or correctness of guideline recommendations. If this assumption is accurate, then the quality of evidence ought to play a key role in making guideline recommendations. Surprisingly, and despite the widespread penetration of EBM in health care, there has been no empirical research to date investigating the impact of quality of evidence on the strength of recommendations made by guideline panels.

Methods: The American Association of Blood Banking (AABB) has recently convened a 12-member panel to develop clinical practice guidelines (CPG) for the use of fresh-frozen plasma (FFP) for 6 different clinical indications. The panel was instructed that 4 factors should play a role in making recommendations: quality of evidence, uncertainty about the balance between desirable (benefits) and undesirable effects (harms), uncertainty or variability in values and preferences, and uncertainty about whether the intervention represents a wise use of resources (costs). Each member of the panel was asked to make his or her final judgments on the strength of recommendation and the overall quality of the body of evidence. "Voting" was anonymous and was based on the use of the GRADE (Grading quality of evidence and strength of recommendations) system, which clearly distinguishes between quality of evidence and strength of recommendations.

Results: Despite the fact that many factors play a role in formulating CPG recommendations, we show that when the quality of evidence is higher, the probability of making a strong recommendation for or against an intervention dramatically increases. The probability of making a strong recommendation was 62% when evidence was "moderate", but only 23% and 13% when evidence was "low" or "very low", respectively.

Conclusion: We report the first empirical evaluation of the relationship between the quality of evidence pertinent to a clinical question and the strength of the corresponding guideline recommendations. Understanding the relationship between quality of evidence and the probability of making a (strong) recommendation has profound implications for the science of quality measurement in health care.
NIH Workshop 2018: Towards Minimally-invasive or Non-invasive Approaches to Assess Tissue Oxygenation Pre- and Post-Transfusion
Because blood transfusion is one of the most common therapeutic interventions in hospitalized patients, much recent research has focused on improving the storage quality in vitro of donor red blood cells (RBCs) that are then used for transfusion. However, there is a significant need for enhancing our understanding of the efficacy of the transfused RBCs in vivo. To this end, the NIH sponsored a one-and-a-half-day workshop that brought together experts in multiple disciplines relevant to tissue oxygenation (e.g., transfusion medicine, critical care medicine, cardiology, neurology, neonatology and pediatrics, bioengineering, biochemistry, and imaging). These individuals presented their latest findings, discussed key challenges, and aimed to construct recommendations for facilitating development of new technologies and/or biomarker panels to assess tissue oxygenation in a minimally-invasive to non-invasive fashion, before and after RBC transfusion.
The workshop was structured into four sessions: (1) Global Perspective; (2) Organ Systems; (3) Neonatology; and (4) Emerging Technologies. The first day provided an overview of current approaches in the clinical setting, both from a global perspective, including the use of metabolomics for studying RBCs and tissue perfusion, and from a more focused perspective, including tissue oxygenation assessments in neonates and in specific adult organ systems. The second day focused on emerging technologies, which could be applied pre- and post-RBC transfusion, to assess tissue oxygenation in minimally-invasive or non-invasive ways. Each day concluded with an open-microphone discussion among the speakers and workshop participants. The workshop presentations and ensuing interdisciplinary discussions highlighted the potential of technologies to combine global “omics” signatures with additional measures (e.g., thenar eminence measurements or various imaging methods) to predict which patients could potentially benefit from a RBC transfusion and whether the ensuing RBC transfusion was effective. The discussions highlighted the need for collaborations across the various disciplines represented at the meeting to leverage existing technologies and to develop novel approaches for assessing RBC transfusion efficacy in various clinical settings.
Although the Workshop took place in April 2018, the concepts described and the ensuing discussions were, perhaps, even more relevant in April 2020, at the time of writing this manuscript, during the explosive growth of the COVID-19 pandemic in the United States. Thus, issues relating to maintaining and improving tissue oxygenation and perfusion are especially pertinent because of the extensive pulmonary damage resulting from SARS-CoV-2 infection [1], compromises in perfusion caused by thrombotic-embolic phenomena [2], and damage to circulating RBCs, potentially compromising their oxygen-carrying capacity [3]. The severe end-organ effects of SARS-CoV-2 infection mandate even greater urgency to improve our understanding of tissue perfusion and oxygenation, to improve methods for measuring and monitoring them, and to develop novel ways of enhancing them.