Nanometer-scale residual crystals in a hot melt extruded amorphous solid dispersion: characterization by transmission electron microscopy
Common characterization techniques used to detect crystallinity in amorphous solid dispersions (ASDs) typically have detection or quantification limits on the order of 1%. Herein, an amorphous solid dispersion of indomethacin and polyvinylpyrrolidone/vinyl acetate copolymer produced by hot melt extrusion was determined to be amorphous by powder X-ray diffraction and differential scanning calorimetry. However, transmission electron microscopy revealed two populations of residual crystals: single crystals mid-dissolution (<100 nm) and nanocrystalline domains 5–10 nm in size. Both domain types contained a high defect density. Polarized light microscopy and scanning electron microscopy corroborated the presence of crystallinity. The use of high-resolution analytical techniques to identify and characterize residual crystallinity is an important first step toward understanding the significance of these residual crystalline populations for ASD performance attributes.
Multivariable regression analysis of list experiment data on abortion: results from a large, randomly-selected population based study in Liberia
Intensive care unit scoring systems outperform emergency department scoring systems for mortality prediction in critically ill patients: a prospective cohort study.
Background: Multiple scoring systems have been developed for both the intensive care unit (ICU) and the emergency department (ED) to risk-stratify patients and predict mortality. However, it remains unclear whether the additional data needed to compute ICU scores improve mortality prediction for critically ill patients compared to the simpler ED scores.
Methods: We studied a prospective observational cohort of 227 critically ill patients admitted to the ICU directly from the ED at an academic, tertiary care medical center. We compared the Acute Physiology and Chronic Health Evaluation (APACHE) II, APACHE III, Simplified Acute Physiology Score (SAPS) II, Modified Early Warning Score (MEWS), Rapid Emergency Medicine Score (REMS), Prince of Wales Emergency Department Score (PEDS), and a pre-hospital critical illness prediction score developed by Seymour et al. (JAMA 2010, 304(7):747-754). The primary endpoint was 60-day mortality. We compared the receiver operating characteristic (ROC) curves of the different scores and assessed their calibration using the Hosmer-Lemeshow goodness-of-fit test and visual assessment.
Results: The ICU scores outperformed the ED scores, with higher area under the curve (AUC) values (p = 0.01). There were no differences in discrimination among the ED-based scoring systems (AUC 0.698 to 0.742; p = 0.45) or among the ICU-based scoring systems (AUC 0.779 to 0.799; p = 0.60). With the exception of the Seymour score, the ED-based scoring systems did not discriminate as well as the best-performing ICU-based scoring system, APACHE III (p = 0.005 to 0.01 for comparisons of ED scores to APACHE III). The Seymour score had a superior AUC to the other ED scores and, despite a lower AUC than all the ICU scores, was not significantly different from APACHE III (p = 0.09). When data from the first 24 h in the ICU were used to calculate the ED scores, the AUCs for the ED scores improved numerically, but the improvement was not statistically significant. All scores had acceptable calibration.
Conclusions: In contrast to prior studies of emergency department patients, ICU scores outperformed ED scores in critically ill patients admitted from the emergency department. This difference in performance appeared to be driven primarily by the complexity of the scores rather than by the time window from which the data were derived.
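The discrimination comparison above rests on the area under the ROC curve. As a minimal sketch, and using hypothetical severity scores rather than any data from the study, the AUC can be computed directly as the Mann-Whitney concordance probability: the chance that a randomly chosen non-survivor receives a higher score than a randomly chosen survivor.

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case (death) scores higher than a randomly
    chosen negative case (survival). Ties count as half a concordant pair."""
    concordant = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(scores_pos) * len(scores_neg))

# Hypothetical scores for illustration only (not study data);
# higher score = predicted higher mortality risk.
deaths_icu = [85, 92, 78, 88]      # ICU-style score, non-survivors
surviv_icu = [40, 55, 60, 47, 52]  # ICU-style score, survivors
deaths_ed  = [7, 9, 5, 8]          # simpler ED-style score, non-survivors
surviv_ed  = [4, 6, 7, 3, 5]       # simpler ED-style score, survivors

print(auc(deaths_icu, surviv_icu))  # perfectly separated groups -> 1.0
print(auc(deaths_ed, surviv_ed))    # overlapping groups -> 0.85
```

An AUC of 0.5 corresponds to a coin flip; the study's observed ranges (0.698–0.742 for ED scores, 0.779–0.799 for ICU scores) sit between chance and perfect discrimination.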
Impact of interventions and the incidence of ebola virus disease in Liberia-implications for future epidemics
To better understand the impact of national and global efforts to contain the Ebola virus disease epidemic of 2014–15 in Liberia, we provide a detailed timeline of the major interventions and relate them to the epidemic curve.
In addition to personal experience in the response, we systematically reviewed situation reports from the Liberian government, UN, CDC, WHO, UNICEF, IFRC, and USAID, as well as local and international news reports, to create the timeline. We extracted data on the timing and nature of activities and compared them to the epidemic curve using the reproduction number, the estimate of the average number of new cases caused by a single case.
Interventions were organized around five major strategies, with the majority of resources directed to the creation of treatment beds. We conclude that no single intervention stopped the epidemic; rather, the interventions likely had reinforcing effects, and some were less likely than others to have made a major impact. The epidemic's turning coincided with a reorganization of the response in August–September 2014, the emergence of community leadership in control efforts, and changing beliefs and practices in the population. Ebola Treatment Units were important for treating patients, but the vast majority of these treatment-centre beds became available only after the epidemic curve had begun to decline. Similarly, the United Nations Mission for Ebola Emergency Response was launched after the epidemic curve had already turned.
These findings have significant policy implications for future epidemics and suggest that much of the decline in the epidemic curve was driven by critical behaviour changes within local communities rather than by international efforts that came after the epidemic had turned. Future global interventions in epidemic response should focus on building community capabilities, strengthening local ownership, and dramatically reducing delays in the response.
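The reproduction number used to track the epidemic's turning can be illustrated with a crude proxy: the ratio of case counts in each reporting period to counts one generation interval earlier. The case counts and the two-week generation interval below are invented for illustration, not actual Liberia data or the authors' estimation method.

```python
# Hypothetical weekly case counts (illustrative only, not actual Liberia data).
cases = [10, 18, 30, 45, 50, 42, 30, 18, 10]
GEN_WEEKS = 2  # assumed mean generation interval, in reporting weeks

def crude_rt(cases, gen=GEN_WEEKS):
    """Crude reproduction-number proxy: cases in each period divided by
    cases one generation earlier. R > 1 means the epidemic is growing;
    the 'turning' of the curve corresponds to R falling below 1."""
    return [cases[t] / cases[t - gen] for t in range(gen, len(cases))]

for t, r in enumerate(crude_rt(cases), start=GEN_WEEKS):
    trend = "growing" if r > 1 else "declining"
    print(f"week {t}: R = {r:.2f} ({trend})")
```

In this toy series R drops below 1 around week 5, marking the turning point; in practice, methods of this family smooth over the serial-interval distribution rather than using a fixed lag.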