A Randomized Trial Comparing Digital and Live Lecture Formats
Problem Statement and Background – Medical education is increasingly conducted at community-based teaching sites, making it difficult to provide a consistent curriculum. We conducted a randomized trial to assess whether digital lectures could replace live lectures.
Methods – Students were randomized either to attend a lecture series at our main campus or to view digital versions of the same lectures at community sites. Both groups completed an examination based on the lectures, and the group viewing the digital lectures also completed a feedback form.
Results – The group that viewed the digital lectures performed slightly better than the live lecture group; however, the differences were not statistically significant. Despite technical problems, the students who viewed the digital lectures overwhelmingly felt that digital lectures could replace live lectures.
Conclusions – Digital lectures appear to be a viable alternative to live lectures as a means of delivering didactic presentations in a community-based setting.
Fitting the Means to the Ends: One School’s Experience with Quantitative and Qualitative Methods in Curriculum Evaluation During Curriculum Change
Curriculum evaluation plays an important role in substantive curriculum change. The experience of the University of Texas Medical Branch (UTMB) with evaluation processes developed for the new Integrated Medical Curriculum (IMC) illustrates how evaluation methods may be chosen to match the goals of the curriculum evaluation process. Quantitative data, such as ratings of courses or scores on external exams, are useful for comparing courses or assessing whether standards have been met. Qualitative data, such as students’ comments about aspects of courses, are useful for eliciting explanations of observed phenomena and describing relationships between curriculum features and outcomes. The curriculum evaluation process designed for the IMC used both types of evaluation methods in a complementary fashion. Quantitative and qualitative methods have been used for formative evaluation of the new IMC courses. They are now being incorporated into processes to judge the IMC against its goals and objectives.
Independent and combined effects of improved water, sanitation, and hygiene, and improved complementary feeding, on child stunting and anaemia in rural Zimbabwe: a cluster-randomised trial.
BACKGROUND: Child stunting reduces survival and impairs neurodevelopment. We tested the independent and combined effects of improved water, sanitation, and hygiene (WASH), and improved infant and young child feeding (IYCF), on stunting and anaemia in Zimbabwe.
METHODS: We did a cluster-randomised, community-based, 2 × 2 factorial trial in two rural districts in Zimbabwe. Clusters were defined as the catchment area of between one and four village health workers employed by the Zimbabwe Ministry of Health and Child Care. Women were eligible for inclusion if they permanently lived in the clusters and were confirmed pregnant. Clusters were randomly assigned (1:1:1:1) to standard of care (52 clusters), IYCF (20 g of a small-quantity lipid-based nutrient supplement per day from age 6 to 18 months plus complementary feeding counselling; 53 clusters), WASH (construction of a ventilated improved pit latrine, provision of two handwashing stations, liquid soap, chlorine, and play space plus hygiene counselling; 53 clusters), or IYCF plus WASH (53 clusters). A constrained randomisation technique was used to achieve balance across the groups for 14 variables related to geography, demography, water access, and community-level sanitation coverage. Masking of participants and fieldworkers was not possible. The primary outcomes were infant length-for-age Z score and haemoglobin concentration at 18 months of age among children born to mothers who were HIV negative during pregnancy. These outcomes were analysed in the intention-to-treat population. We estimated the effects of the interventions by comparing the two IYCF groups with the two non-IYCF groups and the two WASH groups with the two non-WASH groups, except for outcomes that had an important statistical interaction between the interventions. This trial is registered with ClinicalTrials.gov, number NCT01824940.
FINDINGS: Between Nov 22, 2012, and March 27, 2015, 5280 pregnant women were enrolled from 211 clusters. 3686 children born to HIV-negative mothers were assessed at age 18 months (884 in the standard of care group from 52 clusters, 893 in the IYCF group from 53 clusters, 918 in the WASH group from 53 clusters, and 991 in the IYCF plus WASH group from 51 clusters). In the IYCF intervention groups, the mean length-for-age Z score was 0·16 (95% CI 0·08-0·23) higher and the mean haemoglobin concentration was 2·03 g/L (1·28-2·79) higher than in the non-IYCF intervention groups. The IYCF intervention reduced the number of stunted children from 620 (35%) of 1792 to 514 (27%) of 1879, and the number of children with anaemia from 245 (13·9%) of 1759 to 193 (10·5%) of 1845. The WASH intervention had no effect on either primary outcome. Neither intervention reduced the prevalence of diarrhoea at 12 or 18 months. No trial-related serious adverse events, and only three trial-related adverse events, were reported.
INTERPRETATION: Household-level elementary WASH interventions implemented in rural areas in low-income countries are unlikely to reduce stunting or anaemia and might not reduce diarrhoea. Implementation of these WASH interventions in combination with IYCF interventions is unlikely to reduce stunting or anaemia more than implementation of IYCF alone.
FUNDING: Bill & Melinda Gates Foundation, UK Department for International Development, Wellcome Trust, Swiss Development Cooperation, UNICEF, and US National Institutes of Health.The SHINE trial is funded by the Bill & Melinda Gates Foundation (OPP1021542 and OPP113707); UK Department for International Development; Wellcome Trust, UK (093768/Z/10/Z, 108065/Z/15/Z and 203905/Z/16/Z); Swiss Agency for Development and Cooperation; US National Institutes of Health (2R01HD060338-06); and UNICEF (PCA-2017-0002)
Investigating Determinants and Evaluating Deep Learning Training Approaches for Visual Acuity in Foveal Hypoplasia
Purpose: To describe the relationships between foveal structure and visual function in a cohort of individuals with foveal hypoplasia (FH) and to estimate FH grade and visual acuity using a deep learning classifier.
Design: Retrospective cohort study and experimental study.
Participants: A total of 201 patients with FH were evaluated at the National Eye Institute from 2004 to 2018.
Methods: Structural components of foveal OCT scans and corresponding clinical data were analyzed to assess their contributions to visual acuity. To automate FH scoring and visual acuity correlations, we evaluated the following 3 inputs for training a neural network predictor: (1) OCT scans, (2) OCT scans and metadata, and (3) real OCT scans and fake OCT scans created from a generative adversarial network.
Main Outcome Measures: The relationships between visual acuity outcomes and determinants, such as foveal morphology, nystagmus, and refractive error.
Results: The mean subject age was 24.4 years (range, 1–73 years; standard deviation = 18.25 years) at the time of OCT imaging. The mean best-corrected visual acuity (n = 398 eyes) was equivalent to a logarithm of the minimal angle of resolution (LogMAR) value of 0.75 (Snellen 20/115). Spherical equivalent refractive error (SER) ranged from −20.25 diopters (D) to +13.63 D with a median of +0.50 D. The presence of nystagmus and a high LogMAR value showed a statistically significant relationship. The deep learning predictor estimated FH grade and visual acuity with accuracies greater than 0.85 and greater than 0.70, respectively, for a test cohort of 37 individuals (98 OCT scans). Training the predictor on real OCT scans with metadata and fake OCT scans improved the accuracy over the model trained on real OCT scans alone.
Conclusions: Nystagmus and foveal anatomy impact visual outcomes in patients with FH, and computational algorithms reliably estimate FH grading and visual acuity.
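Of the three training inputs above, input (2) fuses imaging with clinical metadata. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, assuming a small CNN image encoder joined to a metadata branch; the layer sizes, metadata fields, and four-grade output are assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch of an OCT + metadata fusion classifier (input type 2).
import torch
import torch.nn as nn

class FoveaNet(nn.Module):
    def __init__(self, n_metadata=3, n_grades=4):
        super().__init__()
        self.encoder = nn.Sequential(               # small CNN over OCT B-scans
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        self.meta = nn.Sequential(nn.Linear(n_metadata, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, n_grades)    # classifier over fused features

    def forward(self, scan, metadata):
        return self.head(torch.cat([self.encoder(scan), self.meta(metadata)], dim=1))

# Toy forward/backward pass on random tensors standing in for OCT scans.
model = FoveaNet()
scans = torch.randn(8, 1, 128, 128)   # batch of grayscale B-scans
meta = torch.randn(8, 3)              # e.g., age, nystagmus flag, SER (assumed fields)
loss = nn.CrossEntropyLoss()(model(scans, meta), torch.randint(0, 4, (8,)))
loss.backward()
print(loss.item())
```

Under this reading, input (3) changes the training data rather than the network: GAN-generated scans are added to the real scans to enlarge the training set.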
Prognostic Significance of Hemodynamic Parameters in Patients with Cardiogenic Shock.
BACKGROUND: Invasive hemodynamic assessment with a pulmonary artery catheter is often used to guide management of patients with cardiogenic shock (CS) and may provide important prognostic information. We aimed to assess prognostic associations and relationships to end-organ dysfunction of presenting hemodynamic parameters in CS.
METHODS: The Critical Care Cardiology Trials Network (CCCTN) is an investigator-initiated multicenter registry of cardiac intensive care units (CICUs) in North America coordinated by the TIMI Study Group. Patients with CS (2018-2022) who underwent invasive hemodynamic assessment within 24 hours of CICU admission were included. Associations of hemodynamic parameters with in-hospital mortality were assessed using logistic regression, and associations with presenting serum lactate were assessed using least squares means regression. Sensitivity analyses were performed excluding patients on temporary mechanical circulatory support and adjusted for vasoactive-inotropic score.
RESULTS: Among the 3,603 admissions with CS, 1,473 had hemodynamic data collected within 24 hours of CICU admission. Median cardiac index was 1.9 (IQR, 1.6-2.4) L/min/m² and mean arterial pressure (MAP) was 74 (66-86) mmHg. Parameters associated with mortality included low MAP, low systolic blood pressure, low systemic vascular resistance, elevated right atrial pressure (RAP), elevated RAP/pulmonary capillary wedge pressure ratio, and low pulmonary artery pulsatility index. These associations were generally consistent when controlling for intensity of background pharmacologic and mechanical hemodynamic support. These parameters were also associated with higher presenting serum lactate.
CONCLUSIONS: In a contemporary CS population, presenting hemodynamic parameters reflecting decreased systemic arterial tone and indicators of right ventricular dysfunction are associated with adverse outcomes and higher presenting lactate.
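As a sketch of the type of analysis the methods describe, the Python snippet below fits a logistic regression of in-hospital mortality on a few presenting hemodynamic parameters and back-transforms the coefficients into odds ratios. The simulated data, effect sizes, and the three parameters chosen are assumptions for illustration only, not CCCTN data or results.

```python
# Hedged sketch: logistic regression of mortality on hemodynamic parameters.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1473                                   # admissions with hemodynamic data
map_mmHg = rng.normal(74, 12, n)           # mean arterial pressure (simulated)
rap_mmHg = rng.normal(12, 5, n)            # right atrial pressure (simulated)
papi = rng.lognormal(0.6, 0.5, n)          # pulmonary artery pulsatility index (simulated)

# Simulated truth: low MAP, high RAP, and low PAPi raise mortality risk.
logit = -1.5 - 0.03 * (map_mmHg - 74) + 0.06 * (rap_mmHg - 12) - 0.3 * np.log(papi)
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([map_mmHg, rap_mmHg, papi]))
fit = sm.Logit(died, X).fit(disp=False)
odds_ratios = np.exp(fit.params)           # OR per 1-unit change in each parameter
print(dict(zip(["intercept", "MAP", "RAP", "PAPi"], odds_ratios.round(3))))
```

The registry analysis would additionally adjust for covariates and, in sensitivity analyses, exclude patients on temporary mechanical circulatory support; this sketch shows only the core model form.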
Pulmonary Artery Catheter Use and Mortality in the Cardiac Intensive Care Unit.
BACKGROUND: The appropriate use of pulmonary artery catheters (PACs) in critically ill cardiac patients remains debated.
OBJECTIVES: The authors aimed to characterize the current use of PACs in cardiac intensive care units (CICUs) with attention to patient-level and institutional factors influencing their application and explore the association with in-hospital mortality.
METHODS: The Critical Care Cardiology Trials Network is a multicenter network of CICUs in North America. Between 2017 and 2021, participating centers contributed annual 2-month snapshots of consecutive CICU admissions. Admission diagnoses, clinical and demographic data, use of PACs, and in-hospital mortality were captured.
RESULTS: Among 13,618 admissions at 34 sites, 3,827 were diagnosed with shock, with 2,583 of cardiogenic etiology. The use of mechanical circulatory support and heart failure were the patient-level factors most strongly associated with a greater likelihood of the use of a PAC (OR: 5.99 [95% CI: 5.15-6.98]; P < 0.001 and OR: 3.33 [95% CI: 2.91-3.81]; P < 0.001, respectively). The proportion of shock admissions with a PAC varied significantly by study center, ranging from 8% to 73%. In analyses adjusted for factors associated with their placement, PAC use was associated with lower mortality in all shock patients admitted to a CICU (OR: 0.79 [95% CI: 0.66-0.96]; P = 0.017).
CONCLUSIONS: There is wide variation in the use of PACs that is not fully explained by patient-level factors and appears driven in part by institutional tendency. PAC use was associated with higher survival in cardiac patients with shock presenting to CICUs. Randomized trials are needed to guide the appropriate use of PACs in cardiac critical care.
Clinical Practice Patterns in Temporary Mechanical Circulatory Support for Shock in the Critical Care Cardiology Trials Network (CCCTN) Registry.
BACKGROUND: Temporary mechanical circulatory support (MCS) devices provide hemodynamic assistance for shock refractory to pharmacological treatment. Most registries have focused on single devices or specific etiologies of shock, limiting data regarding overall practice patterns with temporary MCS in cardiac intensive care units.
METHODS: The CCCTN (Critical Care Cardiology Trials Network) is a multicenter network of tertiary CICUs in North America. Between September 2017 and September 2018, each center (n=16) contributed a 2-month snapshot of consecutive medical CICU admissions.
RESULTS: Of the 270 admissions using temporary MCS, 33% had acute myocardial infarction-related cardiogenic shock (CS), 31% had CS not related to acute myocardial infarction, 11% had mixed shock, and 22% had an indication other than shock. Among all 585 admissions with CS or mixed shock, 34% used temporary MCS during the CICU stay with substantial variation between centers (range: 17%-50%). The most common temporary MCS devices were intraaortic balloon pumps (72%), Impella (17%), and veno-arterial extracorporeal membrane oxygenation (11%), although intraaortic balloon pump use also varied between centers (range: 40%-100%). Patients managed with intraaortic balloon pump versus other forms of MCS (advanced MCS) had lower Sequential Organ Failure Assessment scores and less severe metabolic derangements. Illness severity was similar at high- versus low-MCS utilizing centers and at centers with more advanced MCS use.
CONCLUSIONS: There is wide variation in the use of temporary MCS among patients with shock in tertiary CICUs. While hospital-level variation in temporary MCS device selection is not explained by differences in illness severity, patient-level variation appears to be related, at least in part, to illness severity.
Burden of Vision Loss in the Eastern Mediterranean Region, 1990–2015: Findings from the Global Burden of Disease 2015 Study
Objectives
To report the estimated trend in prevalence and years lived with disability (YLDs) due to vision loss (VL) in the Eastern Mediterranean region (EMR) from 1990 to 2015.
Methods
The estimated trends in age-standardized prevalence and the YLDs rate due to VL in 22 EMR countries were extracted from the Global Burden of Disease (GBD) 2015 study. The association of Socio-demographic Index (SDI) with changes in prevalence and YLDs of VL was evaluated using a multilevel mixed model.
Results
The age-standardized prevalence of VL in the EMR was 18.2% in 1990 and 15.5% in 2015. The total age-standardized YLDs rate attributed to all-cause VL in EMR was 536.9 per 100,000 population in 1990 and 482.3 per 100,000 population in 2015. For each 0.1 unit increase in SDI, the age-standardized prevalence and YLDs rate of VL showed a reduction of 1.5% (p < 0.001) and 23.9 per 100,000 population (p < 0.001), respectively.
Conclusions
The burden of VL is high in the EMR; however, it showed a declining trend over the 25-year study period. EMR countries need to establish comprehensive eye care programs in their health care systems.
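For readers unfamiliar with the modelling, the snippet below sketches a multilevel mixed model of the kind the methods describe: prevalence regressed on SDI with a random intercept per country, with the fixed-effect slope rescaled to a per-0.1-unit change in SDI. The simulated panel and its built-in effect size (seeded with the abstract's own figure of a 1.5% reduction per 0.1 SDI) are illustrative assumptions, not GBD data.

```python
# Hedged sketch of a multilevel mixed model: prevalence ~ SDI, random
# intercept per country (simulated 22-country panel, 1990-2015).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for c in range(22):                                # 22 EMR countries
    country_intercept = rng.normal(17, 2)          # country baseline prevalence (%)
    for year in range(1990, 2016):
        sdi = 0.4 + 0.3 * (year - 1990) / 25 + rng.normal(0, 0.02)
        prev = country_intercept - 15 * sdi + rng.normal(0, 0.5)  # -1.5% per 0.1 SDI
        rows.append({"country": f"c{c}", "sdi": sdi, "prev": prev})
df = pd.DataFrame(rows)

fit = smf.mixedlm("prev ~ sdi", df, groups=df["country"]).fit()
print(fit.params["sdi"] / 10)   # prevalence change per 0.1-unit SDI increase (~ -1.5)
```

Dividing the fitted slope by 10 converts the per-unit coefficient into the per-0.1-unit change reported in the results.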
Sparsentan in patients with IgA nephropathy: a prespecified interim analysis from a randomised, double-blind, active-controlled clinical trial
Background: Sparsentan is a novel, non-immunosuppressive, single-molecule, dual endothelin and angiotensin receptor antagonist being examined in an ongoing phase 3 trial in adults with IgA nephropathy. We report the prespecified interim analysis of the primary proteinuria efficacy endpoint, and safety.
Methods: PROTECT is an international, randomised, double-blind, active-controlled study being conducted at 134 clinical practice sites in 18 countries. The study examines sparsentan versus irbesartan in adults (aged ≥18 years) with biopsy-proven IgA nephropathy and proteinuria of 1·0 g/day or higher despite maximised renin-angiotensin system inhibitor treatment for at least 12 weeks. Participants were randomly assigned in a 1:1 ratio to receive sparsentan 400 mg once daily or irbesartan 300 mg once daily, stratified by estimated glomerular filtration rate at screening (30 to <60 mL/min per 1·73 m² and ≥60 mL/min per 1·73 m²) and urine protein excretion at screening (≤1·75 g/day and >1·75 g/day). The primary efficacy endpoint was change from baseline to week 36 in urine protein-creatinine ratio based on a 24-h urine sample, assessed using mixed model repeated measures. Treatment-emergent adverse events (TEAEs) were safety endpoints. All endpoints were examined in all participants who received at least one dose of randomised treatment. The study is ongoing and is registered with ClinicalTrials.gov, NCT03762850.
Findings: Between Dec 20, 2018, and May 26, 2021, 404 participants were randomly assigned to sparsentan (n=202) or irbesartan (n=202) and received treatment. At week 36, the geometric least squares mean percent change from baseline in urine protein-creatinine ratio was statistically significantly greater in the sparsentan group (-49·8%) than in the irbesartan group (-15·1%), resulting in a between-group relative reduction of 41% (least squares mean ratio=0·59; 95% CI 0·51-0·69; p<0·0001). The incidence of TEAEs was similar with sparsentan and irbesartan. There were no cases of severe oedema, heart failure, hepatotoxicity, or oedema-related discontinuations. Bodyweight changes from baseline did not differ between the sparsentan and irbesartan groups.
Interpretation: Once-daily treatment with sparsentan produced a meaningful reduction in proteinuria compared with irbesartan in adults with IgA nephropathy. The safety profile of sparsentan was similar to that of irbesartan. Future analyses after completion of the 2-year double-blind period will show whether these beneficial effects translate into a long-term nephroprotective potential of sparsentan.
Funding: Travere Therapeutics.
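The reported 41% relative reduction follows directly from the two geometric least squares mean percent changes, because an analysis on log-transformed urine protein-creatinine ratio (UPCR) back-transforms group differences into ratios of geometric means. A short arithmetic check, using only figures from the abstract:

```python
# Arithmetic check of the headline result, using the abstract's figures.
spar_change = -0.498   # sparsentan: geometric LS mean % change at week 36
irb_change = -0.151    # irbesartan: geometric LS mean % change at week 36

# Each percent change corresponds to a week-36/baseline geometric mean UPCR ratio,
# so their quotient is the between-group geometric mean ratio.
ratio = (1 + spar_change) / (1 + irb_change)
print(round(ratio, 2))                         # 0.59, matching the reported LS mean ratio
print(f"relative reduction: {1 - ratio:.0%}")  # 41%
```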