8 research outputs found

    A Retrospective Case Study of Successful Translational Research: Gazelle Hb Variant Point-of-Care Diagnostic Device for Sickle Cell Disease

    Evaluation researchers at Clinical and Translational Science Award (CTSA) hubs are conducting retrospective case studies to evaluate the translational research process. The objective of this study was to deepen knowledge of the translational process and identify contributors to successful translation. We investigated the successful translation of the HemeChip, a low-cost point-of-care diagnostic device for sickle cell disease, using a protocol for retrospective translational science case studies of health interventions developed by evaluators at the National Institutes of Health (NIH) and CTSA hubs. Development of the HemeChip began in 2013, and evidence of device use and impact on public health is growing. Data collection methods included five interviews and a review of press, publications, patents, and grants. Barriers to translation included proving novelty, manufacturing costs, fundraising, and academic-industry relations. Facilitators of translation were CTSA pilot program funding, university resources, entrepreneurship training, due diligence, and collaborations. The barriers to translation, how they were overcome, and the key facilitators identified in this case study pinpoint areas for consideration in future funding mechanisms and the infrastructure required to enable successful translation.

    Scholarly Productivity Evaluation of KL2 Scholars Using Bibliometrics and Federal Follow-on Funding: Cross-Institution Study

    Background: Evaluating outcomes of the clinical and translational research (CTR) training of a Clinical and Translational Science Award (CTSA) hub (eg, the KL2 program) requires the selection of reliable, accessible, and standardized measures. As measures of scholarly success usually focus on publication output and extramural funding, CTSA hubs have started to use bibliometrics to evaluate the impact of their supported scholarly activities. However, the evaluation of KL2 programs across CTSAs is limited, and the use of bibliometrics and follow-on funding is minimal. Objective: This study seeks to evaluate scholarly productivity, impact, and collaboration using bibliometrics and federal follow-on funding of KL2 scholars from 3 CTSA hubs and to define and assess CTR training success indicators. Methods: The sample included KL2 scholars from 3 CTSA institutions (A-C). Bibliometric data for each scholar in the sample were collected from both SciVal and iCite, including scholarly productivity, citation impact, and research collaboration. Three federal follow-on funding measures (at the 5-year, 8-year, and overall time points) were collected internally and confirmed by examining a federal funding database. Both descriptive and inferential statistical analyses were computed using SPSS to assess the bibliometric and federal follow-on funding results. Results: A total of 143 KL2 scholars were included in the sample, with relatively equal groups across the 3 CTSA institutions. The included KL2 scholars produced more publications and citation counts per year on average at the 8-year time point (3.75 publications and 26.44 citation counts) than at the 5-year time point (3.4 publications and 26.16 citation counts). Overall, the KL2 publications from all 3 institutions were cited twice as much as others in their fields based on the relative citation ratio. KL2 scholars published with researchers from other US institutions over 2 times (5-year time point) or 3.5 times (8-year time point) more often than others in their research fields. Within 5 years and 8 years postmatriculation, 44.1% (63/143) and 51.7% (74/143) of KL2 scholars, respectively, achieved federal funding. The KL2 scholars of Institution C had a significantly higher citation rate per publication than those of the other institutions (P<.001). Institution A had a significantly lower rate of nationally field-weighted collaboration than the other institutions (P<.001). Institution B scholars were more likely to have received federal funding than scholars at Institution A or C (P<.001). Conclusions: Multi-institutional data showed a high level of scholarly productivity, impact, collaboration, and federal follow-on funding achieved by KL2 scholars. This study provides insights into the use of bibliometric and federal follow-on funding data to evaluate CTR training success across institutions. CTSA KL2 programs and other CTR career training programs can benefit from these findings by understanding metrics of career success and using that knowledge to develop highly targeted strategies to support the early-stage career development of CTR investigators.
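    The abstract above describes assembling publication-level bibliometrics such as the relative citation ratio (RCR) from iCite and tallying federal follow-on funding rates for a scholar cohort. As a rough illustration only, the sketch below shows how RCR values might be retrieved from the public NIH iCite API and how a follow-on funding rate could be computed for a hypothetical cohort; the iCite endpoint is real, but the cohort record fields, the placeholder PubMed IDs, and the response handling are assumptions, not the study's actual pipeline (which combined SciVal, iCite, internal records, and SPSS).

```python
"""Minimal sketch, assuming the public iCite API and an in-memory cohort.

Illustrative only: field names for the cohort records and the example
PubMed IDs are hypothetical placeholders, not data from the study.
"""

import requests

ICITE_URL = "https://icite.od.nih.gov/api/pubs"  # public NIH iCite endpoint


def fetch_rcr(pmids):
    """Return {pmid: relative_citation_ratio} for the given PubMed IDs.

    Assumes the iCite response carries a 'data' list whose records include
    'pmid' and 'relative_citation_ratio' fields.
    """
    resp = requests.get(ICITE_URL, params={"pmids": ",".join(map(str, pmids))})
    resp.raise_for_status()
    records = resp.json().get("data", [])
    return {rec["pmid"]: rec.get("relative_citation_ratio") for rec in records}


def funding_rate(scholars, years_post_matriculation):
    """Fraction of scholars with a federal award within the given window.

    `scholars` is a list of dicts with hypothetical keys
    'matriculation_year' and 'first_federal_award_year' (None if no award).
    """
    if not scholars:
        return 0.0
    funded = sum(
        1
        for s in scholars
        if s["first_federal_award_year"] is not None
        and s["first_federal_award_year"] - s["matriculation_year"]
        <= years_post_matriculation
    )
    return funded / len(scholars)


if __name__ == "__main__":
    # Placeholder PMIDs, used only to demonstrate the API call shape
    print("RCR by PMID:", fetch_rcr([26001965, 27599104]))

    # Hypothetical two-scholar cohort to demonstrate the rate calculation
    cohort = [
        {"matriculation_year": 2012, "first_federal_award_year": 2016},
        {"matriculation_year": 2013, "first_federal_award_year": None},
    ]
    print("5-year funding rate:", funding_rate(cohort, 5))
```

    In this sketch the funding window is counted in whole years from matriculation to first federal award, which is one plausible reading of the 5-year and 8-year time points; the published analysis may define the window differently.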