11 research outputs found

    The Black Voices in Research curriculum to promote diversity and inclusive excellence in biomedical research

    No full text
    Underrepresentation of Black biomedical researchers demonstrates continued racial inequity and a lack of diversity in the field. The Black Voices in Research curriculum was designed to provide effective instructional materials that showcase inclusive excellence, facilitate dialog about diversity and inclusion in biomedical research, enhance critical thinking and reflection, integrate diverse visions and worldviews, and ignite action. The instructional materials consist of short videos and discussion prompts featuring Black biomedical research faculty and professionals. A pilot evaluation of the instructional content showed that individual stories promoted information relevance, increased knowledge, and created behavioral intention to promote diversity and inclusive excellence in biomedical research.

    Peer review of GPT-4 technical report and systems card.

    No full text
    The study provides a comprehensive review of OpenAI's Generative Pre-trained Transformer 4 (GPT-4) technical report, with an emphasis on applications in high-risk settings like healthcare. A diverse team, including experts in artificial intelligence (AI), natural language processing, public health, law, policy, social science, healthcare research, and bioethics, analyzed the report against established peer review guidelines. Key strengths identified include the considerable time and economic investment in transparent AI research, particularly the creation of a comprehensive systems card for risk assessment and mitigation. However, the report has notable limitations: restricted access to training data and a lack of clarity about training processes raise concerns about encoded biases and interests; confidence and uncertainty estimations, crucial in high-risk areas like healthcare, are absent; and potential privacy and intellectual property issues go unaddressed. The study also emphasizes the need for diverse, global involvement in developing and evaluating large language models (LLMs) to ensure broad societal benefits and mitigate risks. It presents recommendations such as improving data transparency, developing accountability frameworks, establishing confidence standards for LLM outputs in high-risk settings, and enhancing industry research review processes. It concludes that while GPT-4's report is a step toward open discussion of LLMs, more extensive interdisciplinary reviews are essential for addressing concerns about bias, harm, and risk, especially in high-risk domains. The review aims to expand understanding of LLMs in general and highlights the need for new forms of reflection on how LLMs are reviewed, the data required for effective evaluation, and how critical issues like bias and risk are addressed.
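
    The recommendation to establish confidence standards for LLM outputs lends itself to a concrete illustration. The sketch below is not from the paper; it shows one common way to attach a confidence estimate to an LLM answer, sampling several completions and measuring their agreement (self-consistency). The `generate` callable is a hypothetical stand-in for any chat-completion API.

    ```python
    from collections import Counter

    def answer_confidence(generate, prompt: str, n_samples: int = 10,
                          temperature: float = 0.8):
        """Self-consistency confidence estimate: sample n completions
        and report the majority answer with its agreement rate.

        `generate(prompt, temperature)` is a hypothetical stand-in for
        any chat-completion API call that returns a string.
        """
        answers = [generate(prompt, temperature) for _ in range(n_samples)]
        counts = Counter(a.strip().lower() for a in answers)
        majority, freq = counts.most_common(1)[0]
        return majority, freq / n_samples
    ```

    In a high-risk setting such as healthcare, an answer whose agreement rate falls below a pre-registered threshold (say 0.9) could be routed to a human reviewer rather than surfaced directly; that is the kind of operational standard the review calls for.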

    Individual and Institutional Factors Contribute to Research Capacity Building for Early-Stage Investigators from Groups Underrepresented in Biomedical Research: A Qualitative Comparative Analysis

    No full text
    Background: Enhancement of diversity within the U.S. research workforce is a recognized need and priority at the national level. Existing comprehensive programs, such as the National Research Mentoring Network (NRMN) and Research Centers in Minority Institutions (RCMI), have the dual focus of building institutional research capacity and promoting investigator self-efficacy through mentoring and training. Methods: A qualitative comparative analysis was used to identify the combinations of factors that explain success or failure in submitting a grant proposal among investigators underrepresented in biomedical research at RCMI and non-RCMI institutions. The records of 211 participants enrolled in the NRMN Strategic Empowerment Tailored for Health Equity Investigators (NRMN-SETH) program were reviewed, and data for 79 early-stage, underrepresented faculty investigators from RCMI (n = 23) and non-RCMI (n = 56) institutions were included. Results: Institutional membership (RCMI vs. non-RCMI) was used as a possible predictive factor and emerged as a contributing factor in all of the analyses. Access to local mentors was predictive of a successful grant submission for RCMI investigators, while underrepresented investigators at non-RCMI institutions who succeeded in submitting grants still lacked access to local mentors. Conclusion: Institutional contexts contribute to the grant-writing experiences of investigators underrepresented in biomedical research.
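
    For illustration, here is a minimal sketch of the truth-table step at the core of a crisp-set qualitative comparative analysis, using invented binary-coded data (the column names and values are hypothetical, not the study's). Each observed configuration of conditions receives a consistency score, the share of its cases that show the outcome.

    ```python
    import pandas as pd

    # Hypothetical binary-coded cases: 1 = condition present, 0 = absent.
    df = pd.DataFrame({
        "rcmi":         [1, 1, 1, 0, 0, 0],  # RCMI institution membership
        "local_mentor": [1, 1, 0, 0, 0, 1],  # access to a local mentor
        "submitted":    [1, 1, 0, 1, 0, 0],  # outcome: grant proposal submitted
    })

    # Truth table: one row per observed configuration of conditions;
    # consistency = share of cases with the outcome, n = number of cases.
    truth_table = (
        df.groupby(["rcmi", "local_mentor"])["submitted"]
          .agg(consistency="mean", n="size")
          .reset_index()
    )
    print(truth_table)
    ```

    A full QCA would go on to minimize the consistent configurations into a reduced set of causal recipes, typically with dedicated QCA software; the sketch stops at the truth table that such a minimization takes as input.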

    Randomized Controlled Study to Test the Effectiveness of Developmental Network Coaching in the Career Advancement of Diverse Early-Stage Investigators (ESIs): Implementation Challenges and Lessons Learned

    No full text
    Introduction: Adding developmental networks (DN) to grant-writing coaching can significantly enhance ESIs' research careers. Herein, we present the study design, the ESIs' characteristics, and the challenges encountered, lessons learned, and their resolutions in (a) deploying the nested cluster randomization (NCR) algorithms, (b) recruitment and retention, and (c) implementing the DN intervention. Methods: An NCR design governs the study implementation. The target sample size is 220 ESIs intending to submit NIH K, R, U, and/or Minority Supplement applications. The primary outcome is the intensity and sustainability of grant submissions and funding, measured by time to and between applications. Outcomes are analyzed via summaries, Kaplan-Meier curves, and Cox proportional hazards models as a function of randomization group and other predictors. Results: We recruited two cohorts of ESIs (N = 85): 39% African Americans, 18% Latinx, 18% Whites, 20% Asians, and 6% Hawaiian/Pacific Islander/other ethnicities; 65% are women; 73% are assistant professors, 4% associate professors, and 23% instructors/scientists/post-doctoral fellows. Participants' disciplines: 32% basic/biomedical, 36% clinical/translational, and 32% social/behavioral. Proposal mechanisms: 61% research grants (R series), 31% career development (K series), 7% support of competitive research (SCORE), and 1% National Science Foundation applications. NCR produced balance in the distribution of ESIs' demographics, sex at birth, ethnicity, professional appointments, background disciplines, and mechanism of sought funding. Lessons learned/challenges: NCR implementation was methodologically challenged by added constraints (e.g., assigning coaches to the same randomization arm as their participants while blinding them to the ESIs' randomization group). Recruitment and retention were hampered by the COVID-19 pandemic, and more progressive and innovative strategies were needed to heighten the visibility and outreach of the program. DN delivery was also affected by the pandemic, requiring monitoring of ESIs' engagement and facilitation of communications. Resolving these challenges effectively reconfigured the NCR algorithms, the recruitment/retention plans, and the DN intervention delivery. We intend to recruit an additional 135 ESIs, focusing on underrepresented scholars from RCMIs, CTSAs, and other programs. COVID-19 rendered the program 100% virtual, with recruitment/retention challenges and substantial disruption of ESIs' research; we may extend the grant-writing period, coaching, and Mock Study Section support.
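
    The outcome analyses named above (Kaplan-Meier curves and Cox proportional hazards models as a function of randomization group) can be sketched as follows; the data, column names, and the use of the lifelines library are assumptions for illustration, not the study's actual code.

    ```python
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    # Hypothetical per-participant records: months from enrollment to first
    # grant submission, an event indicator (1 = submitted, 0 = censored),
    # and the randomization arm (1 = developmental-network coaching).
    df = pd.DataFrame({
        "months_to_submission": [6, 12, 9, 24, 18, 7, 15, 30],
        "submitted":            [1, 1, 1, 0, 1, 1, 0, 0],
        "arm":                  [1, 0, 1, 0, 0, 1, 1, 0],
    })

    # Kaplan-Meier estimate of time to first submission, per arm.
    kmf = KaplanMeierFitter()
    for arm, grp in df.groupby("arm"):
        kmf.fit(grp["months_to_submission"], event_observed=grp["submitted"],
                label=f"arm={arm}")
        print(kmf.median_survival_time_)

    # Cox proportional hazards model: hazard of submission as a function of
    # randomization arm; other predictors would enter as extra columns.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="months_to_submission", event_col="submitted")
    cph.print_summary()
    ```

    Time between applications, the second half of the stated primary outcome, could be handled analogously as a recurrent-event analysis, with one row per inter-submission gap.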

    Implications for future LLM research.

    No full text

    Summary of GPT-4 TR review.

    No full text

    Technical report of GPT-4 with line numbers.

    No full text

    Review criteria and prompts to reviewers.

    No full text