Artificial intelligence (AI) can potentially transform global health, but
algorithmic bias can exacerbate social inequities and disparities. Trustworthy AI
entails intentional design to ensure equity and mitigate potential biases.
To advance trustworthy AI in global health, we convened a workshop on Fairness
in Machine Intelligence for Global Health (FairMI4GH). The event brought
together a global mix of experts from multiple disciplines, community health
practitioners, policymakers, and other stakeholders. Topics covered included managing AI bias
in socio-technical systems, AI's potential impacts on global health, and
balancing data privacy with transparency. Panel discussions examined the
cultural, political, and ethical dimensions of AI in global health. FairMI4GH
aimed to stimulate dialogue, facilitate knowledge transfer, and spark
innovative solutions. Drawing on NIST's AI Risk Management Framework, it
offered recommendations for managing AI risks and biases. The need to mitigate
data biases from the research design stage, adopt a human-centered approach,
and advocate for AI transparency was recognized. Challenges such as updating
legal frameworks, managing cross-border data sharing, and motivating developers
to reduce bias were acknowledged. The event emphasized the necessity of diverse
viewpoints and multi-dimensional dialogue for creating a fair and ethical AI
framework for equitable global health.