Foundation models have rapidly permeated society, catalyzing a wave of
generative AI applications spanning enterprise and consumer-facing contexts.
While the societal impact of foundation models is growing, transparency is on
the decline, mirroring the opacity that has plagued past digital technologies
(e.g. social media). Reversing this trend is essential: transparency is a vital
precondition for public accountability, scientific innovation, and effective
governance. To assess the transparency of the foundation model ecosystem and
help improve transparency over time, we introduce the Foundation Model
Transparency Index. The Foundation Model Transparency Index specifies 100
fine-grained indicators that comprehensively codify transparency for foundation
models, spanning the upstream resources used to build a foundation model (e.g.
data, labor, compute), details about the model itself (e.g. size, capabilities,
risks), and the downstream use (e.g. distribution channels, usage policies,
affected geographies). We score 10 major foundation model developers (e.g.
OpenAI, Google, Meta) against the 100 indicators to assess their transparency.
To facilitate and standardize assessment, we score developers in relation to
their practices for their flagship foundation model (e.g. GPT-4 for OpenAI,
PaLM 2 for Google, Llama 2 for Meta). We present 10 top-level findings about
the foundation model ecosystem: for example, no developer currently discloses
significant information about the downstream impact of its flagship model, such
as the number of users, affected market sectors, or how users can seek redress
for harm. Overall, the Foundation Model Transparency Index establishes the
level of transparency today to drive progress on foundation model governance
via industry standards and regulatory intervention.

Comment: Authored by the Center for Research on Foundation Models (CRFM) at
the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
Project page: https://crfm.stanford.edu/fmt