Large language models (LLMs) are quickly gaining momentum, yet they have been found to demonstrate gender bias in their responses. In this paper, we conducted a content analysis of social media discussions to gauge public perceptions of gender bias in LLMs trained in different cultural contexts, namely ChatGPT, a US-based LLM, and Ernie, a China-based LLM. People shared both
observations of gender bias in their personal use and scientific findings about
gender bias in LLMs. A difference emerged between the two LLMs: ChatGPT was more often found to exhibit implicit gender bias, e.g., associating men and women with different professions, whereas explicit gender bias was more often found in Ernie's responses, e.g., overtly promoting women's pursuit of marriage over career. Based on these findings, we reflect on the impact of culture on gender bias and propose governance recommendations to regulate gender bias in LLMs.