This work describes a large-scale analysis of sentiment associations in
popular word embedding models along the dimensions of gender and ethnicity,
but also along the less frequently studied dimensions of socioeconomic status,
age, sexual orientation, religious sentiment and political leaning. Consistent
with previous scholarly literature, this work finds systemic bias against
given names popular among African-Americans in most of the embedding models
examined. Gender bias in embedding models, however, appears to be multifaceted
and is often reversed in polarity relative to what has been regularly reported.
Interestingly, using the common operationalization of the term bias in the
fairness literature, several previously unreported types of bias in word
embedding models have also been identified. Specifically, the popular
embedding models analyzed here display negative biases against middle- and
working-class socioeconomic status, male children, senior citizens, plain
physical appearance, Islamic religious faith, non-religiosity and conservative
political orientation.
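To make that operationalization concrete: the prevailing approach in the
fairness literature (e.g., the WEAT family of association tests) scores a
target word by how much closer it sits, in cosine similarity, to a set of
pleasant attribute words than to a set of unpleasant ones. The following is a
minimal sketch of such an association score; the `embeddings` lookup and the
word lists are illustrative assumptions, not the exact lexicons used in this
work.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target_vec, pleasant_vecs, unpleasant_vecs):
    """WEAT-style association score s(w, A, B): the mean cosine similarity
    of the target word w to the pleasant attribute set A minus its mean
    cosine similarity to the unpleasant attribute set B. Negative values
    indicate a negative sentiment association for the target word."""
    return (np.mean([cosine(target_vec, a) for a in pleasant_vecs])
            - np.mean([cosine(target_vec, b) for b in unpleasant_vecs]))

# Illustrative usage (hypothetical): `embeddings` maps tokens to vectors,
# e.g. gensim KeyedVectors loaded from a pretrained model; the word lists
# below are small stand-ins for full sentiment attribute lexicons.
# pleasant = [embeddings[w] for w in ("joy", "love", "peace")]
# unpleasant = [embeddings[w] for w in ("agony", "terrible", "war")]
# score = association(embeddings["retiree"], pleasant, unpleasant)
```

A demographic group is then said to be negatively biased in a model when the
words denoting it score consistently lower than those denoting a comparison
group.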
Reasons for the paradoxical underreporting of these bias types in the relevant
literature are probably manifold, but widely held blind spots when searching
for algorithmic bias and the lack of a widespread technical vocabulary to
unambiguously describe the variety of algorithmic associations could
conceivably be playing a role. The causal origins of the multiplicity of
loaded associations attached to distinct demographic groups within embedding
models are often unclear, but the heterogeneity of those associations and
their potentially multifactorial roots raise doubts about the validity of
grouping them all under the umbrella term bias. Richer and more fine-grained
terminology, as well as a more comprehensive exploration of the bias
landscape, could help the fairness epistemic community to characterize and
neutralize algorithmic discrimination more efficiently.