Evaluating the Underlying Gender Bias in Contextualized Word Embeddings
Gender bias strongly affects natural language processing applications.
Word embeddings have been shown both to preserve and to amplify the gender
biases present in their source data. Recently, contextualized word
embeddings have improved on earlier word embedding techniques by computing
word vector representations that depend on the sentence in which each word appears.
In this paper, we study the impact of this conceptual change in the word
embedding computation on gender bias. Our analysis applies several bias
measures previously used in the literature on standard word embeddings. Our
findings suggest that contextualized word embeddings are less biased than
standard ones, even when the latter have been debiased.
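The core idea, that the same word receives a different vector in each sentence it appears in, is easy to demonstrate in code. Below is a minimal sketch using the Hugging Face transformers library with bert-base-uncased; the model choice, the sentences, and the simple cosine-similarity bias probe are illustrative assumptions, not the paper's actual setup or measures.

```python
# Minimal sketch (not the paper's code): contextual vectors for the same
# word differ by sentence, and a crude cosine-similarity gender probe.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector of `word`'s first occurrence in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_id = tokenizer.convert_tokens_to_ids(word)
    pos = (enc["input_ids"][0] == word_id).nonzero()[0, 0]
    return hidden[pos]

# The same word gets different vectors depending on its sentence context.
v1 = embed("the doctor asked the nurse to help her", "doctor")
v2 = embed("the doctor parked his car outside", "doctor")

cos = torch.nn.functional.cosine_similarity
print("doctor vs. doctor across contexts:", cos(v1, v2, dim=0).item())

# Crude bias probe: compare the occupation vector to gendered pronouns
# embedded in a neutral template (in the spirit of association tests
# applied to static embeddings).
he = embed("he is a person", "he")
she = embed("she is a person", "she")
print("doctor-he:", cos(v1, he, dim=0).item(),
      "doctor-she:", cos(v1, she, dim=0).item())
```

In such a probe, a large gap between the doctor-he and doctor-she similarities would hint at a gendered association; the measures surveyed in the paper are more systematic variants of this idea.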
Implications for New Physics from Fine-Tuning Arguments: II. Little Higgs Models
We examine the fine-tuning associated with electroweak symmetry breaking in
Little Higgs scenarios and find that it is always substantial and,
generically, much higher than the rough estimates usually quoted would
suggest. This is due to implicit
tunings between parameters that can be overlooked at first glance but show up
in a more systematic analysis. Focusing on four popular and representative
Little Higgs scenarios, we find that the fine-tuning is essentially comparable
to that of the Little Hierarchy problem of the Standard Model (which these
scenarios attempt to solve) and higher than in supersymmetric models. This does
not demonstrate that all Little Higgs models are fine-tuned, but it stresses
the need for a careful analysis of this issue in model building before claiming that
a particular model is not fine-tuned. In this respect we identify the main
sources of potential fine-tuning that should be watched out for, in order to
construct a successful Little Higgs model, which seems to be a non-trivial
goal.

Comment: 39 pages, 26 ps figures, JHEP format
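As background on what is being quantified here: a commonly used measure of fine-tuning in this literature, not necessarily the exact one adopted in the paper, is the Barbieri-Giudice sensitivity of the electroweak scale, set by the Z boson mass $M_Z$, to each input parameter $p_i$:

\[
\Delta_i \equiv \left| \frac{\partial \ln M_Z^2}{\partial \ln p_i} \right| ,
\qquad
\Delta \equiv \max_i \, \Delta_i .
\]

A model with $\Delta \sim 100$ requires its parameters to be adjusted at the percent level; the implicit tunings between parameters mentioned above are cancellations among the $p_i$ that push $\Delta$ well above naive single-parameter estimates.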