67 research outputs found
Gender Bias in BERT -- Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task
Pretrained language models are publicly available and constantly finetuned
for various real-life applications. As they become capable of grasping complex
contextual information, harmful biases are likely increasingly intertwined with
those models. This paper analyses gender bias in BERT models with two main
contributions: First, a novel bias measure is introduced, defining biases as
the difference in sentiment valuation of female and male sample versions.
Second, we comprehensively analyse BERT's biases using the example of a realistic
IMDB movie classifier. By systematically varying elements of the training
pipeline, we can draw conclusions about their impact on the final model bias.
Seven different public BERT models in nine training conditions, i.e. 63 models
in total, are compared. Almost all conditions yield significant gender biases.
Results indicate that the reflected biases stem from the public BERT models
rather than from task-specific data, emphasising the weight of responsible usage.
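As a rough illustration of the bias definition above (not the authors' implementation), the measure can be sketched as the mean difference in sentiment scores between paired female and male versions of the same text. The `sentiment` callable and the toy scores below are hypothetical stand-ins for a fine-tuned BERT classifier's positive-class probability:

```python
def gender_bias(pairs, sentiment):
    """Mean sentiment difference (female - male) over paired sample versions."""
    diffs = [sentiment(female) - sentiment(male) for female, male in pairs]
    return sum(diffs) / len(diffs)

# Toy stand-in scores instead of a fine-tuned BERT classifier (illustration only).
scores = {"She is a great actor": 0.90, "He is a great actor": 0.95,
          "Her performance was dull": 0.10, "His performance was dull": 0.12}
pairs = [("She is a great actor", "He is a great actor"),
         ("Her performance was dull", "His performance was dull")]
bias = gender_bias(pairs, scores.get)
print(round(bias, 3))  # → -0.035: female versions rated lower on average
```

A nonzero mean difference over many such pairs is what the abstract calls a significant gender bias.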
Lexical competition between words, the body, and in social interaction
Objective: To differentiate the effect of compounding demands, both corporal and social, on a cognitive task requiring the retrieval of competing lexical items in speech production. Methods: Three experimental groups of adults (ages 18-35) were recruited to complete one of three tasks followed by a questionnaire designed to measure emotion contagion. Experiment 1 had participants in a sitting position complete a picture naming task. The task consisted of 500 test pictures that included groups of visuo-semantic neighbors (e.g., deer, elk, and antelope) that would lead to greater lexical competition, as seen in spoken errors and/or reduced reaction times. A signal-to-noise ratio, known as 1/f noise, was calculated from the picture naming reaction times and used as a descriptor of individual differences. In Experiment 2, participants performed the picture naming task while standing. A 1/f noise ratio was calculated for each participant's movement, tracked online using a Microsoft Kinect. In Experiment 3, participants performed the picture naming task while standing in the same room as an experimenter who recorded the participants' errors as they were being made. Results: Spoken errors and slower reaction times increased with task complexity, as did the randomness (i.e., white noise) of the 1/f noise ratio. Participants who experienced less lexical competition succeeded in maintaining greater periodicity (i.e., pink noise) within their 1/f noise ratio for both picture naming reaction times and bodily movement. Participants more susceptible to emotion contagion, as measured by the questionnaire, were more likely to compound the effect of lexical competition in Experiment 3 due to the presence of the experimenter. Conclusion: The ability to control cognitive demands lessens as complexity increases due to online maintenance of cognitive, corporal and social cues.
Cognitive control can be seen in those participants able to maintain periodicity within their responses to external stimuli.
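The 1/f analysis described above can be sketched generically as fitting the slope of log power against log frequency in the periodogram of a time series: a slope near 0 indicates white noise, a slope near 1 indicates pink (1/f) noise. The code below illustrates that idea on synthetic data; it is not the study's actual pipeline.

```python
import numpy as np

def spectral_exponent(series):
    """Estimate beta in S(f) ~ 1/f**beta via a log-log fit to the periodogram.
    beta near 0 suggests white noise; beta near 1 suggests pink (1/f) noise."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                      # drop the DC component
    power = np.abs(np.fft.rfft(x)) ** 2   # periodogram
    freqs = np.fft.rfftfreq(len(x))
    mask = freqs > 0                      # exclude f = 0 before taking logs
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
    return -slope                         # since S(f) ~ f**(-beta)

rng = np.random.default_rng(0)
white = rng.normal(size=4096)             # stand-in for a reaction-time series
beta = spectral_exponent(white)           # close to 0 for white noise
```

Applied to reaction times or movement traces, a smaller exponent (more whiteness) would correspond to the greater randomness the study reports under higher task complexity.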
Language Models have a Moral Dimension
Artificial writing is permeating our lives due to recent advances in
large-scale, transformer-based language models (LMs) such as BERT, its
variants, GPT-2/3, and others. Using them as pretrained models and fine-tuning
them for specific tasks, researchers have extended the state of the art for
many NLP tasks and shown that they not only capture linguistic knowledge but
also retain general knowledge implicitly present in the data. These and other
successes are exciting. Unfortunately, LMs trained on unfiltered text corpora
suffer from degenerate and biased behaviour. While this is well established, we
show that recent improvements of LMs also store ethical and moral values of the
society and actually bring a ``moral dimension'' to the surface: the values are
captured geometrically by a direction in the embedding space, reflecting well
the agreement of phrases to social norms implicitly expressed in the training
texts. This provides a path for attenuating or even preventing toxic
degeneration in LMs. Since one can now rate the (non-)normativity of arbitrary
phrases without explicitly training the LM for this task, the moral dimension
can be used as a ``moral compass'' guiding (even other) LMs towards producing
normative text, as we will show.
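The geometric idea above can be sketched in a few lines: derive a direction in embedding space separating normative from non-normative phrases, then rate any phrase by its signed projection onto that direction. The paper derives the direction from the embedding space of the LM itself; the toy below uses made-up vectors and a simple difference of means as a stand-in, purely to illustrate the mechanism.

```python
import numpy as np

def moral_direction(normative_embs, non_normative_embs):
    """Unit vector pointing from non-normative toward normative embeddings.
    (A difference of means is a simplified stand-in for the paper's method.)"""
    d = np.mean(normative_embs, axis=0) - np.mean(non_normative_embs, axis=0)
    return d / np.linalg.norm(d)

def normativity(emb, direction):
    """Signed projection: higher means more agreement with social norms."""
    return float(np.dot(emb, direction))

# Made-up 3-d "embeddings" for illustration only.
normative = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])        # e.g. "help people"
non_normative = np.array([[-0.7, 0.3, 0.0], [-0.9, 0.1, 0.2]])  # e.g. "harm people"
d = moral_direction(normative, non_normative)
print(normativity(np.array([0.85, 0.15, 0.05]), d) > 0)  # → True
```

Rating arbitrary phrases this way, without task-specific training, is what lets the direction act as a ``moral compass'' for steering generation.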
Facial Expressions of Sentence Comprehension
Understanding facial expressions allows access to one's intentional and affective states. Using findings from psychology and neuroscience, in which physical behaviors of the face are linked to emotional states, this paper aims to study sentence comprehension as shown by facial expressions. In our experiments, participants took part in a roughly 30-minute computer-mediated task, where they were asked to answer either "true" or "false" to knowledge-based questions and were then immediately given feedback of "correct" or "incorrect". Their faces, recorded during the task using the Kinect v2 device, are later used to identify the level of comprehension shown by their expressions. To achieve this, SVM and Random Forest classifiers are employed with facial appearance information extracted using a spatiotemporal local descriptor named LPQ-TOP. Results on online sentence comprehension show that facial dynamics are a promising cue for understanding cognitive states of the mind.
ExGenNet: Learning to Generate Robotic Facial Expression Using Facial Expression Recognition
The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers potential advantages in scaling to different robot types and various expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets connect a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted expression and the intended expression through the classification network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in the classifier as well as in the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects could accurately recognize most of the generated facial expressions on both robots.
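The optimization loop described above can be sketched generically: hold the generator and classifier fixed and descend the loss with respect to the joint configuration itself. Everything below is a toy stand-in for ExGenNet's trained networks and automatic differentiation: the "generator" is a linear map, the "classifier" a tanh squashing, and the gradient is taken by finite differences rather than backpropagation.

```python
import numpy as np

def optimize_joints(q0, generator, classifier, target, steps=500, lr=0.1, eps=1e-5):
    """Gradient descent on the joint configuration q, with generator and
    classifier held fixed (toy stand-in for ExGenNet's backprop-based scheme)."""
    q = np.asarray(q0, dtype=float)

    def loss(q):
        pred = classifier(generator(q))            # predicted expression
        return float(np.sum((pred - target) ** 2))  # distance to intended one

    for _ in range(steps):
        grad = np.zeros_like(q)
        for i in range(len(q)):   # central-difference gradient per joint
            dq = np.zeros_like(q)
            dq[i] = eps
            grad[i] = (loss(q + dq) - loss(q - dq)) / (2 * eps)
        q -= lr * grad
    return q

# Toy stand-ins: the "image" is a linear function of the joints,
# the "classifier" a smooth squashing map (illustration only).
A = np.array([[1.0, 0.5], [0.0, 1.0]])
def generator(q): return A @ q
def classifier(img): return np.tanh(img)

target = np.array([0.5, -0.3])                     # intended expression scores
q_opt = optimize_joints(np.zeros(2), generator, classifier, target)
```

After optimization, `classifier(generator(q_opt))` is close to `target`, mirroring how ExGenNet drives joint configurations toward a predefined expression.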
Volume CXIV, Number 4, November 7, 1996
Objective: Turner syndrome (TS) is a chromosomal disorder caused by complete or partial X chromosome monosomy that manifests various clinical features depending on the karyotype and on the genetic background of affected girls. This study aimed to systematically investigate the key clinical features of TS in relation to karyotype in a large pediatric Turkish patient population. Methods: Our retrospective study included 842 karyotype-proven TS patients aged 0-18 years who were evaluated in 35 different centers in Turkey in the years 2013-2014. Results: The most common karyotype was 45,X (50.7%), followed by 45,X/46,XX (10.8%), 46,X,i(Xq) (10.1%) and 45,X/46,X,i(Xq) (9.5%). Mean age at diagnosis was 10.2±4.4 years. The most common presenting complaints were short stature and delayed puberty. Among patients diagnosed before age one year, the ratio of karyotype 45,X was significantly higher than that of other karyotype groups. Cardiac defects (bicuspid aortic valve, coarctation of the aorta and aortic stenosis) were the most common congenital anomalies, occurring in 25% of the TS cases. This was followed by urinary system anomalies (horseshoe kidney, double collecting duct system and renal rotation) detected in 16.3%. Hashimoto's thyroiditis was found in 11.1% of patients, gastrointestinal abnormalities in 8.9%, ear, nose and throat problems in 22.6%, dermatologic problems in 21.8% and osteoporosis in 15.3%. Learning difficulties and/or psychosocial problems were encountered in 39.1%. Insulin resistance and impaired fasting glucose were detected in 3.4% and 2.2%, respectively. Dyslipidemia prevalence was 11.4%. Conclusion: This comprehensive study systematically evaluated the largest group of karyotype-proven TS girls to date. The karyotype distribution, congenital anomaly and comorbidity profile closely parallel those from other countries and support the need for close medical surveillance of these complex patients throughout their lifespan.
- …