Developing Speech Processing Pipelines for Police Accountability
Police body-worn cameras have the potential to improve accountability and
transparency in policing. Yet in practice, they result in millions of hours of
footage that is never reviewed. We investigate the potential of large
pre-trained speech models for facilitating reviews, focusing on ASR and officer
speech detection in footage from traffic stops. Our proposed pipeline includes
training data alignment and filtering, fine-tuning with resource constraints,
and combining officer speech detection with ASR for a fully automated approach.
We find that (1) fine-tuning strongly improves ASR performance on officer
speech (WER=12-13%), (2) ASR is far less accurate on community member speech
(WER=43.55-49.07%) than on officer speech, and (3) domain-specific tasks such as
officer speech detection and diarization remain challenging. Our work offers
practical applications for reviewing body camera footage and general guidance
for adapting pre-trained speech models to noisy, multi-speaker domains.
Comment: Accepted to INTERSPEECH 2023
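
To make the group-wise ASR evaluation concrete, here is a minimal Python sketch, assuming footage has already been segmented and labeled by speaker role; the model name, file paths, and data layout are illustrative placeholders, not the paper's actual pipeline.

```python
# Hypothetical sketch: per-speaker-group WER evaluation for body-camera ASR.
# Assumes audio segments are pre-labeled as "officer" or "community_member";
# the model and example files below are illustrative only.
from transformers import pipeline
import jiwer

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Each segment: audio clip path, reference transcript, and speaker role.
segments = [
    {"audio": "stop_001_officer.wav", "ref": "license and registration please", "role": "officer"},
    {"audio": "stop_001_driver.wav", "ref": "it's in the glove box", "role": "community_member"},
]

hyps = {"officer": [], "community_member": []}
refs = {"officer": [], "community_member": []}
for seg in segments:
    hyps[seg["role"]].append(asr(seg["audio"])["text"].lower())
    refs[seg["role"]].append(seg["ref"].lower())

# Report WER separately per group, mirroring the officer vs. community
# member comparison the abstract describes.
for role in hyps:
    print(role, jiwer.wer(refs[role], hyps[role]))
```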
Gendered Mental Health Stigma in Masked Language Models
Mental health stigma prevents many individuals from receiving the appropriate
care, and social psychology studies have shown that mental health tends to be
overlooked in men. In this work, we investigate gendered mental health stigma
in masked language models. In doing so, we operationalize mental health stigma
by developing a framework grounded in psychology research: we use clinical
psychology literature to curate prompts, then evaluate the models' propensity
to generate gendered words. We find that masked language models capture
societal stigma about gender in mental health: models are consistently more
likely to predict female subjects than male in sentences about having a mental
health condition (32% vs. 19%), and this disparity is exacerbated for sentences
that indicate treatment-seeking behavior. Furthermore, we find that different
models capture dimensions of stigma differently for men and women, associating
stereotypes like anger, blame, and pity more with women with mental health
conditions than with men. In showing the complex nuances of models' gendered
mental health stigma, we demonstrate that context and overlapping dimensions of
identity are important considerations when assessing computational models'
social biases.
Comment: EMNLP 2022
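
As an illustration of the probing setup the abstract describes, the following sketch queries a masked language model for gendered subject words in mental-health sentences; the prompts and gendered word lists here are placeholders, not the paper's curated, clinically grounded prompt set.

```python
# Hypothetical sketch: comparing a masked LM's probability mass on female vs.
# male subject words in mental-health prompts. Prompts and word lists are
# illustrative; the paper derives its prompts from clinical psychology literature.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "[MASK] has been diagnosed with depression.",
    "[MASK] decided to see a therapist for anxiety.",
]
female_words = {"she", "her"}
male_words = {"he", "him", "his"}

for prompt in prompts:
    scores = {"female": 0.0, "male": 0.0}
    # Sum prediction scores over gendered tokens among the top-k candidates.
    for pred in fill(prompt, top_k=50):
        token = pred["token_str"].strip().lower()
        if token in female_words:
            scores["female"] += pred["score"]
        elif token in male_words:
            scores["male"] += pred["score"]
    print(prompt, scores)
```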