BiasRV: Uncovering Biased Sentiment Predictions at Runtime
Sentiment analysis (SA) systems, though widely applied in many domains, have
been demonstrated to produce biased results. Prior work has automatically
generated test cases to reveal unfairness in SA systems, but the community
still lacks tools that can monitor and uncover biased predictions at runtime.
This paper fills this gap by proposing BiasRV, the first tool to raise an
alarm when a deployed SA system makes a biased prediction on a given input
text. To do so, BiasRV dynamically extracts a template from the input text
and, from the template, generates gender-discriminatory mutants (semantically
equivalent texts that differ only in gender information).
Based on popular metrics used to evaluate the overall fairness of an SA
system, we define a distributional fairness property for an individual
prediction of an SA system. This property requires that, for a given piece of
text, mutants from different gender classes be treated similarly as a whole.
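To make the mutant-generation and fairness-check ideas concrete, the following is a minimal illustrative sketch in Python. The gendered word lists, the simple word-swap substitution, and the 0.1 distance threshold are assumptions chosen for illustration; BiasRV's actual template extraction and fairness metrics may differ.

```python
import re

# Illustrative only: crude gender-word swapping, not BiasRV's real template extraction.
MALE_TO_FEMALE = {"he": "she", "him": "her", "his": "her",
                  "man": "woman", "men": "women", "male": "female"}
FEMALE_TO_MALE = {"she": "he", "her": "him", "hers": "his",
                  "woman": "man", "women": "men", "female": "male"}

def swap_gender(text, mapping):
    """Return `text` with gendered words replaced according to `mapping`."""
    pattern = re.compile(r"\b(" + "|".join(mapping) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: mapping[m.group(0).lower()], text)

def distributional_fairness_violated(predict, male_texts, female_texts,
                                     threshold=0.1):
    """Check whether the sentiment distributions over the two gender
    classes differ by more than `threshold` for any sentiment label."""
    def distribution(texts):
        labels = [predict(t) for t in texts]
        return {label: labels.count(label) / len(labels) for label in set(labels)}
    dist_m, dist_f = distribution(male_texts), distribution(female_texts)
    gap = max(abs(dist_m.get(label, 0.0) - dist_f.get(label, 0.0))
              for label in set(dist_m) | set(dist_f))
    return gap > threshold
```

For example, `swap_gender("He liked his meal", MALE_TO_FEMALE)` yields "she liked her meal" (capitalization is not preserved in this simplified sketch).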
Verifying the distributional fairness property imposes considerable overhead
on the running system. To reduce this cost, BiasRV adopts a two-step
heuristic: (1) sample several mutants from each gender and check whether the
system assigns them all the same sentiment; (2) check distributional fairness
only when the sampled mutants yield conflicting predictions. Experiments show
that, compared to directly checking the distributional fairness property for
every input text, our two-step heuristic reduces the overhead of analyzing
mutants by 73.81% while missing only 6.7% of biased predictions. Moreover,
BiasRV can be used without access to the SA system's implementation. Future
researchers can easily extend BiasRV to detect more types of bias, e.g., race
and occupation.
Comment: Accepted to appear in the Demonstrations track of the ESEC/FSE 202
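The two-step heuristic described above can be sketched as follows. Here `predict` stands in for the deployed SA system, `male_mutants`/`female_mutants` for the generated mutants, and `check_fairness` for the full distributional check (for instance, the `distributional_fairness_violated` helper sketched earlier); the default sample size of 3 is an assumption, not a value reported in the abstract.

```python
import random

def raise_bias_alarm(predict, male_mutants, female_mutants, check_fairness,
                     sample_size=3):
    """Return True if the prediction on the original text looks biased."""
    # Step 1: cheap screening -- sample a few mutants from each gender and
    # check whether the system assigns them all the same sentiment.
    sample = (random.sample(male_mutants, min(sample_size, len(male_mutants)))
              + random.sample(female_mutants, min(sample_size, len(female_mutants))))
    if len({predict(t) for t in sample}) == 1:
        return False  # no conflict among samples: skip the costly check

    # Step 2: only when the samples conflict, verify the distributional
    # fairness property over all mutants.
    return check_fairness(predict, male_mutants, female_mutants)
```

Because the expensive distributional check runs only when the cheap screening step finds conflicting predictions, most inputs incur only the cost of a few extra model calls.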
A Portrait of Emotion: Empowering Self-Expression through AI-Generated Art
We investigated the potential and limitations of generative artificial
intelligence (AI) in reflecting the authors' cognitive processes through
creative expression. We focused on the AI-generated artwork's ability to
understand human intent (alignment) and to visually represent emotions,
evaluated on criteria such as creativity, aesthetics, novelty, amusement, and
depth. Results show a preference for images based on descriptions of the
authors' emotions over images based on the main events. We also found that images that overrepresent specific
elements or stereotypes negatively impact AI alignment. Our findings suggest
that AI could facilitate creativity and the self-expression of emotions. Our
research framework with generative AIs can help design AI-based interventions
in related fields (e.g., mental health education, therapy, and counseling).
Comment: Accepted CogSci 202