84 research outputs found
Personalized Feedback Versus Money: The Effect on Reliability of Subjective Data in Online Experimental Platforms
We compared data reliability on a subjective task across two platforms: Amazon's Mechanical Turk (MTurk) and LabintheWild. MTurk incentivizes participants with financial compensation, while LabintheWild provides participants with personalized feedback. LabintheWild was found to produce higher data reliability than MTurk. Our findings suggest that online experiment platforms providing feedback in exchange for study participation can produce more reliable data in subjective preference tasks than those offering financial compensation.
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/134704/1/Ye et al. 2017.pd
Why girls play: results of a qualitative interview study with female video game players
"Qualitative interviews with 7 female players were conducted to gather information on the motives and attitudes of female users of video and computer games. Participants were asked about the importance of different gratifications of game play, critical incidents that initiated their interest in games and their perceived competence in the use of computer technology. Special attention was paid to potential shortcomings of contemporary video and computer games in addressing female players specific needs and the question whether female users can identify with in-game characters of today's computer games. The results indicate that the motive to win is of minor importance for female players. Additionally, many interviewees reported a lack of support for their hobby, especially from same-sex friends. Identification with the avatar is an important component of the gaming experience for the female players in this study. At the same time, contemporary computer games that are often situated in primarily masculine contexts (e.g. war, competition) make it difficult for female users to identify with in-game characters." (author's abstract
Tea: A High-level Language and Runtime System for Automating Statistical Analysis
Though statistical analyses are centered on research questions and
hypotheses, current statistical analysis tools are not. Users must first
translate their hypotheses into specific statistical tests and then perform API
calls with functions and parameters. To do so accurately requires that users
have statistical expertise. To lower this barrier to valid, replicable
statistical analysis, we introduce Tea, a high-level declarative language and
runtime system. In Tea, users express their study design, any parametric
assumptions, and their hypotheses. Tea compiles these high-level specifications
into a constraint satisfaction problem that determines the set of valid
statistical tests, and then executes them to test the hypothesis. We evaluate
Tea using a suite of statistical analyses drawn from popular tutorials. We show
that Tea generally matches the choices of experts while automatically switching
to non-parametric tests when parametric assumptions are not met. We simulate
the effect of mistakes made by non-expert users and show that Tea automatically
avoids both false negatives and false positives that could be produced by the
application of incorrect statistical tests.
Comment: 11 pages.
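To make the fallback behaviour described in the abstract concrete, here is a minimal Python sketch of the same idea: select a two-sample test from assumption checks and switch to a non-parametric test when normality or equal variance fails. This is illustrative only and does not use Tea's actual API; the function name and thresholds are assumptions.

```python
# Minimal sketch of assumption-driven test selection (not Tea's real API):
# check parametric assumptions first, then fall back to a non-parametric test.
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Pick and run a two-sample test based on assumption checks."""
    # Assumption 1: both samples are plausibly normally distributed.
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    # Assumption 2: the two groups have roughly equal variances.
    equal_var = stats.levene(a, b).pvalue > alpha

    if normal and equal_var:
        return "Student's t-test", stats.ttest_ind(a, b)
    if normal:
        return "Welch's t-test", stats.ttest_ind(a, b, equal_var=False)
    return "Mann-Whitney U", stats.mannwhitneyu(a, b)

rng = np.random.default_rng(0)
name, result = compare_two_groups(rng.normal(0, 1, 50), rng.exponential(1, 50))
print(name, result.pvalue)  # the skewed sample should trigger the fallback
```

In Tea itself, decision logic of this kind is not hand-written: the declared study design and assumptions are compiled into a constraint satisfaction problem whose solutions are the valid tests.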
The Case for Anticipating Undesirable Consequences of Computing Innovations Early, Often, and Across Computer Science
From smart sensors that infringe on our privacy to neural nets that portray
realistic imposter deepfakes, our society increasingly bears the burden of
negative, if unintended, consequences of computing innovations. As the experts
in the technology we create, Computer Science (CS) researchers must do better
at anticipating and addressing these undesirable consequences proactively. Our
prior work showed that many of us recognize the value of thinking preemptively
about the perils our research can pose, yet we tend to address them only in
hindsight. How can we change the culture in which considering undesirable
consequences of digital technology is deemed important, but is not commonly
done?
Comment: More details at NSF #2315937: https://www.nsf.gov/awardsearch/showAward?AWD_ID=2315937&HistoricalAwards=fals
"That's important, but...": How Computer Science Researchers Anticipate Unintended Consequences of Their Research Innovations
Computer science research has led to many breakthrough innovations but has
also been scrutinized for enabling technology that has negative, unintended
consequences for society. Given the increasing discussions of ethics in the
news and among researchers, we interviewed 20 researchers in various CS
sub-disciplines to identify whether and how they consider potential unintended
consequences of their research innovations. We show that considering unintended
consequences is generally seen as important but rarely practiced. Principal
barriers are a lack of formal process and strategy as well as the academic
practice that prioritizes fast progress and publications. Drawing on these
findings, we discuss approaches to support researchers in routinely considering
unintended consequences, from bringing diverse perspectives through community
participation to increasing incentives to investigate potential consequences.
We intend for our work to pave the way for routine explorations of the societal
implications of technological innovations before, during, and after the
research process.
Comment: Corresponding author: Rock Yuren Pang, email provided below. Kimberly Do and Rock Yuren Pang contributed equally to this research. The author order is listed alphabetically. To appear in CHI Conference on Human Factors in Computing Systems (CHI '23), April 23-April 28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 16 pages.
Imagine a dragon made of seaweed: How images enhance learning in Wikipedia
Though images are ubiquitous across Wikipedia, it is not obvious that the
image choices optimally support learning. When well selected, images can
enhance learning by dual coding, complementing, or supporting articles. When
chosen poorly, images can mislead, distract, and confuse. We developed a large
dataset containing 470 questions and answers for 94 Wikipedia articles with images
on a wide range of topics. Through an online experiment (n=704), we determined
whether the images displayed alongside the text of the article are effective in
helping readers understand and learn. For certain tasks, such as learning to
identify targets visually (e.g., "which of these pictures is a gujia?"),
article images significantly improve accuracy. Images did not significantly
improve general knowledge questions (e.g., "where are gujia from?"). Most
interestingly, only some images helped with visual knowledge questions (e.g.,
"what shape is a gujia?"). Using our findings, we reflect on the implications
for editors and tools to support image selection.
Comment: 16 pages, 10 figures.
Quantifying visual preferences around the world
Website aesthetics have been recognized as an influential moderator of people's behavior and perception. However, what users perceive as "good design" is subject to individual preferences, which calls into question the feasibility of universal design guidelines. To better understand how people's visual preferences differ, we collected 2.4 million ratings of the visual appeal of websites from nearly 40,000 participants of diverse backgrounds. We address several gaps in the knowledge about the design preferences of previously understudied groups. Among other findings, our results show that the level of colorfulness and visual complexity at which visual appeal is highest varies strongly: females, for example, liked colorful websites more than males, and a high education level generally lowers this preference for colorfulness. Russians preferred lower visual complexity, and Macedonians liked highly colorful designs more than participants from any other country in our dataset. We contribute a computational model and estimates of peak appeal that can be used to support rapid evaluations of website design prototypes for specific target groups.
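The abstract mentions "estimates of peak appeal"; a hypothetical sketch of how such a peak could be estimated for one group follows. The quadratic (inverted-U) model form, variable names, and toy data are assumptions for illustration, not the paper's published model.

```python
# Hypothetical sketch of estimating "peak appeal": fit a curve of appeal
# ratings against one design dimension (e.g., colorfulness) for a single
# demographic group, then take the fitted maximum. Quadratic form assumed.
import numpy as np

def peak_appeal(colorfulness, ratings):
    """Return the colorfulness level at which fitted appeal is highest."""
    a, b, c = np.polyfit(colorfulness, ratings, deg=2)  # appeal ~ ax^2+bx+c
    assert a < 0, "expected an inverted-U (peaked) relationship"
    return -b / (2 * a)  # vertex of the fitted parabola

# Toy data: appeal rises with colorfulness, then falls past a sweet spot.
x = np.linspace(0, 10, 200)
rng = np.random.default_rng(1)
y = -0.3 * (x - 6.2) ** 2 + 7 + rng.normal(0, 0.5, x.size)
print(f"estimated peak appeal at colorfulness ~= {peak_appeal(x, y):.2f}")
```

Comparing such per-group peak estimates (e.g., by gender, education, or country) is one simple way a model like this could support rapid evaluations for specific target groups.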
NLPositionality: Characterizing Design Biases of Datasets and Models
Design biases in NLP systems, such as performance differences for different
populations, often stem from their creators' positionality, i.e., views and
lived experiences shaped by identity and background. Despite the prevalence and
risks of design biases, they are hard to quantify because researcher, system,
and dataset positionality is often unobserved. We introduce NLPositionality, a
framework for characterizing design biases and quantifying the positionality of
NLP datasets and models. Our framework continuously collects annotations from a
diverse pool of volunteer participants on LabintheWild, and statistically
quantifies alignment with dataset labels and model predictions. We apply
NLPositionality to existing datasets and models for two tasks -- social
acceptability and hate speech detection. To date, we have collected 16,299
annotations in over a year for 600 instances from 1,096 annotators across 87
countries. We find that datasets and models align predominantly with Western,
White, college-educated, and younger populations. Additionally, certain groups,
such as non-binary people and non-native English speakers, are further
marginalized by datasets and models as they rank least in alignment across all
tasks. Finally, we draw from prior literature to discuss how researchers can
examine their own positionality and that of their datasets and models, opening
the door for more inclusive NLP systems.
Comment: ACL 2023.
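As an illustration of the alignment computation the framework performs, the sketch below correlates each demographic group's annotations with a dataset's labels on shared instances. The use of Pearson's r and all names here are illustrative assumptions, not the framework's exact implementation.

```python
# Illustrative sketch of the alignment idea behind NLPositionality: for each
# demographic group, correlate that group's annotations with a dataset's
# labels (or a model's predictions) on the same instances.
from collections import defaultdict
from scipy.stats import pearsonr

def alignment_by_group(annotations, reference):
    """annotations: iterable of (instance_id, group, rating);
    reference: dict mapping instance_id -> label or model score."""
    by_group = defaultdict(lambda: ([], []))
    for instance_id, group, rating in annotations:
        if instance_id in reference:
            xs, ys = by_group[group]
            xs.append(rating)
            ys.append(reference[instance_id])
    # Pearson correlation per group; groups with too few points are skipped.
    return {g: pearsonr(xs, ys) for g, (xs, ys) in by_group.items()
            if len(xs) >= 3}

annotations = [(1, "18-24", 0.9), (2, "18-24", 0.1), (3, "18-24", 0.8),
               (1, "55+", 0.2), (2, "55+", 0.9), (3, "55+", 0.4)]
labels = {1: 1.0, 2: 0.0, 3: 1.0}
for group, (r, p) in alignment_by_group(annotations, labels).items():
    print(f"{group}: r={r:.2f} (p={p:.2f})")  # higher r = closer alignment
```

A group that consistently ranks lowest in such alignment scores across tasks, as the abstract reports for non-binary people and non-native English speakers, is one the dataset or model serves least well.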
Neuropsychological functions of nonverbal hand movements and gestures during sports
Emotional body-distant gestures are a prominent feature of winning athletes. Because negative emotions have been associated with increased self-touch behaviour, we investigated the hypothesis that athletes change from a more body-distant nonverbal hand movement behaviour when winning to a body-focused behaviour when losing. Nonverbal hand movements of professional right-handed tennis athletes were videotaped during competition and analyzed by certified raters using the NEUROpsychological GESture (NEUROGES) System. The results showed that losing athletes increase their irregular, on-body, and phasic on-body hand movements, particularly with the left hand. Emotion/attitude rise gestures with the right hand characterised winning athletes. The data suggest that the nonverbal hand movements of athletes serve different neuropsychological functions: winners nonverbally express their positive feelings through body-distant gestures but shift towards their own body to regulate stress when losing.