14 Years of Self-Tracking Technology for mHealth -- Literature Review: Lessons Learnt and the PAST SELF Framework
In today's connected society, many people rely on mHealth and self-tracking
(ST) technology to help them adopt healthier habits with a focus on breaking
their sedentary lifestyle and staying fit. However, there is scarce evidence of
such technological interventions' effectiveness, and there are no standardized
methods to evaluate their impact on people's physical activity (PA) and health.
This work aims to help ST practitioners and researchers by empowering them with
systematic guidelines and a framework for designing and evaluating
technological interventions to facilitate health behavior change (HBC) and user
engagement (UE), focusing on increasing PA and decreasing sedentariness. To
this end, we conduct a literature review of 129 papers between 2008 and 2022,
which identifies the core ST HCI design methods and their efficacy, as well as
the most comprehensive list to date of UE evaluation metrics for ST. Based on
the review's findings, we propose PAST SELF, a framework to guide the design
and evaluation of ST technology that has potential applications in industrial
and scientific settings. Finally, to support researchers and practitioners,
we complement this paper with an open corpus and an online, adaptive
exploration tool for the PAST SELF data.
Hate is not Binary: Studying Abusive Behavior of #GamerGate on Twitter
Over the past few years, online bullying and aggression have become
increasingly prominent, and manifested in many different forms on social media.
However, there is little work analyzing the characteristics of abusive users
and what distinguishes them from typical social media users. In this paper, we
start addressing this gap by analyzing tweets containing a large amount
of abusiveness. We focus on a Twitter dataset revolving around the Gamergate
controversy, which led to many incidents of cyberbullying and cyberaggression
on various gaming and social media platforms. We study the properties of the
users tweeting about Gamergate, the content they post, and the differences in
their behavior compared to typical Twitter users.
We find that while their tweets are often seemingly about aggressive and
hateful subjects, "Gamergaters" do not exhibit common expressions of online
anger, and in fact primarily differ from typical users in that their tweets are
less joyful. They are also more engaged than typical Twitter users, which is an
indication as to how and why this controversy is still ongoing. Surprisingly,
we find that Gamergaters are less likely to be suspended by Twitter, so we
analyze the properties of suspended users to identify how they differ from
typical users and what may have led to their suspension. We perform an unsupervised machine learning
analysis to detect clusters of users who, though currently active, could be
considered for suspension since they exhibit behaviors similar to those of
suspended users. Finally, we confirm the usefulness of our analyzed features by emulating
the Twitter suspension mechanism with a supervised learning method, achieving
very good precision and recall.
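A minimal sketch of this kind of two-stage pipeline (behavioral clustering, then a supervised classifier that emulates the suspension mechanism and is evaluated with precision and recall) is shown below. The features, models, and data are illustrative placeholders, not the paper's actual pipeline.

```python
# Illustrative sketch only: the feature set, KMeans, and RandomForest choices
# are assumptions, not the exact pipeline used in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-user behavioral features: [tweets/day, hashtags/tweet,
# account age, joy score, anger score] -- placeholders for real data.
X = rng.random((1000, 5))
suspended = rng.integers(0, 2, size=1000)  # 1 = suspended by Twitter

# Stage 1: unsupervised clustering to find still-active users whose behavior
# resembles that of suspended users.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
susp_rate_per_cluster = np.array(
    [suspended[kmeans.labels_ == c].mean() for c in range(5)]
)
risky_clusters = np.where(susp_rate_per_cluster > suspended.mean())[0]
candidates = np.where(np.isin(kmeans.labels_, risky_clusters) & (suspended == 0))[0]

# Stage 2: supervised learning that emulates the suspension mechanism,
# evaluated with precision and recall as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, suspended, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
print("active users flagged as suspension-like:", len(candidates))
```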
Uncovering Bias in Personal Informatics
Personal informatics (PI) systems, powered by smartphones and wearables,
enable people to lead healthier lifestyles by providing meaningful and
actionable insights that break down barriers between users and their health
information. Today, such systems are used by billions of users for monitoring
not only physical activity and sleep but also vital signs and women's and heart
health, among others. Despite their widespread usage, the processing of
sensitive PI data may suffer from biases, which may entail practical and
ethical implications. In this work, we present the first comprehensive
empirical and analytical study of bias in PI systems, including biases in raw
data and in the entire machine learning life cycle. We use the most detailed
framework to date for exploring the different sources of bias and find that
biases exist both in the data generation and the model learning and
implementation streams. According to our results, the most affected minority
groups are users with health issues, such as diabetes, joint issues, and
hypertension, and female users, whose data biases are propagated or even
amplified by learning models, while intersectional biases can also be observed.
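The following sketch illustrates the general idea of checking both streams: comparing group representation in the raw data with per-group model error to see whether a learned model propagates or amplifies data bias. The feature names, groups, target, and model are hypothetical and are not the framework used in the paper.

```python
# Illustrative sketch, not the paper's framework: group names, features, and
# the logistic-regression model are assumptions used to show the idea of
# comparing bias in the raw data with bias after model training.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "steps": rng.normal(8000, 2500, n),
    "sleep_hours": rng.normal(7, 1.2, n),
    "group": rng.choice(["no_condition", "diabetes", "hypertension"],
                        size=n, p=[0.8, 0.1, 0.1]),  # under-represented groups
})
# Hypothetical target, e.g. "met weekly activity goal".
df["goal_met"] = (df["steps"] + rng.normal(0, 1500, n) > 7500).astype(int)

# Bias in data generation: how well is each group represented?
print(df["group"].value_counts(normalize=True))

# Bias in model learning: does error concentrate on minority groups?
X = df[["steps", "sleep_hours"]]
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, df["goal_met"], df["group"], test_size=0.3, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)
err = (model.predict(X_te) != y_te)
err_by_group = (pd.DataFrame({"err": err.values, "group": g_te.values})
                  .groupby("group")["err"].mean())
print(err_by_group)  # per-group error rate vs. overall error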
Every vote you make: attachment and state culture predict bipartisanship in U.S. Congress
Do politicians' relational traits predict their bipartisan voting behavior? In this paper, we empirically test and find that relational individual dispositions, namely attachment orientations and conformity to cultural norms, can predict the bipartisan voting behavior of politicians in the United States House of Representatives and Senate. We annotated politicians' tweets using a machine learning approach paired with archival resources to obtain politicians' home-state looseness-tightness culture scores. Anxiously-attached politicians were less likely to be bipartisan than avoidantly-attached individuals. Bipartisan voting behavior was less likely in politicians whose home state was less tolerant of deviation from cultural norms. We discuss these results and possible implications, such as the preemptive assessment of politicians' bipartisanship likelihood based on attachment and state cultural pressure to adhere to group norms.
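As a rough illustration of this type of analysis, the sketch below fits a logistic regression of bipartisan voting on attachment and cultural-tightness predictors. All variable names and data are simulated placeholders, not the study's actual variables or estimation code.

```python
# Illustrative sketch only: predictor names and the simulated effect directions
# mirror the abstract, but this is not the authors' estimation code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500  # hypothetical vote-level observations
attachment_anxiety = rng.normal(0, 1, n)
attachment_avoidance = rng.normal(0, 1, n)
state_tightness = rng.normal(0, 1, n)  # higher = less tolerant of norm deviation

# Simulated outcome consistent with the reported directions: anxiety and
# cultural tightness reduce the odds of a bipartisan vote.
logit = -0.8 * attachment_anxiety - 0.5 * state_tightness + 0.1 * attachment_avoidance
bipartisan_vote = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack(
    [attachment_anxiety, attachment_avoidance, state_tightness]))
model = sm.Logit(bipartisan_vote, X).fit(disp=0)
print(model.summary(xname=["const", "anxiety", "avoidance", "tightness"]))
```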
Large scale crowdsourcing and characterization of Twitter abusive behavior
In recent years online social networks have suffered an increase in sexism, racism, and other types of aggressive and cyberbullying behavior, often manifested through offensive, abusive, or hateful language. Past scientific work focused on studying these forms of abusive activity in popular online social networks, such as Facebook and Twitter. Building on such work, we present an eight-month study of the various forms of abusive behavior on Twitter, in a holistic fashion. Departing from past work, we examine a wide variety of labeling schemes, which cover different forms of abusive behavior. We propose an incremental and iterative methodology that leverages the power of crowdsourcing to annotate a large collection of tweets with a set of abuse-related labels. By applying our methodology and performing statistical analysis for label merging or elimination, we identify a reduced but robust set of labels to characterize abuse-related tweets. Finally, we offer a characterization of our annotated dataset
of 80 thousand tweets, which we make publicly available for further scientific exploration.
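The sketch below illustrates, with toy data and assumed rules, the two ingredients mentioned above: aggregating crowdsourced judgments per tweet and using label co-occurrence statistics to decide which abuse-related labels might be merged or eliminated. It is not the paper's actual methodology.

```python
# Illustrative sketch only: the merge/elimination heuristics are assumptions,
# meant to show the idea of aggregation followed by label consolidation.
import pandas as pd

# Hypothetical crowdsourced annotations: one row per (tweet, worker) judgment.
ann = pd.DataFrame({
    "tweet_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "worker":   ["a", "b", "c"] * 3,
    "label":    ["abusive", "hateful", "abusive",
                 "normal", "normal", "spam",
                 "hateful", "hateful", "abusive"],
})

# Step 1: majority vote per tweet.
majority = (ann.groupby("tweet_id")["label"]
               .agg(lambda s: s.value_counts().idxmax()))
print(majority)

# Step 2: label co-occurrence across tweets; strongly correlated labels are
# candidates for merging, very rare ones for elimination.
onehot = pd.crosstab(ann["tweet_id"], ann["label"]).clip(upper=1)
print(onehot.corr())               # e.g. merge labels that almost always co-occur
print(onehot.sum().sort_values())  # drop labels annotators almost never use
```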
Bias in Internet Measurement Platforms
Network operators and researchers frequently use Internet measurement
platforms (IMPs), such as RIPE Atlas, RIPE RIS, or RouteViews for, e.g.,
monitoring network performance, detecting routing events, topology discovery,
or route optimization. To interpret the results of their measurements and avoid
pitfalls or wrong generalizations, users must understand a platform's
limitations. To this end, this paper studies an important limitation of IMPs,
the bias, which exists due to the non-uniform deployment of the
vantage points. Specifically, we introduce a generic framework to
systematically and comprehensively quantify the multi-dimensional (e.g., across
location, topology, network types, etc.) biases of IMPs. Using the framework
and open datasets, we perform a detailed analysis of biases in IMPs that
confirms well-known (to the domain experts) biases and sheds light on
less-known or unexplored biases. To help IMP users become aware of and explore
the bias in their measurements, and to support further research and analyses
(e.g., methods for mitigating bias), we publicly share our code and data, and
provide online tools (API, Web app, etc.) that calculate and visualize the
bias in measurement setups.
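As an illustration of what quantifying bias along a single dimension can look like, the sketch below compares the distribution of vantage points against the full population of networks using total variation distance; the metric and the example dimension (RIR region) are assumptions for illustration, not necessarily the framework's own definition.

```python
# Illustrative sketch: one possible bias measure along one dimension.
from collections import Counter

def distribution(values):
    counts = Counter(values)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def bias(vantage_points, population):
    """Total variation distance between the vantage-point distribution and the
    distribution of the whole population along one dimension (0 = no bias)."""
    p, q = distribution(vantage_points), distribution(population)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Hypothetical example: RIR region of ASes hosting vantage points vs. all ASes.
all_ases = ["ARIN"] * 30 + ["RIPE"] * 30 + ["APNIC"] * 20 + ["LACNIC"] * 10 + ["AFRINIC"] * 10
vps      = ["RIPE"] * 45 + ["ARIN"] * 10 + ["APNIC"] * 5   # Europe-heavy deployment
print(f"bias along the RIR dimension: {bias(vps, all_ases):.2f}")
```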
Beyond Accuracy: A Critical Review of Fairness in Machine Learning for Mobile and Wearable Computing
The field of mobile, wearable, and ubiquitous computing (UbiComp) is
undergoing a revolutionary integration of machine learning. Devices can now
diagnose diseases, predict heart irregularities, and unlock the full potential
of human cognition. However, the underlying algorithms are not immune to biases
with respect to sensitive attributes (e.g., gender, race), leading to
discriminatory outcomes. The research communities of HCI and AI-Ethics have
recently started to explore ways of reporting information about datasets to
surface and, eventually, counter those biases. The goal of this work is to
explore the extent to which the UbiComp community has adopted such ways of
reporting and highlight potential shortcomings. Through a systematic review of
papers published in the Proceedings of the ACM Interactive, Mobile, Wearable
and Ubiquitous Technologies (IMWUT) journal over the past 5 years (2018-2022),
we found that progress on algorithmic fairness within the UbiComp community
lags behind. Our findings show that only a small portion (5%) of published
papers adheres to modern fairness reporting, while the overwhelming majority
thereof focuses on accuracy or error metrics. In light of these findings, our
work provides practical guidelines for the design and development of ubiquitous
technologies that strive not only for accuracy but also for fairness.
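The sketch below shows a minimal example of the kind of fairness reporting the review advocates: per-group accuracy and true-positive-rate gaps across a sensitive attribute, reported alongside overall accuracy. The attribute, data, and chosen metrics are illustrative assumptions.

```python
# Illustrative sketch only: data and the sensitive attribute are hypothetical.
import numpy as np

def group_report(y_true, y_pred, group):
    """Per-group accuracy and true positive rate for a binary classifier."""
    out = {}
    for g in np.unique(group):
        m = group == g
        pos = np.sum(y_true[m] == 1)
        tp = np.sum((y_pred[m] == 1) & (y_true[m] == 1))
        out[g] = {
            "accuracy": float(np.mean(y_pred[m] == y_true[m])),
            "tpr": float(tp / pos) if pos else float("nan"),
        }
    return out

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 200)
y_pred = np.where(rng.random(200) < 0.85, y_true, 1 - y_true)  # hypothetical model output
gender = rng.choice(["female", "male"], 200)

per_group = group_report(y_true, y_pred, gender)
print("overall accuracy:", float(np.mean(y_pred == y_true)))
print("per-group accuracy and TPR:", per_group)
print("TPR gap:", abs(per_group["female"]["tpr"] - per_group["male"]["tpr"]))
```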