
    Examining the Impact of Uncontrolled Variables on Physiological Signals in User Studies for Information Processing Activities

    Physiological signals can potentially be applied as objective measures to understand the behavior and engagement of users interacting with information access systems. However, the signals are highly sensitive, and many controls are required in laboratory user studies. To investigate the extent to which controlled or uncontrolled (i.e., confounding) variables such as task sequence or duration influence the observed signals, we conducted a pilot study where each participant completed four types of information-processing activities (READ, LISTEN, SPEAK, and WRITE) while we collected data on blood volume pulse, electrodermal activity, and pupil responses. We then used machine learning approaches to examine the influence of controlled and uncontrolled variables that commonly arise in user studies. Task duration was found to have a substantial effect on model performance, suggesting that it captures individual differences rather than giving insight into the target variables. This work contributes to our understanding of how such variables affect the use of physiological signals in information retrieval user studies. Comment: Accepted to the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23).
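
    To illustrate the kind of confound check described above, the sketch below trains a classifier to predict the activity label with and without task duration as a feature; the data file, feature columns, and model are assumptions for illustration, not the paper's actual pipeline.

        # Sketch: does adding a potentially confounding variable (task duration)
        # inflate a model meant to predict the activity type from physiology?
        # The CSV, feature names, and label column are hypothetical.
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        df = pd.read_csv("physio_windows.csv")                # hypothetical per-window features
        physio_cols = ["bvp_mean", "eda_mean", "pupil_mean"]   # hypothetical column names
        y = df["activity"]                                     # READ / LISTEN / SPEAK / WRITE

        for cols in (physio_cols, physio_cols + ["task_duration"]):
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            acc = cross_val_score(clf, df[cols], y, cv=5).mean()
            print(cols, "mean accuracy: %.3f" % acc)
        # A large jump when task_duration is added would suggest the model exploits
        # the confound rather than the physiological signals themselves.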

    Are footpaths encroached by shared e-scooters? Spatio-temporal Analysis of Micro-mobility Services

    Micro-mobility services (e.g., e-bikes, e-scooters) are increasingly popular among urban communities, offering a flexible transport option that brings both opportunities and challenges. As micro-mobility is a growing mode of transportation, insights gained from its usage data are valuable for policy formulation and for improving the quality of services. Existing research analyses patterns and features associated with usage distributions in different localities, focusing on either temporal or spatial aspects. In this paper, we employ a combination of methods that analyse both spatial and temporal characteristics of e-scooter trips at a more granular level, enabling observations at different time frames and in local geographical zones that prior analyses could not provide. The insights obtained from anonymised, restricted data on shared e-scooter rides show the applicability of the employed methods to regulated, privacy-preserving micro-mobility trip data. Our results show that population density is the most important feature and is positively associated with e-scooter usage. The share of the population owning motor vehicles is negatively associated with shared e-scooter trips, suggesting lower e-scooter usage among motor vehicle owners. Furthermore, we found that humidity is more important than precipitation in predicting hourly e-scooter trip counts. Buffer analysis showed that nearly 29% of trips ended and 27% of trips started on footpaths, revealing high utilisation of footpaths for parking e-scooters in Melbourne. Comment: Accepted to the IEEE International Conference on Mobile Data Management.
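
    As a rough illustration of the buffer analysis mentioned above, the sketch below checks what share of trip end points fall within a small buffer around footpath geometries; the layer names, CRS, and 2 m buffer distance are assumptions, not the paper's exact settings.

        # Sketch: share of e-scooter trip end points falling within a buffer
        # around footpath geometries. Layer names, CRS, and the 2 m buffer
        # distance are assumptions for illustration.
        import geopandas as gpd

        trips = gpd.read_file("trip_endpoints.gpkg")    # hypothetical point layer
        footpaths = gpd.read_file("footpaths.gpkg")     # hypothetical line layer

        # Use a metric CRS so the buffer distance is in metres (MGA zone 55 covers Melbourne).
        trips = trips.to_crs(epsg=28355)
        footpaths = footpaths.to_crs(epsg=28355)

        footpath_buffer = footpaths.buffer(2.0).unary_union    # 2 m buffer, dissolved
        on_footpath = trips.geometry.within(footpath_buffer)
        print("Trips ending on a footpath: %.1f%%" % (100 * on_footpath.mean()))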

    Designing and Evaluating Presentation Strategies for Fact-Checked Content

    With the rapid growth of online misinformation, reliable fact-checking methods are crucial. Recent research on finding check-worthy claims and on automated fact-checking has made significant advances. However, limited guidance exists on how to present fact-checked content so that verified information is conveyed to users effectively. We address this research gap by exploring the critical design elements in fact-checking reports and investigating whether credibility- and presentation-based design improvements can enhance users' ability to interpret the reports accurately. We co-developed potential content presentation strategies through a workshop involving fact-checking professionals, communication experts, and researchers. The workshop examined the significance and utility of elements such as veracity indicators and explored the feasibility of incorporating interactive components for enhanced information disclosure. Building on the workshop outcomes, we conducted an online experiment involving 76 crowd workers to assess the efficacy of different design strategies. The results indicate that the proposed strategies significantly improve users' ability to accurately interpret the verdict of fact-checking articles. Our findings underscore the critical role of effective presentation of fact-checking reports in addressing the spread of misinformation. By adopting appropriate design enhancements, the effectiveness of fact-checking reports can be maximized, enabling users to make informed judgments. Comment: Accepted to the 32nd ACM International Conference on Information and Knowledge Management (CIKM '23).
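
    A minimal sketch of how the effect of a presentation strategy on interpretation accuracy could be tested, using a two-proportion z-test; the counts below are hypothetical placeholders, not the experiment's results.

        # Sketch: compare the share of participants who correctly interpret the
        # verdict under a baseline versus an enhanced presentation.
        # The counts are hypothetical placeholders, not the paper's data.
        from statsmodels.stats.proportion import proportions_ztest

        correct = [22, 31]   # correct interpretations: [baseline, enhanced] (hypothetical)
        shown = [38, 38]     # participants per condition (hypothetical)
        stat, pval = proportions_ztest(correct, shown)
        print("z = %.2f, p = %.4f" % (stat, pval))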

    How Crowd Worker Factors Influence Subjective Annotations: A Study of Tagging Misogynistic Hate Speech in Tweets

    Crowdsourced annotation is vital both to collecting labelled data to train and test automated content moderation systems and to supporting human-in-the-loop review of system decisions. However, annotation tasks such as judging hate speech are subjective and thus highly sensitive to biases stemming from annotator beliefs, characteristics, and demographics. We conduct two crowdsourcing studies on Mechanical Turk to examine annotator bias in labelling sexist and misogynistic hate speech. Results from 109 annotators show that annotators' political inclination, moral integrity, personality traits, and sexist attitudes significantly impact annotation accuracy and the tendency to tag content as hate speech. In addition, semi-structured interviews with nine crowd workers provide further insights into the influence of subjectivity on annotations. In exploring how workers interpret a task, shaped by complex negotiations between platform structures, task instructions, subjective motivations, and external contextual factors, we see that annotations are not only impacted by worker factors but also simultaneously shaped by the structures under which workers labour. Comment: Accepted to the 11th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2023).
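
    One way to probe such worker-factor effects is a logistic regression of annotation correctness on worker attributes, sketched below with a hypothetical per-annotation table; the column names and model are illustrative and not the paper's exact analysis.

        # Sketch: regress annotation correctness on worker-level factors.
        # The data file and column names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        # One row per (worker, tweet) judgement; 'correct' is 1 if the label
        # matches the reference label, and the predictors are worker-factor scores.
        df = pd.read_csv("annotations.csv")
        model = smf.logit("correct ~ sexism_score + political_leaning + C(gender)", data=df)
        print(model.fit().summary())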

    How Context Influences Cross-Device Task Acceptance in Crowd Work

    Although crowd work is typically completed by workers at home through desktop or laptop computers, the literature has shown that crowdsourcing is feasible through a wide array of computing devices, including smartphones and digital voice assistants. An integrated crowdsourcing platform that operates across multiple devices could provide greater flexibility to workers, but there is little understanding of crowd workers' perceptions of taking up crowd tasks across multiple contexts through such devices. Using a crowdsourcing survey task, we investigate workers' willingness to accept different types of crowd tasks presented on three device types in scenarios of varying location, time, and social context. Through analysis of over 25,000 responses received from 329 crowd workers on Amazon Mechanical Turk, we show that when tasks are presented on different devices, the task acceptance rate is 80.5% on personal computers, 77.3% on smartphones, and 70.7% on digital voice assistants. Our results also show how contextual factors such as location, social context, and time influence workers' decisions to accept a task on a given device. Our findings provide important insights towards the development of effective task assignment mechanisms for cross-device crowd platforms.
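
    A small sketch of how per-device acceptance rates could be computed and tested for an association with device type; the data file and column names are hypothetical.

        # Sketch: acceptance rate per device and a chi-square test of independence
        # between device type and acceptance. Data file and columns are hypothetical.
        import pandas as pd
        from scipy.stats import chi2_contingency

        df = pd.read_csv("task_responses.csv")            # one row per presented task offer
        print(df.groupby("device")["accepted"].mean())    # acceptance rate per device

        table = pd.crosstab(df["device"], df["accepted"])
        chi2, p, dof, _ = chi2_contingency(table)
        print("chi2 = %.2f, dof = %d, p = %.4f" % (chi2, dof, p))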

    CrowdCog: A Cognitive Skill based System for Heterogeneous Task Assignment and Recommendation in Crowdsourcing

    While crowd workers typically complete a variety of tasks on crowdsourcing platforms, there is no widely accepted method for successfully matching workers to different types of tasks. Researchers have considered using worker demographics, behavioural traces, and prior task completion records to optimise task assignment. However, optimal task assignment remains a challenging research problem due to the limitations of proposed approaches, which in turn can have a significant impact on the future of crowdsourcing. We present 'CrowdCog', an online dynamic system that performs both task assignment and task recommendation by relying on fast-paced online cognitive tests to estimate worker performance across a variety of tasks. Our work extends prior work that highlights the effect of workers' cognitive abilities on crowdsourcing task performance. Our study, deployed on Amazon Mechanical Turk, involved 574 workers and 983 HITs spanning four typical crowd tasks (Classification, Counting, Transcription, and Sentiment Analysis). Our results show that both our assignment method and our recommendation method yield a significant performance increase (5% to 20%) compared to generic or random task assignment. Our findings pave the way for the use of quick cognitive tests to provide robust recommendations and assignments to crowd workers.
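
    The toy sketch below illustrates the general idea of cognitive-skill-based recommendation (not CrowdCog's actual model): rank task types for a worker by a weighted combination of cognitive test scores; the weights and test names are made up for illustration.

        # Toy sketch of cognitive-skill-based task recommendation (not the paper's
        # actual CrowdCog model): score each task type as a weighted combination of
        # a worker's cognitive test results. Weights and test order are made up.
        import numpy as np

        # Hypothetical weights over three cognitive tests: [memory, attention, verbal].
        WEIGHTS = {
            "classification": np.array([0.6, 0.2, 0.2]),
            "counting": np.array([0.2, 0.7, 0.1]),
            "transcription": np.array([0.3, 0.1, 0.6]),
            "sentiment_analysis": np.array([0.5, 0.1, 0.4]),
        }

        def recommend(cognitive_scores, k=2):
            """Return the k task types with the highest predicted fit for this worker."""
            ranked = sorted(WEIGHTS, key=lambda t: WEIGHTS[t] @ cognitive_scores, reverse=True)
            return ranked[:k]

        print(recommend(np.array([0.9, 0.4, 0.7])))   # e.g. ['sentiment_analysis', 'classification']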

    Effect of conformity on perceived trustworthiness of news in social media

    A catalyst for the spread of fake news is the existence of comments that users make in support of, or against, such articles. In this article, we investigate whether critical and supportive comments can induce conformity in how readers perceive the trustworthiness of news articles and respond to them. We find that individuals tend to conform to the majority's opinion of an article's trustworthiness (58%), especially when challenged by larger majorities who are critical of the article's credibility, or when less confident in their personal judgment. Moreover, we find that individuals who conform are more inclined to take action: to report articles they perceive as fake, and to comment on and share articles they perceive as real. We conclude with a discussion of the implications of our findings for mitigating the dispersion of fake news on social media.

    Augmenting Automated Kinship Verification with Targeted Human Input

    Kinship verification is the problem whereby a third party determines whether two people are related. Despite previous research in Psychology and Machine Vision, the factors affecting a person's verification ability are poorly understood. Through an online crowdsourcing study, we investigate the impact of gender, race, and medium type (image vs. video) on kinship verification, taking into account the demographics of both raters and ratees. A total of 325 workers completed over 50,000 kinship verification tasks consisting of pairs of faces shown in images and videos from three widely used datasets. Our results identify an own-race bias and a higher verification accuracy for same-gender image pairs than for opposite-gender image pairs. They also demonstrate that humans can still outperform current state-of-the-art automated unsupervised approaches. Furthermore, we show that humans perform better when presented with videos instead of still images. Our findings contribute to the design of future human-in-the-loop kinship verification tasks, including time-critical use cases such as identifying missing persons.
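
    As a sketch of how crowd judgements could be aggregated and scored against ground truth, the snippet below takes a majority vote per face pair and computes accuracy; the data file and columns are hypothetical and do not reproduce the paper's datasets or automated baselines.

        # Sketch: aggregate crowd kinship judgements by majority vote per face pair
        # and compute accuracy against ground truth. File and columns are hypothetical.
        import pandas as pd

        df = pd.read_csv("kinship_judgements.csv")   # columns: pair_id, judgement (0/1), is_kin (0/1)
        majority = df.groupby("pair_id")["judgement"].mean().round().astype(int)   # ties (0.5) round to 0
        truth = df.groupby("pair_id")["is_kin"].first()
        print("Crowd majority-vote accuracy: %.3f" % (majority == truth).mean())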