Can we predict a riot? Disruptive event detection using Twitter
In recent years, there has been increased interest in real-world event detection using publicly accessible data made available through Internet technologies such as Twitter, Facebook, and YouTube. In these highly interactive systems, the general public are able to post real-time reactions to “real world” events, thereby acting as social sensors of terrestrial activity. Automatically detecting and categorizing events, particularly small-scale incidents, using streamed data is a non-trivial task, but would be of high value to public safety organisations such as local police, who need to respond accordingly. To address this challenge, we present an end-to-end integrated event detection framework that comprises five main components: data collection, pre-processing, classification, online clustering, and summarization. The integration between classification and clustering enables the detection of events as well as related “disruptive events”: smaller incidents that threaten social safety and security or could disrupt social order. We present an evaluation of the effectiveness of detecting events using a variety of features derived from Twitter posts, namely temporal, spatial, and textual content. We evaluate our framework on a large-scale, real-world dataset from Twitter. Furthermore, we apply our event detection system to a large corpus of tweets posted during the August 2011 riots in England. We use ground-truth data based on intelligence gathered by the London Metropolitan Police Service, which provides a record of actual terrestrial events and incidents during the riots, and show that our system can perform as well as terrestrial sources, and even better in some cases.
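The online-clustering stage described above can be sketched as a simple incremental grouping of posts. The tokenizer, similarity measure, and threshold below are illustrative assumptions, not the paper's actual components, which combine temporal, spatial, and textual features:

```python
# Sketch of an online-clustering stage: each incoming post is compared to
# existing event clusters by token (Jaccard) similarity; it joins the closest
# cluster above a threshold, otherwise it seeds a new candidate event.

def tokens(text):
    """Lowercase whitespace tokenizer (stand-in for real pre-processing)."""
    return set(text.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_stream(posts, threshold=0.3):
    clusters = []  # each cluster: {"terms": set of tokens, "posts": list}
    for post in posts:
        t = tokens(post)
        best, best_sim = None, 0.0
        for c in clusters:
            sim = jaccard(t, c["terms"])
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best["posts"].append(post)
            best["terms"] |= t  # grow the cluster's vocabulary
        else:
            clusters.append({"terms": set(t), "posts": [post]})
    return clusters
```

In a deployed system the clusters would then be passed to the summarization component and filtered by a classifier for disruptive-event relevance.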
From Chaos to Clarity: Claim Normalization to Empower Fact-Checking
With the rise of social media, users are exposed to many misleading claims.
However, the pervasive noise inherent in these posts presents a challenge in
identifying precise and prominent claims that require verification. Extracting
the important claims from such posts is arduous and time-consuming, yet it is
an underexplored problem. Here, we aim to bridge this gap. We introduce a novel
task, Claim Normalization (aka ClaimNorm), which aims to decompose complex and
noisy social media posts into more straightforward and understandable forms,
termed normalized claims. We propose CACN, a pioneering approach that leverages
chain-of-thought and claim check-worthiness estimation, mimicking human
reasoning processes, to comprehend intricate claims. Moreover, we capitalize on
the in-context learning capabilities of large language models to provide
guidance and to improve claim normalization. To evaluate the effectiveness of
our proposed model, we meticulously compile a comprehensive real-world dataset,
CLAN, comprising more than 6k instances of social media posts alongside their
respective normalized claims. Our experiments demonstrate that CACN outperforms
several baselines across various evaluation measures. Finally, our rigorous
error analysis examines CACN's capabilities and pitfalls.
Comment: Accepted at Findings of EMNLP 202
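The in-context-learning idea behind claim normalization can be sketched as follows. The prompt wording, the demonstration pairs, and the `llm` callable are all invented for illustration; they are not the CACN prompts or dataset:

```python
# Illustrative sketch of in-context-learning claim normalization: a few
# demonstration pairs are packed into a prompt, and a language model (any
# text-completion callable supplied by the caller) is asked to rewrite a
# noisy post as a single check-worthy claim.

DEMOS = [  # hypothetical (noisy post, normalized claim) demonstrations
    ("OMG!!! they're saying 5G towers spread the virus... wake up people",
     "5G towers spread the coronavirus."),
    ("my uncle swears lemon water cures diabetes lol share before deleted",
     "Lemon water cures diabetes."),
]

def build_prompt(post):
    lines = ["Rewrite each post as one short, verifiable claim.", ""]
    for raw, norm in DEMOS:
        lines += [f"Post: {raw}", f"Claim: {norm}", ""]
    lines += [f"Post: {post}", "Claim:"]
    return "\n".join(lines)

def normalize_claim(post, llm):
    """`llm` maps a prompt string to a completion string."""
    return llm(build_prompt(post)).strip()
```

CACN additionally interleaves chain-of-thought reasoning and check-worthiness estimation, which this sketch omits.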
AI approaches to understand human deceptions, perceptions, and perspectives in social media
Social media platforms have created virtual spaces for sharing user-generated information and for connecting and interacting among users. However, there are research and societal challenges: 1) users generate and share disinformation; 2) it is difficult to understand citizens' perceptions or opinions expressed on a wide variety of topics; and 3) information overload and echo-chamber problems arise without an overall understanding of the different perspectives taken by different people or groups.
This dissertation addresses these three research challenges with advanced AI and machine learning approaches. To address fake news, i.e., deception about facts, this dissertation presents machine learning models for fake news detection and a hybrid method for identifying the topics of posts, whether fake or real.
To understand users' perceptions of or attitudes toward particular topics, this study analyzes the sentiments expressed in social media text. The sentiment of posts can serve as an indicator of how topics are perceived by users and of how those perceptions, in aggregate, can affect decision makers in government and industry, especially during the COVID-19 pandemic. It is difficult to measure public perception of government policies issued during the pandemic: citizen responses to these policies are diverse, ranging from security or goodwill to confusion, fear, or anger. This dissertation provides a near-real-time approach to tracking and monitoring public reactions to government policies by continuously collecting and analyzing Twitter posts about the COVID-19 pandemic.
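The tracking idea above can be sketched as a daily aggregation of per-tweet sentiment scores. The tiny lexicon and scoring rule are toy stand-ins for the dissertation's actual sentiment models:

```python
from collections import defaultdict

# Minimal sketch of near-real-time sentiment tracking: score each tweet
# against a small sentiment lexicon, then average scores per day to obtain
# a time series of public reaction toward a policy.

LEXICON = {"good": 1, "great": 1, "safe": 1, "bad": -1, "fear": -1, "angry": -1}

def score(text):
    """Sum of lexicon weights for the words in a tweet (unknown words = 0)."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

def daily_sentiment(tweets):
    """tweets: iterable of (date_string, text); returns {date: mean score}."""
    totals, counts = defaultdict(float), defaultdict(int)
    for day, text in tweets:
        totals[day] += score(text)
        counts[day] += 1
    return {day: totals[day] / counts[day] for day in totals}
```

A streaming deployment would feed newly collected tweets into the same aggregation continuously, which is what makes the monitoring near real time.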
To address social media's overwhelming number of posts, content echo chambers, and information isolation, this dissertation provides a multiple-view summarization framework in which the same content can be summarized according to different perspectives. The framework includes components for choosing the perspectives and advanced text-summarization approaches.
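The multiple-view idea can be sketched as re-ranking the same sentences under different perspective keyword sets, so each view surfaces different content. Keyword-overlap scoring is an assumption for illustration; the dissertation uses more advanced summarization approaches:

```python
# Toy sketch of perspective-conditioned extractive summarization: one pool
# of sentences, one ranking per perspective, so each "view" of the summary
# highlights the content most relevant to that perspective.

def summarize_by_view(sentences, perspectives, top_n=1):
    views = {}
    for name, keywords in perspectives.items():
        kws = {k.lower() for k in keywords}
        ranked = sorted(
            sentences,
            key=lambda s: len(kws & set(s.lower().split())),
            reverse=True,
        )
        views[name] = ranked[:top_n]
    return views
```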
The proposed approaches are demonstrated with a prototype system that continuously collects Twitter data about COVID-19 government health policies and analyzes citizen concerns about those policies; the data are also analyzed for fake news detection and for generating multiple-view summaries.
DocTag2Vec: An Embedding Based Multi-label Learning Approach for Document Tagging
Tagging news articles or blog posts with relevant tags from a collection of
predefined ones is coined as document tagging in this work. Accurate tagging of
articles can benefit several downstream applications such as recommendation and
search. In this work, we propose a novel yet simple approach called DocTag2Vec
to accomplish this task. We substantially extend Word2Vec and Doc2Vec---two
popular models for learning distributed representation of words and documents.
In DocTag2Vec, we simultaneously learn the representation of words, documents,
and tags in a joint vector space during training, and employ the simple
k-nearest-neighbor search to predict tags for unseen documents. In contrast
to previous multi-label learning methods, DocTag2Vec deals directly with raw
text instead of provided feature vectors and, in addition, enjoys advantages
such as learning tag representations and the ability to handle newly
created tags. To demonstrate the effectiveness of our approach, we conduct
experiments on several datasets and show promising results against
state-of-the-art methods.
Comment: 10 pages
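The prediction step described in the abstract can be sketched directly: with documents and tags embedded in one joint vector space, the tags for a new document are simply its k nearest tag vectors. The toy vectors below are invented; the real model learns them jointly during training:

```python
import math

# Sketch of DocTag2Vec-style tag prediction: rank all tag vectors by cosine
# similarity to the document vector and return the top k as predicted tags.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_tags(doc_vec, tag_vecs, k=2):
    """tag_vecs: {tag_name: vector}; returns the k most similar tag names."""
    ranked = sorted(tag_vecs, key=lambda t: cosine(doc_vec, tag_vecs[t]), reverse=True)
    return ranked[:k]
```

Because tags live in the same space as documents, a newly created tag only needs an embedding to become predictable, which is the advantage over fixed-output multi-label classifiers noted in the abstract.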
Human-Centered Technologies for Inclusive Collection and Analysis of Public-Generated Data
Public engagement platforms such as social media, customer review websites, and public input solicitation efforts have risen meteorically in popularity and strive to establish an inclusive environment for the public to share their thoughts, ideas, opinions, and experiences. Many decisions made at a personal, local, or national scale are fueled by data generated by the public. As such, inclusive collection, analysis, sensemaking, and utilization of public-generated data are crucial to successful decision-making processes. However, people often struggle to engage, participate, and share their opinions due to inaccessibility, the rigidity of traditional public engagement methods, and the lack of options for providing opinions while avoiding potential confrontation. Concurrently, data analysts and decision-makers grapple with the challenges of analyzing, making sense of, and making informed decisions based on public-generated data; these challenges include high dimensionality, the ambiguity inherent in human language, and a lack of tools and techniques catering to their needs. Novel technological interventions are therefore necessary to enable the public to share their input without barriers and to allow decision-makers to capture, forage, peruse, and distill public-generated data into concrete and actionable insights.
The goal of this dissertation is to demonstrate how human-centered approaches that involve stakeholders in the design, development, and evaluation of tools and techniques can lead to inclusive, effective, and efficient collection and analysis of public-generated data in support of informed decision-making. To that end, I first addressed the challenges of empowering the public to share their opinions by exploring two major opinion-sharing avenues: social media and public consultation. To learn more about people's social media experiences and challenges, I built two technology probes and conducted a qualitative exploratory study with 16 participants. I followed up this study by exploring the challenges of inclusive participation in public consultations such as town halls. Based on a formative study with 66 participants and 20 organizers, I designed and developed CommunityClick to enable reticent attendees to share their opinions silently and anonymously during town halls. Equipped with the knowledge and experience from these works, I designed, developed, and evaluated technologies and methods to facilitate and accelerate informed data-driven decision-making based on the increased volume of public-generated data. Based on interviews with 14 analysts and decision-makers in the civic domain, I built a visual analytics system, CommunityPulse, that facilitates public input analysis by surfacing hidden insights, people's reflections, and priorities. Leveraging the lessons learned during this work, I created a visual text analytics system that supports serendipitous discovery and balanced analysis of textual data to help make informed decisions.
In this work, I contribute an understanding of how people collect and analyze public-generated data to fuel their decisions when they have increased exposure to alternative avenues for opinion-sharing. Through a series of human-centered studies, I highlight the challenges that inhibit inclusivity in opinion sharing and the shortcomings of existing methods that prevent decision-makers from accounting for comprehensive public input, including marginalized or unpopular opinions. To address these challenges, I designed, developed, and evaluated a collection of interactive systems: CommunityClick, CommunityPulse, and Serendyze. Through a rigorous set of evaluation strategies, including creativity sessions, controlled lab studies, in-the-wild deployment, and field experiments, I involved stakeholders to assess the effectiveness and utility of the built systems. Through the empirical evidence from these studies, I demonstrate how alternative designs for social media could enhance people's social media experiences and enable them to make new connections with others to share opinions. In addition, I show how CommunityClick can enable reticent attendees at public consultations to share their opinions while avoiding unwanted confrontation, and allow organizers to capture and account for silent feedback. I highlight how CommunityPulse allowed analysts and decision-makers to examine public input from multiple angles for accelerated analysis and more informed decision-making. Furthermore, I demonstrate how supporting serendipitous discovery and balanced analysis with Serendyze can lead to more informed data-driven decision-making.
I conclude the dissertation with a discussion of future avenues for expanding this research, including facilitating multi-user collaborative analysis, integrating multi-modal signals into the analysis of public-generated data, and potential adoption strategies for decision-support systems designed for inclusive collection and analysis of public-generated data.