34 research outputs found
Multi-Criteria Assignment Techniques in Multi-Dimensional Neutrosophic Soft Set Theory
In this paper, we introduce a new concept of multi-dimensional neutrosophic soft sets, together with various operations, properties, and theorems on them. We then propose an algorithm, named 2-DNS and based on the proposed two-dimensional neutrosophic soft set, for solving neutrosophic multi-criteria assignment problems with multiple decision makers.
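To fix ideas, the sketch below shows score-based assignment of the kind the abstract describes, using a common single-valued neutrosophic score function and brute-force matching. The score function, the `ratings` layout, and the `best_assignment` helper are illustrative assumptions, not the paper's 2-DNS algorithm.

```python
from itertools import permutations

def score(t, i, f):
    # A widely used single-valued neutrosophic score function; the paper's
    # 2-DNS operators may differ, so treat this as illustrative only.
    return (2 + t - i - f) / 3

def best_assignment(ratings):
    """Brute-force one-to-one assignment maximizing total neutrosophic score.
    ratings[w][k] = (truth, indeterminacy, falsity) of worker w on task k."""
    n = len(ratings)
    best, best_total = None, float("-inf")
    for perm in permutations(range(n)):
        # perm[k] is the worker assigned to task k
        total = sum(score(*ratings[perm[k]][k]) for k in range(n))
        if total > best_total:
            best, best_total = perm, total
    return best, best_total
```

Brute force is only viable for small instances; a real solver would use a polynomial-time method such as the Hungarian algorithm on the score matrix.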
Cost, Precision, and Task Structure in Aggression-based Arbitration for Minimalist Robot Cooperation
Multi-robot systems have the potential to improve performance through parallelism. Unfortunately, interference often diminishes those returns. Starting from the earliest multi-robot research, a variety of arbitration mechanisms have been proposed
to maximize speed-up. Vaughan and his collaborators demonstrated the effectiveness of an arbitration mechanism inspired by biological signalling, in which the level of
aggression displayed by each agent effectively prioritizes the limited resources. But
most often these arbitration mechanisms gave no principled consideration to environmental constraints, task structure, signalling cost, or the precision of the outcome. These factors are taken into consideration in this research, and a taxonomy of the arbitration mechanisms is presented. The taxonomy organizes both prior and newly introduced techniques. The latter include theoretical and practical mechanisms (from minimalist to especially efficient). Practicable
mechanisms were evaluated on physical robots for which both data and models are presented. The arbitration mechanisms described span a whole gamut from implicit
(in the case of robots, entirely without representation) to deliberately coordinated (via an established biological model, reformulated from a Bayesian perspective).
Another significant result of this thesis is a systematic characterization of system
performance across parameters that describe the task structure: patterns of interference are related to a set of strings that can be expressed exactly. This analysis of the domain has the important (and rare) property of completeness, i.e., all possible abstract variations of the task are understood. This research presents efficiency results
showing that a characterization for any given instance can be obtained in sub-linear
time. It has been shown, by construction, that: (1) Even an ideal arbitration mechanism can perform arbitrarily poorly; (2) Agents may manipulate task-structure for individual and collective good; (3) Task variations affect the influence that initial conditions have on long-term behaviour; (4) The most complex interference dynamics
possible for the scenario is limit-cycle behaviour.
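The signalling idea can be sketched in a few lines. The `arbitrate` and `simulate` functions below are hypothetical toys, not the thesis's mechanisms; the rule that aggression grows with waiting time is an assumption chosen to make the alternating, limit-cycle dynamics visible.

```python
import random

def arbitrate(aggressions, rng=None):
    """Winner-take-all arbitration: the agent displaying the highest
    aggression level gains the contested resource; ties break randomly."""
    top = max(aggressions)
    winners = [i for i, a in enumerate(aggressions) if a == top]
    return winners[0] if len(winners) == 1 else (rng or random).choice(winners)

def simulate(steps=4):
    """Aggression builds while an agent waits and resets when it wins,
    so access to the shared resource alternates in a limit cycle."""
    wait, order = [0, 1], []          # agent 1 starts slightly frustrated
    for _ in range(steps):
        winner = arbitrate(wait)
        order.append(winner)
        wait[1 - winner] += 1         # frustration grows while waiting
        wait[winner] = 0              # satisfied agent resets
    return order
```

Running `simulate` shows the two agents alternating access, the simplest instance of the limit-cycle interference dynamics noted in point (4).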
A Survey on Event-based News Narrative Extraction
Narratives are fundamental to our understanding of the world, providing us
with a natural structure for knowledge representation over time. Computational
narrative extraction is a subfield of artificial intelligence that makes heavy
use of information retrieval and natural language processing techniques.
Despite the importance of computational narrative extraction, relatively little
scholarly work exists on synthesizing previous research and strategizing future
research in the area. In particular, this article focuses on extracting news
narratives from an event-centric perspective. Extracting narratives from news
data has multiple applications in understanding the evolving information
landscape. This survey presents an extensive study of research in the area of
event-based news narrative extraction. In particular, we screened over 900
articles that yielded 54 relevant articles. These articles are synthesized and
organized by representation model, extraction criteria, and evaluation
approaches. Based on the reviewed studies, we identify recent trends, open
challenges, and potential research lines.
Comment: 37 pages, 3 figures, to be published in the journal ACM CSU
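One simple representation model from this literature, threading events into timelines, can be sketched briefly. The Jaccard headline similarity and the greedy `thread_events` heuristic below are illustrative assumptions, not methods from any particular surveyed paper.

```python
def jaccard(a, b):
    """Word-overlap similarity between two headlines."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def thread_events(events, threshold=0.2):
    """Greedy single-pass threading: visit events in time order and attach
    each one to the most similar existing thread, or start a new thread."""
    threads = []
    for ts, title in sorted(events):
        best, best_sim = None, threshold
        for th in threads:
            sim = jaccard(title, th[-1][1])
            if sim >= best_sim:
                best, best_sim = th, sim
        if best is None:
            threads.append([(ts, title)])
        else:
            best.append((ts, title))
    return threads
```

Each resulting thread is a chronological chain of related events, a minimal stand-in for the richer narrative representations the survey organizes.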
Assessing enactment of content regulation policies: A post hoc crowd-sourced audit of election misinformation on YouTube
With the 2022 US midterm elections approaching, conspiratorial claims about
the 2020 presidential elections continue to threaten users' trust in the
electoral process. To regulate election misinformation, YouTube introduced
policies to remove such content from its searches and recommendations. In this
paper, we conduct a 9-day crowd-sourced audit on YouTube to assess the extent
of enactment of such policies. We recruited 99 users who installed a browser
extension that enabled us to collect up-next recommendation trails and search
results for 45 videos and 88 search queries about the 2020 elections. We find
that YouTube's search results, irrespective of search query bias, contain more
videos that oppose rather than support election misinformation. However,
watching misinformative election videos still leads users to a small number of
misinformative videos in the up-next trails. Our results imply that while
YouTube largely seems successful in regulating election misinformation, there
is still room for improvement.
Comment: 22 page
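The tallies underlying such an audit can be sketched as follows; the `stance` annotations and both function names are hypothetical stand-ins for the study's manual labels, not its actual pipeline.

```python
def stance_counts(videos, stance):
    """Tally annotated stances over a list of video ids; unannotated
    videos count as neutral."""
    counts = {"supports": 0, "opposes": 0, "neutral": 0}
    for v in videos:
        counts[stance.get(v, "neutral")] += 1
    return counts

def trail_misinfo_rate(trails, stance):
    """Fraction of videos across up-next trails whose annotated stance
    supports election misinformation."""
    recs = [v for trail in trails for v in trail]
    if not recs:
        return 0.0
    return sum(stance.get(v) == "supports" for v in recs) / len(recs)
```

Comparing `stance_counts` over search results against `trail_misinfo_rate` over recommendation trails mirrors the abstract's contrast between the two surfaces.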
ReCOVery: A Multimodal Repository for COVID-19 News Credibility Research
First identified in Wuhan, China, in December 2019, the outbreak of COVID-19
was declared a global emergency in January 2020, and a pandemic in March 2020,
by the World Health Organization (WHO). Along with this pandemic, we are
also experiencing an "infodemic" of information with low credibility such as
fake news and conspiracies. In this work, we present ReCOVery, a repository
designed and constructed to facilitate research on combating such information
regarding COVID-19. We first broadly search and investigate ~2,000 news
publishers, from which 60 are identified with extreme (high or low) levels of
credibility. By inheriting the credibility of the media on which they were
published, a total of 2,029 news articles on coronavirus, published from
January to May 2020, are collected in the repository, along with 140,820 tweets
that reveal how these news articles have spread on the Twitter social network.
The repository provides multimodal information of news articles on coronavirus,
including textual, visual, temporal, and network information. The way that news
credibility is obtained allows a trade-off between dataset scalability and
label accuracy. Extensive experiments are conducted to present data statistics
and distributions, as well as to provide baseline performances for predicting
news credibility so that future methods can be compared. Our repository is
available at http://coronavirus-fakenews.com.
Comment: Proceedings of the 29th ACM International Conference on Information
and Knowledge Management (CIKM '20)
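The distant-supervision labeling scheme the abstract describes, articles inheriting their publisher's credibility, might be sketched like this; the field names and the `label_by_source` helper are assumptions for illustration, not ReCOVery's actual schema.

```python
def label_by_source(articles, publisher_label):
    """Each article inherits its publisher's credibility label, trading
    some label accuracy for cheap scalability, as the abstract notes."""
    return [(art["title"], publisher_label.get(art["publisher"], "unknown"))
            for art in articles]
```

In practice this weak labeling would feed baseline classifiers over the textual, visual, temporal, and network modalities the repository provides.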
Understanding social media credibility
Today, social media provide the means by which billions of people experience news and events happening around the world. We hear about breaking news from people we “follow” on Twitter. We engage in discussions about unfolding news stories with our “friends” on Facebook. We read and respond to strangers sharing newsworthy information on Reddit. Simply put, individuals increasingly rely on social media to share news and information quickly, without relying on established official sources. While on one hand this empowers us with unparalleled information access, on the other hand it presents a new challenge: ensuring that the unfiltered information originating from unofficial sources is credible. In fact, there is a popular narrative that social media are full of inaccurate information. But how much? Does information of dubious credibility have structure, temporal or linguistic? Are there systematic variations in such structures between highly credible and less credible information? This dissertation answers such questions.

Despite many organized attempts along this line of research, the credibility of news and information on social media remains opaque. When you view your social media feed, you have no sense of which parts are reliable and which are not. In other words, we do not understand the basic properties separating credible and non-credible content in our social feeds. This dissertation addresses that gap by building large-scale, generalizable science around credibility in social media. Specifically, it makes the following contributions. First, it offers an iterative framework for systematically tracking the credibility of social media information. The framework combines machine and human computation efficiently to track both less well-known and widespread instances of newsworthy content in real time, followed by crowd-sourced credibility assessments.
Next, by running the framework for several months on the popular social networking site Twitter, I present a corpus (CREDBANK) of newsworthy topics, their associated tweets, and corresponding credibility scores. Combining the massive CREDBANK dataset with linguistic scholarship, I show that a parsimonious language model can predict the credibility of newsworthy topics with an accuracy of 68%, compared to a random baseline of 25%. A deeper look at the most predictive phrases revealed that certain classes of words, such as hedges, were associated with lower credibility, while affirmative booster words were indicative of higher credibility. Next, by investigating differences in temporal dynamics through the lens of collective attention, I demonstrate that recurring attentional bursts are correlated with less credible events.

These results provide a basis for addressing the online misinformation ecosystem. They also open avenues for future research in designing interventions aimed at controlling the spread of false information, or cautioning social media users to be skeptical about an evolving topic's veracity, ultimately raising individuals' capacity to assess the credibility of content shared on social media.
Ph.D
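A toy version of the hedge/booster finding can be sketched with a simple lexicon score; the word lists below are illustrative assumptions, not CREDBANK's learned predictive phrases.

```python
# Illustrative lexicons: hedges signal uncertainty, boosters signal assertion.
HEDGES = {"maybe", "possibly", "allegedly", "reportedly", "might"}
BOOSTERS = {"definitely", "confirmed", "certainly", "clearly"}

def credibility_signal(text):
    """Toy lexicon score: booster words raise the signal, hedges lower it,
    mirroring the direction of the associations reported above."""
    words = text.lower().split()
    return sum(w in BOOSTERS for w in words) - sum(w in HEDGES for w in words)
```

A real model would learn such features (with weights) from annotated data rather than rely on fixed lists, but the sign of the score captures the reported direction of the effect.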