What proof do we prefer? Variants of verifiability in voting
In this paper, we discuss one particular feature of Internet
voting, verifiability, against the background of scientific
literature and experiments in the Netherlands. In order
to conceptually clarify what verifiability is about, we distinguish
classical verifiability from constructive verifiability in
both individual and universal verification. In classical individual
verifiability, a proof that a vote has been counted can
be given without revealing the vote. In constructive individual
verifiability, a proof is only accepted if the witness (i.e.
the vote) can be reconstructed. Analogous concepts are defined
for universal verifiability of the tally. The RIES system
used in the Netherlands establishes constructive individual
verifiability and constructive universal verifiability,
whereas many advanced cryptographic systems described
in the scientific literature establish classical individual
verifiability and classical universal verifiability.
If systems with a particular kind of verifiability continue
to be used successfully in practice, this may influence the
way in which people are involved in elections, and their image
of democracy. Thus, the choice of a particular kind
of verifiability in an experiment may have political consequences.
We recommend making a well-informed democratic
choice for the way in which both individual and universal
verifiability should be realised in Internet voting, in
order to avoid these unconscious political side-effects of the
technology used. The safest choice in this respect, which
maintains most properties of current elections, is classical
individual verifiability combined with constructive universal
verifiability. We would like to encourage discussion
about the feasibility of this direction in scientific research.
Identifying Purpose Behind Electoral Tweets
Tweets pertaining to a single event, such as a national election, can number
in the hundreds of millions. Automatically analyzing them is beneficial in many
downstream natural language applications such as question answering and
summarization. In this paper, we propose a new task: identifying the purpose
behind electoral tweets--why do people post election-oriented tweets? We show
that identifying purpose is correlated with, yet significantly different
from, the related phenomena of sentiment and emotion detection. Detecting purpose has a
number of applications including detecting the mood of the electorate,
estimating the popularity of policies, identifying key issues of contention,
and predicting the course of events. We create a large dataset of electoral
tweets and annotate a few thousand tweets for purpose. We develop a system that
automatically classifies electoral tweets as per their purpose, obtaining an
accuracy of 43.56% on an 11-class task and an accuracy of 73.91% on a 3-class
task (both accuracies well above the most-frequent-class baseline). Finally, we
show that resources developed for emotion detection are also helpful for
detecting purpose.
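As a rough illustration of the classification task above, here is a hand-rolled multinomial Naive Bayes over bag-of-words features. The purpose labels and example tweets are invented for illustration; this is not the authors' 11-class scheme or their system.

```python
# Minimal sketch of a purpose classifier for electoral tweets, using a
# hand-rolled multinomial Naive Bayes. Labels and tweets are illustrative.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (tweet, purpose-label) pairs."""
    class_counts = Counter(label for _, label in samples)
    word_counts = defaultdict(Counter)      # label -> word frequencies
    vocab = set()
    for text, label in samples:
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict(model, text):
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, c in class_counts.items():
        lp = math.log(c / total)            # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):            # Laplace-smoothed likelihoods
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

samples = [
    ("get out and vote for our candidate", "support"),
    ("proud to support her campaign", "support"),
    ("this tax plan will ruin us", "oppose"),
    ("he lied again in the debate", "oppose"),
    ("polling stations open at 8am", "inform"),
    ("the debate airs tonight at nine", "inform"),
]
model = train(samples)
print(predict(model, "i support this candidate"))   # support
```

A real system would of course need far richer features (emotion lexicons, hashtags, n-grams) and thousands of annotated tweets, as the abstract describes.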
Illuminating an Ecosystem of Partisan Websites
This paper aims to shed light on alternative news media ecosystems that are
believed to have influenced opinions and beliefs by false and/or biased news
reporting during the 2016 US Presidential Elections. We examine a large,
professionally curated list of 668 hyper-partisan websites and their
corresponding Facebook pages, and identify key characteristics that mediate the
traffic flow within this ecosystem. We uncover a pattern of new websites being
established in the run-up to the elections, and abandoned after. Such websites
form an ecosystem by linking from one website to another and by `liking'
each other's Facebook pages. These practices are highly effective in directing
user traffic internally within the ecosystem in a highly partisan manner, with
right-leaning sites linking to and liking other right-leaning sites and
similarly left-leaning sites linking to other sites on the left, thus forming a
filter bubble amongst news producers similar to the filter bubble which has
been widely observed among consumers of partisan news. While there is
activity on both left- and right-leaning sites, right-leaning sites are more
evolved, accounting for a disproportionate number of abandoned websites and
partisan internal links. We also examine demographic characteristics of
consumers of hyper-partisan news and find that some of the more populous
demographic groups in the US tend to be consumers of more right-leaning sites.
Comment: Published at The Web Conference 2018 (WWW 2018). Please cite the WWW version.
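The within-ecosystem partisanship described above can be quantified with a very simple metric: the share of hyperlinks that stay on the same side of the political spectrum. The site names, leanings, and links below are invented; the paper works from a curated list of 668 real sites.

```python
# Toy sketch of measuring partisan linking behaviour: the fraction of
# hyperlinks whose source and destination share the same leaning.
def partisan_link_share(links, leaning):
    """links: list of (src, dst) site pairs; leaning: site -> 'L' | 'R'."""
    same = sum(1 for a, b in links if leaning[a] == leaning[b])
    return same / len(links)

leaning = {"redreport.example": "R", "eaglenews.example": "R",
           "bluewire.example": "L", "leftledger.example": "L"}
links = [("redreport.example", "eaglenews.example"),
         ("eaglenews.example", "redreport.example"),
         ("bluewire.example", "leftledger.example"),
         ("redreport.example", "bluewire.example")]

print(partisan_link_share(links, leaning))   # 0.75
```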
User Research of a Voting Machine: Preliminary Findings and Experiences
This paper describes a usability study of the Nedap voting machine in the Netherlands. On the day of the national elections, 566 voters participated in our study immediately after having cast their real vote. The research focused on the correspondence between voter intents and voting results, distinguishing between usability (correspondence between voter intents and voter input) and machine reliability (correspondence between voter input and machine output). For the sake of comparison, participants also cast their votes using a paper ballot.
The machine reliability appeared to be 100%, indicating that, within our sample, all votes that had been cast were correctly represented in the output of the voting machine. Regarding usability, 1.4% of the participants had cast the wrong vote using the voting machine. This percentage was similar to that of the paper ballot.
Practical implications as well as experiences with this type of usability testing are discussed.
Vulnerability analysis of three remote voting methods
This article analyses three methods of remote voting in an uncontrolled
environment: postal voting, internet voting and hybrid voting. It breaks down
the voting process into different stages and compares their vulnerabilities
considering criteria that must be respected in any democratic vote:
confidentiality, anonymity, transparency, vote unicity and authenticity.
Whether for safety or reliability, each vulnerability is quantified by three
parameters: size, visibility and difficulty to achieve. The study concludes
that the automation of processing, combined with the dematerialisation of
the objects used during an election, tends to replace visible vulnerabilities
of lesser magnitude with invisible and widespread ones.
Comment: 15 pages
A rule dynamics approach to event detection in Twitter with its application to sports and politics
The increasing popularity of Twitter as a social network tool for opinion expression as well as information retrieval has resulted in the need to derive computational means to detect and track relevant topics/events in the network. The application of topic detection and tracking methods to tweets enables users to extract newsworthy content from the vast and somewhat chaotic Twitter stream. In this paper, we apply our technique, named Transaction-based Rule Change Mining, to extract newsworthy hashtag keywords present in tweets from two different domains, namely sports (The English FA Cup 2012) and politics (US Presidential Elections 2012 and Super Tuesday 2012). Noting the peculiar nature of event dynamics in these two domains, we apply different time-windows and update rates to each of the datasets in order to study their impact on performance. The results reveal that our approach is able to accurately detect and track newsworthy content. In addition, the results show that adapting the time-window yields better performance, especially on the sports dataset, which can be attributed to the usually shorter duration of football events.
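In the spirit of the rule-change idea above, a minimal sketch is to flag hashtags whose frequency jumps between consecutive time windows as candidate events. The window contents, minimum count, and growth ratio below are assumptions, not the paper's parameters or its actual algorithm.

```python
# Illustrative window-based hashtag event detection: hashtags whose count
# grows sharply from one time window to the next are flagged.
from collections import Counter

def emerging_hashtags(prev_window, curr_window, min_count=3, ratio=2.0):
    """Return hashtags whose count in the current window is at least
    `ratio` times their count in the previous window."""
    prev = Counter(prev_window)
    curr = Counter(curr_window)
    flagged = []
    for tag, c in curr.items():
        if c >= min_count and c >= ratio * max(prev[tag], 1):
            flagged.append(tag)
    return sorted(flagged)

# Two consecutive time windows of hashtags extracted from tweets.
window_1 = ["#facup"] * 2 + ["#election"] * 5
window_2 = ["#facup"] * 9 + ["#election"] * 6 + ["#goal"] * 4

print(emerging_hashtags(window_1, window_2))   # ['#facup', '#goal']
```

Shortening the window (as the paper does for the fast-moving sports domain) makes such a detector more sensitive to bursts but noisier, which matches the trade-off the abstract reports.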
Detecting and Tracking the Spread of Astroturf Memes in Microblog Streams
Online social media are complementing and in some cases replacing
person-to-person social interaction and redefining the diffusion of
information. In particular, microblogs have become crucial grounds on which
public relations, marketing, and political battles are fought. We introduce an
extensible framework that will enable the real-time analysis of meme diffusion
in social media by mining, visualizing, mapping, classifying, and modeling
massive streams of public microblogging events. We describe a Web service that
leverages this framework to track political memes in Twitter and help detect
astroturfing, smear campaigns, and other misinformation in the context of U.S.
political elections. We present some cases of abusive behaviors uncovered by
our service. Finally, we discuss promising preliminary results on the detection
of suspicious memes via supervised learning based on features extracted from
the topology of the diffusion networks, sentiment analysis, and crowdsourced
annotations.
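The topological features mentioned above might look something like the following sketch; the paper does not list its exact feature set here, so the features and the example cascade are illustrative assumptions.

```python
# Hedged sketch of simple topological features one could extract from a
# meme's retweet diffusion network for supervised classification.
def diffusion_features(edges):
    """edges: list of (retweeter, source) pairs for one meme."""
    nodes = {u for e in edges for u in e}
    out_deg = {}
    for retweeter, source in edges:
        out_deg[source] = out_deg.get(source, 0) + 1
    hub = max(out_deg.values()) if out_deg else 0
    return {
        "n_users": len(nodes),
        "n_edges": len(edges),
        "max_hub_degree": hub,                       # size of the dominant hub
        "hub_concentration": hub / len(edges) if edges else 0.0,
    }

# A star-like cascade: most retweets come straight from one seed account,
# a shape often associated with orchestrated (astroturf) amplification.
edges = [("u1", "seed"), ("u2", "seed"), ("u3", "seed"), ("u4", "u1")]
print(diffusion_features(edges))
# {'n_users': 5, 'n_edges': 4, 'max_hub_degree': 3, 'hub_concentration': 0.75}
```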
Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign
Until recently, social media was seen to promote democratic discourse on
social and political issues. However, this powerful communication platform has
come under scrutiny for allowing hostile actors to exploit online discussions
in an attempt to manipulate public opinion. A case in point is the ongoing U.S.
Congress' investigation of Russian interference in the 2016 U.S. election
campaign, with Russia accused of using trolls (malicious accounts created to
manipulate) and bots to spread misinformation and politically biased
information. In this study, we explore the effects of this manipulation
campaign, taking a closer look at users who re-shared the posts produced on
Twitter by the Russian troll accounts publicly disclosed by U.S. Congress
investigation. We collected a dataset with over 43 million election-related
posts shared on Twitter between September 16 and October 21, 2016, by about 5.7
million distinct users. This dataset included accounts associated with the
identified Russian trolls. We use label propagation to infer the ideology of
all users based on the news sources they shared. This method enables us to
classify a large number of users as liberal or conservative with precision and
recall above 90%. Conservatives retweeted Russian trolls about 31 times more
often than liberals and produced 36 times more tweets. Additionally, most retweets
of troll content originated from two Southern states: Tennessee and Texas.
Using state-of-the-art bot detection techniques, we estimated that about 4.9%
and 6.2% of liberal and conservative users respectively were bots. Text
analysis on the content shared by trolls reveals that they had a mostly
conservative, pro-Trump agenda. Although an ideologically broad swath of
Twitter users was exposed to Russian Trolls in the period leading up to the
2016 U.S. Presidential election, it was mainly conservatives who helped amplify
their message.
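The label-propagation step described above can be sketched as follows: seed users get a label from the news sources they share, and labels then spread along retweet edges by majority vote of labelled neighbours. The graph, seeds, and iteration count here are invented; the paper's pipeline is considerably more involved.

```python
# Minimal sketch of label propagation for ideology inference over a
# retweet graph (treated as undirected here for simplicity).
from collections import Counter

def propagate(edges, seeds, iterations=10):
    """edges: list of (user, user) retweet pairs;
    seeds: dict user -> 'liberal' | 'conservative' (fixed labels)."""
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, []).append(b)
        neighbours.setdefault(b, []).append(a)
    labels = dict(seeds)
    for _ in range(iterations):
        updated = dict(labels)
        for user in neighbours:
            if user in seeds:               # seed labels never change
                continue
            votes = Counter(labels[n] for n in neighbours[user] if n in labels)
            if votes:                       # majority vote of labelled neighbours
                updated[user] = votes.most_common(1)[0][0]
        labels = updated
    return labels

edges = [("a", "b"), ("b", "c"), ("d", "e"), ("e", "f"), ("c", "f")]
seeds = {"a": "liberal", "d": "conservative"}
labels = propagate(edges, seeds)
print(labels["b"], labels["e"])   # liberal conservative
```

With good seed coverage this style of propagation can label most of a large graph, which is how the study scales ideology inference to millions of users.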
Knowledge Discovery in Online Repositories: A Text Mining Approach
Before the advent of the Internet, newspapers were the prominent instrument of
mobilization for independence and political struggles. Since independence in Nigeria, the
political class has adopted newspapers as a medium of political competition and
communication. Consequently, most political information exists in unstructured form,
hence the need to tap into it using text mining algorithms.
This paper implements a text mining algorithm on unstructured data from selected newspapers. The algorithm involves the following natural language processing techniques: tokenization, text filtering and refinement. Following these natural language processing steps, the association rule mining technique of data mining is used to extract knowledge, using the Modified Generating Association Rules based on Weighting scheme (GARW).
The main contribution of the technique is that it integrates an information retrieval scheme (Term Frequency-Inverse Document Frequency, for keyword/feature selection that automatically selects the most discriminative keywords for use in association rule generation) with the data mining technique of association rule discovery. The program is applied to pre-election information obtained from the website of the Nigerian Guardian newspaper. The extracted association rules contained important features and described the informative news included in the document collection relating to the concluded 2007 presidential election. The system presented useful information that could help sanitize the polity as well as protect the nascent democracy.
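The integration described above can be sketched in two stages: TF-IDF selects the most discriminative keywords per article, and frequent keyword pairs are then mined as simple co-occurrence rules. This is a plain co-occurrence miner, not the GARW algorithm, and the articles are invented examples.

```python
# Sketch: TF-IDF keyword selection feeding a simple association-pair miner.
import math
from collections import Counter
from itertools import combinations

docs = [
    "election candidate rally vote vote",
    "candidate speech economy vote",
    "football match goal score",
    "election vote poll candidate",
]

def top_keywords(docs, k=3):
    """Select the k highest-scoring TF-IDF terms of each document."""
    tokenized = [d.split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document freq.
    n = len(docs)
    keyword_sets = []
    for doc in tokenized:
        tf = Counter(doc)
        scores = {t: tf[t] * math.log(n / df[t]) for t in tf}
        keyword_sets.append(set(sorted(scores, key=scores.get, reverse=True)[:k]))
    return keyword_sets

def frequent_pairs(keyword_sets, min_support=2):
    """Mine keyword pairs that co-occur in at least min_support documents."""
    counts = Counter(p for s in keyword_sets for p in combinations(sorted(s), 2))
    return {pair: c for pair, c in counts.items() if c >= min_support}

print(frequent_pairs(top_keywords(docs)))   # {('election', 'vote'): 2}
```

GARW additionally weights keywords when generating rules; the TF-IDF filtering step shown here is what keeps the rule space focused on discriminative terms.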