A Topic Recommender for Journalists
The way in which people acquire information on events and form their own
opinion on them has changed dramatically with the advent of social media. For many
readers, the news gathered from online sources becomes an opportunity to share points
of view and information within micro-blogging platforms such as Twitter, mainly
aimed at satisfying their communication needs. Furthermore, the desire to explore
news-related aspects in greater depth stimulates a demand for additional information,
which is often met through online encyclopedias such as Wikipedia. This behaviour has also
influenced the way in which journalists write their articles, requiring a careful assessment
of what actually interests the readers. The goal of this paper is to present
a recommender system, What to Write and Why, capable of suggesting to a journalist,
for a given event, the aspects still uncovered in news articles on which the
readers focus their interest. The basic idea is to characterize an event according to
the echo it receives in online news sources and associate it with the corresponding
readers’ communicative and informative patterns, detected through the analysis of
Twitter and Wikipedia, respectively. Our methodology temporally aligns the results
of this analysis and recommends the concepts that emerge as topics of interest from
Twitter and Wikipedia, either not covered or poorly covered in the published news
articles.
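The core recommendation step can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: `tweet_concepts`, `wiki_views`, and `news_concepts` are hypothetical pre-extracted inputs for one event, and the simple inverse-coverage score stands in for the temporal alignment the authors describe.

```python
from collections import Counter

def recommend_topics(tweet_concepts, wiki_views, news_concepts, top_k=5):
    """Rank concepts that attract reader interest (Twitter mentions,
    Wikipedia page views) but are absent or rare in the published news."""
    interest = Counter()
    for concept, mentions in tweet_concepts.items():
        interest[concept] += mentions             # communicative signal
    for concept, views in wiki_views.items():
        interest[concept] += views                # informative signal
    coverage = Counter(news_concepts)             # how often news covered each concept
    scored = {c: s / (1 + coverage[c]) for c, s in interest.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

# Toy event: readers discuss topics the published articles barely cover.
suggestions = recommend_topics(
    {"flood damage": 50, "evacuation": 30},       # Twitter concept counts
    {"flood damage": 20, "insurance claims": 40}, # Wikipedia view counts
    ["flood damage"] * 10,                        # concepts in published articles
    top_k=2,
)
# → ['insurance claims', 'evacuation']
```

Well-covered concepts are penalized by their news frequency, so the concepts that surface are exactly the under-reported ones the abstract targets.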
Tracing the German Centennial Flood in the Stream of Tweets: First Lessons Learned
Social microblogging services such as Twitter result in massive streams of georeferenced messages and geolocated status updates. This real-time source of information is invaluable for many application areas, in particular for disaster detection and response scenarios. Consequently, a considerable number of works have dealt with issues of their acquisition, analysis and visualization. Most of these works assume not only an appropriate percentage of georeferenced messages that allows for detecting relevant events for a specific region and time frame, but also that these geolocations are reasonably correct in representing places and times of the underlying spatio-temporal situation. In this paper, we review these two key assumptions based on the results of applying a visual analytics approach to a dataset of georeferenced Tweets from Germany over eight months witnessing several large-scale flooding situations throughout the country. Our results confirm the potential of Twitter as a distributed 'social sensor' but at the same time highlight some caveats in interpreting immediate results. To overcome these limits we explore incorporating evidence from other data sources, including further social media and mobile phone network metrics, to detect, confirm and refine events with respect to location and time. We summarize the lessons learned from our initial analysis by proposing recommendations and outline possible future work directions.
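The first of these assumptions (a usable share of correctly georeferenced messages for a region and time frame) can be checked with a simple spatio-temporal filter. A minimal sketch, assuming tweets are dicts with hypothetical `coordinates` and `created_at` fields; the bounding box for Germany below is a rough illustrative value, not a value from the paper:

```python
from datetime import datetime

def in_window(tweet, bbox, start, end):
    """Keep a geotagged tweet only if it falls inside a bounding box
    (lat_min, lat_max, lon_min, lon_max) and a time window."""
    lat, lon = tweet["coordinates"]
    lat_min, lat_max, lon_min, lon_max = bbox
    return (lat_min <= lat <= lat_max
            and lon_min <= lon <= lon_max
            and start <= tweet["created_at"] <= end)

GERMANY_BBOX = (47.3, 55.1, 5.9, 15.0)  # rough, for illustration only

tweets = [
    {"coordinates": (51.05, 13.74), "created_at": datetime(2013, 6, 3)},  # Dresden
    {"coordinates": (48.85, 2.35),  "created_at": datetime(2013, 6, 3)},  # Paris
]
flood_window = (datetime(2013, 5, 30), datetime(2013, 6, 15))
hits = [t for t in tweets if in_window(t, GERMANY_BBOX, *flood_window)]
```

The paper's point is precisely that passing this filter does not guarantee the geotag reflects where the reported situation actually happened, which is why the authors cross-check against other data sources.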
Reading the Source Code of Social Ties
Though online social network research has exploded during the past years, not
much thought has been given to the exploration of the nature of social links.
Online interactions have been interpreted as indicative of one social process
or another (e.g., status exchange or trust), often with little systematic
justification regarding the relation between observed data and theoretical
concept. Our research aims to breach this gap in computational social science
by proposing an unsupervised, parameter-free method to discover, with high
accuracy, the fundamental domains of interaction occurring in social networks.
By applying this method on two online datasets different by scope and type of
interaction (aNobii and Flickr) we observe the spontaneous emergence of three
domains of interaction representing the exchange of status, knowledge and
social support. By finding significant relations between the domains of
interaction and classic social network analysis issues (e.g., tie strength,
dyadic interaction over time) we show how the network of interactions induced
by the extracted domains can be used as a starting point for more nuanced
analysis of online social data that may one day incorporate the normative
grammar of social interaction. Our method finds applications in online social
media services ranging from recommendation to visual link summarization.
Comment: 10 pages, 8 figures, Proceedings of the 2014 ACM conference on Web
Science (WebSci'14)
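The paper's method is unsupervised and parameter-free, and the details are not given in the abstract. As a rough analogy only, the general idea of factoring raw interaction counts into a small number of latent domains can be sketched with a generic non-negative matrix factorization; the toy matrix, the four feature columns, and the fixed choice of three components are all assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy (user-pair x interaction-type) matrix; columns might be counts of
# favorites, comments, group joins, messages -- hypothetical features.
X = np.abs(rng.random((20, 4)))

def nmf(X, k=3, iters=200):
    """Minimal multiplicative-update NMF: X ≈ W @ H with W, H >= 0.
    Rows of H are the latent interaction 'domains'; rows of W say how
    much each user pair expresses each domain."""
    n, m = X.shape
    W = np.abs(rng.random((n, k)))
    H = np.abs(rng.random((k, m)))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)   # Lee-Seung update for H
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)   # Lee-Seung update for W
    return W, H

W, H = nmf(X)
```

In the paper the three recovered domains correspond to status exchange, knowledge exchange, and social support; here the factors are just an unlabeled decomposition of random data.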
On the Feasibility of Social Network-based Pollution Sensing in ITSs
Intense vehicular traffic is recognized as a global societal problem, with a
multifaceted influence on the quality of life of a person. Intelligent
Transportation Systems (ITS) can play an important role in combating such
problem, decreasing pollution levels and, consequently, their negative effects.
One of the goals of ITSs, in fact, is to control traffic flows,
measure traffic states, and provide vehicles with routes that globally pursue
low-pollution conditions. How such systems measure and enforce given traffic
states has been at the center of multiple research efforts in the past few
years. Although many different solutions have been proposed, very limited
effort has been devoted to exploring the potential of social network analysis
in such context. Social networks, in general, provide direct feedback from
people and, as such, potentially very valuable information. A post that tells,
for example, how a person feels about pollution at a given time in a given
location, could be put to good use by an environment aware ITS aiming at
minimizing contaminant emissions in residential areas. This work verifies the
feasibility of using pollution-related social network feeds in ITS
operations. In particular, it concentrates on understanding how reliable such
information is, producing an analysis that compares over 1,500,000 posts with
pollution data obtained from on-the-field sensors over a one-year span.
Comment: 10 pages, 15 figures, Transaction Forma
Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples
Machine Learning has been a big success story during the AI resurgence. One
particular stand out success relates to learning from a massive amount of data.
In spite of early assertions of the unreasonable effectiveness of data, there
is increasing recognition for utilizing knowledge whenever it is available or
can be created purposefully. In this paper, we discuss the indispensable role
of knowledge for deeper understanding of content where (i) large amounts of
training data are unavailable, (ii) the objects to be recognized are complex,
(e.g., implicit entities and highly subjective content), and (iii) applications
need to use complementary or related data in multiple modalities/media. What
brings us to the cusp of rapid progress is our ability to (a) create relevant
and reliable knowledge and (b) carefully exploit knowledge to enhance ML/NLP
techniques. Using diverse examples, we seek to foretell unprecedented progress
in our ability for deeper understanding and exploitation of multimodal data and
continued incorporation of knowledge in learning techniques.
Comment: Pre-print of the paper accepted at 2017 IEEE/WIC/ACM International
Conference on Web Intelligence (WI). arXiv admin note: substantial text
overlap with arXiv:1610.0770
Medium as King: Social Media & the Political Campaign
There is a growing need for a greater understanding of the intersection between great content, effective targeting and proper media usage in mediated communication, and especially in American politics. As more campaigns move their efforts online in an attempt to reach a rapidly growing digital constituency, more content will struggle for visibility. The major quest for this study is to challenge the long-standing idea that "content is king," a phrase Bill Gates coined in the early days of the internet. A theoretical background drawn from Marshall McLuhan and Kathleen Hall Jamieson will not only allow us to answer this question, but will also allow future researchers to build upon these concepts. This study aims to demonstrate how the Ted Cruz presidential campaign of 2016, prior to his departure from the race, was an excellent example of the sweet spot in content creation, voter targeting and medium implementation.