Distilling Information Reliability and Source Trustworthiness from Digital Traces
Online knowledge repositories typically rely on their users or dedicated
editors to evaluate the reliability of their content. These evaluations can be
viewed as noisy measurements of both information reliability and information
source trustworthiness. Can we leverage these noisy, often biased evaluations
to distill a robust, unbiased, and interpretable measure of both notions?
In this paper, we argue that the temporal traces left by these noisy
evaluations give cues on the reliability of the information and the
trustworthiness of the sources. Then, we propose a temporal point process
modeling framework that links these temporal traces to robust, unbiased and
interpretable notions of information reliability and source trustworthiness.
Furthermore, we develop an efficient convex optimization procedure to learn the
parameters of the model from historical traces. Experiments on real-world data
gathered from Wikipedia and Stack Overflow show that our modeling framework
accurately predicts evaluation events, provides an interpretable measure of
information reliability and source trustworthiness, and yields interesting
insights about real-world events. Comment: Accepted at 26th World Wide Web Conference (WWW-17).
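The abstract names a temporal point process framework without giving its form. As a minimal, hypothetical illustration of the standard self-exciting (Hawkes) intensity often used for modeling streams of evaluation events — `mu`, `alpha`, and `beta` are illustrative parameters, not the paper's:

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a univariate Hawkes process at time t:
    a constant base rate mu plus exponentially decaying excitation
    (jump alpha, decay rate beta) from each past event."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

# Past evaluation events at times 1.0, 2.0, 4.0; query the rate at t = 5.0.
events = [1.0, 2.0, 4.0]
rate = hawkes_intensity(5.0, events, mu=0.2, alpha=0.8, beta=1.0)
```

Recent events raise the instantaneous rate above the base rate `mu`, which is the cue the paper exploits: bursts of evaluations carry information about reliability and trustworthiness.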
Social network externalities and price dispersion in online markets.
Ample empirical studies in the e-commerce literature have documented that price dispersion in online markets is 1) as large as that in offline markets, 2) persistent across time, and 3) only partially explained by observed e-retailers' attributes. Buying in online markets is risky for consumers. First, consumers and the products they purchase are separated in time: there is a delay between the moment consumers pay and the moment they receive their orders. Second, consumers and the products they purchase are separated in space: consumers cannot physically touch or examine the products at the point of purchase. As such, online markets involve an adoption process based on the interaction of consumers' experiences in the form of references, recommendations, word of mouth, etc. The social network externalities introduced by the interaction of consumers' experiences reduce the risk of seller choice and allow some sellers to charge higher prices even for homogeneous products. This research studies online market price dispersion from the social network externalities perspective. Our model posits that consumers are risk averse and assess the risk of having a satisfactory transaction with a seller based on two dimensions of the seller's social network externalities: quantity externality (i.e., the size of the seller's social network) and quality externality (i.e., the satisfactory transaction probability of the seller's social network). We further investigate the moderating effect of product value on the impact of social network externality on online market price dispersion. Our model yields several propositions which we empirically test using data sets collected from eBay.
We found that 1) both quantity externality and quality externality of the social network are salient in driving online price dispersion, and 2) the salience of social network externality is stronger for purchases in higher-value product categories. Keywords: network externalities, price dispersion, online markets, word of mouth
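The abstract does not spell out its model, but the two externalities can be illustrated with a simple, hypothetical Beta-posterior sketch: the seller's feedback count (quantity externality) tightens a risk-averse buyer's belief, while the positive-feedback rate (quality externality) shifts its mean. All parameter names here are illustrative, not the paper's:

```python
def certainty_equivalent(n_feedback, frac_positive,
                         prior_pos=1.0, prior_neg=1.0, risk_aversion=2.0):
    """Beta-Bernoulli sketch of seller assessment: posterior over the
    probability of a satisfactory transaction, discounted by a multiple
    of the posterior standard deviation to capture risk aversion."""
    a = prior_pos + frac_positive * n_feedback          # positive evidence
    b = prior_neg + (1 - frac_positive) * n_feedback    # negative evidence
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean - risk_aversion * var ** 0.5

# Same satisfaction rate, very different network sizes.
big_seller = certainty_equivalent(100, 0.95)
small_seller = certainty_equivalent(5, 0.95)
```

Under this toy model the large-network seller commands a higher certainty equivalent at the same quality level, which is the mechanism the paper invokes for price dispersion over homogeneous products.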
Quality of Information in Mobile Crowdsensing: Survey and Research Challenges
Smartphones have become the most pervasive devices in people's lives, and are
clearly transforming the way we live and perceive technology. Today's
smartphones benefit from almost ubiquitous Internet connectivity and come
equipped with a plethora of inexpensive yet powerful embedded sensors, such as
accelerometer, gyroscope, microphone, and camera. This unique combination has
enabled revolutionary applications based on the mobile crowdsensing paradigm,
such as real-time road traffic monitoring, air and noise pollution
measurement, crime control, and wildlife monitoring, to name a few. Unlike
prior sensing paradigms, humans are now the primary actors of the sensing
process, since they are essential to retrieving reliable and up-to-date
information about the monitored events. As humans may behave unreliably or
maliciously, assessing and guaranteeing Quality of Information (QoI) becomes
more important than ever. In this paper, we provide a new framework for
defining and enforcing the QoI in mobile crowdsensing, and analyze in depth the
current state-of-the-art on the topic. We also outline novel research
challenges, along with possible directions for future work. Comment: To appear in ACM Transactions on Sensor Networks (TOSN).
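The survey's QoI problem — aggregating reports from possibly unreliable participants — is commonly attacked with iterative truth discovery. A minimal sketch of that generic technique (not the paper's framework): alternately estimate the monitored value as a reliability-weighted mean, then recompute each participant's reliability from their residual:

```python
def truth_discovery(reports, iters=10):
    """Iteratively co-estimate the true value and per-participant
    reliability weights; far-off (unreliable or malicious) reports
    end up with small weights."""
    weights = [1.0] * len(reports)
    estimate = sum(reports) / len(reports)
    for _ in range(iters):
        total = sum(weights)
        estimate = sum(w * r for w, r in zip(weights, reports)) / total
        # Reliability is inversely proportional to squared residual.
        weights = [1.0 / (1e-6 + (r - estimate) ** 2) for r in reports]
    return estimate, weights

# Three honest noise-level readings and one wildly off report.
reports = [10.0, 10.2, 9.9, 30.0]
estimate, weights = truth_discovery(reports)
```

The estimate converges near the honest cluster and the outlier's weight collapses, illustrating why QoI assessment matters once humans enter the sensing loop.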
ILR Research in Progress 2011-12
The production of scholarly research continues to be one of the primary missions of the ILR School. During a typical academic year, ILR faculty members published or had accepted for publication over 25 books, edited volumes, and monographs; 170 articles and chapters in edited volumes; and numerous book reviews. In addition, a large number of manuscripts were submitted for publication, presented at professional association meetings, or circulated in working paper form. Our faculty's research continues to find its way into the very best industrial relations, social science, and statistics journals.
Dynamics of Information Diffusion and Social Sensing
Statistical inference using social sensors is an area that has witnessed
remarkable progress and is relevant in applications including localizing events
for targeted advertising, marketing, localization of natural disasters and
predicting sentiment of investors in financial markets. This chapter presents a
tutorial description of four important aspects of sensing-based information
diffusion in social networks from a communications/signal processing
perspective. First, diffusion models for information exchange in large scale
social networks together with social sensing via social media networks such as
Twitter is considered. Second, Bayesian social learning models and risk averse
social learning is considered with applications in finance and online
reputation systems. Third, the principle of revealed preferences arising in
micro-economics theory is used to parse datasets to determine if social sensors
are utility maximizers and then determine their utility functions. Finally, the
interaction of social sensors with YouTube channel owners is studied using time
series analysis methods. All four topics are explained in the context of actual
experimental datasets from health networks, social media and psychological
experiments. Also, algorithms are given that exploit the above models to infer
underlying events based on social sensing. The overview, insights, models and
algorithms presented in this chapter stem from recent developments in network
science, economics and signal processing. At a deeper level, this chapter
considers mean-field dynamics of networks, risk-averse Bayesian social
learning filtering and quickest change detection, data incest in decision
making over a directed acyclic graph of social sensors, inverse optimization
problems for utility function estimation (revealed preferences), and
statistical modeling of interacting social sensors in YouTube social
networks. Comment: arXiv admin note: text overlap with arXiv:1405.112
Network-based ranking in social systems: three challenges
Ranking algorithms are pervasive in our increasingly digitized societies,
with important real-world applications including recommender systems, search
engines, and influencer marketing practices. From a network science
perspective, network-based ranking algorithms solve fundamental problems
related to the identification of vital nodes for the stability and dynamics of
a complex system. Despite the ubiquitous and successful applications of these
algorithms, we argue that our understanding of their performance and their
applications to real-world problems face three fundamental challenges: (i)
rankings might be biased by various factors; (ii) their effectiveness might
be limited to specific problems; and (iii) agents' decisions driven by
rankings might result in potentially vicious feedback mechanisms and
unhealthy systemic
consequences. Methods rooted in network science and agent-based modeling can
help us to understand and overcome these challenges. Comment: Perspective article. 9 pages, 3 figures.
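As a concrete instance of the network-based ranking algorithms the article discusses, here is a compact power-iteration PageRank on an adjacency-list graph (a standard algorithm, not specific to this paper):

```python
def pagerank(adj, damping=0.85, iters=50):
    """PageRank by power iteration: a node's score is the stationary
    probability of a random walk that follows out-links with probability
    `damping` and teleports uniformly otherwise."""
    n = len(adj)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - damping) / n] * n
        for i, nbrs in enumerate(adj):
            if nbrs:
                share = damping * rank[i] / len(nbrs)
                for j in nbrs:
                    new[j] += share
            else:  # dangling node: spread its mass uniformly
                for j in range(n):
                    new[j] += damping * rank[i] / n
        rank = new
    return rank

# Node 0 is pointed to by nodes 1, 2, 3; node 0 points back to node 1.
ranks = pagerank([[1], [0], [0], [0]])
```

The heavily linked node 0 receives the top score, and this sensitivity of scores to link structure is exactly where the article's bias and feedback-loop concerns enter.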
Byzantine Attack and Defense in Cognitive Radio Networks: A Survey
The Byzantine attack in cooperative spectrum sensing (CSS), also known as the
spectrum sensing data falsification (SSDF) attack in the literature, is one of
the key adversaries to the success of cognitive radio networks (CRNs). In the
past couple of years, the research on the Byzantine attack and defense
strategies has gained worldwide increasing attention. In this paper, we provide
a comprehensive survey and tutorial on the recent advances in the Byzantine
attack and defense for CSS in CRNs. Specifically, we first briefly present the
preliminaries of CSS for general readers, including signal detection
techniques, hypothesis testing, and data fusion. Second, we analyze the spear
and shield relation between Byzantine attack and defense from three aspects:
the vulnerability of CSS to attack, the obstacles in CSS to defense, and the
games between attack and defense. Then, we propose a taxonomy of the existing
Byzantine attack behaviors and elaborate on the corresponding attack
parameters, which determine where, who, how, and when to launch attacks. Next,
from the perspectives of homogeneous or heterogeneous scenarios, we classify
the existing defense algorithms, and provide an in-depth tutorial on the
state-of-the-art Byzantine defense schemes, commonly known as robust or secure
CSS in the literature. Furthermore, we highlight the unsolved research
challenges and outline future research directions. Comment: Accepted by IEEE Communications Surveys and Tutorials.
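The survey's core setting — hard-decision fusion under SSDF reports — can be sketched in a few lines. This is a toy instance of one defense family it covers (reputation-based weighting against the fused decision), assumed for illustration rather than taken from the paper:

```python
def majority_fusion(reports):
    """k-out-of-n hard fusion: declare the channel occupied (1) when
    more than half of the local sensing decisions say so."""
    return int(sum(reports) > len(reports) / 2)

def reputation_weights(history):
    """Score each sensor by how often its report agreed with the fused
    decision over past sensing rounds; an always-falsifying (SSDF)
    sensor accumulates a low reputation and can be discounted."""
    n = len(history[0])
    agree = [0] * n
    for reports in history:
        fused = majority_fusion(reports)
        for i, r in enumerate(reports):
            agree[i] += int(r == fused)
    return [a / len(history) for a in agree]

# Sensors 0 and 1 are honest; sensor 2 always reports the opposite.
history = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
w = reputation_weights(history)
```

The Byzantine sensor's reputation drops to the bottom, illustrating the "shield" side of the spear-and-shield relation the survey analyzes; real schemes must also handle colluding majorities, which this sketch does not.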
- …