Modeling Collaboration in Academia: A Game Theoretic Approach
In this work, we aim to understand the mechanisms driving academic
collaboration. We begin by building a model, which we call the h-Reinvestment
model, of how researchers split their effort between multiple papers and of how
collaboration affects the number of citations a paper receives, supported by
observations from a large real-world publication and citation dataset. Using
tools from the field of Game Theory, we study researchers' collaborative
behavior over time under this model, with the premise that each researcher
wants to maximize his or her academic success. We find analytically that there
is a strong incentive to collaborate rather than work in isolation, and that
studying collaborative behavior through a game-theoretic lens is a promising
approach to help us better understand the nature and dynamics of academic
collaboration.
Comment: Presented at the 1st WWW Workshop on Big Scholarly Data (2014). 6
pages, 5 figures.
Will This Paper Increase Your h-index? Scientific Impact Prediction
Scientific impact plays a central role in the evaluation of the output of
scholars, departments, and institutions. A widely used measure of scientific
impact is citations, with a growing body of literature focused on predicting
the number of citations obtained by any given publication. The effectiveness of
such predictions, however, is fundamentally limited by the power-law
distribution of citations, whereby publications with few citations are
extremely common and publications with many citations are relatively rare.
Given this limitation, in this work we instead address a related question asked
by many academic researchers in the course of writing a paper, namely: "Will
this paper increase my h-index?" Using a real academic dataset with over 1.7
million authors, 2 million papers, and 8 million citation relationships from
the premier online academic service ArnetMiner, we formalize a novel scientific
impact prediction problem to examine several factors that can drive a paper to
increase the primary author's h-index. We find that the researcher's authority
on the publication topic and the venue in which the paper is published are
crucial factors to the increase of the primary author's h-index, while the
topic popularity and the co-authors' h-indices are of surprisingly little
relevance. By leveraging relevant factors, we find a greater than 87.5%
potential predictability for whether a paper will contribute to an author's
h-index within five years. As a further experiment, we generate a
self-prediction for this paper, estimating that there is a 76% probability that
it will contribute to the h-index of the co-author with the highest current
h-index in five years. We conclude that our findings on the quantification of
scientific impact can help researchers to expand their influence and more
effectively leverage their position of "standing on the shoulders of giants."
Comment: Proc. of the 8th ACM International Conference on Web Search and Data
Mining (WSDM'15).
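The prediction target above rests on the standard definition of the h-index: the largest h such that the author has at least h papers with at least h citations each. A minimal sketch of that definition, and of the "will this paper increase my h-index?" check, is below; the function names and citation counts are illustrative, not taken from the paper.

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

def increases_h_index(citations, projected_citations):
    """Would a new paper with the projected citation count
    raise the author's current h-index?"""
    return h_index(citations + [projected_citations]) > h_index(citations)

# An author with citation counts [10, 8, 5, 5, 3] has h-index 4;
# a new paper projected to gather 5 citations would raise it to 5.
print(h_index([10, 8, 5, 5, 3]))               # 4
print(increases_h_index([10, 8, 5, 5, 3], 5))  # True
```

Note that the actual prediction task in the paper is harder than this check: the projected citation count is unknown at writing time and must itself be inferred from factors such as authority and venue.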
Incentives and Efficiency in Uncertain Collaborative Environments
We consider collaborative systems where users make contributions across
multiple available projects and are rewarded for their contributions in
individual projects according to a local sharing of the value produced. This
serves as a model of online social computing systems such as online Q&A forums
and of credit sharing in scientific co-authorship settings. We show that the
maximum feasible produced value can be well approximated by simple local
sharing rules where users are approximately rewarded in proportion to their
marginal contributions and that this holds even under incomplete information
about the players' abilities and effort constraints. For natural instances we
show almost 95% optimality at equilibrium. When players incur a cost for their
effort, we identify a threshold phenomenon: the efficiency is a constant
fraction of the optimal when the cost is strictly convex and decreases with the
number of players if the cost is linear.
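The simplest local sharing rule of the kind discussed above splits each project's produced value among its contributors in proportion to their effort. The following toy sketch illustrates that rule; the function name and numbers are assumptions for illustration, and the paper's actual rules reward players approximately in proportion to their marginal contributions, which this simple version only approximates.

```python
def proportional_shares(value, efforts):
    """Split a project's produced value among its contributors in
    proportion to the effort each one invested (a simple local
    sharing rule applied project by project)."""
    total = sum(efforts)
    if total == 0:
        return [0.0] * len(efforts)
    return [value * e / total for e in efforts]

# One project producing value 100 with three contributors:
print(proportional_shares(100.0, [2.0, 3.0, 5.0]))  # [20.0, 30.0, 50.0]
```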
Project Games
We consider a strategic game called the project game, where each agent has to choose a project from his own list of available projects. The model includes positive weights expressing the capacity of a given agent to contribute to a given project. The realization of a project produces some reward that has to be allocated to the agents. The reward of a realized project is fully allocated to its contributors, according to a simple proportional rule. The existence and computational complexity of pure Nash equilibria are addressed, and their efficiency is investigated according to both the utilitarian and the egalitarian social functions.
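The payoff structure described above can be sketched as follows. Each agent picks one project, and each realized project's reward is split among its contributors in proportion to their weights. This is a hypothetical minimal implementation; the data layout (dicts of weights and rewards) is an assumption, not the paper's notation.

```python
def payoffs(choices, weights, rewards):
    """Utilities in a project game under the proportional rule.
    choices[i] is the project agent i selects; weights[i] maps the
    projects available to agent i to positive capacities; rewards
    maps each project to the reward its realization produces."""
    utilities = [0.0] * len(choices)
    for project, reward in rewards.items():
        members = [i for i, c in enumerate(choices) if c == project]
        total = sum(weights[i][project] for i in members)
        if total > 0:  # the project is realized only if someone joins it
            for i in members:
                utilities[i] += reward * weights[i][project] / total
    return utilities

# Two agents both choose project "a" (capacities 1 and 3): its
# reward of 8 is split 2 / 6 by the proportional rule, and the
# unchosen project "b" yields nothing.
weights = [{"a": 1.0, "b": 2.0}, {"a": 3.0}]
print(payoffs(["a", "a"], weights, {"a": 8.0, "b": 5.0}))  # [2.0, 6.0]
```

A best-response check against such payoffs is the natural starting point for the pure Nash equilibrium questions the abstract raises.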
User Satisfaction in Competitive Sponsored Search
We present a model of competition between web search algorithms, and study
the impact of such competition on user welfare. In our model, search providers
compete for customers by strategically selecting which search results to
display in response to user queries. Customers, in turn, have private
preferences over search results and will tend to use search engines that are
more likely to display pages satisfying their demands.
Our main question is whether competition between search engines increases the
overall welfare of the users (i.e., the likelihood that a user finds a page of
interest). When search engines derive utility only from customers to whom they
show relevant results, we show that they differentiate their results, and every
equilibrium of the resulting game achieves at least half of the welfare that
could be obtained by a social planner. This bound also applies whenever the
likelihood of selecting a given engine is a convex function of the probability
that a user's demand will be satisfied, which includes natural Markovian models
of user behavior.
On the other hand, when search engines derive utility from all customers
(independent of search result relevance) and the customer demand functions are
not convex, there are instances in which the (unique) equilibrium involves no
differentiation between engines and a high degree of randomness in search
results. This can degrade social welfare by a factor of the square root of N
relative to the social optimum, where N is the number of webpages. These bad
equilibria persist even when search engines can extract only small (but
non-zero) expected revenue from dissatisfied users, and much higher revenue
from satisfied ones.