Improving Reachability and Navigability in Recommender Systems
In this paper, we investigate recommender systems from a network perspective,
studying recommendation networks where nodes are items (e.g., movies)
and edges are constructed from top-N recommendations (e.g., related movies). In
particular, we focus on evaluating the reachability and navigability of
recommendation networks and investigate the following questions: (i) How well
do recommendation networks support navigation and exploratory search? (ii) What
is the influence of parameters, in particular different recommendation
algorithms and the number of recommendations shown, on reachability and
navigability? and (iii) How can reachability and navigability be improved in
these networks? We tackle these questions by first evaluating the reachability
of recommendation networks by investigating their structural properties.
Second, we evaluate navigability by simulating three different models of
information seeking scenarios. We find that with standard algorithms,
recommender systems are not well suited to navigation and exploration and
propose methods to modify recommendations to improve this. Our work extends
from one-click-based evaluations of recommender systems towards multi-click
analysis (i.e., sequences of dependent clicks) and presents a general,
comprehensive approach to evaluating navigability of arbitrary recommendation
networks.
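The reachability evaluation sketched in this abstract can be illustrated with a minimal example. The item graph below is invented for illustration (a real recommendation network would be built from an actual recommender's top-N output); reachability is measured here as the fraction of ordered item pairs connected by a directed path of recommendation links:

```python
# Minimal sketch, assuming a tiny hand-made recommendation network.
# Items and edges are hypothetical, not from the paper's data.
from itertools import product

# Directed top-N recommendation edges: item -> list of recommended items
recs = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["A"],
}

def reachable_from(start, recs):
    """Traverse recommendation links and return all items reachable from start."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nxt in recs.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

items = sorted(set(recs) | {i for v in recs.values() for i in v})
# Fraction of ordered item pairs (u, v), u != v, where v is reachable from u
pairs = [(u, v) for u, v in product(items, repeat=2) if u != v]
hits = sum(v in reachable_from(u, recs) for u, v in pairs)
print(hits / len(pairs))  # reachability score in [0, 1]
```

A low score signals that many items can never be reached by following recommendations alone, which is the kind of navigability deficit the paper's proposed modifications target.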
Detecting Memory and Structure in Human Navigation Patterns Using Markov Chain Models of Varying Order
One of the most frequently used models for understanding human navigation on
the Web is the Markov chain model, where Web pages are represented as states
and hyperlinks as probabilities of navigating from one page to another.
Predominantly, human navigation on the Web has been thought to satisfy the
memoryless Markov property stating that the next page a user visits only
depends on her current page and not on previously visited ones. This idea has
found its way into numerous applications such as Google's PageRank algorithm and
others. Recently, new studies suggested that human navigation may better be
modeled using higher order Markov chain models, i.e., the next page depends on
a longer history of past clicks. Yet, this finding is preliminary and does not
account for the higher complexity of higher order Markov chain models which is
why the memoryless model is still widely used. In this work, we thoroughly
present a diverse array of advanced inference methods for determining the
appropriate Markov chain order. We highlight the strengths and weaknesses of each
method and apply them for investigating memory and structure of human
navigation on the Web. Our experiments reveal that the complexity of higher
order models grows faster than their utility, and thus we confirm that the
memoryless model remains a practical choice for modeling human navigation at a
page level. However, when we expand our analysis to a topical level, where we
abstract away from specific page transitions to transitions between topics, we
find that the memoryless assumption is violated and specific regularities can
be observed. We report results from experiments with two types of navigational
datasets (goal-oriented vs. free form) and observe interesting structural
differences that make a strong argument for more contextual studies of human
navigation in future work.
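The core trade-off this abstract describes, likelihood gains from longer histories versus the exploding parameter count, can be sketched with one of the standard order-selection tools, an AIC-style penalized likelihood. The toy click sequence below is invented, and the paper covers a broader set of inference methods than this single criterion:

```python
# Minimal sketch, assuming a toy navigation trace; not the paper's datasets.
import math
from collections import Counter

clicks = list("ABABABCABABAB")  # hypothetical click trail over pages A, B, C

def log_likelihood(seq, order):
    """MLE log-likelihood of a Markov chain of the given order."""
    ctx_counts, trans_counts = Counter(), Counter()
    for i in range(order, len(seq)):
        ctx = tuple(seq[i - order:i])          # history of `order` pages
        ctx_counts[ctx] += 1
        trans_counts[(ctx, seq[i])] += 1
    return sum(c * math.log(c / ctx_counts[ctx])
               for (ctx, _), c in trans_counts.items())

states = len(set(clicks))
for order in (1, 2):
    ll = log_likelihood(clicks, order)
    n_params = (states ** order) * (states - 1)  # free transition parameters
    aic = 2 * n_params - 2 * ll                  # lower is better
    print(order, round(ll, 3), round(aic, 3))
```

Because the number of parameters grows exponentially with the order while the likelihood improves only marginally, the penalized score favors the first-order (memoryless) model on page-level traces, mirroring the paper's finding that complexity grows faster than utility.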
Protection from Evil and Good: The Differential Effects of Page Protection on Wikipedia Article Quality
Wikipedia, the Web's largest encyclopedia, frequently faces content disputes
or malicious users seeking to subvert its integrity. Administrators can
mitigate such disruptions by enforcing "page protection" that selectively
limits contributions to specific articles to help prevent the degradation of
content. However, this practice contradicts one of Wikipedia's fundamental
principles, that it is open to all contributors, and may hinder further
improvement of the encyclopedia. In this paper, we examine the effect of page
protection on article quality to better understand whether and when page
protections are warranted. Using decade-long data on page protections from the
English Wikipedia, we conduct a quasi-experimental study analyzing pages that
received "requests for page protection", written appeals submitted by
Wikipedia editors to administrators to impose page protections. We match pages
that indeed received page protection with similar pages that did not and
quantify the causal effect of the interventions on a well-established measure
of article quality. Our findings indicate that the effect of page protection on
article quality depends on the characteristics of the page prior to the
intervention: high-quality articles are affected positively as opposed to
low-quality articles that are impacted negatively. Subsequent analysis suggests
that high-quality articles degrade when left unprotected, whereas low-quality
articles improve. Overall, with our study, we outline page protections on
Wikipedia and inform best practices on whether and when to protect an article.
Comment: Under review, 11 pages
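The matching step in this quasi-experimental design can be sketched in miniature: pair each protected page with the most similar unprotected page on a pre-intervention quality score, then average the post-intervention outcome differences. All numbers and the single-covariate nearest-neighbor matching below are invented for illustration; the paper uses decade-long English Wikipedia data and a well-established quality measure:

```python
# Minimal sketch, assuming hypothetical (pre_quality, post_quality) pairs.
treated = [(0.9, 0.88), (0.2, 0.25), (0.8, 0.82)]   # pages that got protection
control = [(0.85, 0.80), (0.22, 0.30), (0.78, 0.75), (0.5, 0.5)]  # did not

def match_and_estimate(treated, control):
    """Nearest-neighbor matching on pre-intervention quality; returns the
    average post-intervention quality difference (treated minus matched control)."""
    diffs = []
    for pre_t, post_t in treated:
        # Match on the closest pre-intervention quality score
        pre_c, post_c = min(control, key=lambda c: abs(c[0] - pre_t))
        diffs.append(post_t - post_c)
    return sum(diffs) / len(diffs)

print(match_and_estimate(treated, control))
```

Splitting the treated group by pre-intervention quality before averaging would reproduce the paper's key move of estimating separate effects for high- and low-quality articles.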
Integrated Copy-Paste Checking: Design and Services
Advances in technology have made academic cheating far too easy for learners. Furthermore, the World-Wide-Web has brought about a widespread culture of easy access to all sorts of information, reducing the need for learners to perform diligent research or study. E-learning systems therefore need to incorporate monitoring and checking of students' expressions of reading and writing, while guiding them toward learning the rightful skills. This paper describes the architecture and design of an ..
The influence of social status and network structure on consensus building in collaboration networks