MOOCs Meet Measurement Theory: A Topic-Modelling Approach
This paper adapts topic models to the psychometric testing of MOOC students
based on their online forum postings. Measurement theory from education and
psychology provides statistical models for quantifying a person's attainment of
intangible attributes such as attitudes, abilities or intelligence. Such models
infer latent skill levels by relating them to individuals' observed responses
on a series of items such as quiz questions. The set of items can be used to
measure a latent skill if individuals' responses on them conform to a Guttman
scale. Such well-scaled items differentiate between individuals, and the
inferred levels span the entire range from the most basic to the most
advanced. In practice,
education researchers manually devise items (quiz questions) while optimising
their conformance to the scale. Because this process is costly and requires
expert input, psychometric testing has found limited use in everyday teaching.
We aim to develop usable measurement models for highly-instrumented MOOC
delivery platforms, by using participation in automatically-extracted online
forum topics as items. The challenge is to formalise the Guttman scale
educational constraint and incorporate it into topic models. To favour topics
that automatically conform to a Guttman scale, we introduce a novel
regularisation into non-negative matrix factorisation-based topic modelling. We
demonstrate the suitability of our approach with quantitative experiments on
three Coursera MOOCs and with a qualitative survey of topic interpretability
on two MOOCs, conducted through interviews with domain experts.
Comment: 12 pages, 9 figures; accepted into AAAI'201
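The approach above builds on non-negative matrix factorisation with an added
regularisation term. A minimal sketch of regularised NMF with multiplicative
updates is given below; note that the L2 penalty on the student-topic factor is
only a generic stand-in, since the paper's actual Guttman-scale regulariser is
not specified in the abstract, and the data is synthetic:

```python
import numpy as np

def nmf(X, k, lam=0.1, iters=300, seed=0):
    """Regularised NMF: minimise ||X - WH||_F^2 + lam*||W||_F^2.

    The L2 term on W is a placeholder for the paper's Guttman-scale
    regulariser; only the optimisation pattern is illustrated here.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        # multiplicative updates keep both factors non-negative
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ (H @ H.T) + lam * W + 1e-9)
    return W, H

# synthetic document-term matrix: 6 students' forum postings over 4 terms
X = np.array([[3, 1, 0, 0],
              [2, 2, 0, 0],
              [3, 2, 0, 1],
              [0, 0, 2, 3],
              [0, 1, 3, 2],
              [0, 0, 2, 2]], dtype=float)
W, H = nmf(X, k=2)  # W: student-topic loadings, H: topic-term weights
```

Any differentiable or multiplicative-update-compatible penalty (such as an
ordering constraint over student loadings) can be folded into the denominator
of the `W` update in the same way.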
Scalable and interpretable product recommendations via overlapping co-clustering
We consider the problem of generating interpretable recommendations by
identifying overlapping co-clusters of clients and products, based only on
positive or implicit feedback. Our approach is applicable on very large
datasets because it exhibits almost linear complexity in the input examples and
the number of co-clusters. We show, both on real industrial data and on
publicly available datasets, that the recommendation accuracy of our algorithm
is competitive with that of state-of-the-art matrix factorization techniques. In
addition, our technique has the advantage of offering recommendations that are
textually and visually interpretable. Finally, we examine how to implement our
technique efficiently on Graphics Processing Units (GPUs).
Comment: In IEEE International Conference on Data Engineering (ICDE) 201
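The central idea of overlapping co-clustering can be sketched simply: a client
or product may belong to several co-clusters at once. The toy implementation
below obtains overlap by factorising a binary client-product matrix and
thresholding the factors; this is purely illustrative and is not the paper's
near-linear-time algorithm:

```python
import numpy as np

def overlapping_coclusters(X, k, thresh=0.3, iters=300, seed=0):
    """Illustrative only: factorise a binary client-product matrix
    with NMF, then threshold each factor. A client (or product) that
    passes the threshold in several factors belongs to several
    co-clusters, i.e. the co-clusters overlap."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k)) + 1e-3
    V = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        # multiplicative updates for ||X - UV||_F^2
        V *= (U.T @ X) / (U.T @ U @ V + 1e-9)
        U *= (X @ V.T) / (U @ (V @ V.T) + 1e-9)
    rows = [set(np.flatnonzero(U[:, c] > thresh * U[:, c].max()))
            for c in range(k)]
    cols = [set(np.flatnonzero(V[c] > thresh * V[c].max()))
            for c in range(k)]
    return rows, cols

# synthetic implicit feedback: client 2 buys from both product groups
X = np.zeros((5, 5))
X[:3, :3] = 1.0
X[2:, 2:] = 1.0
rows, cols = overlapping_coclusters(X, k=2)
print(rows)  # client 2 would typically appear in both row sets
```

Interpretability follows directly from the output: each co-cluster is a named
set of clients and a named set of products, which can be shown to a user as-is.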
RiPLE: Recommendation in Peer-Learning Environments Based on Knowledge Gaps and Interests
Various forms of Peer-Learning Environments are increasingly being used in
post-secondary education, often to help build repositories of student-generated
learning objects. However, large classes can result in an extensive repository,
which can make it more challenging for students to search for suitable objects
that both reflect their interests and address their knowledge gaps. Recommender
Systems for Technology Enhanced Learning (RecSysTEL) offer a potential solution
to this problem by providing sophisticated filtering techniques to help
students to find the resources that they need in a timely manner. Here, a new
RecSysTEL for Recommendation in Peer-Learning Environments (RiPLE) is
presented. The approach uses a collaborative filtering algorithm based upon
matrix factorization to create personalized recommendations for individual
students that address their interests and their current knowledge gaps. The
approach is validated using both synthetic and real data sets. The results are
promising, indicating RiPLE is able to provide sensible personalized
recommendations for both regular and cold-start users under reasonable
assumptions about parameters and user behavior.
Comment: 25 pages, 7 figures. The paper is accepted for publication in the
Journal of Educational Data Mining
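The collaborative-filtering core referred to above is standard matrix
factorisation. The sketch below shows the generic model family (latent user and
item factors trained by SGD on observed ratings); RiPLE's exact loss,
hyperparameters, and knowledge-gap terms are not given in the abstract, and the
data here is invented:

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=2, lr=0.02, reg=0.02,
           epochs=1500, seed=0):
    """Plain matrix factorisation trained by SGD on observed
    (user, item, rating) triples -- the generic family of models
    RiPLE builds on, not its exact formulation."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                  # prediction error
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# invented toy data: users 0-1 and 2-3 form two taste groups
ratings = [(0, 0, 5), (0, 1, 4), (1, 0, 4), (1, 1, 5),
           (2, 2, 5), (2, 3, 4), (3, 2, 4), (3, 3, 5),
           (0, 3, 1), (2, 1, 1)]
P, Q = mf_sgd(ratings, n_users=4, n_items=4)
score = float(P[1] @ Q[3])  # predicted score for an unseen user-item pair
```

Cold-start users, as discussed in the paper, need extra machinery (e.g. priors
or side information), since a user with no ratings gets no gradient updates.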
Talking to the crowd: What do people react to in online discussions?
This paper addresses the question of how language use affects community
reaction to comments in online discussion forums, and the relative importance
of the message vs. the messenger. A new comment ranking task is proposed based
on community-annotated karma in Reddit discussions, which controls for topic
and timing of comments. Experimental work with discussion threads from six
subreddits shows that the importance of different types of language features
varies with the community of interest.
Eliciting New Wikipedia Users' Interests via Automatically Mined Questionnaires: For a Warm Welcome, Not a Cold Start
Every day, thousands of users sign up as new Wikipedia contributors. Once
joined, these users have to decide which articles to contribute to, which users
to seek out and learn from or collaborate with, etc. Any such task is a hard
and potentially frustrating one given the sheer size of Wikipedia. Supporting
newcomers in their first steps by recommending articles they would enjoy
editing or editors they would enjoy collaborating with is thus a promising
route toward converting them into long-term contributors. Standard recommender
systems, however, rely on users' histories of previous interactions with the
platform. As such, these systems cannot make high-quality recommendations to
newcomers without any previous interactions -- the so-called cold-start
problem. The present paper addresses the cold-start problem on Wikipedia by
developing a method for automatically building short questionnaires that, when
completed by a newly registered Wikipedia user, can be used for a variety of
purposes, including article recommendations that can help new editors get
started. Our questionnaires are constructed based on the text of Wikipedia
articles as well as the history of contributions by the already onboarded
Wikipedia editors. We assess the quality of our questionnaire-based
recommendations in an offline evaluation using historical data, as well as an
online evaluation with hundreds of real Wikipedia newcomers, concluding that
our method provides cohesive, human-readable questions that perform well
against several baselines. By addressing the cold-start problem, this work can
help with the sustainable growth and maintenance of Wikipedia's diverse editor
community.
Comment: Accepted at the 13th International AAAI Conference on Web and Social
Media (ICWSM-2019)
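One generic way to mine such questionnaires, shown below purely as an
illustration (the paper's actual construction method is not detailed in the
abstract), is to greedily select questions whose yes/no split among existing
editors is both balanced (high entropy) and non-redundant with questions
already chosen:

```python
import numpy as np

def entropy(p):
    """Binary entropy of a probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def pick_questions(M, n_questions):
    """Greedy question selection (illustrative, not the paper's
    method). M[u, a] = 1 if editor u contributed to topic a.
    Prefer topics that split editors evenly, penalising correlation
    with topics already selected."""
    chosen = []
    for _ in range(n_questions):
        best, best_score = None, -1.0
        for a in range(M.shape[1]):
            if a in chosen:
                continue
            score = entropy(M[:, a].mean())
            if score == 0.0:
                continue  # everyone answers the same way: useless
            # penalise redundancy with already-selected questions
            for b in chosen:
                score -= abs(np.corrcoef(M[:, a], M[:, b])[0, 1])
            if score > best_score:
                best, best_score = a, score
        chosen.append(best)
    return chosen

# invented toy matrix: topic 0 is universal, topics 1 and 2 are
# duplicates, topic 3 splits editors independently of topic 1
M = np.array([[1, 1, 1, 1],
              [1, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0]])
print(pick_questions(M, 2))  # → [1, 3]
```

Answers to the selected questions then give a newcomer an interest profile
comparable to onboarded editors, which any downstream recommender can consume.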
A Dynamic Embedding Model of the Media Landscape
Information about world events is disseminated through a wide variety of news
channels, each with specific considerations in the choice of their reporting.
Although the multiplicity of these outlets should ensure a variety of
viewpoints, recent reports suggest that the rising concentration of media
ownership may void this assumption. This observation motivates the study of the
impact of ownership on the global media landscape and its influence on the
coverage individual viewers actually receive. To this end, the selection of reported
events has been shown to be informative about the high-level structure of the
news ecosystem. However, existing methods provide only a static view into an
inherently dynamic system, yielding underperforming statistical models and
hindering our understanding of the media landscape as a whole.
In this work, we present a dynamic embedding method that learns to capture
the decision process of individual news sources in their selection of reported
events while also enabling the systematic detection of large-scale
transformations in the media landscape over prolonged periods of time. In an
experiment covering over 580M real-world event mentions, we show our approach
to outperform static embedding methods in predictive terms. We demonstrate the
potential of the method for news monitoring applications and investigative
journalism by shedding light on important changes in programming induced by
mergers and acquisitions, policy changes, or network-wide content diffusion.
These findings offer evidence of strong content convergence trends inside large
broadcasting groups, influencing the news ecosystem in a time of increasing
media ownership concentration.
Reading the Source Code of Social Ties
Though online social network research has exploded during the past years, not
much thought has been given to the exploration of the nature of social links.
Online interactions have been interpreted as indicative of one social process
or another (e.g., status exchange or trust), often with little systematic
justification regarding the relation between observed data and theoretical
concept. Our research aims to bridge this gap in computational social science
by proposing an unsupervised, parameter-free method to discover, with high
accuracy, the fundamental domains of interaction occurring in social networks.
By applying this method on two online datasets different by scope and type of
interaction (aNobii and Flickr) we observe the spontaneous emergence of three
domains of interaction representing the exchange of status, knowledge and
social support. By finding significant relations between the domains of
interaction and classic social network analysis issues (e.g., tie strength,
dyadic interaction over time) we show how the network of interactions induced
by the extracted domains can be used as a starting point for more nuanced
analysis of online social data that may one day incorporate the normative
grammar of social interaction. Our method finds applications in online social
media services ranging from recommendation to visual link summarization.
Comment: 10 pages, 8 figures, Proceedings of the 2014 ACM Conference on Web
Science (WebSci'14)