Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval
While there have been many proposals on making AI algorithms explainable, few
have attempted to evaluate the impact of AI-generated explanations on human
performance in conducting human-AI collaborative tasks. To bridge the gap, we
propose a Twenty-Questions style collaborative image retrieval game,
Explanation-assisted Guess Which (ExAG), as a method of evaluating the efficacy
of explanations (visual evidence or textual justification) in the context of
Visual Question Answering (VQA). In our proposed ExAG, a human user needs to
guess a secret image picked by the VQA agent by asking it natural language
questions. We show that, overall, when the AI explains its answers, users
succeed more often in guessing the secret image correctly. Notably, a few
correct explanations can readily improve human performance when VQA answers are
mostly incorrect as compared to no-explanation games. Furthermore, we show
that while explanations rated as "helpful" significantly improve human
performance, "incorrect" and "unhelpful" explanations can degrade performance
as compared to no-explanation games. Our experiments, therefore, demonstrate
that ExAG is an effective means to evaluate the efficacy of AI-generated
explanations on a human-AI collaborative task.
Comment: 2019 AAAI Conference on Human Computation and Crowdsourcing
Towards Question-based Recommender Systems
Conversational and question-based recommender systems have gained increasing
attention in recent years, with users enabled to converse with the system and
better control recommendations. Nevertheless, research in the field is still
limited, compared to traditional recommender systems. In this work, we propose
a novel question-based recommendation method, Qrec, to assist users in finding
items interactively by answering automatically constructed and algorithmically
chosen questions. Previous conversational recommender systems ask users to
express their preferences over items or item facets. Our model, instead, asks
users to express their preferences over descriptive item features. The model is
first trained offline by a novel matrix factorization algorithm, and then
iteratively updates the user and item latent factors online by a closed-form
solution based on the user answers. Meanwhile, our model infers the underlying
user belief and preferences over items to learn an optimal question-asking
strategy by using Generalized Binary Search, so as to ask a sequence of
questions to the user. Our experimental results demonstrate that our proposed
matrix factorization model outperforms the traditional Probabilistic Matrix
Factorization model. Further, our proposed Qrec model can greatly improve the
performance of state-of-the-art baselines, and it is also effective in the case
of cold-start user and item recommendations.
Comment: accepted by SIGIR 202
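As a concrete illustration of the question-selection loop the abstract describes, here is a minimal numpy sketch of Generalized Binary Search over a toy item belief. The feature matrix, noise level `eps`, and the Bayesian belief update are illustrative assumptions for the sketch, not Qrec's actual model:

```python
import numpy as np

def choose_question(belief, feature_matrix):
    """Pick the feature whose 'yes' probability mass under the current
    belief is closest to 1/2 (the Generalized Binary Search criterion)."""
    yes_mass = feature_matrix.T @ belief          # P(answer = yes) per feature
    return int(np.argmin(np.abs(yes_mass - 0.5)))

def update_belief(belief, feature_matrix, q, answer, eps=0.05):
    """Bayesian update of the item belief given a (possibly noisy) answer."""
    match = feature_matrix[:, q] == answer        # items consistent with answer
    likelihood = np.where(match, 1.0 - eps, eps)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Toy run: 4 items, 3 binary descriptive features; the target item is 2.
F = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 0]])
belief = np.full(4, 0.25)
for _ in range(3):
    q = choose_question(belief, F)
    belief = update_belief(belief, F, q, F[2, q])  # user answers truthfully
print(int(np.argmax(belief)))                      # most likely item
```

Each question roughly halves the remaining belief mass, which is why a short sequence of questions suffices to identify the target item.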
IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
This paper provides a unified account of two schools of thinking in
information retrieval modelling: the generative retrieval focusing on
predicting relevant documents given a query, and the discriminative retrieval
focusing on predicting relevancy given a query-document pair. We propose a
game-theoretic minimax framework to iteratively optimise both models. On one hand, the
discriminative model, aiming to mine signals from labelled and unlabelled data,
provides guidance to train the generative model towards fitting the underlying
relevance distribution over documents given the query. On the other hand, the
generative model, acting as an attacker to the current discriminative model,
generates difficult examples for the discriminative model in an adversarial way
by minimising its discrimination objective. With the competition between these
two models, we show that the unified framework takes advantage of both schools
of thinking: (i) the generative model learns to fit the relevance distribution
over documents via the signals from the discriminative model, and (ii) the
discriminative model is able to exploit the unlabelled data selected by the
generative model to achieve a better estimation for document ranking. Our
experimental results demonstrate significant performance gains of as much as
23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of
applications including web search, item recommendation, and question answering.
Comment: 12 pages; appendix added
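The minimax dynamic can be sketched on a toy single-query problem. Note this is a deliberate simplification: it uses one score parameter per document and an expectation-form policy gradient, whereas IRGAN itself uses neural scoring functions and sampled REINFORCE updates:

```python
import numpy as np

n_docs = 6
relevant = np.array([0, 1])        # ground-truth relevant docs for one query
g_scores = np.zeros(n_docs)        # generator parameters (one score per doc)
d_scores = np.zeros(n_docs)        # discriminator parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr_d, lr_g = 0.5, 0.1
for _ in range(300):
    p_g = softmax(g_scores)                       # generator's doc distribution
    # Discriminator ascent: push true docs up, generator-weighted docs down.
    d_scores[relevant] += lr_d * (1.0 - sigmoid(d_scores[relevant]))
    d_scores -= lr_d * p_g * sigmoid(d_scores)
    # Generator policy gradient (expectation form): reward docs the
    # discriminator still scores highly, i.e. docs that fool it.
    reward = np.log(1.0 + np.exp(d_scores))
    baseline = p_g @ reward
    g_scores += lr_g * p_g * (reward - baseline)

print(np.argsort(-g_scores)[:2])   # generator's top-ranked docs
```

Because the discriminator keeps raising the scores of truly relevant documents, the generator's reward gradient steadily shifts its probability mass toward them, which is the "fitting the underlying relevance distribution" behaviour the abstract describes.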
Student questioning : a componential analysis
This article reviews the literature on student questioning, organized through a modified version of Dillon's (1988a, 1990) componential model of questioning. Special attention is given to the properties of assumptions, questions, and answers. Each of these main elements is the result of certain actions of the questioner, which are described. Within this framework a variety of aspects of questioning are highlighted. One focus of the article is individual differences in question asking. The complex interactions between students' personal characteristics, social factors, and questioning are examined. In addition, a number of important but neglected topics for research are identified. Together, the views that are presented should deepen our understanding of student questioning.
Adversarial Sampling and Training for Semi-Supervised Information Retrieval
Ad-hoc retrieval models with implicit feedback often have problems, e.g., the
imbalanced classes in the data set. Too few clicked documents may hurt the
generalization ability of the models, whereas too many non-clicked documents
may harm the effectiveness of the models and the efficiency of training. In addition,
recent neural network-based models are vulnerable to adversarial examples due
to their linear nature. To solve these problems simultaneously, we
propose an adversarial sampling and training framework to learn ad-hoc
retrieval models with implicit feedback. Our key idea is (i) to augment clicked
examples by adversarial training for better generalization and (ii) to obtain
highly informative non-clicked examples by adversarial sampling and training.
Experiments are performed on benchmark data sets for common ad-hoc retrieval
tasks such as Web search, item recommendation, and question answering.
Experimental results indicate that the proposed approaches significantly
outperform strong baselines especially for high-ranked documents, and they
outperform IRGAN in NDCG@5 using only 5% of labeled data for the Web search
task.
Comment: Published in WWW 201
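The two key ideas can be sketched on a linear scoring model. The FGSM-style perturbation and top-k negative selection below are generic stand-ins chosen for illustration, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=5)                      # toy linear ad-hoc ranking model

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fgsm_augment(x, w, epsilon=0.1):
    """Fast-gradient-sign perturbation of a clicked example: move the
    features in the direction that most increases the loss, so training
    on the perturbed copy improves generalization."""
    # loss = -log sigmoid(w @ x); d loss / d x = -(1 - sigmoid(w @ x)) * w
    grad = -(1.0 - sigmoid(w @ x)) * w
    return x + epsilon * np.sign(grad)

def sample_hard_negatives(X_nonclicked, w, k=2):
    """Adversarial sampling: pick the non-clicked documents the current
    model scores highest -- the most informative negatives to train on."""
    scores = X_nonclicked @ w
    return X_nonclicked[np.argsort(-scores)[:k]]

x_clicked = rng.normal(size=5)              # one clicked document's features
X_neg = rng.normal(size=(20, 5))            # pool of non-clicked documents
x_adv = fgsm_augment(x_clicked, w)
hard_negs = sample_hard_negatives(X_neg, w)
```

Training then mixes `x_adv` in with the clicked examples and uses `hard_negs` instead of uniformly sampled non-clicked documents, addressing both the class imbalance and the adversarial vulnerability at once.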
Predictive User Modeling with Actionable Attributes
Different machine learning techniques have been proposed and used for
modeling individual and group user needs, interests and preferences. In the
traditional predictive modeling, instances are described by observable
variables, called attributes. The goal is to learn a model for predicting the
target variable for unseen instances. For example, for marketing purposes a
company may consider profiling a new user based on her observed web browsing
behavior, referral keywords, or other relevant information. In many real-world
applications the values of some attributes are not only observable, but can be
actively decided by a decision maker. Furthermore, in some such applications
the decision maker is interested not only in generating accurate predictions,
but in maximizing the probability of the desired outcome. For example, a direct
marketing manager can choose which type of special offer to send to a client
(actionable attribute), hoping that the right choice will result in a positive
response with a higher probability. We study how to learn to choose the value
of an actionable attribute in order to maximize the probability of a desired
outcome in predictive modeling. We emphasize that not all instances are equally
sensitive to changes in actions. Accurate choice of an action is critical for
those instances, which are on the borderline (e.g. users who do not have a
strong opinion one way or the other). We formulate three supervised learning
approaches for learning to select the value of an actionable attribute at an
instance level. We also introduce a focused training procedure that puts more
emphasis on the situations where varying the action is most likely to take
effect. The proof-of-concept experimental validation on two real-world case
studies in the web analytics and e-learning domains highlights the potential of
the proposed approaches.
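One way to make the action-selection idea concrete: estimate P(y = 1 | x, a) from historical data, then for a new instance pick the action with the highest predicted success probability. The binned frequency model and the synthetic offer/outcome data below are deliberately crude illustrative stand-ins, not the paper's supervised approaches:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: one observable user feature x, a binary actionable
# attribute a (e.g. which special offer is sent), and the observed outcome y.
n = 2000
x = rng.normal(size=n)
a = rng.integers(0, 2, size=n)
# Outcome depends on the interaction: offer 1 works for x > 0, offer 0 otherwise.
p = np.where((x > 0) == (a == 1), 0.8, 0.3)
y = rng.random(n) < p

def fit_outcome_model(x, a, y, bins=10):
    """Crude P(y=1 | x, a) estimate: bin x, average y per (bin, action)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    table = np.zeros((bins, 2))
    for b in range(bins):
        for act in (0, 1):
            mask = (idx == b) & (a == act)
            table[b, act] = y[mask].mean() if mask.any() else 0.5
    return edges, table

def choose_action(x_new, edges, table):
    """Pick the action that maximizes the predicted outcome probability."""
    b = np.clip(np.searchsorted(edges, x_new) - 1, 0, table.shape[0] - 1)
    return int(np.argmax(table[b]))

edges, table = fit_outcome_model(x, a, y)
print(choose_action(1.5, edges, table), choose_action(-1.5, edges, table))
```

The borderline instances the abstract mentions are the bins where the two columns of `table` are nearly equal: there the choice of action matters most and deserves the extra training emphasis.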
RiPLE: Recommendation in Peer-Learning Environments Based on Knowledge Gaps and Interests
Various forms of Peer-Learning Environments are increasingly being used in
post-secondary education, often to help build repositories of student generated
learning objects. However, large classes can result in an extensive repository,
which can make it more challenging for students to search for suitable objects
that both reflect their interests and address their knowledge gaps. Recommender
Systems for Technology Enhanced Learning (RecSysTEL) offer a potential solution
to this problem by providing sophisticated filtering techniques to help
students to find the resources that they need in a timely manner. Here, a new
RecSysTEL for Recommendation in Peer-Learning Environments (RiPLE) is
presented. The approach uses a collaborative filtering algorithm based upon
matrix factorization to create personalized recommendations for individual
students that address their interests and their current knowledge gaps. The
approach is validated using both synthetic and real data sets. The results are
promising, indicating RiPLE is able to provide sensible personalized
recommendations for both regular and cold-start users under reasonable
assumptions about parameters and user behavior.
Comment: 25 pages, 7 figures. The paper is accepted for publication in the
Journal of Educational Data Mining.
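A minimal sketch of the kind of pipeline described: matrix factorization on the observed student-question ratings, with predicted scores boosted for topics where a student's estimated mastery is low. The ratings, topic tags, mastery vector, and boost formula are all illustrative assumptions, not RiPLE's actual design:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy student-question interest ratings (0 = unobserved), on a 1-5 scale.
R = np.array([
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 0],
    [1, 1, 0, 5, 0],
    [0, 1, 5, 4, 0],
], dtype=float)
obs = R > 0
k, lr, reg = 2, 0.02, 0.05
U = 0.1 * rng.normal(size=(R.shape[0], k))   # student latent factors
V = 0.1 * rng.normal(size=(R.shape[1], k))   # question latent factors

# Plain SGD matrix factorization on the observed entries only.
for _ in range(300):
    for u, i in zip(*np.nonzero(obs)):
        err = R[u, i] - U[u] @ V[i]
        gu = err * V[i] - reg * U[u]
        V[i] += lr * (err * U[u] - reg * V[i])
        U[u] += lr * gu

item_topic = np.array([0, 0, 1, 1, 1])       # topic tag per question

def recommend(u, topic_mastery, n=2):
    """Rank unseen questions by predicted interest, boosted where the
    student's estimated topic mastery is low (a knowledge gap)."""
    scores = (U[u] @ V.T) * (2.0 - topic_mastery[item_topic])
    scores[obs[u]] = -np.inf                 # never re-recommend seen items
    return np.argsort(scores)[::-1][:n]

mastery = np.array([0.9, 0.2])               # strong on topic 0, weak on topic 1
recs = recommend(0, mastery)
```

Combining an interest term (from the factorization) with a gap term (from the mastery estimate) is what lets the recommendations address both criteria at once.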
PeerWise - The Marmite of Veterinary Student Learning
PeerWise is a free online student-centred collaborative learning tool with which students anonymously
author, answer, and evaluate multiple choice questions (MCQs). Features such as commenting on questions,
rating questions and comments, and appearing on leaderboards, can encourage healthy competition, engage
students in reflection and debate, and enhance their communication skills. PeerWise has been used in diverse
subject areas but never previously in Veterinary Medicine. The Veterinary undergraduates at the University of
Glasgow are a distinct cohort; academically gifted and often highly strategic in their learning due to time
pressures and volume of course material. In 2010-11 we introduced PeerWise into 1st year Veterinary
Biomolecular Sciences in the Glasgow Bachelor of Veterinary Medicine and Surgery programme. To scaffold
PeerWise use, a short interactive session introduced students to the tool and to the basic principles of good MCQ
authorship. Students were asked to author four and answer forty MCQs throughout the academic year.
Participation was encouraged by an allocation of up to 5% of the final year mark and inclusion of student-authored
questions in the first summative examination. Our analysis focuses on engagement of the class with the
tool and their perceptions of its use. All 141 students in the class engaged with PeerWise and the majority
contributed beyond that which was stipulated. Student engagement with PeerWise prior to a summative exam
was positively correlated to exam score, yielding a relationship that was highly significant (p<0.001). Student
perceptions of PeerWise were predominantly positive with explicit recognition of its value as a learning and
revision tool, and more than two thirds of the class in agreement that question authoring and answering
reinforced their learning. There was clear polarisation of views, however, and those students who did not like
PeerWise were vociferous in their dislike, the biggest criticism being a lack of moderation by staff.
Finding your way into an open online learning community
Making educational materials freely available on the web is not only a noble enterprise, but also fits the call to help people become lifelong learners; a call which gets louder every day. The world is rapidly changing, requiring us to continuously update our knowledge and skills. A problem with this approach to lifelong learning is that the materials made available are often both incomplete and unsuitable for independent learning in an online setting. The OpenER (Open Educational Resources) project at the Open Universiteit Nederland makes more than 20 short courses, originally developed for independent study, freely available from the website www.opener.ou.nl. For our research we start from an envisioned online learning environment now under development. We use backcasting to select research topics that form steps from the current to the ultimate situation. The two experiments we report on here are an extension to standard forum software and the use of student notes to annotate learning materials: two small steps towards our ultimate open learning environment.