169,359 research outputs found
Selecting a multi-label classification method for an interactive system
Interactive classification-based systems engage users to coach learning algorithms so that they take individual preferences into account. However, most recent interactive systems limit users to single-label classification, which may not be expressive enough for some organization tasks, such as film classification, where a multi-label scheme is required. The objective of this paper is to compare the behavior of 12 multi-label classification methods in an interactive framework where "good" predictions must be produced in a very short time from a very small set of multi-label training examples. Experiments highlight important performance differences across 4 complementary evaluation measures (Log-Loss, Ranking-Loss, Learning and Prediction Times). The best results are obtained by Multi-label k-Nearest Neighbours (ML-kNN), Ensemble of Classifier Chains (ECC) and Ensemble of Binary Relevance (EBR).
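The Binary Relevance family benchmarked above can be sketched in a few lines. This is an illustrative toy, not the paper's experimental setup: it combines the Binary Relevance decomposition (one independent vote per label) with a k-NN base learner; the Hamming distance and the data are assumptions made for the example.

```python
# Hedged sketch of Binary Relevance with a k-NN base learner: each label is
# predicted by an independent majority vote among the k nearest neighbours.
# Distance metric and toy data are illustrative, not the paper's setup.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def br_knn_predict(train_x, train_y, query, k=3):
    """Predict a multi-label bit vector for `query` via per-label k-NN voting."""
    idx = sorted(range(len(train_x)),
                 key=lambda i: hamming(train_x[i], query))[:k]
    n_labels = len(train_y[0])
    # Binary Relevance: every label is decided independently of the others.
    return [int(sum(train_y[i][j] for i in idx) > k / 2)
            for j in range(n_labels)]

# Toy film-style data: binary feature vectors mapped to label sets.
X = [[1, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1]]
Y = [[1, 0], [1, 0], [0, 1], [0, 1]]
print(br_knn_predict(X, Y, [1, 0, 1], k=3))  # → [1, 0]
```

The per-label independence is what makes BR (and its ensembles such as EBR) cheap enough for the interactive, small-training-set regime the paper targets.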
An Active Learning Algorithm for Ranking from Pairwise Preferences with an Almost Optimal Query Complexity
We study the problem of learning to rank from pairwise preferences, and solve
a long-standing open problem that has led to development of many heuristics but
no provable results for our particular problem. Given a set $V$ of $n$
elements, we wish to linearly order them given pairwise preference labels. A
pairwise preference label is obtained as a response, typically from a human, to
the question "which is preferred, $u$ or $v$?" for a pair $u,v\in V$, out of
the ${n\choose 2}$ possibilities. We present an active learning algorithm for
this problem, with query bounds significantly beating general (non active)
bounds for the same error guarantee, while almost achieving the information
theoretical lower bound. Our main construct is a decomposition of the input
s.t. (i) each block incurs high loss at optimum, and (ii) the optimal solution
respecting the decomposition is not much worse than the true opt. The
decomposition is done by adapting a recent result by Kenyon and Schudy for a
related combinatorial optimization problem to the query efficient setting. We
thus settle an open problem posed by learning-to-rank theoreticians and
practitioners: What is a provably correct way to sample preference labels? To
further show the power and practicality of our solution, we show how to use it
in concert with an SVM relaxation.
Comment: Fixed a tiny error in the statement of Theorem 3.1
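The query-complexity gap the abstract refers to can be illustrated with a much simpler baseline than the paper's algorithm. The sketch below is not the authors' decomposition-based method (which adapts Kenyon and Schudy); it only demonstrates the underlying intuition that an adaptive, comparison-sort-style strategy asks far fewer than ${n\choose 2}$ preference questions when the responses happen to be transitive. The oracle class and its names are assumptions for the example.

```python
import functools

# Hedged sketch: adaptive querying via comparison sort, NOT the paper's
# algorithm. It shows only that active (adaptive) querying can need far
# fewer than n-choose-2 labels when preferences are transitive.

class PreferenceOracle:
    """Answers 'which is preferred, u or v?' and counts the queries asked."""
    def __init__(self, true_order):
        self.rank = {v: i for i, v in enumerate(true_order)}
        self.queries = 0

    def prefer(self, u, v):
        self.queries += 1
        return -1 if self.rank[u] < self.rank[v] else 1

def active_rank(items, oracle):
    # sorted() issues O(n log n) adaptive pairwise queries.
    return sorted(items, key=functools.cmp_to_key(oracle.prefer))

oracle = PreferenceOracle(list(range(64)))
ranking = active_rank(list(range(64))[::-1], oracle)
assert ranking == list(range(64))
print(oracle.queries, "of", 64 * 63 // 2, "possible pairs queried")
```

The hard part the paper actually solves is the non-transitive (noisy) case, where naive comparison sorting breaks down and the loss-aware decomposition is needed.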
Discovering a taste for the unusual: exceptional models for preference mining
Exceptional preferences mining (EPM) is a crossover between two subfields of data mining: local pattern mining and preference learning. EPM can be seen as a local pattern mining task that finds subsets of observations where some preference relations between labels significantly deviate from the norm. It is a variant of subgroup discovery, with rankings of labels as the target concept. We employ several quality measures that highlight subgroups featuring exceptional preferences, where the focus of what constitutes 'exceptional' varies with the quality measure: two measures look for exceptional overall ranking behavior, one measure indicates whether a particular label stands out from the rest, and a fourth measure highlights subgroups with unusual pairwise label ranking behavior. We explore a few datasets and compare with existing techniques. The results confirm that the new task EPM can deliver interesting knowledge. This research has received funding from the ECSEL Joint Undertaking, the framework programme for research and innovation Horizon 2020 (2014-2020) under Grant Agreement Number 662189-MANTIS-2014-1
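A quality measure in the pairwise style mentioned above can be sketched as follows. This is an illustrative simplification, not one of the paper's actual measures: it scores a subgroup by the total deviation of its pairwise label-preference probabilities from those of the whole dataset; all function names and the toy data are assumptions.

```python
from itertools import combinations

# Hedged sketch of an EPM-style pairwise quality measure (illustrative, not
# the paper's definition): a subgroup is "exceptional" when its pairwise
# label-preference probabilities deviate strongly from the whole dataset's.

def pairwise_pref(rankings, labels):
    """P(label a ranked above label b) for every label pair (lower rank wins)."""
    return {(a, b): sum(r[a] < r[b] for r in rankings) / len(rankings)
            for a, b in combinations(labels, 2)}

def epm_score(all_rankings, subgroup, labels):
    whole = pairwise_pref(all_rankings, labels)
    sub = pairwise_pref(subgroup, labels)
    # L1 deviation over all label pairs: larger = more exceptional preferences.
    return sum(abs(whole[p] - sub[p]) for p in whole)

labels = ["L1", "L2", "L3"]
data = [{"L1": 0, "L2": 1, "L3": 2}] * 8 + [{"L1": 2, "L2": 1, "L3": 0}] * 2
subgroup = data[8:]  # the two observations with completely inverted rankings
print(round(epm_score(data, subgroup, labels), 2))  # → 2.4
```

A subgroup-discovery search would then enumerate candidate subgroup descriptions and keep the ones maximizing such a score.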
Learning Personalized User Preference from Cold Start in Multi-turn Conversations
This paper presents a novel teachable conversation interaction system that is
capable of learning users' preferences from cold start by gradually adapting to
personal preferences. In particular, the TAI system is able to automatically
identify and label user preference in live interactions, manage dialogue flows
for interactive teaching sessions, and reuse learned preference for preference
elicitation. We develop the TAI system by leveraging BERT encoder models to
encode both dialogue and relevant context information, and build action
prediction (AP), argument filling (AF) and named entity recognition (NER)
models to understand the teaching session. We adopt a seeker-provider
interaction loop mechanism to generate diverse dialogues from cold-start. TAI
is capable of learning user preferences, achieving 0.9122 turn-level
accuracy on an out-of-sample dataset, and has been successfully adopted in
production.
Comment: preference, personalization, cold-start, dialogue, LLM, embedding
Induction of Ordinal Decision Trees
This paper focuses on the problem of monotone decision trees from the point of view of the multicriteria decision aid methodology (MCDA). By taking into account the preferences of the decision maker, an attempt is made to bring closer similar research within machine learning and MCDA. The paper addresses the question of how to label the leaves of a tree in a way that guarantees the monotonicity of the resulting tree. Two approaches are proposed for that purpose - dynamic and static labeling - which are also compared experimentally. The paper further considers the problem of splitting criteria in the context of monotone decision trees. Two criteria from the literature are compared experimentally - the entropy criterion and the number of conflicts criterion - in an attempt to find out which one better fits the specifics of monotone problems and which one better handles monotonicity noise.
monotone decision trees; noise; multicriteria decision aid; multicriteria sorting; ordinal classification
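The leaf-labeling problem described above can be illustrated with a minimal post-hoc repair. This sketch is an assumption-laden simplification, not the paper's dynamic or static algorithms: each leaf is represented by the minimal corner of its region, and its label is raised to the maximum label of any leaf it dominates, which guarantees that a better input region never receives a worse class.

```python
# Hedged sketch of monotone leaf relabeling (a simplification, not the
# paper's dynamic/static labeling algorithms): raise each leaf's label to
# the maximum label among the leaves its region dominates.

def monotone_relabel(leaves):
    """leaves: list of (region_min_corner, label). Returns monotone labeling."""
    def dominates(a, b):  # a >= b componentwise
        return all(x >= y for x, y in zip(a, b))

    fixed = []
    for corner, label in leaves:
        # Floor the label by the best label seen on any dominated leaf.
        floor = max((l for c, l in leaves if dominates(corner, c)),
                    default=label)
        fixed.append((corner, max(label, floor)))
    return fixed

# Leaves (1,0) and (1,1) sit above (0,0) but carry a lower class label:
# a monotonicity violation that the relabeling repairs.
leaves = [((0, 0), 2), ((1, 0), 1), ((1, 1), 1)]
print(monotone_relabel(leaves))  # → [((0, 0), 2), ((1, 0), 2), ((1, 1), 2)]
```

The trade-off the paper studies is precisely where such corrections happen: eagerly during tree growth (dynamic) or once after the tree is built (static).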
Learning preferences for large scale multi-label problems
Although the majority of machine learning approaches aim to solve binary classification problems, several real-world applications require specialized algorithms able to handle many different classes, as in the case of single-label multi-class and multi-label classification problems. The Label Ranking framework is a generalization of the above-mentioned settings, which aims to map instances from the input space to a total order over the set of possible labels. However, these algorithms are generally more complex than binary ones, and their application to large-scale datasets can be intractable. The main contribution of this work is the proposal of a novel general online preference-based label ranking framework. The proposed framework is able to solve binary, multi-class, multi-label and ranking problems. A comparison with other baselines has been performed, showing effectiveness and efficiency on a real-world large-scale multi-label task.
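An online preference-based label ranker in the spirit of this framework can be sketched with per-label perceptron scores. This is an illustrative baseline under stated assumptions, not the paper's actual algorithm: the model keeps one weight vector per label, updates only the two labels involved in each observed pairwise preference, and predicts a ranking by sorting labels on their scores.

```python
# Hedged sketch of online preference-based label ranking (illustrative, not
# the paper's algorithm): one perceptron score per label, updated from
# pairwise preferences, with cheap per-example updates suited to large scale.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_online(stream, n_features, n_labels, lr=1.0):
    W = [[0.0] * n_features for _ in range(n_labels)]
    for x, preferred, over in stream:  # "preferred should beat over on x"
        if dot(W[preferred], x) <= dot(W[over], x):
            # Perceptron-style correction touching only the two labels involved.
            W[preferred] = [w + lr * xi for w, xi in zip(W[preferred], x)]
            W[over] = [w - lr * xi for w, xi in zip(W[over], x)]
    return W

def rank_labels(W, x):
    """Predicted total order: labels sorted by decreasing score on x."""
    return sorted(range(len(W)), key=lambda j: -dot(W[j], x))

# Stream of (features, preferred_label, less_preferred_label) triples.
stream = [([1.0, 0.0], 0, 1), ([0.0, 1.0], 1, 0)] * 5
W = train_online(stream, n_features=2, n_labels=2)
print(rank_labels(W, [1.0, 0.0]), rank_labels(W, [0.0, 1.0]))  # → [0, 1] [1, 0]
```

Because each update costs O(features) and touches only two labels, this style of learner scales to the large multi-label settings the abstract targets; binary and multi-class problems fall out as special cases of the pairwise preference encoding.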