Types of cost in inductive concept learning
Inductive concept learning is the task of learning to assign cases to a discrete set of classes. In real-world applications of concept learning, there are many different types of cost involved. The majority of the machine learning literature ignores all types of cost (unless accuracy is interpreted as a type of cost measure). A few papers have investigated the cost of misclassification errors. Very few papers have examined the many other types of cost. In this paper, we attempt to create a taxonomy of the different types of cost that are involved in inductive concept learning. This taxonomy may help to organize the literature on cost-sensitive learning. We hope that it will inspire researchers to investigate all types of cost in inductive concept learning in more depth.
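The abstract's central cost type, misclassification cost, is commonly handled by choosing the class that minimises expected cost rather than the most probable class. A minimal sketch of that decision rule follows; the function name, probabilities, and cost values are illustrative, not from the paper.

```python
# Hypothetical sketch: cost-sensitive class assignment using a
# misclassification-cost matrix (all names and numbers are illustrative).

def min_expected_cost_class(probs, cost_matrix):
    """Pick the class whose expected misclassification cost is lowest.

    probs[i]          -- estimated probability that the true class is i
    cost_matrix[i][j] -- cost of predicting class j when the true class is i
    """
    n_classes = len(cost_matrix[0])
    expected = [
        sum(probs[i] * cost_matrix[i][j] for i in range(len(probs)))
        for j in range(n_classes)
    ]
    return min(range(n_classes), key=lambda j: expected[j])

# Example: false negatives (true=1, predicted=0) cost 10x false positives.
probs = [0.7, 0.3]                 # P(class 0), P(class 1)
costs = [[0, 1],                   # true class 0
         [10, 0]]                  # true class 1
print(min_expected_cost_class(probs, costs))  # prints 1, despite class 0 being more probable
```

With asymmetric costs the cheaper prediction can override the more probable one, which is exactly why accuracy alone is an incomplete cost measure.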
Modeling Epistemological Principles for Bias Mitigation in AI Systems: An Illustration in Hiring Decisions
Artificial Intelligence (AI) has been used extensively in automatic decision
making in a broad variety of scenarios, ranging from credit ratings for loans
to recommendations of movies. Traditional design guidelines for AI models focus
essentially on accuracy maximization, but recent work has shown that
economically irrational and socially unacceptable scenarios of discrimination
and unfairness are likely to arise unless these issues are explicitly
addressed. This undesirable behavior has several possible sources, such as
biased datasets used for training that may not be detected in black-box models.
After pointing out connections between such bias of AI and the problem of
induction, we focus on Popper's contributions after Hume's, which offer a
logical theory of preferences. An AI model can be preferred over others on
purely rational grounds after one or more attempts at refutation based on
accuracy and fairness. Inspired by such epistemological principles, this paper
proposes a structured approach to mitigate discrimination and unfairness caused
by bias in AI systems. In the proposed computational framework, models are
selected and enhanced after attempts at refutation. To illustrate our
discussion, we focus on hiring decision scenarios where an AI system filters
which job applicants should go on to the interview phase.
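The selection-after-refutation idea can be caricatured in a few lines: a candidate model is preferred only if it survives refutation attempts on both accuracy and fairness. Everything below, the threshold values, the disparity metric, and the model names, is a hypothetical placeholder, not the paper's actual framework.

```python
# Illustrative sketch of refutation-style model selection: a candidate
# model is retained only if it survives "attempts at refutation" on both
# accuracy and fairness. Thresholds and metrics are made-up placeholders.

def survives_refutation(accuracy, disparity,
                        min_accuracy=0.8, max_disparity=0.1):
    """A model is refuted if it fails either test."""
    if accuracy < min_accuracy:
        return False          # refuted on accuracy
    if disparity > max_disparity:
        return False          # refuted on fairness (e.g. selection-rate gap)
    return True

candidates = [
    {"name": "model_a", "accuracy": 0.91, "disparity": 0.25},
    {"name": "model_b", "accuracy": 0.86, "disparity": 0.04},
    {"name": "model_c", "accuracy": 0.72, "disparity": 0.02},
]
preferred = [m["name"] for m in candidates
             if survives_refutation(m["accuracy"], m["disparity"])]
print(preferred)  # ['model_b']
```

Note the Popperian flavour: models are never "confirmed", only retained until some refutation attempt succeeds.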
Local Rule-Based Explanations of Black Box Decision Systems
The recent years have witnessed the rise of accurate but obscure decision
systems which hide the logic of their internal decision processes to the users.
The lack of explanations for the decisions of black box systems is a key
ethical issue, and a limitation to the adoption of machine learning components
in socially sensitive and safety-critical contexts. Therefore, we need
explanations that reveal the reasons why a predictor takes a certain decision.
In this paper we focus on the problem of black box outcome explanation, i.e.,
explaining the reasons of the decision taken on a specific instance. We propose
LORE, an agnostic method able to provide interpretable and faithful
explanations. LORE first learns a local interpretable predictor on a synthetic
neighborhood generated by a genetic algorithm. Then it derives from the logic
of the local interpretable predictor a meaningful explanation consisting of: a
decision rule, which explains the reasons of the decision; and a set of
counterfactual rules, suggesting the changes in the instance's features that
lead to a different outcome. Extensive experiments show that LORE outperforms
existing methods and baselines both in the quality of explanations and in the
accuracy with which it mimics the black box.
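The pipeline the abstract describes, build a synthetic neighborhood around the instance, label it with the black box, fit a simple local model, read off a rule, can be sketched minimally as below. This is not LORE itself: it uses random perturbation instead of the paper's genetic algorithm, a one-feature "stump" instead of a full local predictor, and invented function and feature names.

```python
import random

# Purely illustrative sketch of local, rule-based outcome explanation.
# (Not LORE: random perturbation replaces the genetic algorithm, and the
# local predictor is a single-feature threshold rule.)

random.seed(0)

def black_box(x):
    """Stand-in opaque classifier: approve if income - debt > 20."""
    return int(x["income"] - x["debt"] > 20)

def neighborhood(instance, n=500, scale=15.0):
    """Synthetic neighbors around the instance, labeled by the black box."""
    samples = []
    for _ in range(n):
        z = {k: v + random.uniform(-scale, scale) for k, v in instance.items()}
        samples.append((z, black_box(z)))
    return samples

def best_stump(samples, feature):
    """Find the threshold on one feature that best separates the labels."""
    best_acc, best_thr = 0.0, None
    for t in sorted(s[0][feature] for s in samples):
        acc = sum((s[0][feature] > t) == s[1] for s in samples) / len(samples)
        acc = max(acc, 1 - acc)          # allow either rule direction
        if acc > best_acc:
            best_acc, best_thr = acc, t
    return best_acc, best_thr

x = {"income": 50.0, "debt": 35.0}
acc, thr = best_stump(neighborhood(x), "income")
print(f"local rule: income > {thr:.1f} (fidelity {acc:.2f})")
```

A counterfactual rule would then be read off the same local model by flipping the rule's condition, e.g. what the smallest change to `income` is that crosses the threshold.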
Induction of First-Order Decision Lists: Results on Learning the Past Tense of English Verbs
This paper presents a method for inducing logic programs from examples that
learns a new class of concepts called first-order decision lists, defined as
ordered lists of clauses each ending in a cut. The method, called FOIDL, is
based on FOIL (Quinlan, 1990) but employs intensional background knowledge and
avoids the need for explicit negative examples. It is particularly useful for
problems that involve rules with specific exceptions, such as learning the
past-tense of English verbs, a task widely studied in the context of the
symbolic/connectionist debate. FOIDL is able to learn concise, accurate
programs for this problem from significantly fewer examples than previous
methods (both connectionist and symbolic).
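The "ordered lists of clauses each ending in a cut" have first-match semantics: specific exceptions are tried before the general default rule. A hand-written propositional analogue for the past-tense task is sketched below; these rules are our own illustration, not output of FOIDL.

```python
# Hand-written illustration (not FOIDL output) of first-match decision-list
# semantics, the propositional analogue of ordered clauses ending in a cut:
# specific exceptions come first, the general default rule last.

past_tense_rules = [
    (lambda w: w == "go",        lambda w: "went"),     # irregular exception
    (lambda w: w == "sing",      lambda w: "sang"),     # irregular exception
    (lambda w: w.endswith("e"),  lambda w: w + "d"),    # e.g. bake -> baked
    (lambda w: True,             lambda w: w + "ed"),   # default rule
]

def past_tense(word):
    """Apply the first rule whose condition matches (the cut: stop there)."""
    for condition, transform in past_tense_rules:
        if condition(word):
            return transform(word)

for w in ["go", "bake", "walk", "sing"]:
    print(w, "->", past_tense(w))
# go -> went, bake -> baked, walk -> walked, sing -> sang
```

The ordering is what lets a compact program handle "rules with specific exceptions": reversing the list would make the default rule shadow every exception.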
A survey of cost-sensitive decision tree induction algorithms
The past decade has seen significant interest in the problem of inducing decision trees that take account of the costs of misclassification and the costs of acquiring the features used for decision making. This survey identifies over 50 algorithms, including approaches that are direct adaptations of accuracy-based methods, use genetic algorithms, use anytime methods, and utilize boosting and bagging. The survey brings together these different studies and novel approaches to cost-sensitive decision tree learning, provides a useful taxonomy and a historical timeline of how the field has developed, and should provide a useful reference point for future research in this field.
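One family of "direct adaptations of accuracy-based methods" that such surveys cover replaces the plain information-gain split criterion with gain penalised by the feature's acquisition cost. The sketch below is illustrative only: the Gain/Cost ratio is one simple variant of that idea, and the dataset, feature names, and costs are made up.

```python
import math

# Illustrative sketch of a cost-sensitive split criterion: information
# gain divided by feature acquisition cost (a simple Gain/Cost variant;
# data, feature names, and cost values are invented for the example).

def entropy(labels):
    total = len(labels)
    ent = 0.0
    for c in set(labels):
        p = labels.count(c) / total
        ent -= p * math.log2(p)
    return ent

def gain(rows, labels, feature):
    """Information gain of splitting on a binary feature."""
    base = entropy(labels)
    for value in (0, 1):
        subset = [l for r, l in zip(rows, labels) if r[feature] == value]
        if subset:
            base -= len(subset) / len(labels) * entropy(subset)
    return base

def best_cost_sensitive_split(rows, labels, costs):
    """Pick the feature maximising gain / acquisition cost."""
    return max(costs, key=lambda f: gain(rows, labels, f) / costs[f])

rows = [{"cheap_test": 0, "pricey_test": 0},
        {"cheap_test": 0, "pricey_test": 1},
        {"cheap_test": 1, "pricey_test": 0},
        {"cheap_test": 1, "pricey_test": 1}]
labels = [0, 0, 1, 1]                    # cheap_test separates the classes
costs = {"cheap_test": 1.0, "pricey_test": 50.0}
print(best_cost_sensitive_split(rows, labels, costs))  # cheap_test
```

Weighting gain by cost biases the tree toward cheap features, trading a little accuracy per split for much cheaper trees overall, which is the core tension the surveyed algorithms navigate.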