Evaluating Compositionality in Sentence Embeddings
An important challenge for human-like AI is compositional semantics. Recent
research has attempted to address this by using deep neural networks to learn
vector space embeddings of sentences, which then serve as input to other tasks.
We present a new dataset for one such task, 'natural language inference' (NLI),
that cannot be solved using only word-level knowledge and requires some
compositionality. We find that the performance of state-of-the-art sentence
embeddings (InferSent; Conneau et al., 2017) on our new dataset is poor. We
analyze the decision rules learned by InferSent and find that they are
consistent with simple heuristics that are ecologically valid in its training
dataset. Further, we find that augmenting training with our dataset improves
test performance on our dataset without loss of performance on the original
training dataset. This highlights the importance of structured datasets in
better understanding and improving AI systems.
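For context, here is a minimal sketch of the sentence-embedding NLI setup the abstract refers to: InferSent-style models encode premise and hypothesis into vectors u and v and classify the pair from the combined features [u; v; |u - v|; u * v] (Conneau et al., 2017). The random placeholder embeddings, batch size, and classifier head below are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of InferSent-style NLI classification (Conneau et al., 2017).
# Sentence encodings u, v would come from a pretrained sentence encoder;
# here they are random placeholders so the snippet is self-contained.
import torch
import torch.nn as nn

dim = 4096                       # InferSent uses 4096-d sentence vectors
u = torch.randn(8, dim)          # premise embeddings (batch of 8)
v = torch.randn(8, dim)          # hypothesis embeddings

# Standard InferSent feature combination: [u; v; |u - v|; u * v]
features = torch.cat([u, v, (u - v).abs(), u * v], dim=1)

classifier = nn.Sequential(      # illustrative MLP head
    nn.Linear(4 * dim, 512),
    nn.ReLU(),
    nn.Linear(512, 3),           # entailment / neutral / contradiction
)
logits = classifier(features)
print(logits.argmax(dim=1))      # predicted NLI labels
```

Because the classifier sees only the fixed sentence vectors, any compositional distinction the encoder fails to capture is invisible to it; this is what makes a dataset that defeats word-level heuristics a useful probe.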
Modeling Epistemological Principles for Bias Mitigation in AI Systems: An Illustration in Hiring Decisions
Artificial Intelligence (AI) has been used extensively in automatic decision
making in a broad variety of scenarios, ranging from credit ratings for loans
to recommendations of movies. Traditional design guidelines for AI models focus
primarily on accuracy maximization, but recent work has shown that
economically irrational and socially unacceptable scenarios of discrimination
and unfairness are likely to arise unless these issues are explicitly
addressed. This undesirable behavior has several possible sources, such as
biased datasets used for training that may not be detected in black-box models.
After pointing out connections between such bias in AI and the problem of
induction, we focus on Popper's contributions, which build on Hume's and offer
a logical theory of preferences. An AI model can be preferred over others on
purely rational grounds after one or more attempts at refutation based on
accuracy and fairness. Inspired by such epistemological principles, this paper
proposes a structured approach to mitigate discrimination and unfairness caused
by bias in AI systems. In the proposed computational framework, models are
selected and enhanced after attempts at refutation. To illustrate our
discussion, we focus on hiring decision scenarios where an AI system filters
which job applicants should advance to the interview phase.
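As an illustration of the refutation-based selection the abstract describes, the sketch below rejects ("refutes") a candidate model that fails either an accuracy test or a fairness test and keeps the survivors. The thresholds, the demographic-parity metric, and the toy threshold models are assumptions of this sketch, not the paper's exact framework.

```python
# Illustrative refutation loop: a candidate model is "refuted" if it fails
# an accuracy test or a fairness test; survivors are preferred on those
# grounds. Thresholds and the fairness metric (demographic parity gap)
# are assumptions for this sketch.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def refuted(model, X, y, group, min_acc=0.80, max_gap=0.10):
    y_pred = model.predict(X)
    if (y_pred == y).mean() < min_acc:                    # refuted on accuracy
        return True
    if demographic_parity_gap(y_pred, group) > max_gap:   # refuted on fairness
        return True
    return False

def select(candidates, X, y, group):
    """Return the models that survive all attempts at refutation."""
    return [m for m in candidates if not refuted(m, X, y, group)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)              # toy hiring labels
    group = rng.integers(0, 2, size=200)       # protected attribute

    class Threshold:                           # toy candidate model
        def __init__(self, t): self.t = t
        def predict(self, X): return (X[:, 0] > self.t).astype(int)

    survivors = select([Threshold(t) for t in (-0.5, 0.0, 0.5)], X, y, group)
    print(len(survivors), "model(s) survived refutation")
```

The Popperian point is that survivors are not proven fair or accurate; they are merely the models not yet refuted, so the tests can be strengthened and rerun as new failure modes are conjectured.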
Excitation Dropout: Encouraging Plasticity in Deep Neural Networks
We propose a guided dropout regularizer for deep networks based on the
evidence of a network prediction defined as the firing of neurons in specific
paths. In this work, we utilize the evidence at each neuron to determine the
probability of dropout, rather than dropping out neurons uniformly at random as
in standard dropout. In essence, we drop out with higher probability those
neurons which contribute more to decision making at training time. This
approach penalizes high-saliency neurons that are most relevant for model
prediction, i.e., those having stronger evidence. By dropping such high-saliency
neurons, the network is forced to learn alternative paths in order to maintain
loss minimization, resulting in plasticity-like behavior, a characteristic
also observed in human brains. We demonstrate better generalization ability, an
increased
utilization of network neurons, and a higher resilience to network compression
using several metrics over four image/video recognition benchmarks.
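A rough sketch of the evidence-guided dropout idea: per-neuron drop probabilities scale with each neuron's contribution, so high-evidence neurons are dropped more often than in uniform dropout. Note that the paper derives its evidence from Excitation Backprop; here, normalized activation magnitude stands in for that signal, and the base rate and clamping below are illustrative assumptions.

```python
# Sketch of evidence-guided dropout: neurons that contribute more are
# dropped with higher probability. Normalized activation magnitude is
# used as a stand-in for the excitation-backprop "evidence" the paper
# computes; base_rate and the clamps are assumptions of this sketch.
import torch

def excitation_dropout(h, base_rate=0.5, training=True):
    """h: (batch, features) activations of a hidden layer."""
    if not training:
        return h
    # Per-neuron evidence proxy: share of the layer's total activation.
    evidence = h.abs() / h.abs().sum(dim=1, keepdim=True).clamp(min=1e-8)
    n = h.size(1)
    # Scale drop probabilities so they average to base_rate across the
    # layer, with high-evidence neurons dropped more often.
    p_drop = (base_rate * n * evidence).clamp(max=0.95)
    keep = torch.bernoulli(1.0 - p_drop)
    # Inverted-dropout rescaling keeps expected activations unchanged.
    return h * keep / (1.0 - p_drop).clamp(min=0.05)

h = torch.relu(torch.randn(4, 16))
print(excitation_dropout(h).shape)
```

Dropping the strongest contributors forces gradient flow through previously underused neurons, which is the mechanism behind the increased neuron utilization and compression resilience the abstract reports.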
