Service in Your Neighborhood: Fairness in Center Location
When selecting locations for a set of centers, standard clustering algorithms may place unfair burden on some individuals and neighborhoods. We formulate a fairness concept that takes local population densities into account. In particular, given k centers to locate and a population of size n, we define the "neighborhood radius" of an individual i as the minimum radius of a ball centered at i that contains at least n/k individuals. Our objective is to ensure that each individual has a center that is within at most a small constant factor of her neighborhood radius.
We present several theoretical results: we show that optimizing this factor is NP-hard; we give an approximation algorithm that guarantees a factor of at most 2 in all metric spaces; and we prove matching lower bounds in some metric spaces. We apply a variant of this algorithm to real-world address data, showing that it differs substantially from standard clustering algorithms, outperforms them on our objective function, and balances the load between centers more evenly.
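The neighborhood-radius objective from the abstract can be made concrete with a small brute-force sketch (our illustration of the definitions only, not the paper's approximation algorithm; all function names are ours):

```python
import math

def neighborhood_radius(points, i, k):
    """Radius of the smallest ball centered at points[i] that contains
    at least ceil(n / k) individuals (the center itself counts)."""
    n = len(points)
    dists = sorted(math.dist(points[i], p) for p in points)  # includes 0.0 for i
    return dists[math.ceil(n / k) - 1]

def fairness_factor(points, centers, k):
    """Worst-case ratio of an individual's distance to her nearest center
    over her neighborhood radius -- the factor the paper seeks to bound."""
    worst = 0.0
    for i, p in enumerate(points):
        r = neighborhood_radius(points, i, k)
        d = min(math.dist(p, c) for c in centers)
        if r > 0:
            worst = max(worst, d / r)
    return worst

pts = [(0, 0), (1, 0), (2, 0), (10, 0)]
print(fairness_factor(pts, [(0, 0), (10, 0)], k=2))
```

A center placement is "fair" in the paper's sense when this factor is a small constant; the paper shows a factor of 2 is always achievable in metric spaces.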
Depth uncertainty in neural networks
Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited. To solve this, we perform probabilistic reasoning over the depth of neural networks. Different depths correspond to subnetworks which share weights and whose predictions are combined via marginalisation, yielding model uncertainty. By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our training objective and make predictions with a single forward pass. We validate our approach on real-world regression and image classification tasks. Our approach provides uncertainty calibration, robustness to dataset shift, and accuracies competitive with more computationally expensive baselines.
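The single-pass marginalisation over depth can be sketched with a toy numpy network (shapes, names, and random weights are all ours; the paper's models are trained networks with a learned depth posterior):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy sequential net: each block is linear + tanh; a shared output head is
# read out after every block, so depth d defines a weight-sharing subnetwork.
D_IN, HIDDEN, CLASSES, DEPTH = 4, 8, 3, 5
blocks = [(rng.normal(size=(HIDDEN if d else D_IN, HIDDEN)),
           rng.normal(size=HIDDEN)) for d in range(DEPTH)]
head_W = rng.normal(size=(HIDDEN, CLASSES))
depth_logits = rng.normal(size=DEPTH)  # stand-in for the learned depth posterior

def predict(x):
    """One forward pass; marginalise class probabilities over depth."""
    beta = softmax(depth_logits)            # q(depth)
    h, per_depth = x, []
    for W, b in blocks:
        h = np.tanh(h @ W + b)              # subnetwork grows by one block
        per_depth.append(softmax(h @ head_W))  # prediction at this depth
    return np.tensordot(beta, np.array(per_depth), axes=1)  # E_q[p(y|x,d)]

p = predict(rng.normal(size=D_IN))
```

Because every intermediate activation is produced anyway, collecting the per-depth predictions adds only the cheap output-head readouts, which is the source of the single-forward-pass efficiency claimed in the abstract.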
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
The rising popularity of explainable artificial intelligence (XAI) to
understand high-performing black boxes, also raised the question of how to
evaluate explanations of machine learning (ML) models. While interpretability
and explainability are often presented as a subjectively validated binary
property, we consider it a multi-faceted concept. We identify 12 conceptual
properties, such as Compactness and Correctness, that should be evaluated for
comprehensively assessing the quality of an explanation. Our so-called Co-12
properties serve as a categorization scheme for systematically reviewing the
evaluation practice of more than 300 papers published in the last 7 years at
major AI and ML conferences that introduce an XAI method. We find that 1 in 3
papers evaluate exclusively with anecdotal evidence, and 1 in 5 papers evaluate
with users. We also contribute to the call for objective, quantifiable
evaluation methods by presenting an extensive overview of quantitative XAI
evaluation methods. This systematic collection of evaluation methods provides
researchers and practitioners with concrete tools to thoroughly validate,
benchmark and compare new and existing XAI methods. This also opens up
opportunities to include quantitative metrics as optimization criteria during
model training in order to optimize for accuracy and interpretability
simultaneously.
Comment: Link to website added: https://utwente-dmb.github.io/xai-papers
Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network
In this paper, we present a two-stage model for multi-hop question answering.
The first stage is a hierarchical graph network, which is used to reason over
multi-hop questions and can capture different levels of granularity using the
natural structure of documents (i.e., paragraphs, questions, sentences, and
entities). The reasoning process is converted to a node classification task
(i.e., over paragraph nodes and sentence nodes). The second stage is a
language-model fine-tuning task. In short, stage one uses a graph neural
network to select and concatenate supporting sentences into one paragraph, and
stage two finds the answer span in a language-model fine-tuning paradigm.
Comment: the experimental results are not as good as I expected
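The two-stage pipeline described above can be reduced to a minimal skeleton (a sketch with stand-in functions of our own naming; the graph network and fine-tuned language model that produce the scores and logits are not shown):

```python
def select_support(sentences, node_scores, top_k=3):
    """Stage 1 stand-in: keep the sentences the graph network scored as
    supporting evidence and concatenate them into one paragraph."""
    ranked = sorted(range(len(sentences)), key=lambda i: -node_scores[i])
    keep = sorted(ranked[:top_k])            # preserve document order
    return " ".join(sentences[i] for i in keep)

def answer_span(start_logits, end_logits):
    """Stage 2 stand-in: pick the best (start, end) token span, as an
    extractive QA head obtained by fine-tuning would."""
    best, span = float("-inf"), (0, 0)
    for s, ls in enumerate(start_logits):
        for e in range(s, len(end_logits)):
            if ls + end_logits[e] > best:
                best, span = ls + end_logits[e], (s, e)
    return span
```

The division of labor matches the abstract: stage one shrinks the multi-document context to a single evidence paragraph, and stage two runs standard span extraction over that paragraph.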
Pretraining in Deep Reinforcement Learning: A Survey
The past few years have seen rapid progress in combining reinforcement
learning (RL) with deep learning. Various breakthroughs ranging from games to
robotics have spurred the interest in designing sophisticated RL algorithms and
systems. However, the prevailing workflow in RL is to learn tabula rasa, which
may incur computational inefficiency. This precludes continuous deployment of
RL algorithms and potentially excludes researchers without large-scale
computing resources. In many other areas of machine learning, the pretraining
paradigm has been shown to be effective in acquiring transferable knowledge, which
can be utilized for a variety of downstream tasks. Recently, we saw a surge of
interest in Pretraining for Deep RL with promising results. However, much of
the research has been based on different experimental settings. Due to the
nature of RL, pretraining in this field is faced with unique challenges and
hence requires new design principles. In this survey, we seek to systematically
review existing works in pretraining for deep reinforcement learning, provide a
taxonomy of these methods, discuss each sub-field, and bring attention to open
problems and future directions.
Graph Out-of-Distribution Generalization with Controllable Data Augmentation
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in
classifying graph properties. However, due to the selection bias of training
and testing data (e.g., training on small graphs and testing on large graphs,
or training on dense graphs and testing on sparse graphs), distribution
deviation is widespread. More importantly, we often observe \emph{hybrid
structure distribution shift} of both scale and density, despite the one-sided
biased data partition. The spurious correlations over hybrid distribution
deviation degrade the performance of previous GNN methods and show large
instability among different datasets. To alleviate this problem, we propose
\texttt{OOD-GMixup} to jointly manipulate the training distribution with
\emph{controllable data augmentation} in metric space. Specifically, we first
extract the graph rationales to eliminate the spurious correlations due to
irrelevant information. Secondly, we generate virtual samples with perturbation
on graph rationale representation domain to obtain potential OOD training
samples. Finally, we propose OOD calibration to measure the distribution
deviation of virtual samples by leveraging Extreme Value Theory, and further
actively control the training distribution by emphasizing the impact of virtual
OOD samples. Extensive studies on several real-world datasets on graph
classification demonstrate the superiority of our proposed method over
state-of-the-art baselines.
Comment: Under review
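The Extreme Value Theory step mentioned in the abstract can be illustrated with a generic peaks-over-threshold sketch (our illustration, not the paper's exact OOD calibration; we use an exponential tail, the shape-zero special case of the Generalized Pareto family central to EVT):

```python
import numpy as np

def ood_weights(deviation, quantile=0.9):
    """Fit an exponential tail to the excesses of the distribution-deviation
    scores over a high threshold, then weight each virtual sample by the
    tail CDF: larger weight = more extreme = more useful as an OOD sample."""
    u = np.quantile(deviation, quantile)         # high threshold
    excess = np.clip(deviation - u, 0.0, None)   # peaks over threshold
    scale = excess[excess > 0].mean()            # MLE of the exponential scale
    return np.where(excess > 0, 1.0 - np.exp(-excess / scale), 0.0)

rng = np.random.default_rng(0)
dev = rng.exponential(size=200)   # stand-in deviation scores of virtual samples
w = ood_weights(dev)
```

In the spirit of the abstract, such weights let training emphasize the virtual samples whose deviation from the training distribution is extreme, rather than treating all augmented samples equally.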