Causal Confusion in Imitation Learning
Behavioral cloning reduces policy learning to supervised learning by training
a discriminative model to predict expert actions given observations. Such
discriminative models are non-causal: the training procedure is unaware of the
causal structure of the interaction between the expert and the environment. We
point out that ignoring causality is particularly damaging because of the
distributional shift in imitation learning. In particular, it leads to a
counter-intuitive "causal misidentification" phenomenon: access to more
information can yield worse performance. We investigate how this problem
arises, and propose a solution to combat it through targeted
interventions---either environment interaction or expert queries---to determine
the correct causal model. We show that causal misidentification occurs in
several benchmark control domains as well as realistic driving settings, and
validate our solution against DAgger and other baselines and ablations.

Comment: Published at NeurIPS 2019; 9 pages, plus references and appendices
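The phenomenon described above can be reproduced in a few lines. The following sketch is illustrative only — the toy data, feature names, and logistic model are assumptions, not the paper's setup. A behavioral-cloning policy is fit by supervised learning on two features: the true cause of the expert's action, and a "nuisance" feature that is an *effect* of the action (e.g. a logged copy of it). The cloned policy leans on the nuisance feature because it predicts the training labels better, which is precisely the kind of causal misidentification that breaks under distributional shift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expert data: `cause` truly drives the expert's action;
# `nuisance` is an effect of the action that correlates with the
# label almost perfectly in the training distribution.
n = 1000
cause = rng.normal(size=n)
action = (cause + rng.normal(size=n) > 0).astype(float)
nuisance = action + rng.normal(scale=0.1, size=n)
X = np.stack([cause, nuisance, np.ones(n)], axis=1)  # last column = bias

# Logistic-regression "policy" fit by full-batch gradient descent,
# i.e. behavioral cloning as plain discriminative supervised learning.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - action) / n

# The learned weight on the nuisance (effect) feature dominates the
# weight on the true cause, despite the nuisance not causing the action.
print(w)
```

At deployment, where the policy's own past actions replace the expert's, the nuisance feature no longer tracks the correct label, so the heavily nuisance-weighted policy degrades — access to the extra feature made the clone worse.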
Seeding with Costly Network Information
We study the task of selecting $k$ nodes in a social network of size $n$, to
seed a diffusion with maximum expected spread size, under the independent
cascade model with cascade probability $p$. Most of the previous work on this
problem (known as influence maximization) focuses on efficient algorithms to
problem (known as influence maximization) focuses on efficient algorithms to
approximate the optimal seed set with provable guarantees, given the knowledge
of the entire network. However, in practice, obtaining full knowledge of the
network is very costly. To address this gap, we first study the guarantees
achievable using influence samples. We provide an approximation algorithm with
a tight $(1-1/e)\,\mathrm{OPT} - \epsilon n$ guarantee using influence samples,
and show that this sample dependence is asymptotically optimal. We then propose
a probing algorithm that queries edges from the graph and uses them to find a
seed set with the same almost-tight approximation guarantee. We also provide a
matching (up to
logarithmic factors) lower bound on the required number of probed edges. To
address the dependence of our probing algorithm on the independent cascade
probability $p$, we show that it is impossible to maintain the same
approximation guarantees by controlling the discrepancy between the probing
and seeding cascade probabilities. Instead, we propose to down-sample the
probed edges to match the seeding cascade probability, provided that it does
not exceed that of
probing. Finally, we test our algorithms on real-world data to quantify the
trade-off between the cost of obtaining more refined network information and
the benefit of the added information for guiding improved seeding strategies.
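The down-sampling step above admits a compact sketch. This is a hedged illustration, not the paper's implementation: the function and parameter names (`p_probe`, `p_seed`) are assumptions. The idea is that if an edge was revealed as live with probability `p_probe` during probing, independently retaining it with probability `p_seed / p_probe` makes it live overall with probability `p_probe * (p_seed / p_probe) = p_seed`, matching the seeding cascade — which is only possible when the seeding probability does not exceed the probing probability.

```python
import random

def downsample_edges(probed_edges, p_probe, p_seed, rng=random):
    """Thin a list of probed (live) edges so each original edge ends up
    retained with overall probability p_seed rather than p_probe.

    Each probed edge is kept independently with probability
    p_seed / p_probe, so the retained edges are distributed as live
    edges of an independent cascade with probability p_seed.
    """
    if p_seed > p_probe:
        raise ValueError("seeding probability must not exceed probing probability")
    keep = p_seed / p_probe
    return [e for e in probed_edges if rng.random() < keep]
```

For example, edges probed at `p_probe = 0.5` and thinned to `p_seed = 0.25` survive the thinning step with probability one half, giving the desired overall live probability of one quarter; the direction cannot be reversed, since thinning can only remove edges.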
Practical Attacks Against Graph-based Clustering
Graph modeling allows numerous security problems to be tackled in a general
way; however, little work has been done to understand the ability of such
models to withstand adversarial attacks. We design and evaluate two novel graph attacks
against a state-of-the-art network-level, graph-based detection system. Our
work highlights areas in adversarial machine learning that have not yet been
addressed, specifically: graph-based clustering techniques, and a global
feature space where realistic attackers without perfect knowledge must be
accounted for (by the defenders) in order to be practical. Even though less
informed attackers can evade graph clustering with low cost, we show that some
practical defenses are possible.

Comment: ACM CCS 2017