A Study of AI Population Dynamics with Million-agent Reinforcement Learning
We conduct an empirical study of the ordered collective dynamics exhibited by
a population of intelligent agents driven by million-agent reinforcement
learning. Our intention is to place intelligent agents in a simulated natural
context and verify whether principles developed for the real world can also be
used to understand an artificially created intelligent population. To achieve
this, we simulate a large-scale predator-prey world whose laws are designed
using only findings, or their logical equivalents, that have been discovered
in nature. We endow the agents with intelligence based on deep reinforcement
learning (DRL). To scale the population up to millions of agents, we propose a
large-scale DRL training platform with a redesigned experience buffer. Our
results show that the population dynamics of the AI agents, driven only by
each agent's individual self-interest, reveal an ordered pattern similar to
the Lotka-Volterra model studied in population biology. We further discover
emergent collective adaptations by studying how the agents' grouping behavior
changes with environmental resources. Both findings can be explained by
self-organization theory in nature.
Comment: Full version of the paper presented at AAMAS 2018 (International
Conference on Autonomous Agents and Multiagent Systems).
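For context, the ordered pattern referenced above is the classical
Lotka-Volterra predator-prey model. A standard textbook statement of the
model (the notation below is the conventional one, not taken from the paper):

```latex
% Classical Lotka-Volterra predator-prey dynamics (textbook form).
% x: prey population, y: predator population;
% alpha: prey growth rate, beta: predation rate,
% delta: predator reproduction rate, gamma: predator death rate.
\begin{aligned}
\frac{dx}{dt} &= \alpha x - \beta x y, \\
\frac{dy}{dt} &= \delta x y - \gamma y.
\end{aligned}
```

Its solutions are closed orbits in which predator oscillations lag prey
oscillations, which is the kind of ordered population cycle the study reports
emerging from self-interested learning agents.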
IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
This paper provides a unified account of two schools of thinking in
information retrieval modelling: the generative retrieval focusing on
predicting relevant documents given a query, and the discriminative retrieval
focusing on predicting relevancy given a query-document pair. We propose a
game-theoretic minimax framework to iteratively optimise both models. On one hand, the
discriminative model, aiming to mine signals from labelled and unlabelled data,
provides guidance to train the generative model towards fitting the underlying
relevance distribution over documents given the query. On the other hand, the
generative model, acting as an attacker to the current discriminative model,
generates difficult examples for the discriminative model in an adversarial way
by minimising its discrimination objective. With the competition between these
two models, we show that the unified framework takes advantage of both schools
of thinking: (i) the generative model learns to fit the relevance distribution
over documents via the signals from the discriminative model, and (ii) the
discriminative model is able to exploit the unlabelled data selected by the
generative model to achieve a better estimation for document ranking. Our
experimental results demonstrate significant performance gains of as much as
23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of
applications, including web search, item recommendation, and question answering.
Comment: 12 pages; appendix added.
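The minimax formulation follows the standard GAN template adapted to
retrieval. A sketch of its usual form (notation hedged; this is the generic
IRGAN-style objective rather than a quotation from the paper):

```latex
% GAN-style retrieval objective: the discriminator D_phi scores
% query-document pairs; the generator p_theta samples documents for a query.
J^{G^*, D^*} = \min_{\theta} \max_{\phi} \sum_{n=1}^{N} \Big(
    \mathbb{E}_{d \sim p_{\mathrm{true}}(d \mid q_n)}
        \big[ \log D_{\phi}(d \mid q_n) \big]
  + \mathbb{E}_{d \sim p_{\theta}(d \mid q_n)}
        \big[ \log \big( 1 - D_{\phi}(d \mid q_n) \big) \big]
\Big),
\qquad D_{\phi}(d \mid q) = \sigma\!\big( f_{\phi}(d, q) \big).
```

Because documents are discrete, the generator cannot be updated by
backpropagating through its samples; IRGAN-style training typically uses
policy gradients (REINFORCE), treating the discriminator's score as the
reward signal.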
A General Recipe for Likelihood-free Bayesian Optimization
The acquisition function, a critical component in Bayesian optimization (BO),
can often be written as the expectation of a utility function under a surrogate
model. However, to ensure that acquisition functions are tractable to optimize,
restrictions must be placed on the surrogate model and utility function. To
extend BO to a broader class of models and utilities, we propose
likelihood-free BO (LFBO), an approach based on likelihood-free inference. LFBO
directly models the acquisition function without having to separately perform
inference with a probabilistic surrogate model. We show that computing the
acquisition function in LFBO can be reduced to optimizing a weighted
classification problem, where the weights are determined by the chosen
utility function. Using the utility function of expected improvement (EI), LFBO
outperforms various state-of-the-art black-box optimization methods on several
real-world optimization problems. LFBO can also effectively leverage composite
structures of the objective function, which further improves its regret by
several orders of magnitude.
Comment: ICML 2022. This version fixes a typo in Eq. 3.
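The weighted-classification reduction is concrete enough to sketch. Below is
a minimal, hedged illustration of the EI instance under a minimization
convention: points below a threshold tau (e.g., a quantile of the observed
values) are labelled as improvements and weighted by the size of the
improvement, and the resulting classifier's output is used as the acquisition
function. Function and variable names here are illustrative, not taken from
the LFBO codebase.

```python
# Minimal sketch of LFBO's EI-as-weighted-classification reduction
# (assumes minimization; names are illustrative, not from the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_lfbo_ei_classifier(X, y, tau):
    """Fit a classifier whose predicted probability serves as the acquisition.

    Labels: z_i = 1 if y_i < tau (the point improved on the threshold).
    Weights: improving points are weighted by their utility max(tau - y_i, 0);
    non-improving points get weight 1, per the EI instance of LFBO.
    """
    z = (y < tau).astype(int)
    w = np.where(z == 1, tau - y, 1.0)
    return LogisticRegression().fit(X, z, sample_weight=w)

# Usage sketch on a toy 1-D objective: fit the classifier, then pick the
# candidate with the highest predicted "improvement" probability.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(50, 1))
y = (X[:, 0] - 0.5) ** 2 + 0.1 * rng.standard_normal(50)
clf = fit_lfbo_ei_classifier(X, y, tau=np.quantile(y, 0.33))

candidates = np.linspace(-2.0, 2.0, 401).reshape(-1, 1)
scores = clf.predict_proba(candidates)[:, 1]  # acquisition values
x_next = candidates[np.argmax(scores)]        # next point to evaluate
```

Any classifier that accepts per-sample weights can stand in for the logistic
regression here; the point of the reduction is that no posterior inference
over a probabilistic surrogate is needed.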
- …