Kernel Matching Pursuit for Large Datasets
Kernel matching pursuit is a greedy algorithm for building an approximation of a discriminant function as a linear combination of basis functions selected from a kernel-induced dictionary. Here we propose a modification of the kernel matching pursuit algorithm that aims to make the method practical for large datasets. Starting from an approximating algorithm, the weak greedy algorithm, we introduce a stochastic method for reducing the search space at each iteration. We then study the implications of using an approximate algorithm and show how one can control the trade-off between accuracy and resource requirements. Finally, we present experiments performed on a large dataset that support our approach and illustrate its applicability.
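The core idea, greedy selection over a randomly subsampled kernel dictionary, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the RBF kernel choice, and all parameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def stochastic_kmp(X, y, n_basis=10, subset_size=50, gamma=1.0, seed=None):
    """Weak/stochastic kernel matching pursuit (sketch): at each greedy
    iteration, search only a random subset of the kernel dictionary
    instead of all n candidate basis functions."""
    rng = np.random.default_rng(seed)
    n = len(X)
    residual = y.astype(float).copy()
    selected, weights = [], []
    for _ in range(n_basis):
        # stochastic reduction of the search space: sample candidate centres
        candidates = rng.choice(n, size=min(subset_size, n), replace=False)
        K = rbf_kernel(X, X[candidates], gamma)          # (n, subset_size)
        # pick the candidate basis function most correlated with the residual
        norms = np.linalg.norm(K, axis=0) + 1e-12
        j = np.argmax(np.abs(K.T @ residual) / norms)
        g = K[:, j]
        w = (g @ residual) / (g @ g)                     # optimal 1-D step
        residual -= w * g                                # update residual
        selected.append(candidates[j])
        weights.append(w)
    return np.array(selected), np.array(weights)
```

Sampling `subset_size` candidates per iteration replaces the O(n) dictionary scan with a constant-size one, which is what makes each greedy step affordable on large datasets, at the cost of a possibly weaker atom per step.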
Positive Alexander Duality for Pursuit and Evasion
Considered is a class of pursuit-evasion games, in which an evader tries to
avoid detection. Such games can be formulated as the search for sections to the
complement of a coverage region in a Euclidean space over a timeline. Prior
results give homological criteria for evasion in the general case that are not
necessary and sufficient. This paper provides a necessary and sufficient
positive cohomological criterion for evasion in a general case. The principal
tools are (1) a refinement of the Čech cohomology of a coverage region with a
positive cone encoding spatial orientation, (2) a refinement of the Borel-Moore
homology of the coverage gaps with a positive cone encoding time orientation,
and (3) a positive variant of Alexander Duality. Positive cohomology decomposes
as the global sections of a sheaf of local positive cohomology over the time
axis; we show how this decomposition makes positive cohomology computable as a
linear program.Comment: 19 pages, 6 figures; improvements made throughout: e.g. positive
(co)homology generalized to arbitrary degrees; Positive Alexander Duality
generalized from homological degrees 0,1; Morse and smoothness conditions
generalized; illustrations of positive homology added. minor corrections in
proofs, notation, organization, and language made throughout. variant of
Borel-Moore homology now use
Deep Reinforcement Learning for Swarm Systems
Recently, deep reinforcement learning (RL) methods have been applied
successfully to multi-agent scenarios. Typically, these methods rely on a
concatenation of agent states to represent the information content required for
decentralized decision making. However, concatenation scales poorly to swarm
systems with a large number of homogeneous agents as it does not exploit the
fundamental properties inherent to these systems: (i) the agents in the swarm
are interchangeable and (ii) the exact number of agents in the swarm is
irrelevant. Therefore, we propose a new state representation for deep
multi-agent RL based on mean embeddings of distributions. We treat the agents
as samples of a distribution and use the empirical mean embedding as input for
a decentralized policy. We define different feature spaces of the mean
embedding using histograms, radial basis functions and a neural network learned
end-to-end. We evaluate the representation on two well-known problems from the
swarm literature (rendezvous and pursuit evasion), in both globally and locally
observable setups. For the local setup we furthermore introduce simple
communication protocols. Of all approaches, the mean embedding representation
using neural network features enables the richest information exchange between
neighboring agents, facilitating the development of more complex collective
strategies.

Comment: 31 pages, 12 figures, version 3 (published in JMLR Volume 20
- …
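The mean-embedding representation described above can be sketched with one of the feature spaces the abstract mentions, radial basis functions. This is a minimal illustration under assumed names and parameters, not the authors' code; the key properties it demonstrates are permutation invariance and a fixed output size regardless of the number of agents.

```python
import numpy as np

def rbf_mean_embedding(neighbor_states, centers, bandwidth=1.0):
    """Empirical mean embedding of a set of neighbour states.

    Each state s is mapped to RBF features phi(s)_k = exp(-||s - c_k||^2
    / (2 * bandwidth^2)) and the features are averaged over agents, so the
    result is invariant to agent ordering and has a fixed dimension
    (the number of centres) for any swarm size.
    """
    # (n_agents, n_centers, state_dim) pairwise differences via broadcasting
    diffs = neighbor_states[:, None, :] - centers[None, :, :]
    feats = np.exp(-np.sum(diffs**2, axis=2) / (2 * bandwidth**2))
    return feats.mean(axis=0)  # (n_centers,) fixed-size policy input
```

Because the policy only ever sees this fixed-size average, swapping two agents or adding a third leaves the input format unchanged, which is exactly the interchangeability and count-independence the abstract argues a concatenated state representation lacks.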