Provably Stable Interpretable Encodings of Context Free Grammars in RNNs with a Differentiable Stack
Given a collection of strings belonging to a context free grammar (CFG) and
another collection of strings not belonging to the CFG, how might one infer the
grammar? This is the problem of grammatical inference. Since the languages generated by CFGs are exactly those recognized by pushdown automata (PDA), it suffices to determine the state transition rules and stack action rules of the corresponding PDA. An
approach would be to train a recurrent neural network (RNN) to classify the
sample data and attempt to extract these PDA rules. But neural networks are not
a priori aware of the structure of a PDA and would likely require many samples
to infer this structure. Furthermore, extracting the PDA rules from the RNN is
nontrivial. We build an RNN specifically structured like a PDA, where weights
correspond directly to the PDA rules. This requires a stack architecture that
is somehow differentiable (to enable gradient-based learning) and stable (an
unstable stack will show deteriorating performance with longer strings). We
propose a stack architecture that is differentiable and that provably exhibits
orbital stability. Using this stack, we construct a neural network that
provably approximates a PDA for strings of arbitrary length. Moreover, our
model and method of proof can easily be generalized to other state machines,
such as a Turing machine.
Comment: 20 pages, 2 figures
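The "differentiable stack" idea above can be illustrated by representing each stack action as a soft superposition of push, pop, and no-op, so that the whole update is differentiable with respect to the action weights. The following is a minimal sketch in the spirit of continuous-stack RNNs; the function name, fixed stack depth, and zero-padding convention are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def soft_stack_step(stack, a_push, a_pop, a_noop, v):
    """One differentiable stack update.

    stack  : (depth, d) array, stack[0] is the top element
    a_push, a_pop, a_noop : soft action weights, assumed to sum to 1
    v      : (d,) vector to push

    Returns the new stack as a convex combination of the three
    hard outcomes, so gradients flow through the action weights.
    """
    depth, d = stack.shape
    pushed = np.vstack([v[None, :], stack[:-1]])       # v on top, bottom falls off
    popped = np.vstack([stack[1:], np.zeros((1, d))])  # top removed, zeros enter below
    return a_push * pushed + a_pop * popped + a_noop * stack
```

If a controller RNN produces the action weights via a softmax, the combined system is trainable end-to-end by gradient descent, which is the property the abstract's architecture needs; the stability guarantee is a separate property of the paper's specific construction and is not captured by this sketch.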
Learning Universal Computations with Spikes
As the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require prior construction of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. First, we derive constraints under which classes of spiking neural networks lend themselves as substrates for powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows them to learn even difficult benchmark tasks, such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.
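The spiking dynamics such networks build on can be illustrated with the standard leaky integrate-and-fire (LIF) model: the membrane potential leaks toward zero, integrates input current, and emits a spike and resets when it crosses threshold. This is a minimal textbook sketch; the function name, parameter values, and Euler discretization are illustrative assumptions and are not taken from the paper:

```python
def lif_neuron(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    input_current : sequence of input values, one per time step
    tau           : membrane time constant (in units of dt)
    v_th, v_reset : spike threshold and post-spike reset potential

    Returns the list of time-step indices at which the neuron spiked.
    """
    v = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        v += (dt / tau) * (-v + current)  # leaky integration (Euler step)
        if v >= v_th:                     # threshold crossing: emit spike
            spike_times.append(t)
            v = v_reset                   # reset membrane potential
    return spike_times
```

A constant input above threshold (e.g. 1.5 here) produces regular spiking, while a subthreshold input (e.g. 0.5) produces none, since the potential saturates below `v_th`; the abstract's networks add dendritic or synaptic nonlinearities and recurrent learning rules on top of such units.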
Hacker Combat: A Competitive Sport from Programmatic Dueling & Cyberwarfare
Human history has included competitive activities of many different forms, and sports have offered many benefits beyond entertainment. At the time of this article, no competitive ecosystem exists for cyber security beyond conventional capture-the-flag competitions and the like. This paper introduces a competitive framework founded on computer science and hacking. The proposed competitive landscape encompasses the ideas underlying information security, software engineering, and cyber warfare. We also demonstrate the opportunity to rank, score, and categorize actionable skill levels into tiers of capability. Physiological metrics are analyzed from participants during gameplay. These analyses provide support regarding the intricacies required for competitive play, and for the analysis of play. We use these intricacies to build a case for an organized competitive ecosystem. Using previous player behavior from gameplay, we also demonstrate the generation of an artificial agent purposed with gameplay at a competitive level.