Automata theoretic aspects of temporal behaviour and computability in logical neural networks
Learning, Generalization, and Functional Entropy in Random Automata Networks
It has been shown \citep{broeck90:physicalreview,patarnello87:europhys} that
feedforward Boolean networks can learn to perform specific simple tasks and
generalize well when only a subset of the examples is provided for learning.
Here, we extend this body of work and show experimentally that random Boolean
networks (RBNs), where both the interconnections and the Boolean transfer
functions are initially chosen at random, can be evolved by a state-topology
evolution to solve simple tasks. We measure the learning and generalization
performance, investigate the influence of the average node connectivity and
the system size, and introduce a new measure that better describes the
network's learning and generalization behavior. We show that the connectivity
of the maximum-entropy networks scales as a power law of the system size. Our
results show that networks with higher average connectivity (supercritical)
achieve higher memorization and partial generalization. However, near critical
connectivity, the networks show higher perfect generalization on the even-odd
task.
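As a minimal sketch of the model class the abstract describes, the following toy code builds a random Boolean network: each node reads a fixed number of randomly chosen inputs and applies a random Boolean transfer function (a random truth table), and all nodes update synchronously. The function and parameter names are illustrative, not taken from the paper, and the evolutionary training loop is omitted.

```python
import random

def random_boolean_network(n_nodes, k, seed=0):
    """Build an RBN: each node reads k random inputs and applies a
    random Boolean transfer function (a random 2**k-entry truth table)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n_nodes)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node reads its inputs' current values,
    packs them into a truth-table index, and looks up its next value."""
    new_state = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for i in ins:
            idx = (idx << 1) | state[i]
        new_state.append(table[idx])
    return new_state

inputs, tables = random_boolean_network(n_nodes=8, k=2)
state = [random.Random(1).randint(0, 1) for _ in range(8)]
for _ in range(5):
    state = step(state, inputs, tables)
```

A state-topology evolution in the paper's sense would additionally mutate the wiring (`inputs`) and truth tables (`tables`) and select networks by task performance.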
Induction of Interpretable Possibilistic Logic Theories from Relational Data
The field of Statistical Relational Learning (SRL) is concerned with learning
probabilistic models from relational data. Learned SRL models are typically
represented using some kind of weighted logical formulas, which make them
considerably more interpretable than those obtained by e.g. neural networks. In
practice, however, these models are often still difficult to interpret
correctly, as they can contain many formulas that interact in non-trivial ways
and weights do not always have an intuitive meaning. To address this, we
propose a new SRL method which uses possibilistic logic to encode relational
models. Learned models are then essentially stratified classical theories,
which explicitly encode what can be derived with a given level of certainty.
Compared to Markov Logic Networks (MLNs), our method is faster and produces
considerably more interpretable models.
Comment: Longer version of a paper appearing in IJCAI 201
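To make the "stratified" reading concrete, here is a toy, hypothetical illustration of possibilistic inference over weighted Horn rules: each rule carries a necessity weight, and a derived fact inherits the minimum of the rule's weight and its premises' certainties (possibilistic modus ponens). The predicates and weights are invented for illustration and are not from the paper.

```python
def derive(facts, rules):
    """facts: {atom: certainty}; rules: [(premises, conclusion, weight)].
    Iterates to a fixpoint, keeping the best derivable certainty per atom."""
    certainty = dict(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion, weight in rules:
            if all(p in certainty for p in premises):
                # Possibilistic modus ponens: N(q) >= min(weight, premises).
                c = min([weight] + [certainty[p] for p in premises])
                if c > certainty.get(conclusion, 0.0):
                    certainty[conclusion] = c
                    changed = True
    return certainty

facts = {"smokes(anna)": 1.0, "friends(anna,bob)": 0.8}
rules = [(("smokes(anna)", "friends(anna,bob)"), "smokes(bob)", 0.6)]
result = derive(facts, rules)  # smokes(bob) derived with certainty 0.6
```

Answering a query "at certainty level α" then amounts to classical entailment using only the formulas whose weight is at least α, which is what makes the stratification explicit.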
Towards Informed Exploration for Deep Reinforcement Learning
In this thesis, we discuss various techniques for improving exploration in deep reinforcement learning. We begin with a brief review of reinforcement learning (RL) and the fundamental exploration vs. exploitation trade-off. We then review how deep RL has improved upon classical RL and summarize six categories of recent exploration methods for deep RL, in order of increasing usage of prior information. We then examine representative works in three of these categories and discuss their strengths and weaknesses. The first category, represented by Soft Q-learning, uses regularization to encourage exploration. The second category, represented by count-based exploration via hashing, maps states to hash codes for counting and assigns higher exploration bonuses to less frequently encountered states. The third category utilizes hierarchy and is represented by a modular architecture for RL agents playing StarCraft II. Finally, we conclude that exploration informed by prior knowledge is a promising research direction and suggest topics of potential impact.
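The second category above can be sketched in a few lines. The following toy code, in the spirit of SimHash-style count-based exploration (class and parameter names are my own, not from the thesis), projects a continuous state onto random hyperplanes, takes the signs as a binary hash code, counts code occurrences, and grants a bonus that shrinks as a code is seen more often.

```python
import numpy as np

class HashCounter:
    """Count-based exploration bonus via locality-sensitive hashing."""

    def __init__(self, state_dim, n_bits=16, beta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((n_bits, state_dim))  # random hyperplanes
        self.beta = beta
        self.counts = {}

    def bonus(self, state):
        # Sign pattern of the projection = binary hash code of the state.
        code = tuple((self.A @ np.asarray(state) > 0).astype(int))
        self.counts[code] = self.counts.get(code, 0) + 1
        # Bonus beta / sqrt(n(code)): rarely visited codes earn more reward.
        return self.beta / np.sqrt(self.counts[code])

counter = HashCounter(state_dim=4)
b1 = counter.bonus([0.1, 0.2, 0.3, 0.4])  # first visit to this code
b2 = counter.bonus([0.1, 0.2, 0.3, 0.4])  # repeat visit: smaller bonus
```

In training, the bonus would simply be added to the environment reward, so the agent is drawn toward states whose hash codes it has rarely seen.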