Neural Network architectures design by Cellular Automata evolution
4th Conference of Systemics, Cybernetics and Informatics, Orlando, 23-26 July 2000.
The design of the architecture is a crucial step in the successful application of a neural network. In most cases, however, architecture design is a job for human experts, and it depends heavily both on the expert's experience and on a tedious trial-and-error process. The development of automatic methods to determine the architecture of feedforward neural networks is therefore a field of interest in the neural network community. These methods are generally based on search techniques such as genetic algorithms, simulated annealing, or evolution strategies. Most of them rely on a direct representation of the parameters of the network. This representation does not scale: representing large architectures requires very large structures. In this work, an indirect constructive encoding scheme is proposed to find optimal architectures of feedforward neural networks. The scheme is based on cellular automata representations in order to increase the scalability of the method.
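To make the idea concrete, here is a minimal sketch of an indirect, cellular-automaton-based encoding for feedforward architectures, assuming an elementary (binary, radius-1) CA and a run-length decoding of the final configuration into hidden-layer sizes. The rule number, grid size, and decoding scheme are illustrative assumptions, not the paper's exact scheme:

    # A minimal sketch: evolve a short seed with an elementary CA, then
    # decode the final configuration into hidden-layer sizes.
    import random

    def step(cells, rule):
        """One step of an elementary (radius-1, binary) cellular automaton."""
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    def decode_architecture(seed, rule=110, steps=8, max_layers=3):
        """Read hidden-layer sizes off the final CA configuration: each
        contiguous run of 1s becomes one layer whose size is the run
        length (an assumed decoding, for illustration)."""
        cells = list(seed)
        for _ in range(steps):
            cells = step(cells, rule)
        layers, run = [], 0
        for c in cells + [0]:
            if c:
                run += 1
            elif run:
                layers.append(run)
                run = 0
        return layers[:max_layers] or [1]  # at least one hidden unit

    if __name__ == "__main__":
        random.seed(0)
        seed = [random.randint(0, 1) for _ in range(16)]
        print("hidden layers:", decode_architecture(seed))

Because only the short seed is subject to evolution while the CA unfolds it into a full configuration, the genotype stays small even when the decoded architecture grows, which is the scalability argument the abstract makes.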
Using Program Synthesis for Program Analysis
In this paper, we identify a fragment of second-order logic with restricted
quantification that is expressive enough to capture numerous static analysis
problems (e.g. safety proving, bug finding, termination and non-termination
proving, superoptimisation). We call this fragment the {\it synthesis
fragment}. Satisfiability of a formula in the synthesis fragment is decidable
over finite domains; specifically the decision problem is NEXPTIME-complete. If
a formula in this fragment is satisfiable, a solution consists of a satisfying
assignment from the second order variables to \emph{functions over finite
domains}. To concretely find these solutions, we synthesise \emph{programs}
that compute the functions. Our program synthesis algorithm is complete for
finite state programs, i.e. every \emph{function} over finite domains is
computed by some \emph{program} that we can synthesise. We can therefore use
our synthesiser as a decision procedure for the synthesis fragment of
second-order logic, which in turn allows us to use it as a powerful backend for
many program analysis tasks. To show the tractability of our approach, we
evaluate the program synthesiser on several static analysis problems.
Comment: 19 pages, to appear in LPAR 2015. arXiv admin note: text overlap with arXiv:1409.492
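As a concrete illustration of the reduction, the toy sketch below decides an exists-forall formula over a small finite domain by enumerating candidate functions as lookup tables and returning the first one that satisfies the specification on every input. The generate-and-check loop and the example property are illustrative stand-ins, not the paper's synthesiser:

    # Deciding exists f. forall x. phi(f, x) over a finite domain by
    # synthesising the second-order witness as a lookup table.
    from itertools import product

    DOMAIN = range(4)  # illustrative finite domain

    def spec(f, x):
        """Example property: f is an involution with f(0) == 1."""
        return f[f[x]] == x and f[0] == 1

    def synthesise():
        """Enumerate candidate functions f: DOMAIN -> DOMAIN as tuples and
        return the first satisfying assignment for the second-order
        variable, or None if the formula is unsatisfiable."""
        for f in product(DOMAIN, repeat=len(DOMAIN)):
            if all(spec(f, x) for x in DOMAIN):
                return f
        return None

    if __name__ == "__main__":
        print("witness:", synthesise())  # -> (1, 0, 2, 3)

The returned table is itself a finite-state program computing the witness function, which is the sense in which program synthesis acts as a decision procedure for the fragment.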
POWERPLAY: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem
Most of computer science focuses on automatically solving given computational
problems. I focus on automatically inventing or discovering problems in a way
inspired by the playful behavior of animals and humans, to train a more and
more general problem solver from scratch in an unsupervised fashion. Consider
the infinite set of all computable descriptions of tasks with possibly
computable solutions. The novel algorithmic framework POWERPLAY (2011)
continually searches the space of possible pairs of new tasks and modifications
of the current problem solver, until it finds a more powerful problem solver
that provably solves all previously learned tasks plus the new one, while the
unmodified predecessor does not. Wow-effects are achieved by continually making
previously learned skills more efficient such that they require less time and
space. New skills may (partially) re-use previously learned skills. POWERPLAY's
search orders candidate pairs of tasks and solver modifications by their
conditional computational (time & space) complexity, given the stored
experience so far. The new task and its corresponding task-solving skill are
those first found and validated. The computational costs of validating new
tasks need not grow with task repertoire size. POWERPLAY's ongoing search for
novelty keeps breaking the generalization abilities of its present solver. This
is related to Goedel's sequence of increasingly powerful formal theories based
on adding formerly unprovable statements to the axioms without affecting
previously provable theorems. The continually increasing repertoire of problem
solving procedures can be exploited by a parallel search for solutions to
additional externally posed tasks. POWERPLAY may be viewed as a greedy but
practical implementation of basic principles of creativity. A first
experimental analysis can be found in separate papers [53,54].
Comment: 21 pages, additional connections to previous work, references to first experiments with POWERPLAY
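The toy sketch below shows only the shape of the POWERPLAY loop, under heavy simplifying assumptions: tasks are integers, "solving" a task means mapping it to its square, a solver modification adds one table entry, and the task value stands in for conditional complexity. The acceptance test mirrors the abstract: the modified solver must solve all learned tasks plus the new one, while the unmodified predecessor fails the new one:

    # A toy POWERPLAY-style loop: repeatedly find the cheapest (new task,
    # solver modification) pair that passes the acceptance test.

    def solves(solver, task):
        """Toy notion of 'solving': the solver maps the task to its square."""
        return solver.get(task) == task * task

    def powerplay(steps=5):
        solver, repertoire = {}, []
        for _ in range(steps):
            # Candidate tasks ordered by a toy complexity measure (their value).
            for task in range(100):
                if solves(solver, task):
                    continue  # predecessor already solves it: no novelty
                candidate = dict(solver)
                candidate[task] = task * task  # toy solver modification
                if all(solves(candidate, t) for t in repertoire + [task]):
                    solver, repertoire = candidate, repertoire + [task]
                    print(f"learned task {task}; repertoire size {len(repertoire)}")
                    break
        return solver, repertoire

    if __name__ == "__main__":
        powerplay()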
Conformant Planning as a Case Study of Incremental QBF Solving
We consider planning with uncertainty in the initial state as a case study of
incremental quantified Boolean formula (QBF) solving. We report on experiments
with a workflow to incrementally encode a planning instance into a sequence of
QBFs. To solve this sequence of incrementally constructed QBFs, we use our
general-purpose incremental QBF solver DepQBF. Since the generated QBFs have
many clauses and variables in common, our approach avoids redundancy both in
the encoding phase and in the solving phase. Experimental results show that
incremental QBF solving outperforms non-incremental QBF solving. Our results
are the first empirical study of incremental QBF solving in the context of
planning and motivate its use in other application domains.
Comment: added reference to extended journal article; revision (camera-ready, to appear in the proceedings of AISC 2014, volume 8884 of LNAI, Springer)
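The sketch below shows the workflow shape only: clauses shared across plan lengths persist in the solver, while horizon-specific goal clauses are pushed and popped. The IncrementalQBF class is a hypothetical stand-in for an incremental solver's push/pop interface, not DepQBF's actual bindings, and encode_step/encode_goal are placeholder encoders:

    # Incremental workflow sketch: grow the encoding per horizon and
    # retract only the horizon-specific part between solver calls.

    class IncrementalQBF:
        """Hypothetical incremental QBF solver facade (illustrative only)."""

        def __init__(self):
            self.shared = []   # clauses kept across all horizons
            self.frames = []   # stack of retractable clause groups

        def add_shared(self, clause):
            self.shared.append(clause)

        def push(self, clauses):
            self.frames.append(clauses)

        def pop(self):
            self.frames.pop()

        def solve(self):
            # A real incremental solver would decide the QBF here while
            # reusing information learned in earlier calls; this stub only
            # reports the problem size and answers 'unsatisfiable'.
            n = len(self.shared) + sum(len(f) for f in self.frames)
            print(f"solving QBF with {n} clauses")
            return False

    def conformant_plan(encode_step, encode_goal, max_horizon=3):
        """Encode the instance for growing plan lengths, reusing solver state."""
        solver = IncrementalQBF()
        for k in range(max_horizon + 1):
            solver.add_shared(encode_step(k))  # step-k transition clauses persist
            solver.push([encode_goal(k)])      # goal clause holds at horizon k only
            if solver.solve():
                return k                       # plan of length k exists
            solver.pop()                       # retract the horizon-k goal
        return None

    if __name__ == "__main__":
        print(conformant_plan(lambda k: f"trans_{k}", lambda k: f"goal_{k}"))

The redundancy saving the abstract describes comes from the shared clauses: they are encoded and loaded once, and the solver's learned information about them survives across the sequence of calls.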
Learning Moore Machines from Input-Output Traces
The problem of learning automata from example traces (but no equivalence or
membership queries) is fundamental in automata learning theory and practice. In
this paper we study this problem for finite state machines with inputs and
outputs, and in particular for Moore machines. We develop three algorithms for
solving this problem: (1) the PTAP algorithm, which transforms a set of
input-output traces into an incomplete Moore machine and then completes the
machine with self-loops; (2) the PRPNI algorithm, which uses the well-known
RPNI algorithm for automata learning to learn a product of automata encoding a
Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore
machine using PTAP extended with state merging. We prove that MooreMI has the
fundamental identification in the limit property. We also compare the
algorithms experimentally in terms of the size of the learned machine and
several notions of accuracy, introduced in this paper. Finally, we compare with
OSTIA, an algorithm that learns a more general class of transducers, and find
that OSTIA generally does not learn a Moore machine, even when fed with a
characteristic sample.
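As an illustration of step (1), the sketch below builds a prefix-tree Moore machine from input-output traces and completes it with self-loops. The trace format (an initial output followed by (input, output) pairs) and the naming of states by input prefixes are assumptions for illustration, not the paper's exact definitions:

    # PTAP-style construction: prefix tree from traces, then completion
    # with self-loops on undefined inputs.

    def ptap(traces, alphabet):
        # States are input prefixes; output[state] is the Moore output there.
        delta, output = {}, {(): traces[0][0]}
        for init_out, pairs in traces:
            state = ()
            assert output[()] == init_out, "inconsistent initial outputs"
            for inp, out in pairs:
                nxt = state + (inp,)
                delta[(state, inp)] = nxt
                if output.setdefault(nxt, out) != out:
                    raise ValueError("traces are not Moore-consistent")
                state = nxt
        # Completion: any undefined transition becomes a self-loop.
        for state in list(output):
            for inp in alphabet:
                delta.setdefault((state, inp), state)
        return delta, output

    if __name__ == "__main__":
        traces = [("0", [("a", "1"), ("b", "0")]), ("0", [("b", "1")])]
        delta, output = ptap(traces, alphabet={"a", "b"})
        print(len(output), "states;", len(delta), "transitions")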
Learning programs by learning from failures
We describe an inductive logic programming (ILP) approach called learning
from failures. In this approach, an ILP system (the learner) decomposes the
learning problem into three separate stages: generate, test, and constrain. In
the generate stage, the learner generates a hypothesis (a logic program) that
satisfies a set of hypothesis constraints (constraints on the syntactic form of
hypotheses). In the test stage, the learner tests the hypothesis against
training examples. A hypothesis fails when it does not entail all the positive
examples or entails a negative example. If a hypothesis fails, then, in the
constrain stage, the learner learns constraints from the failed hypothesis to
prune the hypothesis space, i.e. to constrain subsequent hypothesis generation.
For instance, if a hypothesis is too general (entails a negative example), the
constraints prune generalisations of the hypothesis. If a hypothesis is too
specific (does not entail all the positive examples), the constraints prune
specialisations of the hypothesis. This loop repeats until either (i) the
learner finds a hypothesis that entails all the positive and none of the
negative examples, or (ii) there are no more hypotheses to test. We introduce
Popper, an ILP system that implements this approach by combining answer set
programming and Prolog. Popper supports infinite problem domains, reasoning
about lists and numbers, learning textually minimal programs, and learning
recursive programs. Our experimental results on three domains (toy game
problems, robot strategies, and list transformations) show that (i) constraints
drastically improve learning performance, and (ii) Popper can outperform
existing ILP systems, both in terms of predictive accuracies and learning
times.
Comment: Accepted for the Machine Learning journal
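The toy sketch below mirrors the generate-test-constrain loop with interval hypotheses standing in for logic programs: "entails x" means lo <= x <= hi, so generalisations are supersets and specialisations are subsets, and the learned constraints prune accordingly. This is an illustrative analogy, not Popper's ASP-and-Prolog implementation:

    # Learning from failures over a toy hypothesis space of intervals.

    def learn(pos, neg, universe=range(10)):
        constraints = []  # learned pruning predicates
        # Generate: enumerate hypotheses in order of increasing size.
        hypotheses = sorted(
            ((lo, hi) for lo in universe for hi in universe if lo <= hi),
            key=lambda h: h[1] - h[0],
        )
        for lo, hi in hypotheses:
            if any(prune((lo, hi)) for prune in constraints):
                continue  # Constrain: skip hypotheses pruned by earlier failures.
            if any(lo <= x <= hi for x in neg):
                # Too general (entails a negative): prune all generalisations.
                constraints.append(lambda h, lo=lo, hi=hi: h[0] <= lo and h[1] >= hi)
            elif not all(lo <= x <= hi for x in pos):
                # Too specific (misses a positive): prune all specialisations.
                constraints.append(lambda h, lo=lo, hi=hi: h[0] >= lo and h[1] <= hi)
            else:
                return (lo, hi)  # entails all positives and no negatives
        return None

    if __name__ == "__main__":
        print(learn(pos=[3, 5], neg=[1, 8]))  # -> (3, 5)

Each failure rules out not just one hypothesis but a whole region of the search space, which is why the abstract reports that constraints drastically improve learning performance.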
A recurrent neural network for classification of unevenly sampled variable stars
Astronomical surveys of celestial sources produce streams of noisy time
series measuring flux versus time ("light curves"). Unlike in many other
physical domains, however, large (and source-specific) temporal gaps in data
arise naturally due to intranight cadence choices as well as diurnal and
seasonal constraints. With nightly observations of millions of variable stars
and transients from upcoming surveys, efficient and accurate discovery and
classification techniques on noisy, irregularly sampled data must be employed
with minimal human-in-the-loop involvement. Machine learning for inference
tasks on such data traditionally requires the laborious hand-coding of
domain-specific numerical summaries of raw data ("features"). Here we present a
novel unsupervised autoencoding recurrent neural network (RNN) that makes
explicit use of sampling times and known heteroskedastic noise properties. When
trained on optical variable star catalogs, this network produces supervised
classification models that rival other best-in-class approaches. We find that
autoencoded features learned on one time-domain survey perform nearly as well
when applied to another survey. These networks can continue to learn from new
unlabeled observations and may be used in other unsupervised tasks such as
forecasting and anomaly detection.
Comment: 23 pages, 14 figures. The published version is at Nature Astronomy (https://www.nature.com/articles/s41550-017-0321-z). Source code for models, experiments, and figures at https://github.com/bnaul/IrregularTimeSeriesAutoencoderPaper (Zenodo Code DOI: 10.5281/zenodo.1045560)
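A minimal PyTorch sketch of the architecture idea, assuming PyTorch is available: the encoder consumes (time gap, flux, uncertainty) triples, so sampling times and heteroskedastic noise enter the model explicitly, and the reconstruction loss is weighted by the known per-point errors. Layer sizes and the exact decoder conditioning are illustrative assumptions, not the paper's published network:

    # A recurrent autoencoder for irregularly sampled light curves.
    import torch
    import torch.nn as nn

    class LightCurveAutoencoder(nn.Module):
        def __init__(self, hidden=32, embedding=8):
            super().__init__()
            self.encoder = nn.GRU(3, hidden, batch_first=True)  # (dt, flux, err)
            self.to_embedding = nn.Linear(hidden, embedding)
            self.decoder = nn.GRU(embedding + 1, hidden, batch_first=True)
            self.to_flux = nn.Linear(hidden, 1)

        def forward(self, dt, flux, err):
            x = torch.stack([dt, flux, err], dim=-1)  # (batch, T, 3)
            _, h = self.encoder(x)
            z = self.to_embedding(h[-1])              # (batch, embedding)
            # The decoder sees the embedding at every step plus the time
            # gaps, so reconstruction respects the irregular sampling.
            zrep = z.unsqueeze(1).expand(-1, dt.shape[1], -1)
            dec_in = torch.cat([zrep, dt.unsqueeze(-1)], dim=-1)
            out, _ = self.decoder(dec_in)
            return self.to_flux(out).squeeze(-1)      # reconstructed flux

    if __name__ == "__main__":
        batch, T = 4, 50
        dt, flux = torch.rand(batch, T), torch.randn(batch, T)
        err = 0.1 + 0.05 * torch.rand(batch, T)
        model = LightCurveAutoencoder()
        recon = model(dt, flux, err)
        # Weight the reconstruction loss by the known per-point uncertainties,
        # so noisy measurements count for less.
        loss = (((recon - flux) / err) ** 2).mean()
        loss.backward()
        print("loss:", float(loss))

The fixed-size embedding z is the learned feature vector that the abstract describes feeding into downstream supervised classifiers.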