Model Creation and Equivalence Proofs of Cellular Automata and Artificial Neural Networks
Computational methods and mathematical models have arguably invaded every
scientific discipline, giving rise to a field of research of their own called
computational science. Mathematical models are the theoretical foundation of
computational science.
science. Since Newton's time, differential equations in mathematical models
have been widely and successfully used to describe the macroscopic or global
behaviour of systems. With spatially inhomogeneous, time-varying, local
element-specific, and often non-linear interactions, the dynamics of complex
systems is in contrast more efficiently described by local rules and thus in an
algorithmic and local or microscopic manner. A theory of mathematical
modelling that takes these characteristics of complex systems into account has
yet to be established. We recently presented a so-called allagmatic method
including a system metamodel to provide a framework for describing, modelling,
simulating, and interpreting complex systems. Implementations of cellular
automata and artificial neural networks were described and created with that
method. Guidance from philosophy was helpful in these first studies, which
focused on programming and feasibility. A rigorous mathematical formalism, however, is
still missing. Such a formalism would not only describe and define the system
metamodel more precisely; it would also further generalise it, extending its
reach to formal treatment in applied mathematics and theoretical aspects of
computational science, and its applicability to other mathematical and
computational models such as agent-based models. Here, a mathematical
definition of the system metamodel is provided. Based on the presented
formalism, model creation and equivalence of cellular automata and artificial
neural networks are proved. It thus provides a formal approach for studying the
creation of mathematical models as well as their structural and operational
comparison.
Comment: 13 pages, 1 table
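The kind of structural correspondence between cellular automata and neural networks that the abstract refers to can be illustrated with a toy example (a minimal sketch of my own, not the paper's formalism): a majority-vote cellular automaton whose local rule is computed exactly by a single threshold neuron.

```python
# Illustrative sketch: a majority-rule cellular automaton and a single
# threshold neuron implementing the same local update rule. All names and
# the choice of rule are assumptions for illustration only.

def ca_rule(left, centre, right):
    # Majority rule: the cell becomes 1 iff at least two of the three
    # neighbourhood cells are 1.
    return 1 if left + centre + right >= 2 else 0

def neuron_rule(left, centre, right):
    # The same rule as a perceptron: weights (1, 1, 1), bias -1.5,
    # step activation.
    s = 1.0 * left + 1.0 * centre + 1.0 * right - 1.5
    return 1 if s > 0 else 0

def step(cells, rule):
    # One synchronous update with periodic boundary conditions.
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

state = [0, 1, 1, 0, 1, 0, 0, 1]
assert step(state, ca_rule) == step(state, neuron_rule)
```

Both update functions agree on every configuration, so the automaton and the one-neuron network are operationally equivalent on this toy rule.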
Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking
This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then undertake a verification of these models using the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. This paper also presents a discussion of a number of broad issues concerning cognitive neuroscience and the debate as to whether symbolic processing or connectionism is a suitable representation of cognitive systems. Additionally, the issue of integrating symbolic techniques, such as formal methods, with complex neural networks is discussed. We then argue that symbolic verifications may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
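The core idea behind model checking communicating automata can be sketched in plain Python (this does not reproduce Uppaal's timed semantics or the paper's actual models; the two automata and their action labels below are illustrative assumptions): explore the synchronous product of two automata and check whether a given joint state is reachable.

```python
from collections import deque

# Illustrative sketch of reachability checking over the synchronous product
# of two communicating automata. Each automaton maps a state to a list of
# (action, next_state) pairs; a joint transition fires only when both
# automata take a transition with the same action label.

trainer = {"idle": [("train", "busy")],
           "busy": [("done", "idle")]}
network = {"untrained": [("train", "learning")],
           "learning": [("done", "trained")],
           "trained": []}

def reachable(a, b, start, target):
    # Breadth-first search over the product state space.
    seen, queue = {start}, deque([start])
    while queue:
        s, t = queue.popleft()
        for act_a, s2 in a[s]:
            for act_b, t2 in b[t]:
                if act_a == act_b and (s2, t2) not in seen:
                    seen.add((s2, t2))
                    queue.append((s2, t2))
    return target in seen

# Query: can the network reach its trained state?
print(reachable(trainer, network, ("idle", "untrained"), ("idle", "trained")))  # True
```

A model checker such as Uppaal performs an analogous (but far more sophisticated) exploration, additionally handling clocks, invariants, and temporal-logic queries.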
Weighted Automata Extraction from Recurrent Neural Networks via Regression on State Spaces
We present a method to extract a weighted finite automaton (WFA) from a
recurrent neural network (RNN). Our algorithm is based on the WFA learning
algorithm by Balle and Mohri, which is in turn an extension of Angluin's
classic \lstar algorithm. Our technical novelty is in the use of
\emph{regression} methods for the so-called equivalence queries, thus
exploiting the internal state space of an RNN to prioritize counterexample
candidates. This way we achieve a quantitative/weighted extension of the recent
work by Weiss, Goldberg and Yahav that extracts DFAs. We experimentally
evaluate the accuracy, expressivity and efficiency of the extracted WFAs.
Comment: AAAI 2020. We are preparing to distribute the implementation.
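For readers unfamiliar with weighted finite automata, the function a WFA computes can be sketched in a few lines (the vectors and matrices below are illustrative assumptions; the paper's extraction algorithm itself is not shown). A WFA assigns to each string w = w1...wn the weight alpha^T · A_{w1} · ... · A_{wn} · beta.

```python
# Illustrative sketch of WFA semantics over the alphabet {a, b}.
# The concrete weights are made-up example numbers, not from the paper.

alpha = [1.0, 0.0]                       # initial weight vector
beta = [0.0, 1.0]                        # final weight vector
A = {                                    # one transition matrix per symbol
    "a": [[0.5, 0.5], [0.0, 1.0]],
    "b": [[1.0, 0.0], [0.2, 0.8]],
}

def vec_mat(v, m):
    # Row vector times matrix.
    return [sum(v[i] * m[i][j] for i in range(len(v)))
            for j in range(len(m[0]))]

def wfa_weight(word):
    # Weight of a string: alpha^T . A_{w1} . ... . A_{wn} . beta
    v = alpha
    for symbol in word:
        v = vec_mat(v, A[symbol])
    return sum(x * y for x, y in zip(v, beta))

weight = wfa_weight("ab")   # alpha . A_a . A_b . beta
```

Extraction methods in the style of Balle and Mohri learn such alpha, beta, and transition matrices from membership and equivalence queries; the paper's contribution is answering the equivalence queries via regression on the RNN's state space.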
A New Oscillating-Error Technique for Classifiers
This paper describes a new method for reducing the error in a classifier. It
uses an error correction update that includes the very simple rule of either
adding or subtracting the error adjustment, based on whether the variable value
is currently larger or smaller than the desired value. While a traditional
neuron would sum the inputs together and then apply a function to the total,
this new method can change the function decision for each input value. This
gives added flexibility to the convergence procedure, where through a series of
transpositions, variables that are far away can continue towards the desired
value, whereas variables that are originally much closer can oscillate from one
side to the other. Tests show that the method can successfully classify some
benchmark datasets. It can also work in batch mode, with reduced training
times, and can be used as part of a neural network architecture. Some
comparisons with an earlier paper on wave shapes are also made.
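The sign-based update the abstract describes can be sketched as follows (a minimal sketch; the function name and the fixed step size are illustrative assumptions, not the paper's exact formulation): each variable moves toward its desired value by a fixed adjustment, added when the variable is below the target and subtracted when it is above.

```python
# Illustrative sketch of an oscillating-error update: add or subtract the
# adjustment depending on which side of the desired value each variable
# currently sits. Names and step size are assumptions for illustration.

def oscillating_update(values, desired, adjustment):
    updated = []
    for v, d in zip(values, desired):
        if v < d:
            updated.append(v + adjustment)  # smaller than desired: add
        elif v > d:
            updated.append(v - adjustment)  # larger than desired: subtract
        else:
            updated.append(v)               # already at the desired value
    return updated

# Repeated application: a variable far from its target keeps converging,
# while one within `adjustment` of the target oscillates around it.
state = [0.0, 1.0, 0.45]
target = [0.5, 0.5, 0.5]
for _ in range(4):
    state = oscillating_update(state, target, 0.2)
```

This matches the behaviour described above: distant variables continue towards the desired value, while variables that start close transpose from one side of it to the other.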