Constructive spiking neural networks for simulations of neuroplasticity
Artificial neural networks are important tools in machine learning and neuroscience;
however, a difficult step in their implementation is the selection of the neural network size and
structure. This thesis develops fundamental theory on algorithms for constructing neurons in
spiking neural networks and simulations of neuroplasticity. This theory is applied in the
development of a constructive algorithm based on spike-timing-dependent plasticity (STDP) that
achieves continual one-shot learning of hidden spike patterns through neuron construction.
The theoretical developments in this thesis begin with the proposal of a set of definitions of
the fundamental components of constructive neural networks. Disagreement in terminology across the
literature and a lack of clear definitions and requirements for constructive neural networks are
factors in the poor visibility and fragmentation of this research. The proposed definitions are used as
the basis for a generalised methodology for decomposing constructive neural networks into
components to perform comparisons, design and analysis.
Spiking neuron models are uncommon in constructive neural network literature; however, spiking
neurons are common in simulated studies in neuroscience. Spike-timing-dependent construction is
proposed as a distinct class of constructive algorithm for spiking neural networks. Past algorithms
that perform spike-timing-dependent construction are decomposed into defined components for a
detailed critical comparison and found to have limited applicability in simulations of biological
neural networks.
This thesis develops concepts and principles for designing constructive algorithms that are
compatible with simulations of biological neural networks. Simulations often have orders of
magnitude fewer neurons than related biological neural systems; therefore, the neurons in a
simulation may be assumed to be a selection or subset of a larger neural system with many neurons
not simulated. Neuron construction and pruning may therefore be reinterpreted as the transfer of
neurons between sets of simulated neurons and hypothetical neurons in the neural system.
Constructive algorithms with a functional equivalence to transferring neurons between sets allow
simulated neural networks to maintain biological plausibility while changing size.
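The set-transfer reinterpretation described above can be sketched in a few lines (a toy illustration with integer neuron IDs; the class and method names are ours, not the thesis's):

```python
# Toy sketch: neuron construction and pruning reinterpreted as transfers
# between a simulated set and a hypothetical (non-simulated) set.
class SelectiveSimulation:
    def __init__(self, simulated, hypothetical):
        self.simulated = set(simulated)        # neurons currently simulated
        self.hypothetical = set(hypothetical)  # assumed neurons not simulated

    def construct(self, neuron_id):
        """'Construct' a neuron: transfer it in from the hypothetical pool."""
        self.hypothetical.discard(neuron_id)
        self.simulated.add(neuron_id)

    def prune(self, neuron_id):
        """'Prune' a neuron: return it to the hypothetical pool."""
        self.simulated.discard(neuron_id)
        self.hypothetical.add(neuron_id)
```

Under this view the total neural system (the union of both sets) is unchanged by construction or pruning, which is the sense in which the network "maintains biological plausibility while changing size".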
The components of a novel constructive algorithm are incrementally developed from the principles
for biological plausibility. First, processes for calculating new synapse weights from observed
simulation activity and estimates of past STDP are developed and analysed. Second, a method for
predicting postsynaptic spike times for synapse weight calculations through the simulation of a proxy for hypothetical neurons is developed. Finally, spike-dependent conditions for neuron construction and pruning are developed and
the processes are combined in a constructive algorithm for simulations of STDP.
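The STDP mechanism underlying these weight-calculation processes can be illustrated with a standard pairwise exponential update rule (a minimal sketch of generic STDP, not the thesis's specific algorithm; the parameter values are assumptions):

```python
import numpy as np

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pairwise STDP weight change for spike-time difference dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0

def update_weight(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate STDP over all pre/post spike pairs for one synapse, then clip."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_delta_w(t_post - t_pre)
    return float(np.clip(w, w_min, w_max))
```

Estimating the converged outcome of such updates from observed activity, rather than iterating them, is what enables the one-shot weight predictions described above.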
Repeating hidden spike patterns can be detected by neurons tuned through STDP; this result is
reproduced in STDP simulations with neuron construction. Tuned neurons become unresponsive to other
activity, preventing detuning but also preventing neurons from learning new spike patterns.
Continual learning is demonstrated through neuron construction with immediate detection of new
spike patterns from one-shot predictions of STDP convergence.
Future research may investigate applications of the developed constructive algorithm in
neuroscience and machine learning. The developed theory on constructive neural networks and
concepts of selective simulation of neurons also provide new directions for future research.
Thesis (Ph.D.) -- University of Adelaide, School of Mechanical Engineering, 201
Design of artificial neural networks based on genetic algorithms to forecast time series
In this work an initial approach to designing Artificial Neural Networks to forecast time series is tackled, with the design process automated by a Genetic Algorithm. A key issue for these kinds of approaches is what information is included in the chromosome that represents an Artificial Neural Network. There are two principal answers to this question: first, the chromosome contains information about the parameters of the topology, architecture, learning parameters, etc. of the Artificial Neural Network, i.e. a Direct Encoding Scheme; second, the chromosome contains the information necessary for a constructive method to give rise to an Artificial Neural Network topology (or architecture), i.e. an Indirect Encoding Scheme. The results of a Direct Encoding Scheme (to be compared with Indirect Encoding Schemes developed in future work) used to design Artificial Neural Networks for the NN3 Forecasting Time Series Competition are shown.
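A Direct Encoding Scheme of the kind described above can be sketched as a chromosome whose fields directly specify network parameters, evolved with standard genetic operators (a hedged illustration: the field layout, ranges, and function names are our assumptions, not the paper's actual encoding):

```python
import random

def random_chromosome(max_hidden=16):
    """Directly encode network design parameters as chromosome fields."""
    return {
        "n_hidden": random.randint(1, max_hidden),        # topology parameter
        "learning_rate": random.uniform(1e-4, 1e-1),      # training parameter
    }

def crossover(a, b):
    """Uniform crossover: the child takes each field from either parent."""
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(c, p=0.2, max_hidden=16):
    """Resample each field with probability p."""
    c = dict(c)
    if random.random() < p:
        c["n_hidden"] = random.randint(1, max_hidden)
    if random.random() < p:
        c["learning_rate"] = random.uniform(1e-4, 1e-1)
    return c
```

Each chromosome would be decoded into a network, trained, and scored on forecasting error to obtain its fitness; an Indirect Encoding Scheme would instead evolve the parameters of a constructive procedure that grows the topology.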
How Much Information is in a Jet?
Machine learning techniques are increasingly being applied toward data
analyses at the Large Hadron Collider, especially with applications for
discrimination of jets with different originating particles. Previous studies
of the power of machine learning in jet physics have typically employed image
recognition, natural language processing, or other algorithms that have been
extensively developed in computer science. While these studies have
demonstrated impressive discrimination power, often exceeding that of
widely-used observables, they have been formulated in a non-constructive manner
and it is not clear what additional information the machines are learning. In
this paper, we study machine learning for jet physics constructively,
expressing all of the information in a jet onto sets of observables that
completely and minimally span N-body phase space. For concreteness, we study
the application of machine learning for discrimination of boosted, hadronic
decays of Z bosons from jets initiated by QCD processes. Our results
demonstrate that the information in a jet that is useful for discrimination
power of QCD jets from Z bosons is saturated by only considering observables
that are sensitive to 4-body (8 dimensional) phase space.
Comment: 14 pages + appendices, 10 figures; v2: JHEP version, updated neural network, included deeper network and boosted decision tree result
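The quoted "4-body (8 dimensional)" count follows the standard dimensionality of massless N-body phase space, 3N - 4 (3N momentum components minus four total energy-momentum constraints); a one-line check (the function name is ours):

```python
def phase_space_dim(n_body):
    """Dimension of massless n-body phase space: 3n momentum components
    minus 4 constraints from total energy-momentum conservation."""
    return 3 * n_body - 4
```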
Evolutionary fuzzy system for architecture control in a constructive neural network
This work describes an evolutionary system to control the growth of a constructive neural network for autonomous navigation. A classifier system generates Takagi-Sugeno fuzzy rules and controls the architecture of a constructive neural network. The performance of the mobile robot guides the evolutionary learning mechanism. Experiments show the efficiency of the fuzzy classifier system in deciding whether it is worth inserting a new neuron into the architecture.
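A zero-order Takagi-Sugeno decision of this kind can be sketched as fuzzy rules whose weighted crisp outputs score whether inserting a neuron is worthwhile (a hedged illustration: the membership shapes, thresholds, and the use of a scalar "error" input are assumptions, not the paper's evolved rules):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def insertion_score(error):
    """Zero-order Takagi-Sugeno inference from two rules:
    IF error is LOW  THEN score = 0.0  (do not insert)
    IF error is HIGH THEN score = 1.0  (insert a neuron)
    Output is the firing-strength-weighted average of the crisp consequents."""
    w_low = tri(error, -0.5, 0.0, 0.5)
    w_high = tri(error, 0.3, 1.0, 1.7)
    den = w_low + w_high
    return (w_low * 0.0 + w_high * 1.0) / den if den > 0 else 0.0

def should_insert(error, threshold=0.5):
    return insertion_score(error) > threshold
```

In the evolutionary setting, the rule antecedents and consequents would be encoded in classifiers and refined using the robot's navigation performance as the fitness signal.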