Metaheuristic Algorithms for Convolution Neural Network
Modern optimization techniques are typically either heuristic or metaheuristic. Such
techniques have solved a range of optimization problems in science, engineering,
and industry. However, implementation strategies for using metaheuristics to
improve the accuracy of convolution neural networks (CNN), a prominent deep
learning method, are still rarely investigated. Deep learning is a class of
machine learning techniques that aims to move closer to the goal of artificial
intelligence: building a machine that can successfully perform any intellectual
task a human can carry out. In this paper, we
propose implementation strategies for three popular metaheuristics, namely
simulated annealing, differential evolution, and harmony search, to optimize
CNN. The performance of these metaheuristic methods in optimizing CNN on the
MNIST and CIFAR classification datasets was evaluated and compared.
Furthermore, the proposed methods were also compared with the original CNN.
Although the proposed methods show an increase in computation time, their
accuracy also improved (by up to 7.14 percent).
Comment: Article ID 1537325, 13 pages. Received 29 January 2016; Revised 15
April 2016; Accepted 10 May 2016. Academic Editor: Martin Hagan. Published in
Computational Intelligence and Neuroscience, Volume 2016, Hindawi Publishing.
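Of the three metaheuristics named in the abstract, simulated annealing is the simplest to illustrate. The sketch below shows the generic accept-or-reject loop on a toy one-dimensional quadratic standing in for a model's validation loss; the `neighbor` step size, initial temperature, and cooling rate are illustrative assumptions, not the paper's settings.

```python
import math
import random

def simulated_annealing(loss, init, neighbor, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Minimize `loss`, occasionally accepting worse neighbors to escape local minima."""
    rng = random.Random(seed)
    current, current_loss = init, loss(init)
    best, best_loss = current, current_loss
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        cand_loss = loss(candidate)
        # Always accept downhill moves; accept uphill moves with probability exp(-delta / T).
        if cand_loss < current_loss or rng.random() < math.exp(-(cand_loss - current_loss) / t):
            current, current_loss = candidate, cand_loss
            if current_loss < best_loss:
                best, best_loss = current, current_loss
        t *= cooling  # geometric cooling schedule
    return best, best_loss

# Toy example: a 1-D quadratic stands in for a CNN's validation loss over one weight.
sol, val = simulated_annealing(
    loss=lambda w: (w - 3.0) ** 2,
    init=0.0,
    neighbor=lambda w, rng: w + rng.uniform(-0.5, 0.5),
)
```

In the paper's setting, the candidate solution would be a vector of CNN parameters and the loss a classification error on a validation split, but the accept-or-reject structure is the same.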
Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions
A fundamental aspect of learning in biological neural networks is the
plasticity property which allows them to modify their configurations during
their lifetime. Hebbian learning is a biologically plausible mechanism for
modeling the plasticity property in artificial neural networks (ANNs), based on
the local interactions of neurons. However, the emergence of a coherent global
learning behavior from local Hebbian plasticity rules is not very well
understood. The goal of this work is to discover interpretable local Hebbian
learning rules that can provide autonomous global learning. To achieve this, we
use a discrete representation to encode the learning rules in a finite search
space. These rules are then used to perform synaptic changes, based on the
local interactions of the neurons. We employ genetic algorithms to optimize
these rules to allow learning on two separate tasks (a foraging and a
prey-predator scenario) in online lifetime learning settings. The resulting
evolved rules converged into a set of well-defined, interpretable types, which
are thoroughly discussed. Notably, the performance of these rules, while
adapting the ANNs during the learning tasks, is comparable to that of offline
learning methods such as hill climbing.
Comment: Evolutionary Computation Journal
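The approach above can be sketched concretely. The toy example below evolves a discrete rule table, one weight change in {-1, 0, +1} per (pre, post) activity pair, with a simple elitist genetic algorithm. The fitness function here is an illustrative stand-in that rewards matching the classic Hebb update (potentiate only when pre and post fire together); the paper instead scores rules by lifetime learning performance on foraging and prey-predator tasks.

```python
import random

rng = random.Random(1)
PAIRS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # all binary (pre, post) activity pairs

def fitness(rule):
    # Toy deterministic objective: reward rules that match the classic Hebb
    # update (change weight by pre * post) on every activity pair.
    return -sum(abs(rule[(pre, post)] - pre * post) for pre, post in PAIRS)

def random_rule():
    # A genome: one discrete synaptic change in {-1, 0, +1} per activity pair.
    return {pair: rng.choice([-1, 0, 1]) for pair in PAIRS}

def mutate(rule):
    child = dict(rule)
    pair = rng.choice(PAIRS)
    child[pair] = rng.choice([-1, 0, 1])  # point mutation on one table entry
    return child

# Simple elitist genetic algorithm over the 3^4 = 81 possible rule tables.
population = [random_rule() for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    elites = population[:10]
    population = elites + [mutate(rng.choice(elites)) for _ in range(10)]
best = max(population, key=fitness)
```

The discrete, finite search space is what makes the evolved rules interpretable: each genome is a small lookup table that can be read off directly.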
A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize known results, and
point to relevant references in the literature.
Anytime Stereo Image Depth Estimation on Mobile Devices
Many applications of stereo depth estimation in robotics require the
generation of accurate disparity maps in real time under significant
computational constraints. Current state-of-the-art algorithms force a choice
between generating accurate mappings at a slow pace or quickly generating
inaccurate ones; in addition, these methods typically require far too many
parameters to be usable on power- or memory-constrained devices.
Motivated by these shortcomings, we propose a novel approach for disparity
prediction in the anytime setting. In contrast to prior work, our end-to-end
learned approach can trade off computation and accuracy at inference time.
Depth estimation is performed in stages, during which the model can be queried
at any time to output its current best estimate. Our final model can process
1242×375 resolution images within a range of 10-35 FPS on an NVIDIA
Jetson TX2 module with only marginal increases in error -- using two orders of
magnitude fewer parameters than the most competitive baseline. The source code
is available at https://github.com/mileyan/AnyNet
Comment: Accepted by ICRA 2019
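The staged, query-at-any-time design described above can be sketched as a generic anytime loop: each stage refines the previous stage's estimate, and the caller stops whenever its compute budget runs out, taking the best estimate so far. The class and placeholder stages below are illustrative assumptions, not AnyNet's actual architecture.

```python
import time

class AnytimeEstimator:
    """Coarse-to-fine pipeline that can be stopped after any stage."""

    def __init__(self, stages):
        self.stages = stages  # refine(prev_estimate) callables, coarse to fine

    def run(self, budget_s):
        estimate = None
        deadline = time.monotonic() + budget_s
        for stage in self.stages:
            estimate = stage(estimate)        # always finish the current stage
            if time.monotonic() >= deadline:  # budget spent: return best so far
                break
        return estimate

# Placeholder stages: integers stand in for disparity maps of growing resolution.
estimator = AnytimeEstimator([
    lambda prev: 1,         # coarse initial disparity
    lambda prev: prev * 2,  # first refinement
    lambda prev: prev * 2,  # second refinement
])
```

With a generous budget the caller gets the fully refined output; with an exhausted budget it still gets the coarse stage-one estimate, which is what makes the accuracy-versus-latency trade-off tunable at inference time.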