A Particle Swarm Optimization-based Flexible Convolutional Auto-Encoder for Image Classification
Convolutional auto-encoders have shown remarkable performance when stacked into deep convolutional neural networks for classifying image data over the past several years. However, their intrinsic architectures prevent them from constructing state-of-the-art convolutional neural networks. In this regard, we propose a flexible convolutional auto-encoder that eliminates the traditional constraints on the numbers of convolutional layers and pooling layers. We also design an architecture discovery method based on particle swarm optimization, which automatically searches for the optimal architecture of the proposed flexible convolutional auto-encoder with much less computational resource and without any manual intervention. We test the proposed flexible convolutional auto-encoder with the designed architecture optimization algorithm on four extensively used image classification datasets, using a single graphics processing unit card. Experimental results show that our work significantly outperforms the peer competitors, including the state-of-the-art algorithm.
Comment: Accepted by IEEE Transactions on Neural Networks and Learning Systems, 201
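The abstract above describes a particle swarm optimization (PSO) search over auto-encoder architectures. A minimal sketch of that idea follows, assuming a toy encoding (`[n_conv_layers, n_pool_layers, n_filters]`) and a stand-in `fitness` function; the paper's actual encoding and fitness (validation error of the trained auto-encoder) are not reproduced here, so all names and bounds are illustrative assumptions.

```python
import random

# Hedged sketch: canonical PSO over a toy architecture encoding.
# DIMS = [n_conv_layers, n_pool_layers, n_filters] with illustrative
# bounds; `fitness` is a stand-in for validation error after training.

DIMS = 3
LOW, HIGH = [1, 0, 8], [8, 4, 128]

def fitness(x):
    # Stand-in objective: prefer moderate depth and filter counts.
    return (x[0] - 4) ** 2 + (x[1] - 2) ** 2 + ((x[2] - 64) / 16) ** 2

def clip(x):
    # Keep each architecture parameter inside its allowed range.
    return [min(max(v, lo), hi) for v, lo, hi in zip(x, LOW, HIGH)]

def pso(n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for lo, hi in zip(LOW, HIGH)]
          for _ in range(n_particles)]
    vs = [[0.0] * DIMS for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # per-particle best positions
    gbest = min(pbest, key=fitness)            # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(DIMS):
                # Standard velocity update: inertia + cognitive + social.
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            xs[i] = clip(xs[i])
            if fitness(xs[i]) < fitness(pbest[i]):
                pbest[i] = xs[i][:]
        gbest = min(pbest, key=fitness)
    # Round to an integer architecture before building the network.
    return [round(v) for v in gbest]

best = pso()
```

In a real run, evaluating `fitness` would mean training the candidate auto-encoder and measuring validation error, which is why the paper emphasizes doing the search on a single GPU card.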
Improving Performance Insensitivity of Large-scale Multiobjective Optimization via Monte Carlo Tree Search
The large-scale multiobjective optimization problem (LSMOP) is characterized by simultaneously optimizing multiple conflicting objectives over hundreds of decision variables. Many real-world applications in engineering fields can be modeled as LSMOPs; at the same time, engineering applications require insensitivity in performance. This requirement means that each run of the algorithm should not only achieve good performance, but also that performance should not fluctuate much across multiple runs, i.e., the algorithm shows good insensitivity. Considering that substantial computational resources are required for each run, it is essential to improve both the performance and the insensitivity of large-scale multiobjective optimization algorithms. However, existing large-scale multiobjective optimization algorithms focus solely on improving performance, leaving the insensitivity characteristics unattended. In this work, we propose an evolutionary algorithm for solving LSMOPs based on Monte Carlo tree search, called LMMOCTS, which aims to improve both performance and insensitivity on large-scale multiobjective optimization problems. The proposed method samples the decision variables to construct new nodes on the Monte Carlo tree for optimization and evaluation, and selects nodes with good evaluations for further search, reducing the performance sensitivity caused by the large number of decision variables. We compare the proposed algorithm with several state-of-the-art designs on different benchmark functions, and we propose two metrics to measure an algorithm's sensitivity. The experimental results confirm the effectiveness and performance insensitivity of the proposed design for solving large-scale multiobjective optimization problems.
Comment: 12 pages, 11 figures
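The core loop described above, sampling decision variables to build tree nodes and expanding the best-evaluated ones, can be sketched as follows. This is an illustrative approximation only, not the paper's LMMOCTS: the scalarized `objective`, group sizes, and node-selection rule are all assumptions standing in for the real multiobjective evaluation.

```python
import random

# Illustrative sketch: a Monte-Carlo-tree-style search in which each
# node samples a subset of the decision variables to perturb, and
# better-evaluated nodes are selected for further expansion.

N_VARS = 100          # "hundreds of decision variables" in an LSMOP

def objective(x):
    # Stand-in scalarization of two conflicting objectives:
    # distance to the origin vs. distance to a shifted optimum.
    f1 = sum(v * v for v in x)
    f2 = sum((v - 1.0) ** 2 for v in x)
    return 0.5 * f1 + 0.5 * f2

class Node:
    def __init__(self, x):
        self.x = x
        self.score = objective(x)
        self.children = []

def expand(node, rng, n_children=4, group_size=10, step=0.2):
    # Each child perturbs a sampled group of decision variables,
    # so no single expansion touches the full large-scale vector.
    for _ in range(n_children):
        idx = rng.sample(range(N_VARS), group_size)
        child_x = node.x[:]
        for i in idx:
            child_x[i] += rng.uniform(-step, step)
        node.children.append(Node(child_x))

def tree_search(iters=200, seed=0):
    rng = random.Random(seed)
    root = Node([rng.uniform(-1, 2) for _ in range(N_VARS)])
    frontier = [root]
    best = root
    for _ in range(iters):
        # Select the best-evaluated frontier node for further search.
        node = min(frontier, key=lambda n: n.score)
        frontier.remove(node)
        expand(node, rng)
        frontier.extend(node.children)
        best = min([best] + node.children, key=lambda n: n.score)
    return best

result = tree_search()
```

Sampling variable groups per node, rather than mutating all decision variables at once, is one plausible reading of how the tree structure limits run-to-run fluctuation: each expansion only commits to a small, evaluated change.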