Autoencoding with a classifier system
Autoencoders are data-specific compression algorithms learned automatically from examples. The predominant approach has been to construct single large global models that cover the whole domain. However, training and evaluating models of increasing size comes at the price of additional time and computational cost. Conditional computation, sparsity, and model pruning techniques can reduce these costs while maintaining performance. Learning classifier systems (LCS) are a framework for adaptively subdividing input spaces into an ensemble of simpler local approximations that together cover the domain. LCS perform conditional computation through a population of individual gating/guarding components, each associated with a local approximation. This article explores the use of an LCS to adaptively decompose the input domain into a collection of small autoencoders, where local solutions of different complexity may emerge. In addition to benefits in convergence time and computational cost, it is shown that both the code size and the resulting decoder computational cost can be reduced when compared with the equivalent global model.
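The gating/guarding idea behind the LCS ensemble can be illustrated with a minimal sketch. The hyper-rectangle conditions, linear autoencoders, and fixed two-member population below are illustrative assumptions, not the paper's actual classifier representation:

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalAutoencoder:
    """One ensemble member: a matching condition (guard) plus a tiny linear autoencoder."""
    def __init__(self, low, high, in_dim, code_dim):
        self.low, self.high = low, high                      # hyper-rectangle guard
        self.enc = rng.normal(0, 0.1, (in_dim, code_dim))    # small local encoder
        self.dec = rng.normal(0, 0.1, (code_dim, in_dim))    # small local decoder

    def matches(self, x):
        # The guard decides whether this member participates for input x
        return bool(np.all((x >= self.low) & (x <= self.high)))

    def reconstruct(self, x):
        return x @ self.enc @ self.dec

# Two local members covering complementary halves of a simple 2-D input space
members = [
    LocalAutoencoder(np.array([0.0, 0.0]), np.array([0.5, 1.0]), in_dim=2, code_dim=1),
    LocalAutoencoder(np.array([0.5, 0.0]), np.array([1.0, 1.0]), in_dim=2, code_dim=1),
]

def encode_decode(x):
    # Conditional computation: only members whose guard matches are evaluated
    active = [m for m in members if m.matches(x)]
    return sum(m.reconstruct(x) for m in active) / len(active)

x = np.array([0.2, 0.7])
y = encode_decode(x)
```

Because only matching members run, the per-input decoder cost is that of one small autoencoder rather than a global model.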
Multiform Evolution for High-Dimensional Problems with Low Effective Dimensionality
In this paper, we scale evolutionary algorithms to high-dimensional
optimization problems that deceptively possess a low effective dimensionality
(certain dimensions do not significantly affect the objective function). To
this end, an instantiation of the multiform optimization paradigm is presented,
where multiple low-dimensional counterparts of a target high-dimensional task
are generated via random embeddings. Since the exact relationship between the
auxiliary (low-dimensional) tasks and the target is a priori unknown, a
multiform evolutionary algorithm is developed for unifying all formulations
into a single multi-task setting. The resultant joint optimization enables the
target task to efficiently reuse solutions evolved across various
low-dimensional searches via cross-form genetic transfers, hence speeding up
overall convergence. To validate the efficacy of our
proposed algorithmic framework, comprehensive experimental studies are carried
out on well-known continuous benchmark functions as well as a set of practical
problems in the hyper-parameter tuning of machine learning models and deep
learning models in classification tasks and Predator-Prey games, respectively.
Comment: 12 pages, 6 figures
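The random-embedding construction of the low-dimensional auxiliary tasks can be sketched as follows. The toy objective, the choice of embedding dimension, and the random search standing in for an evolutionary solver are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

D, d = 1000, 2   # ambient dimensionality vs. a guessed effective dimensionality

def f_high(x):
    # A high-dimensional objective that really depends on only two coordinates,
    # i.e., it has low effective dimensionality
    return (x[0] - 0.3) ** 2 + (x[1] + 0.1) ** 2

A = rng.normal(size=(D, d))   # random embedding matrix

def f_low(y):
    # Auxiliary low-dimensional counterpart: search over y, evaluate the embedded point
    return f_high(np.clip(A @ y, -1.0, 1.0))

# A crude random search in the 2-D space stands in for one evolutionary "form"
best_y = min((rng.uniform(-1.0, 1.0, d) for _ in range(2000)), key=f_low)
```

The multiform algorithm in the paper goes further by running several such embeddings as tasks in one multi-task setting and transferring solutions between them; this sketch shows only a single embedded formulation.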
A Multi-Transformation Evolutionary Framework for Influence Maximization in Social Networks
Influence maximization is a crucial issue for mining the deep information of
social networks, which aims to select a seed set from the network to maximize
the number of influenced nodes. To evaluate the influence spread of a seed set
efficiently, existing studies have proposed transformations with lower
computational costs to replace the expensive Monte Carlo simulation process.
These alternate transformations, based on network prior knowledge, induce
different search behaviors that share similar characteristics from various
perspectives, so it is difficult for users to determine a suitable
transformation a priori. This article proposes a multi-transformation
evolutionary framework for influence maximization (MTEFIM) with convergence
guarantees to exploit the potential similarities and unique advantages of
alternate transformations and to avoid users manually determining the most
suitable one. In MTEFIM, multiple transformations are optimized simultaneously
as multiple tasks. Each transformation is assigned an evolutionary solver.
MTEFIM comprises three major components: 1) estimating the potential
relationship across transformations based on the degree of overlap across
individuals of different populations, 2) transferring individuals across
populations adaptively according to the inter-transformation relationship, and
3) selecting the final output seed set containing all the transformation's
knowledge. The effectiveness of MTEFIM is validated on both benchmarks and
real-world social networks. The experimental results show that MTEFIM can
efficiently utilize the potentially transferable knowledge across multiple
transformations to achieve highly competitive performance compared to several
popular IM-specific methods. The implementation of MTEFIM can be accessed at
https://github.com/xiaofangxd/MTEFIM.
Comment: This work has been submitted to the IEEE Computational Intelligence Magazine for publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
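The overlap-based transfer step (components 1 and 2 above) can be sketched with seed sets represented as Python sets. The mean Jaccard index as the overlap measure and the migration rule below are simplifying assumptions, not MTEFIM's exact operators:

```python
import random

random.seed(1)

def overlap(pop_a, pop_b):
    """Degree of overlap between two populations of seed sets (mean Jaccard index)."""
    scores = [len(a & b) / len(a | b) for a in pop_a for b in pop_b]
    return sum(scores) / len(scores)

# Two populations evolved under two hypothetical IM transformations
pop1 = [{1, 2, 3}, {2, 3, 4}]
pop2 = [{2, 3, 5}, {1, 3, 4}]

rel = overlap(pop1, pop2)   # estimated inter-transformation relationship

def transfer(src, dst, rate):
    # Adaptive migration: a higher estimated overlap lets more individuals cross over
    migrants = [s for s in src if random.random() < rate]
    return dst + migrants

new_pop2 = transfer(pop1, pop2, rate=rel)
```

Transformations whose populations barely overlap thus exchange few individuals, which limits negative transfer between unrelated formulations.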
A Scalable Test Problem Generator for Sequential Transfer Optimization
Sequential transfer optimization (STO), which aims to improve optimization
performance by exploiting knowledge captured from previously-solved
optimization tasks stored in a database, has been gaining increasing research
attention in recent years. However, despite significant advancements in
algorithm design, the test problems in STO are not well designed. Oftentimes,
they are either randomly assembled from existing benchmark functions that have
identical optima or are generated from practical problems that exhibit limited
variations. The relationships between the optimal solutions of source and
target tasks in these problems are manually configured and thus monotonous,
limiting their ability to represent the diverse relationships of real-world
problems. Consequently, the promising results achieved by many algorithms on
these problems are highly biased and difficult to generalize to other
problems. In light of this, we first introduce a few rudimentary concepts for
characterizing STO problems (STOPs) and present an important problem feature
overlooked in previous studies, namely similarity distribution, which
quantitatively delineates the relationship between the optima of source and
target tasks. Then, we propose general design guidelines and a problem
generator with superior extendibility. Specifically, the similarity
distribution of a problem can be systematically customized by modifying a
parameterized density function, enabling a broad spectrum of representation for
the diverse similarity relationships of real-world problems. Lastly, a
benchmark suite with 12 individual STOPs is developed using the proposed
generator, which can serve as an arena for comparing different STO algorithms.
The source code of the benchmark suite is available at
https://github.com/XmingHsueh/STOP
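The customizable similarity distribution can be sketched as follows. Using a Beta density as the parameterized density and placing each source optimum at a distance of (1 - similarity) from the target optimum are illustrative assumptions, not the generator's actual construction:

```python
import numpy as np

rng = np.random.default_rng(7)

def generate_stop(target_opt, num_sources, alpha, beta):
    """Sketch of a customizable STOP: source optima whose similarity to the
    target optimum follows a parameterized (here Beta(alpha, beta)) density."""
    sims = rng.beta(alpha, beta, num_sources)      # similarities in (0, 1)
    d = len(target_opt)
    sources = []
    for s in sims:
        direction = rng.normal(size=d)
        direction /= np.linalg.norm(direction)     # random unit direction
        # similarity 1 -> the source optimum coincides with the target optimum
        sources.append(target_opt + (1.0 - s) * direction)
    return sims, np.array(sources)

# Skewing alpha/beta shifts the suite between mostly-related and mostly-unrelated sources
sims, src_opts = generate_stop(np.zeros(3), num_sources=100, alpha=5.0, beta=1.0)
```

Varying the density parameters is what lets a single generator span the spectrum from highly related to nearly unrelated source tasks.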
Convergence acceleration for multiobjective sparse reconstruction via knowledge transfer
© Springer Nature Switzerland AG 2019. Multiobjective sparse reconstruction (MOSR) methods can potentially obtain superior reconstruction performance. However, they suffer from high computational cost, especially in high-dimensional reconstruction. Furthermore, they are generally implemented independently without reusing prior knowledge from past experiences, leading to unnecessary computational consumption due to the re-exploration of similar search spaces. To address these problems, we propose a sparse-constraint knowledge transfer operator to accelerate the convergence of MOSR solvers by reusing the knowledge from past problem-solving experiences. Firstly, we introduce the deep nonlinear feature coding method to extract the feature mapping between the search of the current problem and a previously solved MOSR problem. Through this mapping, we learn a set of knowledge-induced solutions that contain the search experience of the past problem. Thereafter, we develop and apply a sparse-constraint strategy to refine these learned solutions to guarantee their sparse characteristics. Finally, we inject the refined solutions into the iteration of the current problem to facilitate convergence. To validate the efficiency of the proposed operator, comprehensive studies on extensive simulated signal reconstruction are conducted.
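The sparse-constraint refinement step can be illustrated with a hard-thresholding sketch. Keeping the k largest-magnitude entries is one simple way to enforce sparsity on transferred solutions; the paper's actual strategy may differ:

```python
import numpy as np

def sparse_refine(solutions, k):
    """Sparse-constraint step (hard-thresholding sketch): keep each transferred
    solution's k largest-magnitude entries and zero out the rest."""
    refined = np.zeros_like(solutions)
    for i, x in enumerate(solutions):
        keep = np.argsort(np.abs(x))[-k:]   # indices of the k largest magnitudes
        refined[i, keep] = x[keep]
    return refined

# Hypothetical knowledge-induced solutions mapped over from a past MOSR problem
transferred = np.array([[0.9, -0.05, 0.02, -1.2, 0.01],
                        [0.1,  0.8, -0.7,  0.03, 0.0]])

# Refined solutions are guaranteed k-sparse before injection into the current run
injected = sparse_refine(transferred, k=2)
```

Refinement matters because the learned feature mapping generally produces dense vectors, which would otherwise violate the sparsity prior of the reconstruction problem.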
Scalable Transfer Evolutionary Optimization: Coping with Big Task Instances
In today's digital world, we are confronted with an explosion of data and
models produced and manipulated by numerous large-scale IoT/cloud-based
applications. Under such settings, existing transfer evolutionary optimization
frameworks grapple with satisfying two important quality attributes, namely
scalability against a growing number of source tasks and online learning
agility against sparsity of relevant sources to the target task of interest.
Satisfying these attributes shall facilitate practical deployment of transfer
optimization to big source instances as well as simultaneously curbing the
threat of negative transfer. While applications of existing algorithms are
limited to tens of source tasks, in this paper, we take a quantum leap forward
in enabling two orders of magnitude scale-up in the number of tasks; i.e., we
efficiently handle scenarios with up to thousands of source problem instances.
We devise a novel transfer evolutionary optimization framework comprising two
co-evolving species for joint evolutions in the space of source knowledge and
in the search space of solutions to the target problem. In particular,
co-evolution enables the learned knowledge to be orchestrated on the fly,
expediting convergence in the target optimization task. We have conducted an
extensive series of experiments across a set of practically motivated discrete
and continuous optimization examples comprising a large number of source
problem instances, of which only a small fraction show source-target
relatedness. The experimental results strongly validate the efficacy of our
proposed framework with two salient features of scalability and online learning
agility.
Comment: 12 pages, 5 figures, 2 tables, 2 algorithm pseudocodes
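The co-evolution of a knowledge species alongside the solution search can be sketched as an adaptive mixture over many source models. The Gaussian source models, credit-assignment rule, and learning rate below are illustrative assumptions, not the framework's actual operators:

```python
import numpy as np

rng = np.random.default_rng(3)

def target(x):
    # Target task: maximize (optimum near 0.8 in every coordinate)
    return -np.sum((x - 0.8) ** 2)

# Many source "models" (here, Gaussian sampling distributions); only the few
# whose means lie near 0.8 are actually related to the target
num_sources, d = 1000, 5
source_means = rng.uniform(0, 1, (num_sources, d))
weights = np.full(num_sources, 1.0 / num_sources)   # the co-evolving knowledge species

for gen in range(20):
    # Sample candidate solutions from sources chosen by the current weights
    idx = rng.choice(num_sources, size=50, p=weights)
    cand = source_means[idx] + 0.05 * rng.normal(size=(50, d))
    fit = np.array([target(c) for c in cand])
    # On-the-fly orchestration: reward sources whose samples did well, renormalize
    credit = np.exp(fit - fit.max())
    np.add.at(weights, idx, 0.1 * credit)
    weights /= weights.sum()
```

Because unrelated sources receive little credit, their weights decay toward zero, which is one way the joint evolution curbs negative transfer even when only a small fraction of the thousands of sources are related.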
Combining Lyapunov Optimization With Evolutionary Transfer Optimization for Long-Term Energy Minimization in IRS-Aided Communications
This article studies an intelligent reflecting surface (IRS)-aided communication system under time-varying channels and stochastic data arrivals. In this system, we jointly optimize the phase-shift coefficient and the transmit power in sequential time slots to minimize the long-term energy consumption of all mobile devices while ensuring queue stability. Due to the dynamic environment, it is challenging to ensure queue stability. In addition, the need to make real-time decisions in each short time slot must also be considered. To this end, we propose a method (called LETO) that combines Lyapunov optimization with evolutionary transfer optimization (ETO) to solve the above optimization problem. LETO first adopts Lyapunov optimization to decouple the long-term stochastic optimization problem into deterministic optimization problems in sequential time slots. As a result, it can ensure queue stability, since the deterministic optimization problem in each time slot does not involve future information. After that, LETO develops an evolutionary transfer method to solve the optimization problem in each time slot. Specifically, we first define a metric to identify the optimization problems in past time slots similar to that of the current time slot, and then transfer their optimal solutions to construct a high-quality initial population in the current time slot. Since ETO effectively accelerates the search, we can make real-time decisions in each short time slot. Experimental studies verify the effectiveness of LETO by comparison with other algorithms.
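The Lyapunov decoupling step can be illustrated with a minimal drift-plus-penalty sketch. The rate model, the trade-off weight V, and the grid search over transmit power are illustrative assumptions, not the paper's actual IRS system model (in particular, the phase-shift optimization is omitted):

```python
import numpy as np

rng = np.random.default_rng(9)

V = 10.0      # Lyapunov trade-off weight between energy and queue backlog
Q = 0.0       # data queue backlog
P_MAX = 2.0   # transmit power budget per slot

def rate(p):
    # Hypothetical achievable-rate model for the link (phase shifts held fixed)
    return np.log2(1.0 + 4.0 * p)

energies, backlogs = [], []
for t in range(200):
    arrival = rng.uniform(0.5, 1.5)          # stochastic data arrival this slot
    # Per-slot deterministic subproblem produced by drift-plus-penalty:
    #   minimize  V * p - Q * rate(p)   subject to  0 <= p <= P_MAX
    # (no future information needed; solved here by a simple grid search)
    grid = np.linspace(0.0, P_MAX, 201)
    p = grid[np.argmin(V * grid - Q * rate(grid))]
    Q = max(Q + arrival - rate(p), 0.0)      # queue update: backlog pressure drives service
    energies.append(p)
    backlogs.append(Q)
```

A larger backlog Q tilts the per-slot objective toward serving the queue, which is how the decoupled problems keep the queue stable without knowing future arrivals; LETO's contribution is solving each such per-slot problem quickly by transferring solutions from similar past slots.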