15 research outputs found
A Multi-Transformation Evolutionary Framework for Influence Maximization in Social Networks
Influence maximization is a crucial problem in mining social networks: it
aims to select a seed set from the network so as to maximize
the number of influenced nodes. To evaluate the influence spread of a seed set
efficiently, existing studies have proposed transformations with lower
computational costs to replace the expensive Monte Carlo simulation process.
These alternative transformations, derived from network prior knowledge,
induce search behaviors that are similar in character yet differ in
perspective. As a result, it is difficult for users to determine a suitable
transformation a priori. This article proposes a multi-transformation
evolutionary framework for influence maximization (MTEFIM) with convergence
guarantees to exploit the potential similarities and unique advantages of
alternate transformations and to avoid users manually determining the most
suitable one. In MTEFIM, multiple transformations are optimized simultaneously
as multiple tasks. Each transformation is assigned an evolutionary solver.
MTEFIM comprises three major components: 1) estimating the potential
relationship across transformations based on the degree of overlap across
individuals of different populations, 2) transferring individuals across
populations adaptively according to the inter-transformation relationship, and
3) selecting the final output seed set by combining all the transformations'
knowledge. The effectiveness of MTEFIM is validated on both benchmarks and
real-world social networks. The experimental results show that MTEFIM can
efficiently utilize the potentially transferable knowledge across multiple
transformations to achieve highly competitive performance compared to several
popular IM-specific methods. The implementation of MTEFIM can be accessed at
https://github.com/xiaofangxd/MTEFIM.
Comment: This work has been submitted to the IEEE Computational Intelligence
Magazine for publication. Copyright may be transferred without notice, after
which this version may no longer be accessible.
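The expensive Monte Carlo evaluation that the abstract contrasts against can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the independent cascade model and the function name `influence_spread` are assumptions for illustration.

```python
import random

def influence_spread(graph, seeds, p=0.1, runs=1000):
    """Monte Carlo estimate of the expected influence spread of a seed
    set under the independent cascade model. `graph` maps each node to
    a list of its out-neighbors."""
    total = 0
    for _ in range(runs):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    # each newly active node gets one chance to activate
                    # each inactive neighbor, with probability p
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs
```

Because every seed-set evaluation requires many simulated cascades over the whole network, selecting seeds this way is costly, which motivates the cheaper surrogate transformations the paper builds on.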
Scalable Transfer Evolutionary Optimization: Coping with Big Task Instances
In today's digital world, we are confronted with an explosion of data and
models produced and manipulated by numerous large-scale IoT/cloud-based
applications. Under such settings, existing transfer evolutionary optimization
frameworks grapple with satisfying two important quality attributes, namely
scalability against a growing number of source tasks and online learning
agility against sparsity of relevant sources to the target task of interest.
Satisfying these attributes shall facilitate practical deployment of transfer
optimization to big source instances as well as simultaneously curbing the
threat of negative transfer. While applications of existing algorithms are
limited to tens of source tasks, in this paper, we take a quantum leap forward
in enabling two orders of magnitude scale-up in the number of tasks; i.e., we
efficiently handle scenarios with up to thousands of source problem instances.
We devise a novel transfer evolutionary optimization framework comprising two
co-evolving species for joint evolutions in the space of source knowledge and
in the search space of solutions to the target problem. In particular,
co-evolution enables the learned knowledge to be orchestrated on the fly,
expediting convergence in the target optimization task. We have conducted an
extensive series of experiments across a set of practically motivated discrete
and continuous optimization examples comprising a large number of source
problem instances, of which only a small fraction show source-target
relatedness. The experimental results strongly validate the efficacy of our
proposed framework with two salient features of scalability and online learning
agility.
Comment: 12 pages, 5 figures, 2 tables, 2 algorithm pseudocodes
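The two co-evolving species described above can be caricatured in a few lines: one species is a population of candidate solutions to the target task, the other a weight vector over source models that is adapted online according to how useful each source's samples prove to be. This is a loose sketch under strong simplifying assumptions (binary encoding, source models represented as per-bit probability vectors, the function name `coevolve` invented here), not the framework from the paper.

```python
import random

def coevolve(target_fitness, source_models, dim, pop_size=20, gens=50):
    """Sketch of two co-evolving species: target solutions and a weight
    vector over source models. Each source model is a hypothetical
    per-bit probability vector from which candidate solutions are sampled."""
    weights = [1.0 / len(source_models)] * len(source_models)
    pop = [[random.randint(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        # species 1: each source model proposes a sampled solution;
        # models whose samples compete well gain weight (online learning)
        for i, model in enumerate(source_models):
            sample = [1 if random.random() < q else 0 for q in model]
            if target_fitness(sample) >= min(target_fitness(x) for x in pop):
                weights[i] += 0.1
                pop[pop.index(min(pop, key=target_fitness))] = sample
        total = sum(weights)
        weights = [w / total for w in weights]
        # species 2: plain bit-flip mutation on the target population
        for x in pop:
            j = random.randrange(dim)
            x[j] ^= 1
    return max(pop, key=target_fitness), weights
```

Down-weighting unhelpful sources on the fly is what curbs negative transfer when only a small fraction of the (possibly thousands of) source instances are related to the target.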
Autoencoding with a classifier system
Autoencoders are data-specific compression algorithms learned automatically from examples. The predominant approach has been to construct single large global models that cover the domain. However, training and evaluating models of increasing size comes at the price of additional time and computational cost. Conditional computation, sparsity, and model pruning techniques can reduce these costs while maintaining performance. Learning classifier systems (LCS) are a framework for adaptively subdividing input spaces into an ensemble of simpler local approximations that together cover the domain. LCS perform conditional computation through the use of a population of individual gating/guarding components, each associated with a local approximation. This article explores the use of an LCS to adaptively decompose the input domain into a collection of small autoencoders where local solutions of different complexity may emerge. In addition to benefits in convergence time and computational cost, it is shown to be possible to reduce the code size as well as the resulting decoder's computational cost when compared with the equivalent global model.
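The gating idea behind the ensemble can be illustrated with a deliberately trivial stand-in for the local models: each rule pairs a matching condition (its niche of the input space) with a tiny local approximation, and the index of the matching rule serves as the compressed code. The class names and the centroid-based "autoencoder" are hypothetical simplifications, not the article's actual learned components.

```python
class LocalRule:
    """A gating condition paired with a tiny local model (here, just a
    centroid standing in for a small learned autoencoder)."""
    def __init__(self, low, high):
        self.low, self.high = low, high    # condition: matches x in [low, high)
        self.centroid = (low + high) / 2   # trivial local reconstruction

    def matches(self, x):
        return self.low <= x < self.high

class GatedEnsemble:
    """Sketch of LCS-style conditional computation: each input is routed
    to the single matching rule, so only that local model is evaluated."""
    def __init__(self, boundaries):
        self.rules = [LocalRule(a, b)
                      for a, b in zip(boundaries, boundaries[1:])]

    def encode(self, x):
        for i, rule in enumerate(self.rules):
            if rule.matches(x):
                return i   # the rule index is the compressed code
        raise ValueError("no matching rule covers this input")

    def decode(self, code):
        return self.rules[code].centroid
```

In the article's setting each niche would hold a small trained autoencoder rather than a centroid, and the LCS adapts the conditions themselves, which is where local solutions of different complexity can emerge.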
Multitasking Evolutionary Algorithm Based on Adaptive Seed Transfer for Combinatorial Problem
Evolutionary computing (EC) is widely used to solve combinatorial
optimization problems (COPs). Traditional EC methods can only solve a single
task in a single run, while real-life scenarios often need to solve multiple
COPs simultaneously. In recent years, evolutionary multitasking optimization
(EMTO) has become an emerging topic in the EC community, and many methods have
been designed to handle multiple COPs concurrently by exchanging
knowledge. However, many-task optimization, cross-domain knowledge transfer,
and negative transfer are still significant challenges in this field. A new
evolutionary multitasking algorithm based on adaptive seed transfer (MTEA-AST)
is developed for multitasking COPs in this work. First, a dimension unification
strategy is proposed to unify the dimensions of different tasks. Then, an
adaptive task selection strategy is designed to capture the similarity between
the target task and other online optimization tasks. The calculated similarity
is exploited to select suitable source tasks for the target one and determine
the transfer strength. Next, a task transfer strategy is established to select
seeds from source tasks and correct unsuitable knowledge in seeds to suppress
negative transfer. Finally, the experimental results indicate that MTEA-AST can
adaptively transfer knowledge in both same-domain and cross-domain many-task
environments. The proposed method shows competitive performance compared to
other state-of-the-art EMTO methods in experiments comprising four COPs.
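The adaptive task selection step described above can be sketched as a similarity-scored filter over candidate source tasks. Here the similarity measure (Jaccard overlap of best-so-far solutions encoded as sets) and the function names are illustrative assumptions; the paper's actual similarity estimate and transfer-strength rule may differ.

```python
def jaccard(a, b):
    """Overlap between two solutions encoded as sets of elements."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def select_sources(target_best, source_bests, k=2):
    """Sketch of adaptive task selection: score each candidate source
    task by the similarity of its best solution to the target's best,
    then keep the top-k as transfer sources, with the similarity itself
    reused as a similarity-proportional transfer strength."""
    scored = sorted(((jaccard(target_best, s), i)
                     for i, s in enumerate(source_bests)), reverse=True)
    # drop entirely unrelated tasks to suppress negative transfer
    return [(i, sim) for sim, i in scored[:k] if sim > 0.0]
```

Seeds taken from the selected sources would then still pass through a correction step (the paper's task transfer strategy) so that knowledge invalid under the target task's constraints is repaired rather than injected as-is.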
Inductive biases and metaknowledge representations for search-based optimization
"What I do not understand, I can still create."- H. Sayama
The following work follows this bon mot closely. Guided by questions such as "How can evolutionary processes exhibit learning behavior and consolidate knowledge?", "What are cognitive models of problem-solving?", and "How can we harness these altogether as computational techniques?", we clarify within this work the essentials required to implement them for metaheuristic search and optimization. We therefore look into existing models of computational problem-solvers and compare these with existing methodology in the literature. In particular, we find that the meta-learning model, which frames problem-solving in terms of domain-specific inductive biases and their arbitration through high-level abstractions, resolves outstanding issues with methodology proposed in the literature. Notably, it can also be related to ongoing research on algorithm selection and configuration frameworks. We then examine what it means to implement such a model: first, by identifying inductive biases in terms of algorithm components and modeling these with density estimation techniques; and second, by proposing methodology to process metadata generated by optimization algorithms in an automated manner through deep pattern recognition architectures for spatio-temporal feature extraction. Finally, we look into an exemplary shape optimization problem, which allows us to gain insight into what it means to apply our methodology to application scenarios. We end our work with a discussion of possible future directions and the limitations of such frameworks for system deployment.