Negatively Correlated Search
Evolutionary Algorithms (EAs) have been shown to be powerful tools for
complex optimization problems, which are ubiquitous in both communication and
big data analytics. This paper presents a new EA, namely Negatively Correlated
Search (NCS), which maintains multiple individual search processes in parallel
and models the search behaviors of individual search processes as probability
distributions. NCS explicitly promotes negatively correlated search behaviors
by encouraging differences among the probability distributions (search
behaviors). By this means, individual search processes share information and
cooperate with each other to search diverse regions of a search space, which
makes NCS a promising method for non-convex optimization. The cooperation
scheme of NCS could also be regarded as a novel diversity preservation scheme
that, different from other existing schemes, directly promotes diversity at the
level of search behaviors rather than merely trying to maintain diversity among
candidate solutions. Empirical studies showed that NCS is competitive with
well-established search methods, achieving the best overall
performance on 20 multimodal (non-convex) continuous optimization problems. The
advantages of NCS over state-of-the-art approaches are also demonstrated with a
case study on the synthesis of unequally spaced linear antenna arrays.
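The core idea, several parallel search processes whose Gaussian mutation distributions are explicitly pushed apart, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the acceptance rule trading off solution quality `f` against a Bhattacharyya-style distance `d` to the nearest other process is a simplified assumption.

```python
import numpy as np

def bhattacharyya_dist(m1, m2, var):
    # Bhattacharyya distance between isotropic Gaussians sharing variance `var`
    return 0.125 * np.sum((m1 - m2) ** 2) / var

def ncs_minimize(f, dim, n_proc=5, sigma=0.5, lam=1.0, iters=200, seed=0):
    """Minimal NCS-style sketch: parallel search processes accept candidates
    based on a quality/diversity trade-off (illustrative, simplified rule)."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-1.0, 1.0, (n_proc, dim))       # one solution per process
    fs = np.array([f(x) for x in xs])
    for _ in range(iters):
        for i in range(n_proc):
            cand = xs[i] + sigma * rng.standard_normal(dim)  # Gaussian mutation
            # diversity of a point: distance to the closest other process
            d_cand = min(bhattacharyya_dist(cand, xs[j], sigma ** 2)
                         for j in range(n_proc) if j != i)
            d_cur = min(bhattacharyya_dist(xs[i], xs[j], sigma ** 2)
                        for j in range(n_proc) if j != i)
            fc = f(cand)
            # accept when the candidate's quality-per-diversity ratio is better
            if fc / (d_cand + 1e-12) < lam * fs[i] / (d_cur + 1e-12):
                xs[i], fs[i] = cand, fc
    best = int(np.argmin(fs))
    return xs[best], fs[best]
```

For example, minimizing a sphere function with `ncs_minimize(lambda v: float(np.sum(v ** 2)), dim=3)` keeps the processes spread over different regions while they descend.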
A Parallel Divide-and-Conquer based Evolutionary Algorithm for Large-scale Optimization
Large-scale optimization problems that involve thousands of decision
variables have arisen extensively in various industrial areas. Although
evolutionary algorithms (EAs) are powerful optimization tools for many
real-world applications, they fail to solve these emerging large-scale problems
both effectively and efficiently. In this paper, we propose a novel
Divide-and-Conquer (DC) based EA that not only produces high-quality solutions
by solving sub-problems separately, but also fully exploits the power of
parallel computing by solving the sub-problems simultaneously. Existing
DC-based EAs that were deemed to enjoy the same advantages as the proposed
algorithm are shown to be practically incompatible with the parallel computing
scheme, unless trade-offs are made by compromising solution quality.
Comment: 12 pages, 0 figures
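A minimal sketch of the divide-and-conquer idea: split the decision variables into groups, evolve each sub-problem against a shared context vector, and note that the per-group inner loops are independent and could be dispatched to parallel workers. The parameter names and the simple hill-climbing sub-solver are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def dc_ea_minimize(f, dim, n_groups=4, iters=50, pop=10, sigma=0.3, seed=0):
    """Divide-and-conquer EA sketch: random variable grouping, then each
    group is evolved against a shared context vector."""
    rng = np.random.default_rng(seed)
    groups = np.array_split(rng.permutation(dim), n_groups)  # random grouping
    context = rng.uniform(-1.0, 1.0, dim)                    # shared full solution
    for _ in range(iters):
        # each group below is an independent sub-problem; the loop body could
        # run on a separate worker in a parallel implementation
        for g in groups:
            best_block, best_f = context[g].copy(), f(context)
            for _ in range(pop):
                trial = context.copy()
                trial[g] = best_block + sigma * rng.standard_normal(len(g))
                ft = f(trial)
                if ft < best_f:                 # keep strict improvements only
                    best_block, best_f = trial[g].copy(), ft
            context[g] = best_block             # write the block back
    return context, f(context)
```

On a separable objective such as the sphere function, each sub-problem converges independently and the combined context vector approaches the global optimum.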
High-dimensional Black-box Optimization via Divide and Approximate Conquer
Divide and Conquer (DC) is conceptually well suited to high-dimensional
optimization by decomposing a problem into multiple small-scale sub-problems.
However, appealing performance is seldom observed when the sub-problems are
interdependent. This paper suggests that the major difficulty of tackling
interdependent sub-problems lies in the precise evaluation of a partial
solution (to a sub-problem), which can be overwhelmingly costly and thus makes
sub-problems non-trivial to conquer. We therefore propose an approximation
approach, named Divide and Approximate Conquer (DAC), which reduces the cost of
partial solution evaluation from exponential time to polynomial time, while
still guaranteeing convergence to the global optimum of the original problem.
The effectiveness of DAC is demonstrated empirically on two
sets of non-separable high-dimensional problems.
Comment: 7 pages, 2 figures, conference
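The cost gap that DAC targets can be made concrete on a toy binary problem: evaluating a partial solution exactly means searching over every completion of the remaining variables (exponential in their number), whereas an approximate evaluation completes it with an incumbent context vector (a single function call). The objective and helper names below are illustrative assumptions, not the paper's definitions.

```python
import numpy as np
from itertools import product

def f(x):
    # toy non-separable objective: an interaction term links the sub-problems
    x = np.asarray(x, dtype=float)
    return float(np.sum(x) + 2.0 * x[0] * x[-1])

def exact_partial_value(block_bits, block_idx, dim):
    """Exact quality of a partial solution: the best f over ALL completions
    of the remaining variables (exponential cost)."""
    rest = [i for i in range(dim) if i not in block_idx]
    best = np.inf
    for comp in product([0, 1], repeat=len(rest)):
        x = np.empty(dim)
        x[list(block_idx)] = block_bits
        x[rest] = comp
        best = min(best, f(x))
    return best

def approx_partial_value(block_bits, block_idx, context):
    """DAC-style approximation: complete the partial solution with an
    incumbent context vector, costing a single evaluation."""
    x = context.copy()
    x[list(block_idx)] = block_bits
    return f(x)
```

Here `exact_partial_value` enumerates 2^(dim-k) completions while the approximation uses one evaluation; DAC's contribution is showing that such cheap evaluations can be made while still guaranteeing convergence to the global optimum.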
Kernel Truncated Regression Representation for Robust Subspace Clustering
Subspace clustering aims to group data points into multiple clusters of which
each corresponds to one subspace. Most existing subspace clustering approaches
assume that input data lie on linear subspaces. In practice, however, this
assumption usually does not hold. To achieve nonlinear subspace clustering, we
propose a novel method, called kernel truncated regression representation. Our
method consists of the following four steps: 1) projecting the input data into
a hidden space, where each data point can be linearly represented by other data
points; 2) calculating the linear representation coefficients of the data
representations in the hidden space; 3) truncating the trivial coefficients to
achieve robustness and block-diagonality; and 4) executing the graph cutting
operation on the coefficient matrix by solving a graph Laplacian problem. Our
method has the advantages of a closed-form solution and the capacity of
clustering data points that lie on nonlinear subspaces. The first advantage
makes our method efficient in handling large-scale datasets, and the second one
enables the proposed method to conquer the nonlinear subspace clustering
challenge. Extensive experiments on six benchmarks demonstrate the
effectiveness and the efficiency of the proposed method in comparison with
current state-of-the-art approaches.
Comment: 14 pages
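The four steps can be sketched with an RBF kernel standing in for the hidden-space projection. The regularized closed form C = (K + λI)⁻¹K, the truncation rule, and all parameter names are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def ktrr_affinity(X, gamma=1.0, lam=0.1, keep=5):
    """Steps 1-3: kernel projection, closed-form coefficients, truncation."""
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)                        # 1) RBF kernel (hidden space)
    C = np.linalg.solve(K + lam * np.eye(n), K)    # 2) closed-form coefficients
    np.fill_diagonal(C, 0.0)
    A = np.abs(C)
    for j in range(n):                             # 3) zero all but the `keep`
        small = np.argsort(A[:, j])[:-keep]        #    largest coefficients
        A[small, j] = 0.0                          #    in each column
    return A + A.T                                 # symmetric affinity

def spectral_bipartition(W):
    """Step 4: graph cut via the sign of the graph Laplacian's Fiedler
    vector (two clusters only, for illustration)."""
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)                    # ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)
```

On two well-separated blobs this recovers the grouping, provided `keep` is large enough that the affinity graph stays connected; for more clusters one would embed with several eigenvectors and run k-means.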
Population-based Algorithm Portfolios with automated constituent algorithms selection
Population-based Algorithm Portfolios (PAP) is an appealing framework for integrating different Evolutionary Algorithms (EAs) to solve challenging numerical optimization problems. In particular, PAP has shown significant advantages over single EAs when a number of problems need to be solved simultaneously. Previous investigation of PAP reveals that choosing appropriate constituent algorithms is crucial to its success. However, no method has been developed for this purpose. In this paper, an extended version of PAP, namely PAP based on an Estimated Performance Matrix (EPM-PAP), is proposed. EPM-PAP is equipped with a novel constituent algorithm selection module, which is based on the EPM of each candidate EA. Empirical studies demonstrate that the EPM-based selection method can successfully identify appropriate constituent EAs, and thus EPM-PAP outperformed all single EAs considered in this work.
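One plausible way to use an estimated performance matrix for constituent selection is to score each candidate subset by its best-per-problem performance. The exhaustive scoring below is an illustrative assumption, not the paper's selection rule.

```python
import numpy as np
from itertools import combinations

def select_portfolio(epm, k):
    """Choose k constituent algorithms from an estimated performance matrix.
    epm[i, j] = estimated error of algorithm i on problem j (lower is better).
    Illustrative subset scoring, not the paper's exact method."""
    best_subset, best_score = None, np.inf
    for subset in combinations(range(epm.shape[0]), k):
        # a portfolio is assumed to do as well as its best member per problem
        score = epm[list(subset)].min(axis=0).sum()
        if score < best_score:
            best_subset, best_score = subset, score
    return best_subset
```

With complementary candidates (each strong on different problems), the pair that covers both problems is preferred over a mediocre all-rounder.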
Parallel Exploration via Negatively Correlated Search
Effective exploration is a key to successful search. The recently proposed
Negatively Correlated Search (NCS) tries to achieve this by parallel
exploration, where a set of search processes are driven to be negatively
correlated so that different promising areas of the search space can be visited
simultaneously. Various applications have verified the advantages of such novel
search behaviors. Nevertheless, a mathematical understanding is still
lacking, as the previous NCS was mostly devised by intuition. In this paper, a
more principled NCS is presented, showing that parallel exploration is
equivalent to the explicit maximization of both the population diversity and
the population solution quality, and can be optimally achieved by partial
gradient descent on both models with respect to each search process. For
empirical assessment, reinforcement learning tasks that heavily demand
exploration ability are considered. The new NCS is applied to popular
reinforcement learning problems, i.e., playing Atari games, to directly train a
deep convolutional network with 1.7 million connection weights in
environments with uncertain and delayed rewards. Empirical results show that
the significant advantages of NCS over the compared state-of-the-art methods
can be largely attributed to its effective parallel exploration ability.
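A sketch of one update step in this spirit: each process ascends an ES-style estimate of the expected-fitness gradient (quality term) plus a term that pushes its distribution mean away from the nearest other process (diversity term). The specific gradient estimator, the repulsion form, and all parameters are illustrative assumptions; the paper derives the exact objective.

```python
import numpy as np

def ncs_es_step(mus, sigma, fitness, lam=0.1, lr=0.05, n_samples=10, rng=None):
    """One sketched update: quality gradient estimate plus a diversity
    repulsion from the nearest other search process (illustrative form)."""
    rng = rng or np.random.default_rng(0)
    new_mus = []
    for i, mu in enumerate(mus):
        # quality term: score-function gradient estimate with a mean baseline
        eps = rng.standard_normal((n_samples, mu.size))
        fs = np.array([fitness(mu + sigma * e) for e in eps])
        g_quality = ((fs - fs.mean())[:, None] * eps).mean(axis=0) / sigma
        # diversity term: unit repulsion from the nearest other mean
        nearest = min((m for j, m in enumerate(mus) if j != i),
                      key=lambda m: np.sum((m - mu) ** 2))
        g_div = mu - nearest
        g_div = g_div / (np.linalg.norm(g_div) + 1e-12)
        new_mus.append(mu + lr * (g_quality + lam * g_div))
    return new_mus
```

On a simple quadratic reward the means converge toward the optimum while the repulsion keeps the processes distinct, which is the trade-off the paper formalizes.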
Strange stars with different quark mass scalings
We investigate the stability of strange quark matter and the properties of
the corresponding strange stars, within a wide range of quark mass scaling. The
calculation shows that the resulting maximum mass always lies between 1.5 and
1.8 solar masses for all the scalings chosen here. Strange star
sequences with a linear scaling would support less gravitational mass, and a
change (increase or decrease) of the scaling around the linear scaling would
lead to a larger maximum mass. Radii invariably decrease with the mass scaling.
Then the larger the scaling, the faster the star might spin. In addition, the
variation of the scaling would cause an order of magnitude change of the strong
electric field on quark surface, which is essential to support possible crusts
of strange stars against gravity and may then have some astrophysical
implications.
Comment: 5 pages, 6 figures, 1 table. accepted by M