A new evolutionary search strategy for global optimization of high-dimensional problems
Global optimization of high-dimensional problems in practical applications remains a major challenge to the evolutionary computation research community. This paper demonstrates the weakness of randomization-based evolutionary algorithms in searching high-dimensional spaces. A new strategy, SP-UCI, is developed to treat the complexity caused by high dimensionality. This strategy features a slope-based searching kernel and a scheme for maintaining the particle population's capability of searching over the full search space. Examinations of this strategy on a suite of sophisticated composition benchmark functions demonstrate that SP-UCI surpasses two popular algorithms, the particle swarm optimizer (PSO) and differential evolution (DE), on high-dimensional problems. Experimental results also corroborate the argument that, in high-dimensional optimization, only problems with well-formed fitness landscapes are solvable, and slope-based schemes are preferable to randomization-based ones. © 2011 Elsevier Inc. All rights reserved.
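For context, the randomization-based baseline that SP-UCI is compared against can be sketched as a standard global-best PSO. This is a minimal textbook implementation, not the SP-UCI strategy itself (the abstract does not specify the slope-based kernel), and all hyperparameters are common defaults rather than the authors' settings:

```python
import numpy as np

def pso_sphere(dim=30, n_particles=40, iters=500, seed=0):
    """Minimal global-best PSO on the sphere function f(x) = sum(x**2).

    Illustrative sketch only: standard constriction-factor PSO, not the
    paper's SP-UCI; hyperparameters are common defaults.
    """
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    f = (x ** 2).sum(axis=1)                      # fitness values
    pbest, pbest_f = x.copy(), f.copy()           # personal bests
    g = pbest[pbest_f.argmin()].copy()            # global best

    w, c1, c2 = 0.729, 1.494, 1.494               # Clerc constriction values
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # absorbing bound handling
        f = (x ** 2).sum(axis=1)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

best_x, best_f = pso_sphere()
```

On an easy unimodal function like the sphere this converges reliably; the papers above show that such randomization-driven updates break down on high-dimensional composition landscapes.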
A solution to the crucial problem of population degeneration in high-dimensional evolutionary optimization
Three popular evolutionary optimization algorithms are tested on high-dimensional benchmark functions. An important phenomenon responsible for many failures, population degeneration, is discovered: through evolution, the population of searching particles degenerates into a subspace of the search space that excludes the global optimum. The search subsequently tends to be confined to this subspace and eventually misses the global optimum. Principal component analysis (PCA) is introduced to detect population degeneration and to remedy its adverse effects. The experimental results reveal that an algorithm's efficacy and efficiency are closely related to the population degeneration phenomenon. Guidelines for improving evolutionary algorithms for high-dimensional global optimization are provided. An application to highly nonlinear hydrological models demonstrates the efficacy of the improved evolutionary algorithms in solving complex practical problems. © 2011 IEEE
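The PCA-based degeneration check described above can be sketched as follows: run PCA on the particle positions and count the principal directions whose variance has numerically vanished. A nonzero count means the swarm has collapsed into a lower-dimensional subspace and can no longer move along the lost directions. The function name and variance threshold here are illustrative choices, not taken from the paper:

```python
import numpy as np

def degeneration_ratio(population, var_threshold=1e-12):
    """Count search directions lost to population degeneration via PCA.

    Illustrative sketch: the threshold and interface are assumptions,
    not the paper's exact procedure.
    """
    centered = population - population.mean(axis=0)
    # Eigen-decomposition of the sample covariance matrix = PCA.
    cov = centered.T @ centered / (len(population) - 1)
    eigvals = np.linalg.eigvalsh(cov)
    lost = int((eigvals < var_threshold).sum())
    return lost, population.shape[1]

# A 100-particle swarm in 10-D that has collapsed onto a 3-D subspace:
rng = np.random.default_rng(1)
basis = rng.standard_normal((3, 10))
swarm = rng.standard_normal((100, 3)) @ basis
lost, dim = degeneration_ratio(swarm)
```

Once lost directions are identified, a remedy in the spirit of the paper is to re-inject variance along them, restoring the population's ability to search the full space.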
Handling boundary constraints for particle swarm optimization in high-dimensional search space
Despite the fact that the popular particle swarm optimizer (PSO) is being extensively applied to many real-world problems that often have high-dimensional and complex fitness landscapes, the effects of boundary constraints on PSO have not attracted adequate attention in the literature. In accordance with the theoretical analysis in [11], our numerical experiments show that, in high-dimensional search spaces, particles tend to fly outside the boundary with very high probability during the first few iterations. Consequently, the method used to handle boundary violations is critical to the performance of PSO. In this study, we reveal that the widely used random and absorbing bound-handling schemes may paralyze PSO on high-dimensional and complex problems. We also explore in detail the distinct mechanisms responsible for the failures of these two schemes. Finally, we suggest that using high-dimensional and complex benchmark functions, such as the composition functions in [19], is a prerequisite for identifying the potential problems in applying PSO to real-world applications, because certain properties of standard benchmark functions leave such problems hidden. © 2011 Elsevier Inc. All rights reserved.
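The two bound-handling schemes discussed above are commonly formulated as follows. These are textbook versions (the paper's exact variants may differ): the absorbing scheme clamps violating coordinates onto the boundary, while the random scheme resamples them uniformly inside the feasible box.

```python
import numpy as np

def absorb(x, v, lo, hi):
    """Absorbing scheme: clamp violating coordinates onto the boundary
    and zero the corresponding velocity components."""
    out = (x < lo) | (x > hi)
    x = np.clip(x, lo, hi)
    v = np.where(out, 0.0, v)
    return x, v

def random_reinit(x, v, lo, hi, rng):
    """Random scheme: resample violating coordinates uniformly inside
    the feasible box; velocity is left unchanged."""
    out = (x < lo) | (x > hi)
    x = np.where(out, rng.uniform(lo, hi, x.shape), x)
    return x, v
```

In high dimensions nearly every particle violates some coordinate early on, so the choice between these per-coordinate policies ends up shaping the whole search, which is the failure mechanism the study dissects.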
SOS-convex Semi-algebraic Programs and Their Applications to Robust Optimization: A Tractable Class of Nonsmooth Convex Optimization
In this paper, we introduce a new class of nonsmooth convex functions, called SOS-convex semi-algebraic functions, extending the recently proposed notion of SOS-convex polynomials. This class of nonsmooth convex functions covers many common nonsmooth functions arising in applications, such as the Euclidean norm, the maximum eigenvalue function, and the least squares functions with $\ell_1$-regularization or elastic net regularization used in statistics and compressed sensing. We show that, under commonly used strict feasibility conditions, the optimal value and an optimal solution of SOS-convex semi-algebraic programs can be found by solving a single semidefinite programming (SDP) problem. We achieve these results by using tools from semi-algebraic geometry, the convex-concave minimax theorem, and a recently established Jensen-inequality-type result for SOS-convex polynomials. As an application, we outline how the derived results can be applied to show that robust SOS-convex optimization problems under restricted spectrahedron data uncertainty enjoy exact SDP relaxations. This extends the existing exact SDP relaxation result for restricted ellipsoidal data uncertainty and answers the open questions left in [Optimization Letters 9, 1-18 (2015)] on how to recover a robust solution from the semidefinite programming relaxation in this broader setting.
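For readers unfamiliar with the underlying polynomial notion, the standard definition of SOS-convexity (as used in the literature the abstract builds on) can be stated as:

```latex
\textbf{Definition (SOS-convexity).}
A polynomial $f \in \mathbb{R}[x_1,\dots,x_n]$ is \emph{SOS-convex} if its
Hessian admits a factorization
\[
  \nabla^{2} f(x) \;=\; M(x)^{\top} M(x)
\]
for some (not necessarily square) matrix $M(x)$ with polynomial entries.
SOS-convexity implies convexity, and, unlike convexity of a general
polynomial, it can be verified by solving a single semidefinite program.
```

The paper's contribution is to extend tractability of this kind from SOS-convex polynomials to a broader nonsmooth semi-algebraic class.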