An Improved Robot Path Planning Algorithm
Robot path planning is an NP-hard problem that traditional optimization methods struggle to solve effectively, and the traditional genetic algorithm is easily trapped in local minima. Therefore, starting from a simple genetic algorithm, we combine it with the basic ideology of the orthogonal design method, applying that method to population initialization, using an intergenerational elite mechanism, and introducing an adaptive local search operator to avoid local minima and improve the convergence speed, thus forming a new genetic algorithm. A series of numerical experiments shows that the new algorithm is efficient. We also apply the proposed algorithm to the robot path planning problem, and the experimental results indicate that the new algorithm solves robot path planning problems efficiently and that the best path can usually be found.
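The orthogonal-design population initialization the abstract mentions can be sketched roughly as follows. The two-level L4 orthogonal array, the variable bounds, and the level-to-value mapping below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors. Each column
# contains each level equally often, so the resulting individuals
# cover the search space in a balanced way.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

def orthogonal_init(lower, upper, levels=2):
    """Map each orthogonal-array level to an evenly spaced point in
    [lower, upper] per dimension, giving a small, well-spread
    initial population (one individual per array row)."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    step = (upper - lower) / (levels - 1)
    return lower + L4 * step

pop = orthogonal_init([0.0, -1.0, 2.0], [1.0, 1.0, 4.0])
```

For higher-dimensional problems one would substitute a larger orthogonal array (e.g. L8 or L9); the balancing property, not the specific array, is the point.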
Orthogonal methods based ant colony search for solving continuous optimization problems
Research into ant colony algorithms for solving continuous optimization problems forms one of the most significant and promising areas in swarm computation. Although traditional ant algorithms are designed for combinatorial optimization, they have shown great potential in solving a wide range of optimization problems, including continuous optimization. Aimed at solving continuous problems effectively, this paper develops a novel ant algorithm termed "continuous orthogonal ant colony" (COAC), whose pheromone deposit mechanism enables ants to search for solutions collaboratively and effectively. By using the orthogonal design method, ants in the feasible domain can explore their chosen regions rapidly and efficiently. By implementing an "adaptive regional radius" method, the proposed algorithm can reduce the probability of being trapped in local optima and therefore enhance the global search capability and accuracy. An elitist strategy is also employed to reserve the most valuable points. The performance of COAC is compared with that of two other ant algorithms for continuous optimization, API and CACO, on seventeen test functions in the continuous domain. The results demonstrate that the proposed COAC algorithm outperforms the others.
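The combination of orthogonal-design probing and an adaptive regional radius described above can be sketched as follows. The L9 array, the shrink factor of 0.5, and the sphere objective are illustrative assumptions; the paper's actual region-management rules are more elaborate:

```python
import numpy as np

# L9(3^4) orthogonal array: 9 runs, 4 three-level factors.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def orthogonal_probe(f, center, radius):
    """Probe the region around `center` with an L9 design: each
    dimension takes the three levels center - r, center, center + r,
    and the best probed point is returned."""
    offsets = (L9 - 1) * radius              # levels {0,1,2} -> {-r, 0, +r}
    candidates = center + offsets
    values = np.apply_along_axis(f, 1, candidates)
    best = candidates[np.argmin(values)]
    # adaptive regional radius: shrink when probing finds no improvement
    new_radius = radius * 0.5 if f(best) >= f(center) else radius
    return best, new_radius

def sphere(x):
    return float(np.sum(x ** 2))

best, r = orthogonal_probe(sphere, np.full(4, 1.0), 0.5)
```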
Continuous function optimization using hybrid ant colony approach with orthogonal design scheme
A hybrid Orthogonal Scheme Ant Colony Optimization (OSACO) algorithm for continuous function optimization (CFO) is presented in this paper. The methodology integrates the advantages of Ant Colony Optimization (ACO) and the Orthogonal Design Scheme (ODS). OSACO is based on the following principles: a) each independent variable space (IVS) of the CFO problem is dispersed into a number of random and movable nodes; b) the pheromone carriers of ACO are shifted to the nodes; c) an ant obtains a solution path by choosing one appropriate node from each IVS; d) with the ODS, the best solved path is further improved. The proposed algorithm has been successfully applied to 10 benchmark test functions, and its performance has been studied and compared with CACO and FEP.
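Principles a) to c) above can be sketched as a minimal node-based construction step. The node counts, pheromone amounts, and evaporation rate are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_dims, n_nodes = 3, 5
# a) each independent variable space holds a set of movable nodes
nodes = rng.uniform(-5.0, 5.0, size=(n_dims, n_nodes))
# b) pheromone lives on the nodes rather than on graph edges
pheromone = np.ones((n_dims, n_nodes))

def construct_solution():
    """c) an ant picks one node per IVS, in proportion to pheromone."""
    probs = pheromone / pheromone.sum(axis=1, keepdims=True)
    picks = [int(rng.choice(n_nodes, p=probs[d])) for d in range(n_dims)]
    x = np.array([nodes[d, k] for d, k in enumerate(picks)])
    return x, picks

def deposit(pheromone, picks, amount=0.5, evaporation=0.1):
    """Evaporate everywhere, then reinforce the chosen nodes in place."""
    pheromone *= (1.0 - evaporation)
    for d, k in enumerate(picks):
        pheromone[d, k] += amount

x, picks = construct_solution()
deposit(pheromone, picks)
```

Step d), the orthogonal refinement of the best path, would then probe level combinations around the chosen nodes with an orthogonal array, as in the COAC sketch above a full ACO loop would repeat construction and deposit over many ants and iterations.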
Searching for improvement
Engineering design can be thought of as a search for the best solutions to engineering problems. To perform an effective search, one must distinguish between competing designs and establish a measure of design quality, or fitness. To compare different designs, their features must be adequately described in a well-defined framework, which can mean separating the creative and analytical parts of the design process. By this we mean that a distinction is drawn between coming up with novel design concepts, or architectures, and the process of detailing or refining existing design architecture. In the case of a given design architecture, one can consider the set of all possible designs that could be created by varying its features. If it were possible to measure the fitness of all designs in this set, then one could identify a fitness landscape and search for the best possible solution for this design architecture. In this Chapter, the significance of the interactions between design features in defining the metaphorical fitness landscape is described. This highlights that the efficiency of a search algorithm is inextricably linked to the problem structure (and hence the landscape). Two approaches, namely, Genetic Algorithms (GA) and Robust Engineering Design (RED) are considered in some detail with reference to a case study on improving the design of cardiovascular stents
Orthogonal learning particle swarm optimization
Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use but inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a highly promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
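The core of the OL strategy, combining the two experiences dimension by dimension via orthogonal experimental design, can be sketched as follows. The L4 array, the three-dimensional setting, and the sphere objective are illustrative assumptions; the full OLPSO method also performs a factor analysis step that this sketch omits:

```python
import numpy as np

# Two-level L4 orthogonal array: level 0 = take this dimension from
# the personal best, level 1 = take it from the neighborhood best.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def build_exemplar(f, pbest, nbest):
    """Try the orthogonally designed per-dimension mixes of the two
    best experiences and keep the best-scoring combination as the
    exemplar that guides the particle's velocity update."""
    combos = np.where(L4 == 0, pbest, nbest)
    scores = np.apply_along_axis(f, 1, combos)
    return combos[np.argmin(scores)]

def sphere(x):
    return float(np.sum(x ** 2))

exemplar = build_exemplar(sphere,
                          np.array([1.0, 2.0, 0.0]),   # personal best
                          np.array([0.0, 0.0, 3.0]))   # neighborhood best
```

Here the last array row mixes the good dimensions of both experiences into a point better than either parent, which is exactly the information a plain linear summation of the two bests cannot uncover.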
Identification of Nonlinear Parameter-Dependent Common-Structured models to accommodate varying experimental conditions and design parameter properties
This study considers the identification problem for a class of nonlinear parameter-varying systems associated with the following scenario: the system behaviour depends on some specifically prescribed parameter properties, which are adjustable. To understand the effect of the varying parameters, several different experiments, corresponding to different parameter properties, are carried out and different data sets are collected. The objective is to find, from the available data sets, a common parameter-dependent model structure that best fits the adjustable parameter properties for the underlying system. An efficient common model structure selection (CMSS) algorithm, called the extended forward orthogonal regression (EFOR) algorithm, is proposed to select such a common model structure. Several examples are presented to illustrate the application and the effectiveness of the new identification approach
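The forward orthogonal regression machinery underlying the EFOR algorithm can be sketched for a single data set as follows: greedily pick the candidate term with the largest error reduction ratio, then orthogonalize the remaining candidates against it. The common-structure extension in the abstract runs this selection jointly over several data sets; this single-set version, with a made-up candidate dictionary, is an illustrative assumption:

```python
import numpy as np

def forward_orthogonal_regression(P, y, n_terms):
    """Select n_terms columns of the candidate matrix P that best
    explain y, one at a time, by error reduction ratio (ERR)."""
    Q = P.astype(float).copy()          # candidates, orthogonalized as we go
    residual = y.astype(float).copy()
    selected = []
    for _ in range(n_terms):
        norms = np.einsum('ij,ij->j', Q, Q)
        safe = np.where(norms > 1e-12, norms, np.inf)
        err = (residual @ Q) ** 2 / safe          # variance explained per column
        err[selected] = -np.inf                   # never re-pick a term
        j = int(np.argmax(err))
        selected.append(j)
        q = Q[:, j] / np.sqrt(norms[j])
        residual = residual - (residual @ q) * q  # deflate the output
        Q = Q - np.outer(q, q @ Q)                # Gram-Schmidt on the rest
    return selected

rng = np.random.default_rng(1)
P = rng.normal(size=(200, 6))
y = 2.0 * P[:, 2] - 1.5 * P[:, 4]    # true model uses columns 2 and 4
terms = forward_orthogonal_regression(P, y, 2)
```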
Probability density estimation with tunable kernels using orthogonal forward regression
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed novel tunable-kernel model to effectively construct a very compact density estimate accurately
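The constrained weight update at the end of the pipeline can be illustrated with a stand-in: the abstract uses multiplicative nonnegative quadratic programming (MNQP), while the sketch below uses an EM-style multiplicative update that enforces the same nonnegativity and unity constraints on the mixing weights of fixed Gaussian kernels. The centers, widths, and data are made up, and unlike the paper this sketch does not tune the kernel centers or covariances:

```python
import numpy as np

def gaussian(x, c, s):
    """Univariate Gaussian kernel with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def update_weights(x, centers, widths, w, n_iter=50):
    """Multiplicative EM-style update: weights stay nonnegative and
    sum to one at every iteration, mirroring the constraints MNQP
    enforces in the paper."""
    K = gaussian(x[:, None], centers[None, :], widths[None, :])  # (N, M)
    for _ in range(n_iter):
        dens = K @ w                        # mixture density at each sample
        resp = K * w / dens[:, None]        # responsibilities, rows sum to 1
        w = resp.mean(axis=0)               # nonnegative, sums to one
    return w

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)          # data drawn from N(0, 1)
centers = np.array([-2.0, 0.0, 2.0])
widths = np.array([1.0, 1.0, 1.0])
w = update_weights(x, centers, widths, np.full(3, 1.0 / 3.0))
```

Since the data are centered at zero, most of the weight ends up on the middle kernel; in the paper's sparser setting, many weights are driven toward zero, shrinking the model.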
Constructing an overall dynamical model for a system with changing design parameter properties
This study considers the identification problem for a class of non-linear parameter-varying systems associated with the following scenario: the system behaviour depends on some specifically prescribed parameter properties, which are adjustable. To understand the effect of the varying parameters, several different experiments, corresponding to different parameter properties, are carried out and different data sets are collected. The objective is to find, from the available data sets, a common parameter-dependent model structure that best fits the adjustable parameter properties for the underlying system. An efficient Common Model Structure Selection (CMSS) algorithm, called the Extended Forward Orthogonal Regression (EFOR) algorithm, is proposed to select such a common model structure. Two examples are presented to illustrate the application and the effectiveness of the new identification approach