
    Adaptive intelligence: essential aspects

    The article discusses essential aspects of Adaptive Intelligence. Experimental results on the optimisation of global test functions by Free Search, Differential Evolution, and Particle Swarm Optimisation clarify how these methods can adapt to multi-modal landscapes and to search spaces dominated by sub-optimal regions, without a supervisor's control. The achieved results are compared and analysed.
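
    As a concrete point of reference, the sketch below runs a textbook global-best PSO on the multi-modal Rastrigin test function. The parameter values and the test function are common defaults, not the article's experimental setup, and Free Search and Differential Evolution are not reproduced here.

        # Minimal global-best PSO on the multi-modal Rastrigin function.
        # Constants (w, c1, c2, bounds) are common defaults, not the
        # article's exact configuration.
        import numpy as np

        def rastrigin(x):
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        rng = np.random.default_rng(0)
        dim, n_particles, iters = 10, 30, 500
        w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

        pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.apply_along_axis(rastrigin, 1, pos)
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, -5.12, 5.12)
            vals = np.apply_along_axis(rastrigin, 1, pos)
            better = vals < pbest_val        # update personal bests
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best Rastrigin value found:", pbest_val.min())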

    Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains

    In this paper, we consider comparison-based adaptive stochastic algorithms for solving numerical optimisation problems. We consider a specific subclass of algorithms that we call comparison-based step-size adaptive randomized search (CB-SARS), where the state variables at a given iteration are a vector of the search space and a positive parameter, the step-size, typically controlling the overall standard deviation of the underlying search distribution. We investigate the linear convergence of CB-SARS on scaling-invariant objective functions. Scaling-invariant functions preserve the ordering of points with respect to their function value when the points are scaled with the same positive parameter (the scaling is done w.r.t. a fixed reference point). This class of functions includes norms composed with strictly increasing functions as well as many non-quasi-convex and non-continuous functions. On scaling-invariant functions, we show the existence of a homogeneous Markov chain, as a consequence of natural invariance properties of CB-SARS (essentially scale-invariance and invariance to strictly increasing transformations of the objective function). We then derive sufficient conditions for global linear convergence of CB-SARS, expressed in terms of different stability conditions of the normalised homogeneous Markov chain (irreducibility, positivity, Harris recurrence, geometric ergodicity), and thus define a general methodology for proving global linear convergence of CB-SARS algorithms on scaling-invariant functions. As a by-product we provide a connection between comparison-based adaptive stochastic algorithms and Markov chain Monte Carlo algorithms.
    Comment: SIAM Journal on Optimization, Society for Industrial and Applied Mathematics, 201
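
    A minimal instance of a CB-SARS algorithm is the (1+1)-ES with the classic one-fifth success rule: the state is exactly the pair described above, a search point plus a step-size controlling the standard deviation of the sampling distribution, and every update uses only comparisons of objective values. The sketch below, with illustrative constants not taken from the paper, runs it on a scaling-invariant function (a norm composed with a strictly increasing function).

        # (1+1)-ES with a one-fifth success rule: a simple CB-SARS.
        # The update factors balance at a 1/5 success rate; the 0.8/0.2
        # constants are illustrative, not from the paper.
        import numpy as np

        def one_plus_one_es(f, x0, sigma0=1.0, iters=2000, seed=0):
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            sigma, fx = sigma0, f(x)
            for _ in range(iters):
                y = x + sigma * rng.standard_normal(x.size)  # sample around x
                fy = f(y)
                if fy <= fx:                        # comparison only, never magnitudes
                    x, fx = y, fy
                    sigma *= np.exp(0.8 / x.size)   # success: grow step-size
                else:
                    sigma *= np.exp(-0.2 / x.size)  # failure: shrink step-size
            return x, fx, sigma

        # A scaling-invariant objective: a norm composed with a strictly
        # increasing function.
        f = lambda z: np.sqrt(np.linalg.norm(z))
        x, fx, sigma = one_plus_one_es(f, 4.0 * np.ones(5))
        print(fx, sigma)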

    Blind joint maximum likelihood channel estimation and data detection for single-input multiple-output systems

    A blind adaptive scheme is proposed for joint maximum likelihood (ML) channel estimation and data detection in single-input multiple-output (SIMO) systems. The joint ML optimization over the channel and the data is decomposed into an iterative optimization loop. An efficient global optimization algorithm, termed the repeated weighted boosting aided search, is first employed to identify the unknown SIMO channel model, and the Viterbi algorithm is then used for maximum likelihood sequence estimation of the unknown data sequence. A simulation example demonstrates the efficiency of this joint ML optimization scheme designed for blind adaptive SIMO systems.
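
    The decomposition can be illustrated on a toy memoryless SIMO channel, where ML sequence detection reduces to symbol-by-symbol decisions and no Viterbi trellis is needed; the channel search step below is plain alternating least squares standing in for the repeated weighted boosting aided search. This is an assumption-laden sketch of the loop's structure, not the paper's algorithm.

        # Toy blind joint channel/data estimation for a memoryless SIMO
        # channel with BPSK data. Alternating least squares replaces the
        # paper's global search; symbol-wise sign decisions replace the
        # Viterbi MLSE, which a memoryless channel does not need.
        import numpy as np

        rng = np.random.default_rng(1)
        n_rx, n_sym = 4, 200
        h_true = rng.standard_normal(n_rx)               # unknown SIMO channel
        s_true = rng.choice([-1.0, 1.0], size=n_sym)     # BPSK data sequence
        Y = np.outer(h_true, s_true) + 0.1 * rng.standard_normal((n_rx, n_sym))

        h = rng.standard_normal(n_rx)                    # blind initial channel guess
        for _ in range(10):
            s = np.sign(h @ Y)                           # ML detection given channel
            s[s == 0] = 1.0                              # break ties deterministically
            h = (Y @ s) / (s @ s)                        # LS (ML) channel given data
        if h @ h_true < 0:                               # blind schemes leave a sign ambiguity
            h, s = -h, -s
        print("symbol error rate:", np.mean(s != s_true))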

    An adaptive mutation operator for particle swarm optimization

    Copyright @ 2008 MIC
    Particle swarm optimization (PSO) is an efficient tool for optimization and search problems. However, it is easily trapped in local optima due to its information-sharing mechanism. Many research works have shown that mutation operators can help PSO prevent premature convergence. In this paper, several mutation operators that are based on the global best particle are investigated and compared for PSO. An adaptive mutation operator is designed. Experimental results show that these mutation operators can greatly enhance the performance of PSO. The adaptive mutation operator shows great advantages over non-adaptive mutation operators on a set of benchmark test problems.
    This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/1.
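
    A hedged sketch of one such mutation step: after the usual velocity and position updates, perturb the global best particle with Gaussian noise whose scale is adapted (here simply annealed with the iteration count) and keep the mutant only if it improves. The paper's specific operators and its adaptation scheme are not reproduced.

        # Adaptive Gaussian mutation of the global best particle; the
        # mutation scale shrinks as the run progresses (an assumption,
        # not the paper's adaptation rule).
        import numpy as np

        def mutate_gbest(gbest, gbest_val, f, it, max_it, bounds, rng):
            """Perturb gbest; return the mutant only if it improves."""
            lo, hi = bounds
            scale = 0.1 * (hi - lo) * (1.0 - it / max_it)  # anneal mutation strength
            trial = np.clip(gbest + scale * rng.standard_normal(gbest.shape), lo, hi)
            f_trial = f(trial)
            if f_trial < gbest_val:                        # greedy acceptance
                return trial, f_trial
            return gbest, gbest_val

    The step slots in after the position update of any standard gbest PSO loop, such as the one sketched earlier in this listing.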

    Cover-Encodings of Fitness Landscapes

    The traditional way of tackling discrete optimization problems is to use local search on suitably defined cost or fitness landscapes. Such approaches are, however, limited by the slowing down that occurs when local minima, a feature of the typically rugged landscapes encountered, arrest the progress of the search process. Another way of tackling optimization problems is to use heuristic approximations to estimate a global cost minimum. Here we present a combination of these two approaches by using cover-encoding maps, which map processes from a larger search space to subsets of the original search space. The key idea is to construct cover-encoding maps with the help of suitable heuristics that single out near-optimal solutions and result in landscapes on the larger search space that no longer exhibit trapping local minima. We present cover-encoding maps for the problems of the traveling salesman, number partitioning, maximum matching and maximum clique; the practical feasibility of our method is demonstrated by simulations of adaptive walks on the corresponding encoded landscapes which find the global minima for these problems.
    Comment: 15 pages, 4 figures
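
    For orientation, the sketch below implements the local-search baseline that cover-encodings are designed to improve: an adaptive walk (first-improvement descent) on the number-partitioning cost landscape, which halts at the first trapping local minimum it reaches. The cover-encoding construction itself is not reproduced here.

        # Adaptive walk on number partitioning: assign each number to one
        # of two subsets (sign +/-1) and greedily flip single assignments
        # until no flip reduces the subset-sum discrepancy.
        import numpy as np

        rng = np.random.default_rng(2)
        a = rng.integers(1, 1000, size=24)           # instance: numbers to partition

        def cost(sign):
            return abs(int(sign @ a))                # |difference of the two subset sums|

        sign = rng.choice([-1, 1], size=a.size)
        best = cost(sign)
        improved = True
        while improved:                              # walk until no flip improves
            improved = False
            for i in rng.permutation(a.size):
                sign[i] = -sign[i]                   # move one number across
                c = cost(sign)
                if c < best:
                    best, improved = c, True
                else:
                    sign[i] = -sign[i]               # undo a non-improving flip
        print("discrepancy at the local minimum:", best)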

    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve the search efficiency and convergence speed. Then, an elitist learning strategy is performed when the evolutionary state is classified as the convergence state. The strategy acts on the globally best particle to help it jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it does not introduce additional design or implementation complexity.
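
    The two quantities at the heart of the first step can be sketched compactly: an evolutionary factor computed from the population distribution (mean inter-particle distances), and a sigmoid mapping of that factor to the inertia weight. The constants below follow the commonly cited APSO formulation and should be treated as an assumption rather than a verified transcription of the paper.

        # Evolutionary state estimation input for APSO: a distribution-based
        # factor and its sigmoid mapping to the inertia weight. Constants
        # follow the commonly cited formulation (an assumption here).
        import numpy as np

        def evolutionary_factor(pos, gbest_idx):
            """Mean-distance ratio in [0, 1]: large during exploration, small near convergence."""
            dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
            d = dists.sum(axis=1) / (len(pos) - 1)   # mean distance to the other particles
            d_g, d_min, d_max = d[gbest_idx], d.min(), d.max()
            return (d_g - d_min) / (d_max - d_min + 1e-12)

        def inertia_weight(f_evo):
            """Sigmoid mapping of the factor to an inertia weight in roughly [0.4, 0.9]."""
            return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f_evo))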