13 research outputs found

    Towards a Better Understanding of the Local Attractor in Particle Swarm Optimization: Speed and Solution Quality

    Particle Swarm Optimization (PSO) is a popular nature-inspired meta-heuristic for solving continuous optimization problems. Although the technique is widely used, the understanding of the mechanisms that make swarms so successful is still limited. We present the first substantial experimental investigation of the influence of the local attractor on the quality of exploration and exploitation. We compare classical PSO in detail with the social-only variant, in which local attractors are ignored. To measure exploration capabilities, we determine how frequently both variants return results in the neighborhood of the global optimum. We measure the quality of exploitation by considering only function values from runs that reached a search point sufficiently close to the global optimum, and then comparing in how many digits such values still deviate from the global minimum value. It turns out that the local attractor significantly improves exploration but sometimes reduces the quality of exploitation. As a compromise, we propose and evaluate a hybrid PSO that switches off its local attractors at a certain point in time. The effects mentioned can also be observed by measuring the potential of the swarm.
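
    To make the role of the local attractor concrete, here is a minimal Python sketch of the hybrid variant. The update rule is the standard PSO one; the parameter values, the switch point, and the problem setup are illustrative assumptions, not the paper's experimental configuration.

    import numpy as np

    def hybrid_pso(f, dim, n_particles=30, iters=1000, switch_at=500,
                   w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
        # Minimize f over [lo, hi]^dim. The cognitive term (pull toward a
        # particle's own best position, i.e. the local attractor) is switched
        # off after `switch_at` iterations, as in the hybrid PSO above.
        lo, hi = bounds
        x = np.random.uniform(lo, hi, (n_particles, dim))
        v = np.zeros((n_particles, dim))
        pbest = x.copy()
        pbest_val = np.array([f(p) for p in x])
        g = pbest[pbest_val.argmin()].copy()
        for t in range(iters):
            r1 = np.random.rand(n_particles, dim)
            r2 = np.random.rand(n_particles, dim)
            local = c1 * r1 * (pbest - x) if t < switch_at else 0.0  # local attractor
            v = w * v + local + c2 * r2 * (g - x)                    # social attractor
            x = x + v
            vals = np.array([f(p) for p in x])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            g = pbest[pbest_val.argmin()].copy()
        return g, f(g)

    Setting switch_at=iters recovers classical PSO and switch_at=0 the social-only variant, so the same function covers all three algorithms compared in the study.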

    Searching for promisingly trained artificial neural networks

    Assessing the training process of artificial neural networks (ANNs) is vital for enhancing their performance and broadening their applicability. This paper employs the Monte Carlo simulation (MCS) technique, integrated with a stopping criterion, to construct the probability distribution of the learning error of an ANN designed for short-term forecasting. The training and validation processes were conducted multiple times, each time from a unique random starting point, and the subsequent forecasting error was calculated one step ahead. From this, we ascertained the probability of having obtained all the local optima. Our extensive computational analysis involved training a shallow feedforward neural network (FFNN) using wind power and load demand data from the transmission systems of the Netherlands and Germany. Furthermore, the analysis was expanded to include wind speed prediction using a long short-term memory (LSTM) network at a site in Spain. The improvement gained from the FFNN, which has a high probability of being the global optimum, ranges from 0.7% to 8.6%, depending on the forecasting variable. This solution outperforms the persistent model by between 5.5% and 20.3%. For wind speed predictions using an LSTM, the improvement over an average-trained network stands at 9.5%, and is 6% superior to the persistent approach. These outcomes suggest that the advantages of exhaustive search vary based on the problem being analyzed and the type of network in use. The MCS method we implemented, which estimates the probability of identifying all local optima, can act as a foundational step for other techniques, such as Bayesian model selection, which assumes that the global optimum is encompassed within the available hypotheses.
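
    The restart-and-estimate loop can be sketched as follows. The stand-in training function, the clustering of runs into basins by rounded validation error, and the Good-Turing-style stopping rule are all illustrative assumptions; the paper's actual stopping criterion and networks (FFNN/LSTM on wind and load data) would replace them.

    import numpy as np

    def train_and_validate(seed):
        # Hypothetical stand-in: train the forecasting network from the random
        # initial point determined by `seed` and return its one-step-ahead
        # validation error. Replace with the real training pipeline.
        rng = np.random.default_rng(seed)
        return float(rng.choice([0.12, 0.15, 0.15, 0.21]) + rng.normal(0.0, 1e-4))

    def monte_carlo_restarts(max_runs=1000, tol=1e-3, eps=0.01):
        # Repeat training from random starting points and cluster the runs into
        # basins by rounded error. Stop once the Good-Turing estimate of the
        # probability mass of still-unseen local optima drops below eps.
        errors, counts = [], {}
        for n in range(1, max_runs + 1):
            e = train_and_validate(seed=n)
            errors.append(e)
            key = round(e / tol)
            counts[key] = counts.get(key, 0) + 1
            singletons = sum(1 for c in counts.values() if c == 1)
            if n > 10 and singletons / n < eps:
                break
        return min(errors), len(counts), n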

    Particle swarm optimization almost surely finds local optima


    Midrange exploration exploitation searching particle swarm optimization with HSV-template matching for crowded environment object tracking

    Particle Swarm Optimization (PSO) has demonstrated its effectiveness in solving optimization problems. Nevertheless, the PSO algorithm is still limited in finding the optimum solution, owing to insufficient exploration and exploitation of the particles throughout the search space. This problem can also cause premature convergence, an inability to escape local optima, and a lack of self-adaptation. Therefore, a new variant of PSO called Midrange Exploration Exploitation Searching Particle Swarm Optimization (MEESPSO) was proposed to overcome these drawbacks. In this algorithm, the worst particle is relocated to a new position so that exploration and exploitation are maintained across the search space. This prevents particles from being trapped in local optima and from exploiting a suboptimal solution; exploration continues once the particle has been relocated. To evaluate the performance of MEESPSO, we conducted experiments on 12 benchmark functions. For the dynamic environment, MEESPSO combined with Hue, Saturation, Value (HSV) template matching was proposed to improve the accuracy and precision of object tracking. On the 12 benchmark functions, the results show slightly better performance in terms of convergence, consistency, and error rate compared with other algorithms. The object-tracking experiments were conducted on the PETS09 and MOT20 datasets in crowded environments with occlusion, similar-appearance, and deformation challenges. The results demonstrate that the tracking performance of the proposed method improved by more than 4.67% in accuracy and 15% in precision compared with other reported works.
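
    A minimal sketch of the relocation step is given below. For illustration, the worst particle is assumed to be moved to the midrange of the swarm's personal-best positions (the per-dimension midpoint of their smallest and largest coordinates); the exact MEESPSO relocation rule may differ.

    import numpy as np

    def relocate_worst(f, x, vals, pbest):
        # Move the particle with the worst (highest) cost to the midrange of
        # the personal-best positions, then re-evaluate it so the next
        # iteration explores from the new location.
        worst = vals.argmax()
        x[worst] = 0.5 * (pbest.min(axis=0) + pbest.max(axis=0))
        vals[worst] = f(x[worst])
        return x, vals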

    Weak convergence of particle swarm optimization

    The particle swarm optimization algorithm is a stochastic meta-heuristic for global optimization problems, appreciated for its efficacy and simplicity. It consists of a swarm of particles interacting with one another while searching for the global optimum. The trajectory of the particles has been well studied in the deterministic case and, more recently, in a stochastic context. Assuming the convergence of PSO, we propose two central limit theorems (CLTs) for the particles, corresponding to two kinds of convergence behavior. These results can be used to build confidence intervals around the local minimum found by the swarm, or to evaluate the risk. A simulation study confirms these properties.
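
    As an illustration of how such a result yields confidence intervals, a CLT of the following generic shape would be used; the paper's exact scaling sequences and limit laws may differ.

    % Illustrative shape only; G_n denotes the swarm's best position after n steps
    % and x^{*} the limit point.
    \sqrt{n}\,\bigl(G_n - x^{*}\bigr) \xrightarrow{d} \mathcal{N}(0, \Sigma),
    % which gives the asymptotic (1 - \alpha) confidence interval, per coordinate,
    x^{*}_i \in \Bigl[\, G_{n,i} \pm z_{1-\alpha/2}\,\hat{\sigma}_i / \sqrt{n} \,\Bigr].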

    Exact Markov Chain-based Runtime Analysis of a Discrete Particle Swarm Optimization Algorithm on Sorting and OneMax

    Meta-heuristics are powerful tools for solving optimization problems whose structural properties are unknown or cannot be exploited algorithmically. We propose such a meta-heuristic for a large class of optimization problems over discrete domains, based on the particle swarm optimization (PSO) paradigm. We provide a comprehensive formal analysis of the performance of this algorithm on certain "easy" reference problems in a black-box setting, namely the sorting problem and the problem OneMax. In our analysis we use a Markov model of the proposed algorithm to obtain upper and lower bounds on its expected optimization time. Our bounds are essentially tight with respect to the Markov model. We show that for a suitable choice of algorithm parameters the expected optimization time is comparable to that of known algorithms and, furthermore, that for other parameter regimes the algorithm behaves less greedily and more exploratively, which can be desirable in practice in order to escape local optima. Our analysis provides precise insight into the tradeoff between optimization time and exploration. To obtain our results, we introduce the notion of indistinguishability of states of a Markov chain and provide bounds on the solution of a recurrence equation with non-constant coefficients by integration.
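
    To fix ideas, here is the OneMax objective together with a much-simplified bit-wise PSO-style search. The update rule below (copy each bit from the global best with probability c, mutate with probability 1/n) is an illustrative caricature, not the algorithm analyzed in the paper.

    import numpy as np

    def onemax(bits):
        # OneMax: fitness is the number of ones; the optimum is the all-ones string.
        return int(bits.sum())

    def discrete_pso_onemax(n=50, particles=10, iters=2000, c=0.4, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.integers(0, 2, (particles, n))
        g = max(x, key=onemax).copy()
        for _ in range(iters):
            pull = rng.random((particles, n)) < c        # attraction toward the global best
            x = np.where(pull, g, x)
            flip = rng.random((particles, n)) < 1.0 / n  # mutation keeps exploration alive
            x = np.where(flip, 1 - x, x)
            best = max(x, key=onemax)
            if onemax(best) > onemax(g):
                g = best.copy()
            if onemax(g) == n:
                break
        return g

    The tradeoff the paper quantifies shows up directly in c: larger values pull the swarm toward the current best (greedier, faster on easy instances), smaller values keep the search more explorative.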