22 research outputs found

    State Feedback Design Using Particle Swarm Optimization and the Differential Evolution Algorithm: A Case Study of F-16 Aircraft Lateral Motion

    The purpose of a Linear Quadratic Regulator (LQR) optimal control system is to stabilize the system so that its output reaches a steady state while minimizing a performance index. Infinite-horizon LQR is a special case of continuous-time LQR in which the terminal time of the performance index is infinite and the terminal output of the system is zero. The performance index is affected by the weighting matrices. This paper discusses the application of the Particle Swarm Optimization (PSO) algorithm and the Differential Evolution Algorithm (DEA) to determine the state feedback of a closed-loop system and the weighting matrices in the LQR so as to minimize the performance index. PSO is a computational algorithm inspired by the social behavior of flocks of birds and schools of fish searching for food, while DEA is an optimization algorithm adopted from the evolution and genetics of organisms. Simulations of the PSO algorithm are compared with DEA. Based on the case study, DEA converges to the optimum solution faster than PSO.
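    The swarm update used in such studies is generic even though the paper applies it to the F-16 lateral model. A minimal global-best PSO sketch, with a toy quadratic standing in for the actual LQR performance index (the swarm constants, bounds, and objective below are illustrative assumptions, not the paper's values):

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in performance index: a convex quadratic in two diagonal LQR
# weights (q1, q2); a real study would simulate the closed loop instead.
J = lambda q: (q[0] - 1.0) ** 2 + (q[1] - 3.0) ** 2
best, cost = pso(J, dim=2)
```

    In the paper's setting the objective evaluation would solve the Riccati equation and simulate the closed loop for each candidate weighting matrix; only that inner evaluation changes, not the swarm dynamics.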

    Finding the optimal background subtraction algorithm for EuroHockey 2015 video

    Background subtraction is a classic step in a vision-based localization and tracking workflow. Previous studies have compared background subtraction algorithms on publicly available datasets; however, comparisons were made only with manually optimized parameters. The aim of this research was to identify the optimal background subtraction algorithm for a set of field hockey videos captured at EuroHockey 2015. Particle Swarm Optimization was applied to find the optimal background subtraction algorithm. The objective function was the F-score, i.e. the harmonic mean of precision and recall. The precision and recall were calculated using the output of the background subtraction algorithm and gold-standard labeled images. The training dataset consisted of 15 thirteen-second field hockey video segments; the test dataset consisted of 5 thirteen-second segments. The video segments were chosen to be representative of the teams present at the tournament, the times of day the matches were played and the weather conditions experienced. Each segment was 960 x 540 pixels and had 10 ground-truth labeled frames. Eight commonly used background subtraction algorithms were considered. Results suggest that a background subtraction algorithm must use optimized parameters for a valid comparison of performance, and Particle Swarm Optimization is an appropriate method to undertake this optimization. The optimal algorithm, Temporal Median, achieved an F-score of 0.791 on the test dataset, suggesting it generalizes to the rest of the video footage captured at EuroHockey 2015.
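    The winning algorithm, Temporal Median, is simple to sketch: the background is the per-pixel median over time, and the F-score objective the PSO optimizes can be computed from binary masks. A minimal illustration on synthetic frames (the threshold, frame size, and moving-pixel scene are arbitrary stand-ins, not the study's data):

```python
import numpy as np

def temporal_median_background(frames):
    """Estimate a static background as the per-pixel temporal median."""
    return np.median(frames, axis=0)

def subtract(frame, background, threshold=25):
    """Binary foreground mask: pixels far from the background model."""
    return np.abs(frame.astype(int) - background) > threshold

def f_score(pred, truth):
    """Harmonic mean of precision and recall over binary masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Synthetic stand-in: mostly static frames with one moving bright pixel.
frames = np.full((11, 8, 8), 50, dtype=np.uint8)
for t in range(11):
    frames[t, t % 8, 3] = 200          # a "player" moving down one column
bg = temporal_median_background(frames)
mask = subtract(frames[5], bg)
truth = np.zeros((8, 8), dtype=bool)
truth[5, 3] = True                     # hand-labeled ground truth
```

    In the study, PSO would tune parameters such as the threshold above (and each algorithm's own parameters) to maximize this F-score against the labeled frames.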

    A Time-Critical Investigation of Parameter Tuning in Differential Evolution for Non-Linear Global Optimization

    Parameter search is one of the most important aspects of obtaining favorable results in optimization problems, and it matters even more when the optimization is limited by time constraints. Under a limited time budget, it is crucial for any algorithm to reach the best or near-optimum results. In a previous study, Differential Evolution (DE) was found to be one of the best-performing algorithms under time constraints, answering the question of which algorithm yields near-optimum results within a limited time. Hence, to further enhance the performance of DE under time-constrained evaluation, a thorough parameter search over population size, crossover constant (Cr) and mutation factor (F) was carried out. The 15 scalable test problems of the CEC 2015 Global Optimization Competition are used as the test suite for this study. The previous study used the same test suite, and its DE results serve as the benchmark here, since DE showed the best results among the previously tested algorithms. Eight population sizes are used: 10, 30, 50, 100, 150, 200, 300, and 500. Each population size is run with Cr from 0.1 to 0.9 and F from 0.1 to 0.9. It was found that a population size of 100 with Cr = 0.9 and F = 0.5 outperforms the benchmark results. It is also observed from the results that a high Cr around 0.8 to 0.9 combined with a low F around 0.3 to 0.4 yields good results for DE under time-constrained evaluation.
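    The abstract does not name the DE variant; a minimal DE/rand/1/bin sketch shows where the three tuned parameters (population size, Cr, F) enter, using the sphere function as a stand-in for the CEC 2015 problems:

```python
import random

def differential_evolution(objective, bounds, pop_size=30, F=0.5, Cr=0.9,
                           max_iters=200, seed=1):
    """Classic DE/rand/1/bin with mutation factor F and crossover rate Cr."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(max_iters):
        for i in range(pop_size):
            # Three distinct donors, none equal to the target index i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)        # guarantees one mutated gene
            trial = pop[i][:]
            for j in range(dim):
                if rng.random() < Cr or j == j_rand:
                    trial[j] = pop[a][j] + F * (pop[b][j] - pop[c][j])
            f = objective(trial)
            if f <= fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Sphere function as a stand-in for the CEC 2015 test problems.
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               bounds=[(-5, 5)] * 3)
```

    A time-constrained evaluation would replace `max_iters` with a wall-clock budget; the parameter roles are unchanged.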

    Statistical Methods for Convergence Detection of Multi-Objective Evolutionary Algorithms

    In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, requiring fewer function evaluations while preserving good approximation quality at the same time. Article / Letter to editor. Leiden Inst. Advanced Computer Science.
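    The online criterion described (stop when the indicator's variance over recent generations falls below a threshold, or its overall trend stagnates) can be sketched as a simple monitor; the window length, thresholds, and synthetic indicator curve below are illustrative assumptions, not the paper's values:

```python
from statistics import pvariance

def should_stop(indicator_history, window=10, var_threshold=1e-6,
                trend_threshold=1e-6):
    """Online stopping rule on a per-generation performance indicator.

    Stops when, over the last `window` generations, either the variance of
    the indicator falls below `var_threshold` or its overall improvement
    (first vs. last value in the window) falls below `trend_threshold`.
    """
    if len(indicator_history) < window:
        return False
    recent = indicator_history[-window:]
    if pvariance(recent) < var_threshold:
        return True
    return abs(recent[-1] - recent[0]) < trend_threshold

# A hypervolume-like indicator that improves quickly, then plateaus.
history = [1 - 0.5 ** g for g in range(40)]
stop_at = next(g for g in range(1, 41) if should_stop(history[:g]))
```

    In an actual MOEA loop, `indicator_history` would be fed one hypervolume (or similar indicator) value per generation, and the optimizer would halt as soon as `should_stop` returns true.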

    Particle Swarm Optimization for Energy Disaggregation in Industrial and Commercial Buildings

    This paper provides a formalization of the energy disaggregation problem for particle swarm optimization and shows the successful application of particle swarm optimization for disaggregation in a multi-tenant commercial building. The developed mathematical description of the disaggregation problem, using a state-changes matrix, belongs to the group of non-event-based methods for energy disaggregation. This work includes the development of an objective function in the power domain and the description of the position and velocity of each particle in a high-dimensional state space. For the particle swarm optimization, four adaptations have been applied to improve the disaggregation results, increase the robustness of the optimizer against local optima and reduce the computational time. The adaptations are varying movement constants, shaking of particles, framing and an early stopping criterion. In this work we use two unlabelled power datasets with a granularity of 1 s. The results are therefore validated in the power domain, where good results can be shown with respect to multiple error measures such as the root mean squared error and the percentage energy error. Comment: 10 pages, 13 figures, 3 tables.
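    The power-domain objective reduces, at each time step, to matching the measured aggregate against the sum of appliance rated powers weighted by binary on/off states. A toy version of that objective, solved by brute force on a two-appliance example (PSO would take brute force's place on real data; the rated powers and samples here are hypothetical):

```python
import itertools

def disaggregation_error(aggregate, appliance_powers, states):
    """Squared error between the measured aggregate power and the sum of
    appliance rated powers weighted by their on/off states over time."""
    err = 0.0
    for t, total in enumerate(aggregate):
        est = sum(p * s for p, s in zip(appliance_powers, states[t]))
        err += (total - est) ** 2
    return err

appliances = [100.0, 60.0]             # hypothetical rated powers (W)
aggregate = [0.0, 100.0, 160.0, 60.0]  # 1 s aggregate samples

# Enumerate all on/off state sequences for this tiny example; a particle
# swarm would search this space instead for realistic problem sizes.
best = min(itertools.product(itertools.product([0, 1], repeat=2), repeat=4),
           key=lambda s: disaggregation_error(aggregate, appliances, s))
```

    The combinatorial state space grows as 2^(appliances x time steps), which is why the paper needs a continuous-relaxation encoding of particle position and velocity plus the four adaptations to keep PSO tractable.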

    High Order Contingency Selection using Particle Swarm Optimization and Tabu Search

    There is a growing interest in investigating the high-order contingency events that may result in large blackouts, which have been a great concern for secure power grid operation. The actual number of high-order contingencies is too large for operators and planners to apply a brute-force enumerative analysis. This thesis presents a heuristic search method based on particle swarm optimization (PSO) and tabu search to select severe high-order contingencies. The original PSO algorithm gives an intelligent strategy for searching the feasible solution space, but tends to find only the single best solution. The proposed method combines the original PSO with tabu search such that a number of top candidates will be identified. This fits the need of high-order contingency screening, whose output can eventually feed many other, more complicated security analyses. Reordering the branches of the test system based on the severity of N-1 contingencies is applied as a pre-processing step to improve the convergence properties and efficiency of the algorithm. With this reordering, many critical high-order contingencies are located in a small area of the whole search space, so the proposed algorithm tends to concentrate its search there, increasing the number of critical branch combinations found and hence the speedup ratio. The proposed algorithm is tested for N-2 and N-3 contingencies using two test systems modified from the IEEE 118-bus and 30-bus systems. Variations of the inertia weight, learning factors, and number of particles are tested, and ranges of values more suitable for this specific algorithm are suggested. Although illustrated and tested with N-2 and N-3 contingency analysis, the proposed algorithm can be extended to even higher-order contingencies, but visualization will be difficult because the problem dimension grows with the order of the contingency.
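    The key modification, recording evaluated solutions in a tabu list so the search yields a ranked set of severe contingencies rather than a single optimum, can be illustrated independently of the PSO machinery. A toy sketch (random sampling stands in for the swarm dynamics; the branch names and severity values are hypothetical):

```python
import random

def tabu_topk_search(severity, candidates, k=3, iters=200, seed=2):
    """Collect the k most severe contingencies found: once a candidate has
    been evaluated it becomes tabu, forcing the search to keep exploring
    instead of re-converging on the single best solution."""
    rng = random.Random(seed)
    tabu, found = set(), []
    for _ in range(iters):
        c = rng.choice(candidates)     # a PSO particle move in the thesis
        if c in tabu:
            continue                   # tabu: never re-evaluate a candidate
        tabu.add(c)
        found.append((severity(c), c))
    found.sort(reverse=True)           # rank by severity, worst first
    return found[:k]

# Hypothetical severity of losing each pair of branches (higher = worse).
sev = {("L1", "L2"): 0.9, ("L1", "L3"): 0.7, ("L2", "L3"): 0.4,
       ("L3", "L4"): 0.2}
top = tabu_topk_search(lambda c: sev[c], list(sev), k=2)
```

    In the thesis, candidate generation follows PSO position updates over the reordered branch index space, and severity comes from a power-flow-based contingency evaluation; the tabu bookkeeping is unchanged.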

    ๋ผํ”Œ๋ผ์Šค-ํ‘ธ๋ฆฌ์— ์˜์—ญ ์—ญ์‚ฐ๊ธฐ๋ฒ•์„ ์ด์šฉํ•œ ์œก์ƒํƒ์‚ฌ์ž๋ฃŒ์— ๋Œ€ํ•œ ์†๋„ ๋ชจ๋ธ ๊ตฌ์ถ• ์•Œ๊ณ ๋ฆฌ์ฆ˜

    Thesis (Ph.D.) -- Seoul National University Graduate School, Interdisciplinary Program in Computational Science, February 2014. Advisor: Changsoo Shin. Advances in acquisition offer the possibility of solving the problem of the absence of low-frequency components that hinders full-waveform inversion; yet most real datasets still do not contain these components. Thus, the long-wavelength velocity model that can be obtained using Laplace- or Laplace-Fourier-domain inversion should be conducive to delineating the subsurface structure via migration or Fourier-domain inversion starting from this result. In this thesis, a 2D elastic Laplace-Fourier inversion algorithm was developed whose application to a land dataset could recover long-wavelength velocity models. This velocity-estimation algorithm adopts the finite element method on an unstructured grid, with the expectation of mitigating the high nonlinearity observed in datasets affected by topography through accurate depiction of an irregular surface. For the inversion methodology, a novel pseudo-Hessian matrix is suggested in this thesis. This modified pseudo-Hessian matrix allows a deeper penetration depth of the inverted result and promises a more convergent result regardless of the damping factor that is generally required for the pseudo-Hessian matrix. Also, a normalized stopping criterion was introduced using a multi-objective assumption based on a property of the logarithmic objective function, the natural separation of the phase and amplitude error, to ensure that the phase and amplitude information contribute to the inversion result with parity. This method helps prevent an over- or under-inverted result caused by over-fitting or an unsuitable choice of the number of inversion iterations. The developed inverse algorithm was tested using a time-domain synthetic dataset generated with a realistic foothill model. The results of the test demonstrate that this algorithm can recover an adequate velocity model without requiring low-frequency information, even when the dataset contains the expected noise.
    Contents: Chapter 1. Introduction; Chapter 2. Theory (2.1 The elastic wavefield in the Laplace and Laplace-Fourier domains; 2.2 The elastic wave equation in the Laplace-Fourier domain; 2.3 Simulation of elastic wave propagation using the FEM: 2.3.1 The finite element method for the 2D elastic wave equation, 2.3.2 Source and receiver distributions; 2.4 Full waveform inversion in the Laplace-Fourier domain: 2.4.1 Determination of the gradient direction in the Laplace-Fourier domain using steepest descent, 2.4.2 Preconditioning of the gradient direction using the pseudo-Hessian matrix, 2.4.3 Source-estimation algorithm, 2.4.4 Construction of the mesh, 2.4.5 Stopping criterion using normalized error for the Laplace-Fourier-domain inversion); Chapter 3. Examples using synthetic data (3.1 Laplace-Fourier-domain synthetic dataset; 3.2 Time-domain synthetic dataset: 3.2.1 Inversion test for the dependency on low-frequency information, 3.2.2 Inversion test with a noisy dataset, 3.2.3 Acoustic approach for an elastic dataset); Chapter 4. Conclusion; A.1 Notations; A.2 The IPDG formulation of the 2D elastic wave equation; References; Abstract (in Korean).
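    The "natural separation of the phase and amplitude error" refers to the logarithmic objective commonly used in Laplace-Fourier-domain inversion: for a modeled wavefield \(\tilde{u}\) and observed data \(\tilde{d}\), the complex logarithm of their ratio splits into an amplitude term and a phase term, so the two error contributions can be balanced against each other. A sketch of that separation (the notation follows common FWI usage and is assumed, not copied from the thesis):

```latex
E = \frac{1}{2}\sum_{s}\sum_{r}\left|\ln\frac{\tilde{u}}{\tilde{d}}\right|^{2},
\qquad
\ln\frac{\tilde{u}}{\tilde{d}}
  = \underbrace{\ln\left|\frac{\tilde{u}}{\tilde{d}}\right|}_{\text{amplitude error}}
  + \underbrace{i\,(\theta_{u}-\theta_{d})}_{\text{phase error}}
```

    Normalizing each term as described lets the stopping criterion halt the iterations once both contributions stagnate with parity, rather than letting one dominate.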