
    Automatic Multi-Objective Clustering Algorithm Using Hybrid Particle Swarm Optimization With Simulated Annealing.

    Clustering is a data mining technique. For unsupervised datasets, the clustering task is to group the data into meaningful clusters. Clustering is used as a solution technique in various fields to divide and restructure large and complex data so that it becomes more meaningful, thereby transforming it into useful information.
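
    As a concrete illustration of the hybrid approach named in the title, the minimal sketch below combines a PSO update with a simulated-annealing acceptance rule for a clustering criterion. It assumes a single objective (within-cluster sum of squares) in place of the paper's multi-objective formulation, and all parameter values are illustrative.

```python
# A minimal PSO-SA clustering sketch, assuming a single-objective criterion
# (within-cluster sum of squares) rather than the paper's multi-objective one.
# Each particle encodes k centroids; an SA acceptance rule lets occasionally
# worse personal bests survive, to escape local optima.
import numpy as np

def wcss(centroids, data):
    # Sum of squared distances from each point to its nearest centroid.
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return np.sum(np.min(d, axis=1) ** 2)

def pso_sa_cluster(data, k=3, n_particles=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(data.min(0), data.max(0), (n_particles, k, data.shape[1]))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([wcss(p, data) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    temp = 1.0                                    # SA temperature
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        for i, p in enumerate(pos):
            delta = wcss(p, data) - pbest_f[i]
            # SA acceptance: always keep improvements, sometimes keep
            # worse solutions with probability exp(-delta / temp).
            if delta < 0 or rng.random() < np.exp(-delta / temp):
                pbest[i], pbest_f[i] = p.copy(), pbest_f[i] + delta
        gbest = pbest[np.argmin(pbest_f)].copy()
        temp *= 0.95                              # geometric cooling
    return gbest

data = np.vstack([np.random.randn(50, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
print(wcss(pso_sa_cluster(data), data))
```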

    Solving Task Scheduling Problem in Cloud Computing Environment Using Orthogonal Taguchi-Cat Algorithm

    In cloud computing datacenters, task execution delay is no longer incidental. In recent times, a number of artificial intelligence scheduling techniques have been proposed and applied to reduce task execution delay. In this study, we propose an algorithm called Orthogonal Taguchi-Based Cat Swarm Optimization (OTB-CSO) to minimize total task execution time. In the proposed algorithm, the Taguchi orthogonal approach is incorporated into the CSO tracing mode to find the best task-to-VM mapping with minimum execution time. The proposed algorithm was implemented in the CloudSim toolkit and evaluated using the makespan metric. Experimental results showed that, for 20 VMs, the proposed OTB-CSO minimized the makespan of the tasks scheduled across the VMs, with improvements of 42.86%, 34.57% and 2.58% over the Minimum and Maximum Job First (Min-Max), Particle Swarm Optimization with Linear Descending Inertia Weight (PSO-LDIW) and Hybrid Particle Swarm Optimization with Simulated Annealing (HPSO-SA) algorithms, respectively. The results show that OTB-CSO is effective at optimizing task scheduling and improving overall cloud computing performance with better system utilization.
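
    For reference, the makespan metric used in the evaluation can be computed as in the toy sketch below; the task lengths, VM speeds, and mapping are hypothetical values, not taken from the study.

```python
# Toy computation of the makespan metric: the finishing time of the
# last VM to complete its assigned tasks.
def makespan(assignment, task_len, vm_speed):
    # assignment[i] is the index of the VM that runs task i.
    finish = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        finish[vm] += task_len[task] / vm_speed[vm]
    return max(finish)            # the last VM to finish defines the makespan

task_len = [40, 25, 60, 10, 35]   # task sizes, e.g. millions of instructions
vm_speed = [1.0, 2.0]             # VM processing rates
print(makespan([0, 1, 1, 0, 1], task_len, vm_speed))
```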

    PMT: opposition-based learning technique for enhancing metaheuristic algorithms' performance

    Metaheuristic algorithms have shown promising performance in solving sophisticated real-world optimization problems. Nevertheless, many metaheuristic algorithms still suffer from a low convergence rate because of the poor balance between exploration (i.e., roaming new potential search areas) and exploitation (i.e., exploiting the existing neighbors). In some complex problems, the convergence rate can still be poor owing to becoming trapped in local optima. Opposition-based learning (OBL) has shown promising results in addressing this issue. Nonetheless, OBL-based solutions often consider only one particular direction of the opposition. Considering only one direction can be problematic, as the best solution may lie in any of a multitude of directions. Addressing these OBL limitations, this research proposes a new general OBL technique inspired by the natural phenomenon of parallel-mirror systems, called the Parallel Mirrors Technique (PMT). Like existing OBL-based approaches, the PMT generates new potential solutions based on the currently selected candidate. Unlike existing OBL-based techniques, the PMT generates more than one candidate, in multiple solution-space directions. To evaluate the PMT's performance and adaptability, it was applied to four contemporary metaheuristic algorithms, Differential Evolution, Particle Swarm Optimization, Simulated Annealing, and the Whale Optimization Algorithm, to solve 15 well-known benchmark functions as well as two real-world problems, the welded beam design and the pressure vessel design. Experimentally, the PMT shows promising results, accelerating the convergence rate relative to the original algorithms under the same number of fitness evaluations on both the benchmark functions and the real-world optimization problems.
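
    A loose sketch of the multi-directional opposition idea follows. Classic OBL reflects a solution to a single opposite point; the randomly placed "mirrors" here are only an approximation of the PMT concept, and the paper's exact construction is not reproduced.

```python
# Sketch of multi-directional opposition, assuming randomly placed "mirrors";
# the exact PMT formula from the paper is not reproduced here.
import numpy as np

def opposite_candidates(x, lo, hi, n_mirrors=3, seed=None):
    rng = np.random.default_rng(seed)
    candidates = [lo + hi - x]               # classic single OBL opposite
    for _ in range(n_mirrors - 1):
        pivot = rng.uniform(lo, hi)          # a randomly placed "mirror"
        candidates.append(2 * pivot - x)     # reflection of x across it
    return np.clip(candidates, lo, hi)       # keep candidates in bounds

x = np.array([0.2, 0.7])
print(opposite_candidates(x, lo=np.zeros(2), hi=np.ones(2), seed=42))
```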

    Optimal distributed generation and load shedding scheme using artificial bee colony-hill climbing algorithm considering voltage stability and losses indices

    Around the world, electricity demand is increasing due to industrial activity and development in both developing and developed countries. This situation has pushed many power system operators to operate their systems closer to the voltage stability limits. Increases in power consumption can cause serious problems in electric power systems, such as voltage instability, frequency instability, line overloading, and blackouts. A voltage stability index (VSI) is a tool for detecting voltage-stability-related problems. This work proposes an index of the line voltage stability limits based on Thevenin's theorem, referred to as the Maximum Line Stability Index (MLSI). The function of the MLSI is to estimate the voltage stability condition and identify sensitive lines in the power system. To increase voltage stability and improve other aspects of power quality, many power system operators are considering integrating distributed energy resources into the existing power system. Another part of this work therefore focuses on enhancing the stability of the power system using distributed generators (DGs). The proposed solution is based on an optimization method developed from a combination of the Artificial Bee Colony and Hill Climbing algorithms (ABC-HC), which gives the optimal placement and sizing of the DG units to be deployed in the system. Under severe contingency conditions, such as an increase in demand or the loss of transmission lines, the problem frequently cannot be solved by DGs alone; a possible solution is to apply load shedding to reduce congestion and maintain voltage stability in the system. To solve this problem, an optimal load shedding approach, integrated with optimal DG sizing, is proposed using the ABC-HC algorithm. This technique can find the load locations to be shed as well as the size of the DGs. The performance and effectiveness of each proposed solution was tested on IEEE test systems. The simulation results showed that the MLSI has strong sensitivity for detecting overloaded lines in the system and is as reliable as other voltage stability indices. Meanwhile, the proposed ABC-HC optimization technique shows its ability to identify the bus location and the optimal active energy injection from the DG with a substantial reduction in power losses. Finally, under severe contingency conditions, the combined optimization of DGs and load shedding shows that the system is able to maintain its voltage stability.
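
    The ABC-HC hybrid pattern described above can be sketched as follows: ABC-style neighbour search explores globally, then a greedy hill climb polishes the best food source. A toy sphere function stands in for the DG placement objective; the thesis's MLSI-based objective and power-system model are not reproduced.

```python
# Generic ABC + hill-climbing hybrid sketch on a toy objective.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def abc_hc(f, dim=2, n_bees=15, iters=60, seed=1):
    rng = np.random.default_rng(seed)
    food = rng.uniform(-5, 5, (n_bees, dim))       # candidate solutions
    fit = np.array([f(x) for x in food])
    for _ in range(iters):
        for i in range(n_bees):                    # employed-bee phase
            j = int(rng.integers(n_bees))          # random partner source
            phi = rng.uniform(-1, 1, dim)
            trial = food[i] + phi * (food[i] - food[j])
            ft = f(trial)
            if ft < fit[i]:                        # greedy replacement
                food[i], fit[i] = trial, ft
    best = food[np.argmin(fit)].copy()
    step = 0.5
    while step > 1e-6:                             # hill-climbing refinement
        improved = False
        for d in range(dim):
            for s in (step, -step):
                trial = best.copy()
                trial[d] += s
                if f(trial) < f(best):
                    best, improved = trial, True
        if not improved:
            step *= 0.5                            # shrink the step size
    return best, f(best)

print(abc_hc(sphere))
```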

    An improved particle swarm optimization based on Lévy flight and simulated annealing for high-dimensional optimization problems

    Particle swarm optimization (PSO) is a metaheuristic that is simple to implement and offers robust performance, and it is among the most well-studied algorithms in the field. However, two of its most fundamental problems remain unresolved: PSO converges to local optima on high-dimensional optimization problems, and it has a slow convergence speed. This paper introduces a new variant of particle swarm optimization utilizing Lévy flight-McCulloch and fast simulated annealing (PSOLFS). The proposed algorithm uses two strategies to address high-dimensional problems: hybrid PSO to define the global search area and fast simulated annealing to refine the visited search region. PSOLFS is designed to balance exploration and exploitation. We evaluated the algorithm on 16 benchmark functions in 500- and 1,000-dimension experiments. On 500 dimensions, the algorithm obtains the optimal value on 14 of the 16 functions. On 1,000 dimensions, it obtains the optimal value on eight benchmark functions and is close to optimal on four others. We also compared PSOLFS with five other PSO variants with regard to convergence accuracy and speed. The results demonstrate higher accuracy and faster convergence than the other PSO variants, and the Wilcoxon test shows a significant difference between PSOLFS and the other variants. Our experimental findings show that the proposed method enhances standard PSO by avoiding local optima and improving the convergence speed.
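
    A Lévy-flight step of the kind used to perturb PSO particles can be drawn as in the sketch below. Mantegna's generator is used here for simplicity; the paper itself uses McCulloch's algorithm, which is not reproduced.

```python
# Lévy-flight perturbation via Mantegna's algorithm: heavy-tailed steps
# that are mostly small but occasionally very large, helping stuck
# particles jump out of local optima.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# A particle stuck near its personal best can be kicked by a scaled Lévy step:
x = np.zeros(5)
print(x + 0.01 * levy_step(5))
```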

    Enhancing numerical modelling efficiency for electromagnetic simulation of physical layer components.

    The purpose of this thesis is to present solutions that overcome several key difficulties limiting the application of numerical modelling in communication cable design and analysis. In particular, simulations are time consuming, and the process of comparison requires skill and is poorly defined and understood. When much of the design process consists of optimising performance within a well-defined domain, the use of artificial intelligence techniques may reduce or remove the need for human interaction in the design process. The automation of human processes allows round-the-clock operation at a faster throughput, and achieving a speedup would permit greater exploration of the possible designs, improving understanding of the domain. This thesis presents work relating to three facets of the efficiency of numerical modelling: minimising simulation execution time, controlling optimisation processes, and quantifying comparisons of results. These topics are of interest because simulation times for most problems of interest run into tens of hours. The design process for most systems being modelled may be considered an optimisation process, in so far as the design is improved based upon a comparison of the test results with a specification. Software that automates this process permits improvements to continue outside working hours and produces decisions unaffected by the psychological state of a human operator. Improved performance of simulation tools would facilitate exploration of more variations on a design, which would improve understanding of the problem domain, promoting a virtuous circle of design. The minimisation of execution time was achieved through the development of a parallel TLM solver which did not use specialised hardware or a dedicated network. Its design was novel because it was intended to operate on a network of heterogeneous machines in a fault-tolerant manner, and it included a means to reduce the vulnerability of simulated data without encryption. Optimisation processes were controlled by genetic algorithms and particle swarm optimisation, which were novel applications in communication cable design. The work extended the range of cable parameters, reducing conductor diameters for twisted pair cables, and reducing the optical coverage of screens for a given shielding effectiveness. Work on the comparison of results introduced "colour maps" as a way of displaying three scalar variables over a two-dimensional surface, and comparisons were quantified by extending 1D Feature Selective Validation (FSV) to two dimensions, using an ellipse-shaped filter, in such a way that it could be extended to higher dimensions. In so doing, some problems with FSV were detected, and suggestions for overcoming these are presented, such as the special case of zero-valued DC signals. A re-description of Feature Selective Validation using Jacobians and tensors is proposed, in order to facilitate its implementation in higher-dimensional spaces.
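
    The "colour maps" idea lends itself to a short sketch: three scalar fields over the same two-dimensional grid are normalised and packed into the red, green and blue channels of a single image, so all three can be inspected at a glance. The fields and grid size below are arbitrary examples, not the thesis's data.

```python
# Pack three normalised scalar fields into one RGB image array.
import numpy as np

def colour_map(a, b, c):
    def norm(x):                          # scale each field to [0, 1]
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span else np.zeros_like(x)
    return np.dstack([norm(a), norm(b), norm(c)])   # H x W x 3 RGB array

y, x = np.mgrid[0:64, 0:64]
img = colour_map(np.sin(x / 8), np.cos(y / 8), x * y)
print(img.shape, img.min(), img.max())    # (64, 64, 3) 0.0 1.0
```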

    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Get PDF
    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. Choosing between several options is part of the decision-making process, and an objective function or performance index describes how the goodness of each alternative is assessed; the theory and methods of optimization are concerned with picking the best option. There are two types of optimization methods: deterministic and stochastic. The first is a traditional approach which works well for small and linear problems but struggles to address most real-world problems, which are highly dimensional, nonlinear, and complex in nature. As an alternative, stochastic optimization algorithms are specifically designed to tackle these types of challenges and are more common nowadays. This study proposes two stochastic, robust swarm-based metaheuristic optimization methods. Both are hybrid algorithms, formulated by combining the Particle Swarm Optimization and Salp Swarm Optimization algorithms. These algorithms are then applied to an important and thought-provoking problem: scientific workflow scheduling in multiple fog environments. Many computing environments, such as fog computing, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments, as they occupy the fog's resources and keep them busy. Thus, fog environments generally have fewer resources available during these attacks, which affects the scheduling of submitted Internet of Things (IoT) workflows. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, increasing both the number of workflows that miss their deadlines and the number of tasks that are offloaded to the cloud. Hence, this study proposes a hybrid optimization algorithm as a solution to the workflow scheduling issue across various fog computing locations. The proposed algorithm combines the Salp Swarm Algorithm (SSA) and Particle Swarm Optimization (PSO). To deal with the effects of DDoS attacks on fog computing locations, two discrete-time Markov chain schemes were used: one calculates the average network bandwidth available in each fog, while the other determines the average number of virtual machines in every fog. DDoS attacks are addressed at various levels, and the approach predicts the attacks' influence on fog environments. Based on the simulation results, the proposed method can significantly reduce the number of offloaded tasks transferred to cloud data centers, and it can also decrease the number of workflows with missed deadlines. Moreover, the significance of green fog computing is growing, as energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions. Efficient scheduling methods have the potential to mitigate energy usage by allocating tasks to the most appropriate resources, considering the energy efficiency of each individual resource. To address these challenges, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to enhance the energy efficiency of processors. The experimental findings demonstrate that the proposed method, combined with DVFS, yields improved outcomes, including a reduction in energy consumption. Consequently, this approach emerges as a more environmentally friendly and sustainable solution for fog computing environments.
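
    The role DVFS plays in the energy savings can be illustrated with the standard dynamic-power model, in which power scales roughly with C·V²·f: running a task at the lowest frequency that still meets its deadline reduces energy. The operating points and task parameters below are hypothetical, not taken from the study.

```python
# Pick the slowest (lowest-energy) frequency/voltage operating point
# that still meets a task's deadline, using the C * V^2 * f power model.
OPERATING_POINTS = [(1.0, 1.2), (0.8, 1.0), (0.5, 0.8)]  # (GHz, volts)

def energy(cycles, freq_ghz, volts, capacitance=1.0):
    time = cycles / (freq_ghz * 1e9)             # seconds to finish the task
    power = capacitance * volts ** 2 * freq_ghz * 1e9
    return power * time, time

def pick_point(cycles, deadline):
    scored = [(energy(cycles, f, v), (f, v)) for f, v in OPERATING_POINTS]
    feasible = [(e, t, p) for (e, t), p in scored if t <= deadline]
    # Fall back to the fastest point if no setting meets the deadline.
    return min(feasible)[2] if feasible else OPERATING_POINTS[0]

print(pick_point(cycles=4e8, deadline=0.6))      # -> (0.8, 1.0)
```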