270 research outputs found

    Evolutionary population dynamics and multi-objective optimisation problems

    Griffith Sciences, School of Information and Communication Technology

    Solving Competitive Traveling Salesman Problem Using Gray Wolf Optimization Algorithm

    In this paper, a Gray Wolf Optimization (GWO) algorithm is presented to solve the Competitive Traveling Salesman Problem (CTSP). In CTSP, a number of non-cooperative salesmen each aim to visit as many cities as possible at the lowest cost and for the greatest benefit. A salesman gains a benefit only when he visits a city before any other salesman does. Two approaches are used in this paper. The first, called the static approach, divides the cities evenly among the salesmen. The second, called the parallel approach, makes all cities available to all salesmen, and each salesman tries to visit as many of the unvisited cities as possible. The algorithms were executed 1000 times, and the results show that GWO is very efficient, giving an indication of its superiority in solving CTSP.
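    The parallel setting can be illustrated with a toy simulation (a hypothetical sketch, not the paper's GWO algorithm; the function name and the random tours are invented for illustration): every salesman may target any city, and only the first visitor of a city earns its benefit.

```python
import random

def parallel_ctsp(num_cities, num_salesmen, seed=0):
    """Toy sketch of the 'parallel' CTSP setting: all cities are open to
    all salesmen, and only the first visitor of a city earns its benefit.
    City order per salesman is randomized here; the paper evolves it with GWO."""
    rng = random.Random(seed)
    unvisited = set(range(num_cities))
    benefit = [0] * num_salesmen
    # Each salesman has a tour over all cities; moves are interleaved,
    # so the first to reach an unvisited city claims it.
    tours = [rng.sample(range(num_cities), num_cities) for _ in range(num_salesmen)]
    for step in range(num_cities):
        for s in range(num_salesmen):
            city = tours[s][step]
            if city in unvisited:
                unvisited.remove(city)
                benefit[s] += 1
    return benefit

print(parallel_ctsp(12, 3))  # per-salesman counts of first-visited cities
```

    Every city is claimed exactly once, so the benefits always sum to the number of cities; the competition is over how they are split.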

    Cooperative Particle Swarm Optimization for Combinatorial Problems

    A particularly successful line of research for numerical optimization is the well-known computational paradigm of particle swarm optimization (PSO). In the PSO framework, candidate solutions are represented as particles that have a position and a velocity in a multidimensional search space. The direct representation of a candidate solution as a point that flies through hyperspace (i.e., Rn) seems to strongly predispose the PSO toward continuous optimization. However, while some attempts have been made towards developing PSO algorithms for combinatorial problems, these techniques usually encode candidate solutions as permutations instead of points in search space and rely on additional local search algorithms. In this dissertation, I present extensions to PSO that, by incorporating a cooperative strategy, allow the PSO to solve combinatorial problems. The central hypothesis is that by allowing a set of particles, rather than one single particle, to represent a candidate solution, combinatorial problems can be solved by collectively constructing solutions. The cooperative strategy partitions the problem into components, where each component is optimized by an individual particle. Particles move in continuous space and communicate through a feedback mechanism. This feedback mechanism guides them in the assessment of their individual contribution to the overall solution. Three new PSO-based algorithms are proposed. Shared-space CCPSO and multi-space CCPSO provide two new cooperative strategies to split the combinatorial problem, and both models are tested on proven NP-hard problems. Multimodal CCPSO extends these combinatorial PSO algorithms to efficiently sample the search space in problems with multiple global optima. Shared-space CCPSO was evaluated on an abductive problem-solving task: the construction of parsimonious sets of independent hypotheses in diagnostic problems with direct causal links between disorders and manifestations. Multi-space CCPSO was used to solve a protein structure prediction subproblem, side-chain packing. Both models are evaluated against provably optimal solutions, and the results show that both proposed PSO algorithms are able to find optimal or near-optimal solutions. The exploratory ability of multimodal CCPSO is assessed by evaluating both the quality and the diversity of the solutions obtained in a protein sequence design problem, a highly multimodal problem. These results provide evidence that the extended PSO algorithms are capable of dealing with combinatorial problems without having to hybridize the PSO with other local search techniques or sacrifice the concept of particles moving through a continuous search space.
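    The core idea, one sub-swarm per problem component plus a feedback mechanism through a shared context solution, can be sketched as follows (an assumed minimal reading of the cooperative strategy on a separable toy objective, not the dissertation's actual CCPSO algorithms; all names and constants are illustrative):

```python
import random

def cooperative_pso(dim=4, swarm=10, iters=200, seed=1):
    """Minimal cooperative-PSO sketch: one sub-swarm per component; a
    particle is scored by plugging its component into the best-known
    full solution (the 'context vector'). Toy objective: sphere function."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    # One swarm per dimension; each particle holds a scalar position/velocity.
    pos = [[rng.uniform(-5, 5) for _ in range(swarm)] for _ in range(dim)]
    vel = [[0.0] * swarm for _ in range(dim)]
    pbest = [row[:] for row in pos]
    context = [p[0] for p in pos]            # current best full solution
    for _ in range(iters):
        for d in range(dim):
            for i in range(swarm):
                trial = context[:]
                trial[d] = pos[d][i]
                if f(trial) < f(context):    # feedback: does my component help?
                    context = trial
                    pbest[d][i] = pos[d][i]
                # Standard PSO velocity update toward personal/global bests.
                vel[d][i] = (0.7 * vel[d][i]
                             + 1.4 * rng.random() * (pbest[d][i] - pos[d][i])
                             + 1.4 * rng.random() * (context[d] - pos[d][i]))
                pos[d][i] += vel[d][i]
    return context, f(context)

sol, val = cooperative_pso()
print(val)  # sphere value of the collectively assembled solution
```

    The particles never leave continuous space; the combinatorial or high-dimensional structure is handled by the decomposition and the shared context, which is the point the dissertation argues.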

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on understanding brain mechanisms that also exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms by RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. Comment: 51 pages, 19 figures, IEEE Access.
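    The fixed random reservoir described above can be sketched in a few lines (a generic echo-state-style update, assumed standard rather than taken from this survey; sizes, sparsity, and scaling are illustrative):

```python
import math
import random

def reservoir_states(inputs, n=20, seed=0):
    """Sketch of a reservoir/echo-state update: fixed random input and
    recurrent weights, state x(t+1) = tanh(W x(t) + W_in u(t)).
    Only a linear readout on the states would be trained."""
    rng = random.Random(seed)
    w_in = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    # Sparse random recurrent weights, scaled small to keep the dynamics
    # stable (a crude stand-in for constraining the spectral radius).
    w = [[rng.uniform(-0.1, 0.1) if rng.random() < 0.2 else 0.0
          for _ in range(n)] for _ in range(n)]
    x = [0.0] * n
    states = []
    for u in inputs:
        x = [math.tanh(sum(w[i][j] * x[j] for j in range(n)) + w_in[i] * u)
             for i in range(n)]
        states.append(x)
    return states  # one n-dimensional state per input step

states = reservoir_states([math.sin(0.3 * t) for t in range(50)])
```

    The weights are never updated after initialization; a scalar input stream is expanded into an n-dimensional nonlinear trajectory, which is what makes a simple linear readout sufficient.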

    Soft computing applied to optimization, computer vision and medicine

    Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Soft Computing methods, because of their broad range of application, have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background and possibility. This research aims to accomplish two main objectives: On the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems. On the other hand, it explores the hypothetical benefits of Soft Computing methodologies as novel effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. This work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript presented here intends to provide an overview of the different aspects of Soft Computing methods in order to enable the reader to reach a global understanding of the field. Therefore, this document is assembled as a monograph that summarizes the outcomes of these projects across 12 chapters. The chapters are structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis involves the development of two evolutionary approaches for global optimization. 
These were tested on complex benchmark datasets and showed promising results, thus opening the debate for future applications. Moreover, the applications to Computer Vision and medicine presented in this work have highlighted the utility of different Soft Computing methodologies for solving problems in these fields. A milestone in this area is the translation of Computer Vision and medical issues into optimization problems. Additionally, this work strives to provide tools for combating public health issues by extending these concepts to automated detection and diagnosis aids for pathologies such as Leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide due to the exponential growth of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators, is now a reality. Many other industrial and commercial applications of Soft Computing have also been integrated into everyday use, and this is expected to increase within the next decade. Therefore, the research conducted here contributes an important piece towards expanding these developments. The applications presented in this work are intended to serve as technological tools that can then be used in the development of new devices.

    Capso: A Multi-Objective Cultural Algorithm System To Predict Locations Of Ancient Sites

    ABSTRACT
    CAPSO: A MULTI-OBJECTIVE CULTURAL ALGORITHM SYSTEM TO PREDICT LOCATIONS OF ANCIENT SITES
    by SAMUEL DUSTIN STANLEY, August 2019
    Advisor: Dr. Robert Reynolds
    Major: Computer Science
    Degree: Doctor of Philosophy

    The recent archaeological discovery by Dr. John O’Shea of the University of Michigan of prehistoric caribou remains and Paleo-Indian structures underneath the Great Lakes has opened up an opportunity for computer scientists to develop dynamic systems modelling these ancient caribou routes and hunter-gatherer settlement systems, as well as the prehistoric environments in which they existed. The Wayne State University Cultural Algorithm team has been interested in assisting Dr. O’Shea’s archaeological team by predicting new structures in the Alpena-Amberley Ridge region. To this end, we developed a rule-based expert prediction system to work with our team’s dynamic model of the Paleolithic environment. To evolve the rules and thresholds within this expert system, we developed a Pareto-based multi-objective optimizer called CAPSO, which stands for Cultural Algorithm Particle Swarm Optimizer. CAPSO is fully parallelized and works with modern multicore CPU architectures, which enables it to handle “big data” problems such as this one. The crux of our methodology is to set up a biobjective problem whose objectives are the number of locations predicted by the expert system (minimize) and the number of training-set occupational structures within those predicted locations (maximize). The first of these quantities plays the role of “cost” while the second plays the role of “benefit”. Four such biobjective problems are created, one for each of the four relevant occupational structure types (hunting blinds, drive lines, caches, and logistical camps). For each of these problems, when CAPSO tunes the system’s rules and thresholds, it changes which locations are predicted and hence also which structures are flagged. By repeatedly tuning the rules and thresholds, CAPSO creates a Pareto front of locations predicted versus structures predicted for each of the four occupational structure types. Statistical analysis of these Pareto fronts reveals that as the number of structures predicted (benefit) increases linearly, the number of locations predicted (cost) increases exponentially. This pattern is referred to in the dissertation as the Accelerating Cost Hypothesis (ACH). The ACH holds statistically for all four structure types and is the result of imperfect information.
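    The Pareto filtering underlying such a front can be sketched generically (illustrative only, not CAPSO itself; the run data below are invented): a candidate run survives if no other run predicts no more locations while flagging at least as many structures.

```python
def pareto_front(points):
    """Non-dominated filter for a biobjective cost/benefit setup:
    each point is (locations_predicted, structures_found);
    minimize the first objective, maximize the second."""
    def dominates(a, b):
        # a dominates b: no worse on cost, no worse on benefit, and not equal.
        return a[0] <= b[0] and a[1] >= b[1] and a != b
    return [p for p in points if not any(dominates(q, p) for q in points)]

runs = [(10, 3), (25, 7), (25, 5), (60, 9), (80, 9)]
print(pareto_front(runs))  # -> [(10, 3), (25, 7), (60, 9)]
```

    Repeating this filter over many tuning runs is what yields the cost-versus-benefit curve analyzed by the ACH.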

    Hybrid feature selection of breast cancer gene expression microarray data based on metaheuristic methods: a comprehensive review

    Breast cancer (BC) remains the most dominant cancer among women worldwide. Numerous BC gene expression microarray-based studies have been employed in cancer classification and prognosis. The availability of gene expression microarray data together with advanced classification methods has enabled accurate and precise classification. Nevertheless, microarray datasets suffer from a large number of gene expression levels, a limited sample size, and irrelevant features. Additionally, the datasets are often imbalanced, with unequal numbers of samples from the different classes. These limitations make it difficult to determine the features that actually contribute to cancer classification in the gene expression profiles. Various accurate feature selection methods exist and are being widely applied. The objective of feature selection is to search for a relevant, discriminant feature subset within the basic feature space. In this review, we aim to compile and review the latest hybrid feature selection methods, based on bio-inspired metaheuristic methods and wrapper methods, for the classification of BC and other types of cancer.
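    A wrapper method in the sense reviewed here pairs a search strategy with a classifier-quality score. The sketch below is hypothetical: `wrapper_select` and the surrogate score are invented stand-ins, and a simple bit-flip search takes the place of a bio-inspired metaheuristic. It shows the shared skeleton: mutate a feature mask, keep it when the score improves.

```python
import random

def wrapper_select(n_features, score, iters=200, seed=0):
    """Toy wrapper-style feature selection: a binary feature mask is
    mutated one bit at a time and kept whenever the classifier-quality
    surrogate score(mask) improves."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_features)]
    best = score(mask)
    for _ in range(iters):
        cand = mask[:]
        cand[rng.randrange(n_features)] ^= True   # flip one feature bit
        s = score(cand)
        if s > best:
            mask, best = cand, s
    return mask, best

# Surrogate score: features 0-2 are 'relevant'; extra features cost a little,
# mimicking a penalty for large gene subsets.
score = lambda m: sum(m[:3]) - 0.1 * sum(m[3:])
mask, best = wrapper_select(10, score)
print(mask, best)  # the three relevant features end up selected
```

    In the reviewed methods, the score would be cross-validated classifier accuracy on the microarray data, and the mutation step would come from the chosen metaheuristic.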

    Chaotic Sand Cat Swarm Optimization

    In this study, a new hybrid metaheuristic algorithm named Chaotic Sand Cat Swarm Optimization (CSCSO) is proposed for constrained and complex optimization problems. This algorithm combines the features of the recently introduced SCSO with the concept of chaos. The basic aim of the proposed algorithm is to integrate the chaotic, non-recurring visiting of locations into SCSO’s core search process to improve global search performance and convergence behavior. Thus, the randomness in SCSO can be replaced by a chaotic map, which offers similar randomness with better statistical and dynamic properties. In addition to these advantages, issues of low search consistency, local-optimum traps, inefficient search, and low population diversity are also addressed. In the proposed CSCSO, several chaotic maps are implemented for more efficient behavior in the exploration and exploitation phases. Experiments are conducted on a wide variety of well-known test functions, as well as real-world problems, to increase the reliability of the results. In this study, the proposed algorithm was applied to a total of 39 functions and multidisciplinary problems. It found 76.3% better responses than the best-developed SCSO variant and the other chaotic-based metaheuristics tested. This extensive experiment indicates that the CSCSO algorithm excels at providing acceptable results.
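    The substitution of a chaotic map for a random number generator can be illustrated with the logistic map, one commonly used chaotic map (the paper evaluates several maps; this particular choice and parameterization are an assumption for illustration):

```python
def logistic_map(x0=0.7, r=4.0):
    """Chaotic number stream via the logistic map x <- r*x*(1-x).
    With r = 4 the orbit is chaotic on (0, 1) and is deterministic but
    non-repeating, so it can stand in for a uniform random generator
    inside a metaheuristic's update rules."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

gen = logistic_map()
sample = [next(gen) for _ in range(5)]
print(sample)  # five distinct chaotic values in (0, 1)
```

    Wherever the base algorithm would draw a uniform random number, the chaotic variant draws the next value of such an orbit instead, which is the substitution CSCSO makes in SCSO's search equations.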

    Multi-objective optimisation using learning automata and its applications in power systems

    Learning automata are a major branch of machine learning designed to find the optimal action for a learning task in a random environment. Interactions with the environment and the repetitive learning of a number of individual units, which are independent and structurally simple, enable learning automata to tackle complex learning problems. Systems built with learning automata have been successfully employed in many difficult learning situations over the years. They have also been investigated for solving optimisation problems. However, the performance of learning automata in solving complex optimisation problems, such as high-dimensional and multi-objective optimisation problems, has not been fully investigated. Therefore, this thesis is devoted to exploring the potential of learning automata in solving complex optimisation problems. In the thesis, Function Optimisation by Learning Automata (FOLA) and Multi-objective Optimisation by Learning Automata (MOLA) have been developed for single- and multi-objective complex optimisation problems respectively. In FOLA, the search domain of a complex optimisation problem is divided into cells, each represented by a cell value. Each automaton of FOLA conducts dimensional search actions according to path values, which are calculated from the cell values situated on the search path. During the optimisation process, cell values are continuously updated using the values of the automata states and stored in memory. In this way, the information obtained prior to the current state can be collected and used efficiently. With these approaches, FOLA is able to search in continuous states and achieve accurate solutions efficiently. To fully analyse the performance of FOLA, it has been tested on twenty-two benchmark functions, which represent a wide range of challenging optimisation problems.
FOLA has been compared with ten Evolutionary Algorithms (EAs), which are widely used for solving complex optimisation problems, and with four newly proposed EAs which have been reported in the literature to solve the same benchmark functions promisingly. The experimental results have demonstrated the superiority of FOLA over the other EAs on most benchmark functions, in terms of the convergence rate and the accuracy of the optimal solutions found. FOLA has shown its capability to solve high-dimensional multi-modal problems. The experiments also show that FOLA is able to greatly reduce computation time, especially for high-dimensional functions.

Most optimisation problems existing in the real world have more than one objective. These problems aim to find evenly distributed Pareto fronts, which are the plots of the objective function values of the optimal solutions. They can be tackled by combining the multiple objectives into one single objective function that can be solved by a single-objective optimisation algorithm. However, this method suffers from the drawback of a large computation load, and has difficulty in finding non-convex Pareto fronts. Therefore, it is important to develop alternative optimisers that can be used for complex multi-objective problems. Based on FOLA, MOLA is proposed to solve complex multi-objective optimisation problems. MOLA mainly comprises two processes: the process of searching and the process of learning from the neighborhood. The process of searching is carried out through a tournament held between Pareto global search and Pareto local search. This tournament leads to a better trade-off between exploitation and exploration, which is a critical factor in finding the optimal solution. In the process of learning, the neighborhood relationships among the non-dominated solutions are investigated, as it is believed that useful information that can benefit the search is embedded in the neighborhood.
Based on these relationships, non-dominated solutions are updated according to their neighbors. Through these processes, MOLA is able to find evenly distributed Pareto fronts for complex optimisation problems. MOLA has been compared with two popular weighted-sum based algorithms, the Multi-Objective Genetic Algorithm (MOGA) and the Multi-Objective Particle Swarm Optimiser (MOPSO), on four multi-objective benchmark functions that comprise low- and high-dimensional models, convex and non-convex models, and continuous and discontinuous models. MOLA has also been compared with recent Pareto front-based multi-objective algorithms, the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II), on thirteen widely used multi-objective functions, which comprise complex Pareto set shapes. The simulation results have shown that MOLA clearly exhibits superiority over the other algorithms, as it can find accurate and evenly distributed non-dominated solutions, and its Pareto fronts are wider than those obtained by the other algorithms. Besides, MOLA consumes less computation time whilst finding more accurate non-dominated solutions.

In the thesis, the application of FOLA and MOLA to solving optimal power flow problems in power systems has been investigated. Optimal power flow problems are very important in power system operation and planning, especially economic power dispatch and voltage stability enhancement problems, which have attracted more and more attention around the world. FOLA has been applied to solve power flow problems which concern fuel cost minimisation, voltage profile improvement and voltage stability enhancement, based on the IEEE 30-bus and IEEE 57-bus systems. FOLA is fully compared with improved Particle Swarm Optimisation (PSO) and a Genetic Algorithm (GA).
The simulation results have demonstrated that FOLA is able to offer more accurate solutions with shorter computation times, in comparison with the improved PSO and GA, particularly on the IEEE 57-bus system. FOLA is also applied to solve optimal power flow problems in power systems where the operating condition varies over a short period of time. Although the varying operating condition is considered here, these problems are treated as static over a short period of time. In this case, the fluctuating power output will affect the power flow calculation, and it can cause instability which results in severe damage to the power systems, so an algorithm which can provide security to the power systems is in high demand. Simulation studies have been carried out with FOLA, the improved PSO and GA, based on the modified IEEE 30-bus and 57-bus systems, which are embedded with time-varying power outputs. The simulation results have demonstrated that FOLA is able to track changes in the power system configuration more rapidly and accurately than the improved PSO and GA, particularly when voltage stability is involved in the objective function. Besides, FOLA is able to offer more accurate solutions with shorter computation time, in comparison with PSO and GA. FOLA is also compared with two recently proposed EAs, the Comprehensive Learning Particle Swarm Optimiser (CLPSO) and Cooperative Particle Swarm Optimisation (CPSO), based on the IEEE 118-bus system. The advantages of FOLA are demonstrated by the fact that FOLA greatly reduces the fuel cost and enhances the voltage stability of the power system.

Nowadays, wind power is expected to increase greatly in power systems, due to its inexhaustible and non-polluting merits. However, it brings new challenges to power system operation when wind power is connected to the grid.
The study is undertaken on the modified IEEE 30-bus power system and the New England test power system, which are incorporated with fixed-speed and variable-speed wind generators respectively. MOLA has been fully compared with MOEA/D and NSGA-II in solving the multi-objective optimisation problem which aims to reduce the operational cost and enhance voltage stability simultaneously. The simulation results have demonstrated that MOLA performs better than MOEA/D and NSGA-II, as MOLA can find wider and more evenly distributed Pareto fronts, and obtain more accurate Pareto optimal solutions efficiently. Additionally, MOLA consistently finds a larger hypervolume and a smaller diversity metric than MOEA/D and NSGA-II under different circumstances. MOLA has demonstrated its superiority by finding wider Pareto fronts than MOEA/D and obtaining more accurate solutions than NSGA-II, while using far fewer function evaluations. MOLA has also been applied to solve the multi-objective optimisation problem in a deregulated market, which aims to maximise the social benefit and enhance voltage stability in the IEEE 30-bus power system. MOLA greatly increases the social benefit and improves the voltage stability. It can find wide and evenly distributed Pareto fronts, and obtain accurate Pareto optimal solutions efficiently.
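    The hypervolume indicator used above to compare Pareto fronts can be computed in two dimensions with a simple sweep (a standard textbook formulation, not the thesis's implementation; the sample front and reference point are invented, and both objectives are minimized):

```python
def hypervolume_2d(front, ref):
    """2-D hypervolume for a non-dominated front under minimization of
    both objectives: the area dominated by the front, bounded above by
    the reference point ref. Larger is better."""
    pts = sorted(front)                  # ascending in f1, so f2 descends
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # Each point contributes a rectangle between its f2 level and
        # the previous point's f2 level, out to the reference f1.
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # -> 12.0
```

    A wider and more accurate front dominates more of the objective space and thus yields a larger hypervolume, which is why the metric is used to rank MOLA against MOEA/D and NSGA-II.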