
    The influence of population size in geometric semantic GP

    In this work, we study the influence of population size on the learning ability of Geometric Semantic Genetic Programming for symbolic regression. A large set of experiments, considering different population sizes on different regression problems, has been performed. Results show that, on real-life problems, small populations achieve better training fitness than large populations after the same number of fitness evaluations. However, performance on the test instances varies among problems: in datasets with a high number of features, models obtained with large populations generalize better to unseen data, while in datasets characterized by a relatively small number of variables, better generalization is achieved with small populations. When synthetic problems are considered, large populations are the best option for achieving good-quality solutions on both training and test instances.
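
    For orientation, the sketch below shows the kind of loop such experiments run: geometric semantic mutation in the standard form offspring(x) = parent(x) + ms * (r1(x) - r2(x)), evolved under a fixed budget of fitness evaluations so that different population sizes are comparable. The tanh-bounded random trees, tournament selection, and helper names are illustrative assumptions, not the paper's implementation.

```python
# Minimal GSGP-style sketch (illustrative assumptions, not the paper's code).
import random
import math

def random_tree(n_features):
    """Return a small random expression as a callable (hypothetical helper)."""
    feat = random.randrange(n_features)
    w = random.uniform(-1.0, 1.0)
    return lambda x, f=feat, w=w: math.tanh(w * x[f])  # bounded random function

def gs_mutation(parent, n_features, ms=0.1):
    """Geometric semantic mutation: parent(x) + ms * (r1(x) - r2(x))."""
    r1, r2 = random_tree(n_features), random_tree(n_features)
    return lambda x: parent(x) + ms * (r1(x) - r2(x))

def evolve(train_x, train_y, pop_size, evaluations):
    """Run with a fixed evaluation budget so different pop_size values are comparable."""
    n_features = len(train_x[0])
    pop = [random_tree(n_features) for _ in range(pop_size)]

    def rmse(ind):
        return math.sqrt(sum((ind(x) - y) ** 2 for x, y in zip(train_x, train_y)) / len(train_y))

    spent = pop_size
    while spent < evaluations:
        # size-2 tournament selection followed by geometric semantic mutation
        pop = [gs_mutation(min(random.sample(pop, 2), key=rmse), n_features) for _ in pop]
        spent += pop_size
    return min(pop, key=rmse)
```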

    The application of software visualization technology to evolutionary computation: a case study in Genetic Algorithms

    Evolutionary computation is an area within the field of artificial intelligence that is founded upon the principles of biological evolution. Evolution can be defined as the process of gradual development. Evolutionary algorithms are typically applied as a generic problem-solving method, searching a problem space in order to locate good solutions. These solutions are found through an iterative evolutionary search that progresses by means of gradual developments. In the majority of cases of evolutionary computation the user is not aware of their algorithm's search behaviour. This causes two problems. First, the user has no way of assuring the quality of any solutions found other than to compare them with any available benchmark solutions, or to re-run the algorithm and check whether the results can be repeated or improved upon. Second, because the user is unaware of the algorithm's behaviour they have no way of identifying the contribution of the different components of the algorithm and therefore no direct way of analyzing the algorithm's design, assigning credit to good algorithm components, or locating and improving ineffective ones. The artificial intelligence and engineering communities have been slow to accept evolutionary computation as a robust problem-solving method because, unlike case-based systems, rule-based systems or belief networks, they are unable to follow the algorithm's reasoning when locating a set of solutions in the problem space. During an evolutionary algorithm's execution the user may be able to see the results of the search, but the search process itself is like a "black box" to the user. It is the search behaviour of evolutionary algorithms that needs to be understood by the user in order for evolutionary computation to become more accepted within these communities. The aim of software visualization is to help people understand and use computer software. Software visualization technology has been applied successfully to illustrate a variety of heuristic search algorithms, programming languages and data structures. This thesis adopts software visualization as an approach for illustrating the search behaviour of evolutionary algorithms. Genetic Algorithms ("GAs") are used here as a specific case study to illustrate how software visualization may be applied to evolutionary computation. A set of visualization requirements is derived from the findings of a GA user study. A number of search space visualization techniques are examined for illustrating the search behaviour of a GA. "Henson," an extendable framework for developing visualization tools for genetic algorithms, is presented. Finally, the application of the Henson framework is illustrated by the development of "Gonzo," a visualization tool designed to enable GA users to explore their algorithm's search behaviour. The contributions made in this thesis extend into the areas of software visualization, evolutionary computation and the psychology of programming. The GA user study presented here is the first and only known study of the working practices of GA users. The search space visualization techniques proposed here have never been applied in this domain before, and the resulting interactive visualizations provide the GA user with a previously unavailable insight into their algorithm's operation.
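
    A minimal sketch of the kind of data such a visualization tool consumes is given below: a plain generational GA that records a per-generation snapshot (best fitness, mean fitness, and the population itself) for later plotting. This is an assumed stand-in, not the Henson or Gonzo implementation.

```python
# Illustrative sketch only: a simple binary GA instrumented with a search trace.
import random

def run_ga(fitness, length=20, pop_size=50, generations=100, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    history = []  # one snapshot per generation, the raw material for visualization
    for gen in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        history.append({"generation": gen,
                        "best": fitness(scored[0]),
                        "mean": sum(map(fitness, pop)) / pop_size,
                        "population": [ind[:] for ind in pop]})
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = children
    return history

# Example: trace convergence on the one-max problem.
trace = run_ga(fitness=sum)
print(trace[-1]["best"])
```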

    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. Choosing between several options is part of the decision-making process, and our desire to make the "better" decision drives the choice. An objective function or performance index describes the assessment of an alternative's goodness. The theory and methods of optimization are concerned with picking the best option. There are two types of optimization methods: deterministic and stochastic. The first is a traditional approach, which works well for small and linear problems; however, it struggles to address most real-world problems, which are high-dimensional, nonlinear, and complex in nature. As an alternative, stochastic optimization algorithms are specifically designed to tackle these types of challenges and are more common nowadays. This study proposed two stochastic, robust swarm-based metaheuristic optimization methods. They are both hybrid algorithms, formulated by combining the Particle Swarm Optimization and Salp Swarm Optimization algorithms. These algorithms are then applied to an important and thought-provoking problem: scientific workflow scheduling in multiple fog environments. Many computing environments, such as fog computing, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments as they occupy the fog's resources and keep them busy. Thus, the fog environments would generally have fewer resources available during these types of attacks, and the scheduling of submitted Internet of Things (IoT) workflows would be affected. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, increasing the number of workflows that miss deadlines as well as the number of tasks that are offloaded to the cloud. Hence, this study proposed a hybrid optimization algorithm as a solution for dealing with the workflow scheduling issue in various fog computing locations. The proposed algorithm comprises the Salp Swarm Algorithm (SSA) and Particle Swarm Optimization (PSO). In dealing with the effects of DDoS attacks on fog computing locations, two discrete-time Markov-chain schemes were used: one calculates the average network bandwidth available in each fog, while the other determines the average number of virtual machines available in each fog. DDoS attacks are addressed at various levels, and the approach predicts the DDoS attack's influence on fog environments. Based on the simulation results, the proposed method can significantly reduce the number of offloaded tasks that are transferred to the cloud data centers. It can also decrease the number of workflows with missed deadlines. Moreover, the significance of green fog computing is growing in fog computing environments, in which energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions. The implementation of efficient scheduling methods has the potential to mitigate energy usage by allocating tasks to the most appropriate resources, considering the energy efficiency of each individual resource. In order to mitigate these challenges, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to enhance the energy efficiency of processors. The experimental findings demonstrate that the proposed method, combined with the DVFS technique, yields improved outcomes, including reduced energy consumption. Consequently, this approach emerges as a more environmentally friendly and sustainable solution for fog computing environments.
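
    As a rough illustration of how the two swarm updates can be interleaved, the sketch below mixes a PSO velocity update with SSA-style leader/follower moves in a single step. The abstract does not specify the exact hybridization rule, so the half-and-half split, parameter values, and function names are assumptions, not the proposed algorithm; in a scheduler, each position would encode a candidate assignment of workflow tasks to fog resources, scored by deadline misses and energy.

```python
# Hedged sketch of one possible PSO/SSA hybrid step (illustrative only).
import random

def hybrid_step(positions, velocities, pbest, gbest, bounds, w=0.7, c1=1.5, c2=1.5):
    lb, ub = bounds
    n = len(positions)
    for i in range(n):
        for d in range(len(positions[i])):
            if i < n // 2:
                # PSO-style velocity and position update for the first half of the swarm
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * random.random() * (pbest[i][d] - positions[i][d])
                                    + c2 * random.random() * (gbest[d] - positions[i][d]))
                positions[i][d] += velocities[i][d]
            elif i == n // 2:
                # SSA-style leader: move around the best solution found so far
                step = random.random() * ((ub - lb) * random.random() + lb)
                positions[i][d] = gbest[d] + step if random.random() < 0.5 else gbest[d] - step
            else:
                # SSA-style follower: average with the preceding salp
                positions[i][d] = 0.5 * (positions[i][d] + positions[i - 1][d])
            positions[i][d] = min(max(positions[i][d], lb), ub)  # keep inside bounds
    return positions, velocities
```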

    Current Studies and Applications of Krill Herd and Gravitational Search Algorithms in Healthcare

    Nature-Inspired Computing (NIC) is a relatively young field that tries to discover fresh methods of computing by studying how natural phenomena function, in order to find solutions to complicated problems in many contexts. As a consequence, ground-breaking research has been conducted in a variety of domains, including artificial immune systems, neural networks, swarm intelligence, and evolutionary computing. NIC techniques are used in the domains of biology, physics, engineering, economics, and management. Meta-heuristic algorithms are successful, efficient, and resilient in real-world classification, optimization, forecasting, and clustering tasks, as well as in engineering and science problems. Two active NIC paradigms are the Gravitational Search Algorithm and the Krill Herd algorithm. This publication gives a worldwide and historical review of the use of the Krill Herd Algorithm (KH) and the Gravitational Search Algorithm (GSA) in medicine and healthcare. Comprehensive surveys have been conducted on nature-inspired algorithms, including KH and GSA; nonetheless, no survey research on KH and GSA in the healthcare field has been undertaken. As a result, this work conducts a thorough review of KH and GSA to assist researchers in using them in diverse domains or hybridizing them with other popular algorithms; the various versions of the KH and GSA algorithms and their applications in healthcare are thoroughly reviewed in the present article. It also provides an in-depth examination of KH and GSA in terms of application, modification, and hybridization. It is important to note that the goal of the study is to offer a viewpoint on GSA and KH, particularly for academics interested in investigating the capabilities and performance of these algorithms in the healthcare and medical domains.
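
    For readers new to GSA, the sketch below shows the standard update at the core of the algorithm (mass derived from normalized fitness, pairwise attraction, then velocity and position updates). The surveyed healthcare applications modify and hybridize this rule, so the sketch is for orientation only.

```python
# Minimal sketch of the standard Gravitational Search Algorithm step.
import math
import random

def gsa_step(positions, velocities, fitness_values, G, minimize=True):
    best, worst = (min(fitness_values), max(fitness_values)) if minimize else \
                  (max(fitness_values), min(fitness_values))
    raw = [(f - worst) / (best - worst + 1e-12) for f in fitness_values]
    masses = [m / (sum(raw) + 1e-12) for m in raw]  # normalized agent masses
    dims = len(positions[0])
    for i in range(len(positions)):
        accel = [0.0] * dims
        for j in range(len(positions)):
            if i == j:
                continue
            dist = math.dist(positions[i], positions[j]) + 1e-12
            for d in range(dims):
                # gravitational pull of agent j on agent i (mass of i cancels out)
                accel[d] += random.random() * G * masses[j] * \
                            (positions[j][d] - positions[i][d]) / dist
        for d in range(dims):
            velocities[i][d] = random.random() * velocities[i][d] + accel[d]
            positions[i][d] += velocities[i][d]
    return positions, velocities
```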

    04081 Abstracts Collection -- Theory of Evolutionary Algorithms

    From 15.02.04 to 20.02.04, the Dagstuhl Seminar 04081 "Theory of Evolutionary Algorithms" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Agent Based Models of Competition and Collaboration

    Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
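
    One generic way to counter the premature convergence discussed above is to monitor swarm diversity and re-scatter part of the swarm when it collapses, as sketched below. The dispersive PSO rule itself is not described in the abstract, so this is an assumed stand-in rather than the proposed variant.

```python
# Illustrative diversity-triggered re-dispersion for a PSO swarm (assumed rule).
import random
import statistics

def swarm_diversity(positions):
    """Mean per-dimension standard deviation of particle positions."""
    dims = range(len(positions[0]))
    return sum(statistics.pstdev(p[d] for p in positions) for d in dims) / len(positions[0])

def maybe_disperse(positions, bounds, threshold=1e-3, fraction=0.3):
    """Re-scatter a fraction of particles uniformly when diversity falls below a threshold."""
    lb, ub = bounds
    if swarm_diversity(positions) < threshold:
        for i in random.sample(range(len(positions)), int(fraction * len(positions))):
            positions[i] = [random.uniform(lb, ub) for _ in positions[i]]
    return positions
```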

    The Grid Sketcher: An AutoCad-based tool for conceptual design processes

    Sketching with pencil and paper is reminiscent of the varied, rich, and loosely defined formal processes associated with conceptual design. Architects actively engage such creative paradigms in their exploration and development of conceptual design solutions. The Grid Sketcher, as a conceptual sketching tool, presents one possible computer implementation for enhancing and supporting these processes. It effectively demonstrates the facility with which current technology and the computing environment can enhance and simulate sketching intents and expectations. Typically, with respect to design, the position taken is that the two are virtually void of any fundamental commonality. A designer's thoughts are intuitive, at times irrational, and rarely follow consistently identifiable patterns. Conversely, computing requires predictability in just these endeavors. The computing environment, as commonly defined, cannot reasonably be expected to mimic the typically human domain of creative design. In this context, this thesis accentuates the computer's role as a form generator as opposed to a form evaluator. The computer, under the influence of certain contextual parameters, can, however, provide the designer with a rich and elegant set of forms that respond algorithmically to the designer's creative intents. (Abstract shortened by UMI.)

    The application of multiobjective optimisation to protein-ligand docking

    Despite the intense efforts that have been devoted to the development of scoring functions for protein-ligand docking, they are still limited in their ability to identify the correct binding pose of a ligand within a protein binding site. A deeper understanding of the intricacies of scoring functions is therefore essential in order to develop them effectively. The aim of the work described in this thesis is to analyse the individual interaction energy types which form the components of a force field-based scoring function. To do this, a protein-ligand docking algorithm based on multiobjective optimisation has been developed. Multiobjective optimisation allows for the optimisation of several objectives simultaneously, and this has been applied to the individual interaction energy types of the GRID scoring function. Traditionally these interaction energy types are summed together and the total energy is used to guide the search. By using individual energy types during optimisation, their roles can be better understood. The interaction energy types used here are the electrostatic and hydrogen bond interactions combined, and the van der Waals interactions. The algorithm is first tested on two datasets containing twenty complexes. The results show that the different interaction energy types have varying influences when it comes to successfully docking certain complexes, and that it is important to find the right balance of interaction energy types so as to find correct solutions. Of the twenty complexes, the algorithm found correct solutions for fifteen. To improve the performance of the algorithm, a few enhancements were introduced, including a simplex minimisation process with a Lamarckian element. The algorithm was retested on the twenty complexes, and the newer version was found to outperform the original version, finding correct solutions for seventeen of the twenty complexes. To extensively study the capabilities of the algorithm, it was tested on varied datasets, including the FlexX dataset. The algorithm's performance was also compared to a single-objective docking tool, Q-fit. The comparison between the multiobjective and single-objective methodologies revealed that single-objective methods can sometimes fail at finding correct docked solutions because they are unable to correctly balance the interaction energy types comprising a scoring function. The study also showed that a multiobjective optimisation method can reveal the reasons why a given docking algorithm may fail at finding a correct solution. Finally, the algorithm was extended to incorporate desolvation energy as a third objective. Though these results are preliminary, they revealed some interesting relationships between the different objectives.
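
    The multiobjective bookkeeping described above can be illustrated with a simple Pareto-dominance filter over the two interaction-energy objectives (combined electrostatic and hydrogen-bond energy, and van der Waals energy), both minimized. The sketch below uses assumed names and toy values rather than anything from the thesis.

```python
# Pareto-dominance filter over two energy objectives (illustrative values only).
from typing import List, Tuple

Pose = Tuple[float, float]  # (elec_plus_hbond, vdw) energies, lower is better

def dominates(a: Pose, b: Pose) -> bool:
    """a dominates b if it is no worse in both objectives and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(poses: List[Pose]) -> List[Pose]:
    """Keep only the non-dominated poses; these form the candidate docking solutions."""
    return [p for p in poses if not any(dominates(q, p) for q in poses if q != p)]

print(pareto_front([(-12.4, -8.1), (-10.0, -9.5), (-9.0, -7.0)]))
# the third pose is dominated by the first and is filtered out
```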

    Evaluation and optimisation of traction system for hybrid railway vehicles

    Over the past decade, energy and environmental sustainability in urban rail transport have become increasingly important. Hybrid transportation systems present a multifaceted challenge, encompassing aspects such as hydrogen production, refuelling station infrastructure, propulsion system topology, power source sizing, and control. The evaluation and optimisation of these aspects are critical for the adaptation and commercialisation of hybrid railway vehicles. While there has been significant progress in the development of hybrid railway vehicles, further improvements in propulsion system design are necessary. This thesis explores strategies to achieve this ambitious goal by substituting diesel trains with hybrid trains. However, limited research has assessed the operational performance of replacing diesel trains with hybrid trains on the same tracks. To address this gap, this thesis develops various optimisation techniques for evaluating and refining the hybrid traction system. In the first phase of this research, the author developed a novel Hybrid Train Simulator designed to analyse driving performance and energy flow among multiple power sources, such as internal combustion engines, electrification, fuel cells, and batteries. The simulator incorporates a novel Automatic Smart Switching Control technique, which scales power among the multiple power sources of a hybrid train based on the route gradient. This smart switching approach enhances battery and fuel cell life and reduces maintenance costs by employing each source only as needed, thereby eliminating the forced charging and discharging of excessively high currents. Simulation results demonstrate a 6% reduction in energy consumption for hybrid trains equipped with smart switching compared to those without it. In the second phase of this research, the author presents a novel technique to solve the optimisation problem of hybrid railway vehicle traction systems by utilising evolutionary and numerical optimisation techniques. The optimisation method employs a nonlinear programming solver, interpreting the problem via a non-convex function combined with an efficient Mayfly algorithm. The developed hybrid optimisation algorithm minimises traction energy while using limited power to prevent unnecessary load on power sources, ensuring their prolonged life. The algorithm takes into account linear and non-linear variables, such as velocity, acceleration, traction forces, distance, time, power, and energy, to address the hybrid railway vehicle optimisation problem, focusing on the energy-time trade-off. The optimised trajectories exhibit an average reduction of 16.85% in total energy consumption, illustrating the algorithm's effectiveness across diverse routes and conditions, with an average increase in journey times of only 0.40% and a 15.18% reduction in traction power. The algorithm achieves a well-balanced energy-time trade-off, prioritising energy efficiency without significantly impacting journey duration, a critical aspect of sustainable transportation systems. In the third phase of this thesis, the author introduced artificial neural network models to solve the optimisation problem for hybrid railway vehicles. Two ANN models, based on time-based and power-based architectures, are presented, capable of predicting optimal hybrid train trajectories. These models tackle the challenge of analysing large datasets of hybrid railway vehicles, and both demonstrate the potential for efficiently predicting hybrid train target parameters. The results indicate that both ANN models effectively predict a hybrid train's critical parameters and trajectory, with mean errors ranging from 0.19% to 0.21%. However, the cascade-forward neural network topology in the time-based architecture outperforms the feed-forward neural network topology in the power-based architecture in terms of mean squared error and maximum error. Specifically, the cascade-forward neural network topology within the time-based architecture exhibits a slightly lower MSE and maximum error than its power-based counterpart. Moreover, the study reveals the average percentage difference between the benchmark and FFNN/CFNN trajectories, highlighting that the time-based architecture exhibits lower differences (0.18% and 0.85%) compared to the power-based architecture (0.46% and 0.92%).
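
    The energy-time trade-off at the centre of the second phase can be stated, in its simplest form, as a weighted cost over a discretized trajectory, as sketched below. The weighting, units, and helper names are illustrative assumptions; the thesis optimises this trade-off with the Mayfly-based hybrid algorithm rather than a fixed weighted sum.

```python
# Toy energy-time trade-off cost for a candidate trajectory (assumed formulation).
def trajectory_cost(velocities, powers, dt, alpha=0.8):
    """Combine traction energy and journey time for a candidate trajectory.

    velocities : speed in m/s at each time step (used here only for journey time)
    powers     : traction power in kW at each time step
    dt         : time step in seconds
    alpha      : weight on energy vs. time (alpha = 1 ignores journey time)
    """
    energy_kwh = sum(p * dt for p in powers) / 3600.0   # kW * s -> kWh
    journey_time_min = len(velocities) * dt / 60.0
    return alpha * energy_kwh + (1.0 - alpha) * journey_time_min

# Example: a slower profile may use less energy but take longer; an optimiser
# searches for the profile that minimises this combined cost.
print(trajectory_cost(velocities=[0, 10, 20, 20, 10, 0],
                      powers=[0, 300, 450, 120, 0, 0], dt=30))
```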