21 research outputs found

    EvoFed: Leveraging Evolutionary Strategies for Communication-Efficient Federated Learning

    Federated Learning (FL) is a decentralized machine learning paradigm that enables collaborative model training across dispersed nodes without requiring individual nodes to share data. However, its broad adoption is hindered by the high communication costs of transmitting a large number of model parameters. This paper presents EvoFed, a novel approach that integrates Evolutionary Strategies (ES) with FL to address these challenges. EvoFed employs a concept of 'fitness-based information sharing', deviating significantly from the conventional model-based FL. Rather than exchanging the actual updated model parameters, each node transmits a distance-based similarity measure between the locally updated model and each member of the noise-perturbed model population. Each node, as well as the server, generates an identical population set of perturbed models in a completely synchronized fashion using the same random seeds. With properly chosen noise variance and population size, perturbed models can be combined to closely reflect the actual model updated using the local dataset, allowing the transmitted similarity measures (or fitness values) to carry nearly the complete information about the model parameters. As the population size is typically much smaller than the number of model parameters, the savings in communication load are large. The server aggregates these fitness values and is able to update the global model. This global fitness vector is then disseminated back to the nodes, each of which applies the same update to be synchronized to the global model. Our analysis shows that EvoFed converges, and our experimental results validate that at the cost of increased local processing loads, EvoFed achieves performance comparable to FedAvg while reducing overall communication requirements drastically in various practical settings.
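    The fitness-based sharing mechanism described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the population size, noise scale, and the softmax-style weighting used to recombine the perturbations are all assumptions made for the sketch; the key point is that only `pop_size` fitness values cross the network instead of the full parameter vector.

    ```python
    import numpy as np

    def evofed_round(global_model, local_update, pop_size=8, sigma=0.1, seed=0):
        """One node's contribution in an EvoFed-style round (illustrative sketch).

        Instead of transmitting local_update (dimension d), the node transmits
        pop_size fitness values: negative distances between the local update and
        each member of a noise-perturbed population generated from a shared seed.
        """
        d = global_model.size
        rng = np.random.default_rng(seed)      # same seed on every node and the server
        perturbations = rng.normal(0.0, sigma, size=(pop_size, d))
        # Fitness: similarity (negative distance) of each perturbed model to the update.
        fitness = -np.linalg.norm(perturbations - local_update, axis=1)
        return fitness                          # only pop_size numbers are transmitted

    def server_reconstruct(global_model, fitness, pop_size=8, sigma=0.1, seed=0):
        """Server regenerates the identical population from the shared seed and
        combines its members, weighted by normalised fitness, to approximate
        the node's local update (softmax weighting is an assumption here)."""
        d = global_model.size
        rng = np.random.default_rng(seed)
        perturbations = rng.normal(0.0, sigma, size=(pop_size, d))
        weights = np.exp(fitness - fitness.max())
        weights /= weights.sum()
        approx_update = weights @ perturbations
        return global_model + approx_update
    ```

    With a 100-dimensional model and a population of 8, the node sends 8 numbers rather than 100, which is the source of the communication savings the abstract describes.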

    Computational and Exploratory Landscape Analysis of the GKLS Generator

    The GKLS generator is one of the most widely used testbeds for benchmarking global optimization algorithms. In this paper, we conduct both a computational analysis and the Exploratory Landscape Analysis (ELA) of the GKLS generator. We utilize both canonically used and newly generated classes of GKLS-generated problems and show their use in benchmarking three state-of-the-art methods (from the evolutionary and deterministic communities) in dimensions 5 and 10. We show that the GKLS generator produces "needle in a haystack" type problems that become extremely difficult to optimize in higher dimensions. Furthermore, we conduct the ELA on the GKLS generator, compare it to the ELA of two other widely used benchmark sets (BBOB and CEC 2014), and discuss the meaningfulness of the results.

    Hardest Monotone Functions for Evolutionary Algorithms

    The study of hardest and easiest fitness landscapes is an active area of research. Recently, Kaufmann, Larcher, Lengler and Zou conjectured that for the self-adjusting (1,λ)-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize. We introduce the function Switching Dynamic BinVal (SDBV), which coincides with ADBV whenever the number of remaining zeros in the search point is strictly less than n/2, where n denotes the dimension of the search space. We show, using a combinatorial argument, that for the (1+1)-EA with any mutation rate p ∈ [0,1], SDBV is drift-minimizing among the class of dynamic monotone functions. Our construction provides the first explicit example of an instance of the partially-ordered evolutionary algorithm (PO-EA) model with parameterized pessimism introduced by Colin, Doerr and Férey, building on work of Jansen. We further show that the (1+1)-EA optimizes SDBV in Θ(n^{3/2}) generations. Our simulations demonstrate matching runtimes for both static and self-adjusting (1,λ)- and (1+λ)-EA. We further show, using an example of fixed dimension, that drift-minimization does not equal maximal runtime.
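    For readers unfamiliar with the algorithm under analysis, a minimal (1+1)-EA can be sketched as follows. Since SDBV is a dynamic function defined in the paper, this sketch substitutes the standard static monotone function OneMax; the bitwise mutation and elitist acceptance are the textbook scheme, and the default mutation rate p = 1/n is a conventional choice, not taken from the paper.

    ```python
    import random

    def one_plus_one_ea(n=50, p=None, max_gens=200_000, seed=0):
        """(1+1)-EA on OneMax (counting ones) as a stand-in monotone function.

        Each generation flips every bit independently with probability p and
        accepts the offspring if it is at least as fit (elitist selection).
        Returns the generation at which the optimum was reached, or max_gens.
        """
        rng = random.Random(seed)
        p = p if p is not None else 1.0 / n     # conventional mutation rate
        x = [rng.randint(0, 1) for _ in range(n)]
        fitness = sum(x)
        for gen in range(1, max_gens + 1):
            y = [b ^ (rng.random() < p) for b in x]   # flip each bit w.p. p
            fy = sum(y)
            if fy >= fitness:                   # (1+1) elitist acceptance
                x, fitness = y, fy
            if fitness == n:
                return gen
        return max_gens
    ```

    On OneMax this runs in O(n log n) expected generations; the paper's Θ(n^{3/2}) bound for SDBV shows how much harder a drift-minimizing monotone function can be for the same algorithm.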

    Understanding Trade-offs in Stellarator Design with Multi-objective Optimization

    In designing stellarators, any design decision ultimately comes with a trade-off. Improvements in particle confinement, for instance, may increase the burden on engineers to build more complex coils, and the tightening of financial constraints may simplify the design and worsen some aspects of transport. Understanding trade-offs in stellarator designs is critical to designing high-performance devices that satisfy the multitude of physical, engineering, and financial criteria. In this study, we show how multi-objective optimization (MOO) can be used to investigate trade-offs and develop insight into the role of design parameters. We discuss the basics of MOO, as well as practical solution methods for solving MOO problems. We apply these methods to bring insight into the selection of two common design parameters: the aspect ratio of an ideal magnetohydrodynamic equilibrium, and the total length of the electromagnetic coils.
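    One standard practical method for tracing trade-off curves like these is weighted-sum scalarization. The sketch below applies it to a toy bi-objective problem; the two quadratic objectives are assumptions standing in for competing design criteria (say, a transport metric versus coil complexity), not stellarator physics.

    ```python
    import numpy as np

    # Toy competing objectives: f1 is minimised at x = 1, f2 at x = -1,
    # so no single x minimises both -- every solution is a trade-off.
    def f1(x):
        return (x - 1.0) ** 2

    def f2(x):
        return (x + 1.0) ** 2

    def weighted_sum_front(n_weights=11, grid=None):
        """Trace an approximate Pareto front by sweeping the weight w in the
        scalarization w*f1 + (1-w)*f2 and minimising over a 1-D grid."""
        if grid is None:
            grid = np.linspace(-2.0, 2.0, 401)
        front = []
        for w in np.linspace(0.0, 1.0, n_weights):
            scores = w * f1(grid) + (1.0 - w) * f2(grid)
            x_best = grid[np.argmin(scores)]
            front.append((f1(x_best), f2(x_best)))
        return front
    ```

    Each weight yields one Pareto-optimal point; sweeping w from 0 to 1 moves along the front from the f2-optimal design to the f1-optimal design, exposing the trade-off explicitly. (Weighted sums recover only convex parts of a front; methods like epsilon-constraint handle the general case.)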

    Hybrid linkage learning for permutation optimization with Gene-pool optimal mixing evolutionary algorithms

    Linkage learning techniques are employed to discover dependencies between problem variables. This knowledge can then be leveraged in an Evolutionary Algorithm (EA) to improve the optimization process. Of particular interest is the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) family, which has been shown to exploit linkage effectively. Recently, Empirical Linkage Learning (ELL) techniques were proposed for binary-encoded problems. While these techniques are computationally expensive, they have the benefit of never reporting spurious dependencies (false linkages), i.e., marking two independent variables as being dependent. However, previous research shows that despite this property, for some problems it is more suitable to employ the more commonly used Statistical-based Linkage Learning (SLL) techniques. Therefore, we propose to use both ELL and SLL in the form of Hybrid Linkage Learning (HLL). We also propose (for the first time) a variant of ELL for permutation problems. Using a wide range of problems and different GOMEA variants, we find that for permutation problems, too, ELL is more advantageous in some cases while SLL is more advantageous in others. However, we also find that employing the proposed HLL leads to results that are better than or equal to those obtained with SLL for all the considered problems.

    An agent-based model of hierarchic genetic search

    An effective exploration of a large search space by single-population genetic-based metaheuristics may be a very time-consuming and complex process, especially in the case of dynamic changes in the system states. Speeding up the search process by parallelising the metaheuristic may have a significant negative impact on the search accuracy. There is still a lack of complete formal models for parallel genetic and evolutionary techniques, which might support parameter setting and improve the management of the whole (often very complex) structure. In this paper, we define a mathematical model of Hierarchical Genetic Search (HGS) based on the genetic multi-agent system paradigm. The model has a decentralised population management mechanism, and the relationship among the parallel genetic processes has a multi-level tree structure. Each process in this tree is of Markov type, and conditions for the commutation of the Markovian kernels in the HGS branches are formulated.

    Explainable Predictive Maintenance

    Explainable Artificial Intelligence (XAI) fills the role of a critical interface fostering interactions between sophisticated intelligent systems and diverse individuals, including data scientists, domain experts, end-users, and more. It aids in deciphering the intricate internal mechanisms of "black box" Machine Learning (ML), rendering the reasons behind their decisions more understandable. However, current research in XAI primarily focuses on two aspects: ways to facilitate user trust, or to debug and refine the ML model. The majority of it falls short of recognising the diverse types of explanations needed in broader contexts, as different users and varied application areas necessitate solutions tailored to their specific needs. One such domain is Predictive Maintenance (PdM), an exploding area of research under the Industry 4.0 & 5.0 umbrella. This position paper highlights the gap between existing XAI methodologies and the specific requirements for explanations within industrial applications, particularly the Predictive Maintenance field. Despite explainability's crucial role, this subject remains a relatively under-explored area, making this paper a pioneering attempt to bring relevant challenges to the research community's attention. We provide an overview of predictive maintenance tasks and accentuate the need and varying purposes for corresponding explanations. We then list and describe XAI techniques commonly employed in the literature, discussing their suitability for PdM tasks. Finally, to make the ideas and claims more concrete, we demonstrate XAI applied in four specific industrial use cases: commercial vehicles, metro trains, steel plants, and wind farms, spotlighting areas requiring further research.

    Large neighbourhood search with adaptive guided ejection search for the pickup and delivery problem with time windows

    An effective and fast hybrid metaheuristic is proposed for solving the pickup and delivery problem with time windows. The proposed approach combines local search, large neighbourhood search and guided ejection search in a novel way to exploit the benefits of each method. The local search component uses a novel neighbourhood operator. A streamlined implementation of large neighbourhood search is used to achieve an effective balance between intensification and diversification. The adaptive ejection chain component perturbs the solution and uses increased or decreased computation time according to the progress of the search. While the local search and large neighbourhood search focus on minimising travel distance, the adaptive ejection chain seeks to reduce the number of routes. The proposed algorithm design results in an effective and fast solution method that finds a large number of new best-known solutions on a well-known benchmark data set. Experiments are also performed to analyse the benefits of the individual components and heuristics, and of their combined use, in order to better understand how to tackle the subject problem.
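    The destroy-and-repair loop at the heart of large neighbourhood search can be sketched on a toy routing instance. This is a generic LNS skeleton on a plain travelling-salesman tour, not the paper's pickup-and-delivery operators; the fixed removal size, greedy cheapest-insertion repair, and improving-only acceptance are simplifying assumptions made for the sketch.

    ```python
    import math
    import random

    def tour_length(tour, coords):
        """Total length of a closed tour over 2-D coordinates."""
        return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def greedy_insert(partial, removed, coords):
        """Repair operator: reinsert each removed node at its cheapest position."""
        tour = list(partial)
        for node in removed:
            best_pos, best_cost = 0, float("inf")
            for i in range(len(tour) + 1):
                cand = tour[:i] + [node] + tour[i:]
                cost = tour_length(cand, coords)
                if cost < best_cost:
                    best_pos, best_cost = i, cost
            tour.insert(best_pos, node)
        return tour

    def lns(coords, iters=200, destroy_k=3, seed=0):
        """Large neighbourhood search: repeatedly destroy (remove a few nodes)
        and repair (greedy reinsertion), keeping only improving solutions."""
        rng = random.Random(seed)
        best = list(range(len(coords)))
        best_len = tour_length(best, coords)
        for _ in range(iters):
            removed = rng.sample(best, destroy_k)            # destroy
            partial = [n for n in best if n not in removed]
            cand = greedy_insert(partial, removed, coords)   # repair
            cand_len = tour_length(cand, coords)
            if cand_len < best_len:                          # accept if improving
                best, best_len = cand, cand_len
        return best, best_len
    ```

    Removing and reinserting several nodes at once explores a much larger neighbourhood than single-move local search, which is the intensification/diversification balance the abstract refers to; a full solver would add time-window feasibility checks and pairing constraints on pickup/delivery nodes.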