5,477 research outputs found

    On the Combined Impact of Population Size and Sub-problem Selection in MOEA/D

    Get PDF
    This paper aims to understand and improve the working principles of decomposition-based multi-objective evolutionary algorithms. We review the design of the well-established MOEA/D framework to support the smooth integration of different strategies for sub-problem selection, while emphasizing the role of the population size and of the number of offspring created at each generation. By conducting a comprehensive empirical analysis on a wide range of multi- and many-objective combinatorial NK landscapes, we provide new insights into the combined effect of these parameters on the anytime performance of the underlying search process. In particular, we show that even a simple strategy that selects sub-problems uniformly at random outperforms existing, more sophisticated strategies. We also study the sensitivity of such strategies with respect to the ruggedness and the objective space dimension of the target problem. Comment: European Conference on Evolutionary Computation in Combinatorial Optimization, Apr 2020, Seville, Spain
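
    A minimal Python sketch of this random sub-problem selection inside a MOEA/D-style loop may make the idea concrete. The toy bi-objective problem, the weighted-sum scalarization, and all parameter values below are illustrative assumptions, not the paper's experimental setup:

    import random

    N_SUBPROBLEMS = 20   # population size = number of weight vectors
    N_BITS = 32          # length of the binary decision vector
    LAMBDA = 5           # offspring created per generation
    T = 3                # one-sided neighborhood size

    def evaluate(x):
        # Toy bi-objective trade-off: count of ones vs. count of zeros.
        return (sum(x), len(x) - sum(x))

    def scalarize(objs, w):
        # Weighted-sum scalarization of one sub-problem (to be maximized).
        return w * objs[0] + (1.0 - w) * objs[1]

    # Uniformly spread weights define the sub-problems.
    weights = [i / (N_SUBPROBLEMS - 1) for i in range(N_SUBPROBLEMS)]
    pop = [[random.randint(0, 1) for _ in range(N_BITS)]
           for _ in range(N_SUBPROBLEMS)]

    for gen in range(100):
        # Random strategy: pick LAMBDA sub-problems uniformly at random,
        # instead of a sophisticated adaptive selection rule.
        for i in random.sample(range(N_SUBPROBLEMS), LAMBDA):
            child = [b ^ (random.random() < 1.0 / N_BITS) for b in pop[i]]  # bit-flip mutation
            child_objs = evaluate(child)
            # Replace worse neighbors (neighborhood = adjacent weight indices).
            for j in range(max(0, i - T), min(N_SUBPROBLEMS, i + T + 1)):
                if scalarize(child_objs, weights[j]) > scalarize(evaluate(pop[j]), weights[j]):
                    pop[j] = child[:]

    print("balanced solution:", evaluate(max(pop, key=lambda x: scalarize(evaluate(x), 0.5))))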

    Open Problems in (Hyper)Graph Decomposition

    Full text link
    Large networks are useful in a wide range of applications, and problem instances are sometimes composed of billions of entities. Decomposing and analyzing these structures helps us gain new insights about our surroundings. Even if the final application concerns a different problem (such as traversal, or finding paths, trees, and flows), decomposing large graphs is often an important subproblem for complexity reduction or parallelization. This report summarizes discussions that took place at Dagstuhl Seminar 23331 on "Recent Trends in Graph Decomposition" and presents currently open problems and future directions in the area of (hyper)graph decomposition.

    Synergy of Physics-based Reasoning and Machine Learning in Biomedical Applications: Towards Unlimited Deep Learning with Limited Data

    Get PDF
    Technological advancements enable the collection of vast data, i.e., Big Data, in science and industry, including the biomedical field. Increased computational power allows expedient analysis of the collected data using statistical and machine-learning approaches. However, the problem of historical data incompleteness and the curse of dimensionality diminish the practical value of purely data-driven approaches, especially in biomedicine. Advancements in deep learning (DL) frameworks based on deep neural networks (DNN) have improved accuracy in image recognition, natural language processing, and other applications, yet severe data limitations and/or the absence of transfer-learning-relevant problems drastically reduce the advantages of DNN-based DL. Our earlier works demonstrate that hierarchical data representation can alternatively be implemented without neural networks, using boosting-like algorithms to utilize existing domain knowledge, tolerate significant data incompleteness, and boost the accuracy of low-complexity models within a classifier ensemble, as illustrated in physiological-data analysis. Beyond their obvious use in initial factor selection, existing simplified models are effectively employed to generate realistic synthetic data for later DNN pre-training. We review existing machine-learning approaches, focusing on limitations caused by training-data incompleteness. We then outline our hybrid framework, which leverages existing domain-expert models and knowledge, boosting-like model combination, DNN-based DL, and other machine-learning algorithms to drastically reduce training-data requirements. The application of this framework is illustrated in the context of analyzing physiological data.
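
    A short sketch of the "simple model labels synthetic data for DNN pre-training" step may be useful. The dataset, feature dimensions, and sample sizes are illustrative assumptions, and scikit-learn's gradient boosting stands in for the boosting-like ensemble described above:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    # Scarce "real" data: 50 samples, 8 features (placeholder for physiological data).
    X_real = rng.normal(size=(50, 8))
    y_real = (X_real[:, 0] + 0.5 * X_real[:, 1] > 0).astype(int)

    # A low-complexity boosted ensemble tolerates the small sample size.
    simple_model = GradientBoostingClassifier(n_estimators=25, max_depth=2)
    simple_model.fit(X_real, y_real)

    # Generate a much larger synthetic set; labels come from the simple model,
    # injecting its structure into the pre-training corpus.
    X_syn = rng.normal(size=(5000, 8))
    y_syn = simple_model.predict(X_syn)

    print("synthetic pre-training set:", X_syn.shape, "positives:", int(y_syn.sum()))
    # X_syn / y_syn would pre-train a DNN before fine-tuning on X_real / y_real.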

    Improving performance and the reliability of off-site pre-cast concrete production operations using simulation optimisation

    Get PDF
    The increased use of precast components in building and heavy civil engineering projects has led to the introduction of innovative management and scheduling systems to meet the demand for increased reliability, efficiency and cost reduction. The aim of this study is to develop an innovative crew allocation system that can efficiently allocate crews of workers to labour-intensive repetitive processes. The objective is to improve off-site precast production operations using Multi-Layered Genetic Algorithms. The Multi-Layered concept emerged in response to the modelling requirements of different sets of labour inputs. As part of the techniques used in developing the "SIM_Crew" crew allocation system, a process mapping methodology is used to model the processes of precast concrete operations and to provide the framework and input required for simulation. Process simulation is then used to model and imitate all production processes, and Genetic Algorithms are embedded within the simulation model to provide a rapid and intelligent search. A Multi-Layered chromosome stores different sets of inputs, such as crews working on different shifts and process priorities. A 'Class Interval' selection strategy is developed to improve the chance of selecting the most promising chromosomes for further investigation. Multi-Layered Dynamic crossover and mutation operators are developed to increase the randomness of the search for solutions in the solution space. The results illustrate that adopting different combinations of crews of workers has a substantial impact on labour allocation cost, which should lead to increased efficiency and lower production costs. In addition, the simulation results show that minimum throughput time, minimum process-waiting time and optimal resource utilisation profiles can be achieved compared to a real-life case study.
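
    A compact sketch may clarify the multi-layered encoding: one layer holds crew sizes per process and shift, another holds a process-priority permutation, and crossover acts on each layer separately. The fitness function and all numbers below are illustrative placeholders, not the SIM_Crew model:

    import random

    N_PROCESSES, N_SHIFTS, MAX_CREW = 4, 2, 5

    def random_chromosome():
        # Layer 1: crew size for each process on each shift.
        crews = [[random.randint(1, MAX_CREW) for _ in range(N_PROCESSES)]
                 for _ in range(N_SHIFTS)]
        # Layer 2: a priority permutation over the processes.
        return {"crews": crews,
                "priorities": random.sample(range(N_PROCESSES), N_PROCESSES)}

    def fitness(ch):
        # Placeholder cost: total labour plus a penalty for under-staffed priorities.
        labour = sum(sum(shift) for shift in ch["crews"])
        penalty = max((ch["priorities"][p] + 1) / (1 + min(s[p] for s in ch["crews"]))
                      for p in range(N_PROCESSES))
        return -(labour + 10 * penalty)  # higher is better

    def layered_crossover(a, b):
        # Cross each layer independently: crews per shift from either parent,
        # priorities recombined with a cut-and-fill that preserves the permutation.
        child = {"crews": [random.choice((a, b))["crews"][s][:] for s in range(N_SHIFTS)]}
        cut = random.randrange(N_PROCESSES)
        head = a["priorities"][:cut]
        child["priorities"] = head + [p for p in b["priorities"] if p not in head]
        return child

    pop = [random_chromosome() for _ in range(30)]
    for gen in range(50):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]  # crude stand-in for the 'Class Interval' selection
        pop = parents + [layered_crossover(random.choice(parents), random.choice(parents))
                         for _ in range(20)]

    print("best (negated cost):", fitness(max(pop, key=fitness)))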

    Component-based synthesis of motion planning algorithms

    Get PDF
    Combinatory Logic Synthesis (CLS) generates data or runnable programs according to formal type specifications. Synthesis results are composed based on a user-specified repository of components, which brings several advantages for representing spaces of high variability. This work suggests strategies to manage the resulting variations by proposing a domain-specific brute-force search and a machine-learning-based optimization procedure. The brute-force search involves the iterative generation and evaluation of machining strategies, whereas the machine-learning-based optimization uses statistical models to explore the design space. Both approaches involve synthesizing programs and meta-programs that manipulate, run, and evaluate programs. The methodologies are applied to the domain of motion planning algorithms, and they include the configuration of programs belonging to different algorithmic families. The study of the domain led to the identification of variability points and possible variations. Proof-of-concept repositories represent these variability points and incorporate them into their semantic structure. The selected algorithmic families involve specific computation steps or data structures, and corresponding software components represent possible variations. Experimental results demonstrate that CLS enables synthesis-driven domain-specific optimization procedures to solve complex problems by exploring spaces of high variability.
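
    A toy sketch of type-directed composition in the spirit of CLS: a repository maps component names to (argument types, result type), and a brute-force enumeration builds all well-typed terms of a requested type up to a depth bound. The motion-planning-flavoured repository below is an invented illustration, not the thesis's actual component repository:

    from itertools import product

    REPO = {
        "uniform_sampler":  ((), "Sampler"),
        "gaussian_sampler": ((), "Sampler"),
        "straight_steer":   ((), "Steer"),
        "rrt":              (("Sampler", "Steer"), "Planner"),
        "prm":              (("Sampler",), "Planner"),
    }

    def inhabit(target_type, depth=2):
        # Enumerate terms (nested tuples) whose result type is target_type.
        if depth < 0:
            return
        for name, (arg_types, result) in REPO.items():
            if result != target_type:
                continue
            if not arg_types:
                yield (name,)
            else:
                # Recursively inhabit each argument type, then take all combinations.
                arg_options = [list(inhabit(t, depth - 1)) for t in arg_types]
                for args in product(*arg_options):
                    yield (name,) + args

    # All syntheses of type "Planner": rrt with every sampler/steer pairing,
    # and prm with every sampler -- a small "space of high variability" to search.
    for term in inhabit("Planner"):
        print(term)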

    A new dominance relation-based evolutionary algorithm for many-objective optimization

    Get PDF
    • 
