
    NIHBA : A network interdiction approach for metabolic engineering design

    Funding Information: This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) under the project 'Synthetic Portabolomics: Leading the way at the crossroads of the Digital and the Bio Economies' (EP/N031962/1). N.K. was funded by a Royal Academy of Engineering Chair in Emerging Technology award.

    Neural Architecture Search as Multiobjective Optimization Benchmarks: Problem Formulation and Performance Assessment

    The ongoing advancements in network architecture design have led to remarkable achievements in deep learning across various challenging computer vision tasks. Meanwhile, the development of neural architecture search (NAS) has provided promising approaches to automating the design of network architectures for lower prediction error. Recently, the emerging application scenarios of deep learning have raised higher demands for network architectures that consider multiple design criteria: the number of parameters/floating-point operations and inference latency, among others. From an optimization point of view, NAS tasks involving multiple design criteria are intrinsically multiobjective optimization problems; hence, it is reasonable to adopt evolutionary multiobjective optimization (EMO) algorithms for tackling them. Nonetheless, there is still a clear gap confining related research along this pathway: on the one hand, there is a lack of a general problem formulation of NAS tasks from an optimization point of view; on the other hand, there are challenges in conducting benchmark assessments of EMO algorithms on NAS tasks. To bridge the gap: (i) we formulate NAS tasks into general multiobjective optimization problems and analyze their complex characteristics from an optimization point of view; (ii) we present an end-to-end pipeline, dubbed EvoXBench, to generate benchmark test problems for EMO algorithms to run efficiently, without the requirement of GPUs or PyTorch/TensorFlow; (iii) we instantiate two test suites comprehensively covering two datasets, seven search spaces, and three hardware devices, involving up to eight objectives. Based on the above, we validate the proposed test suites using six representative EMO algorithms and provide some empirical analyses. The code of EvoXBench is available at https://github.com/EMI-Group/EvoXBench.
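    To make the formulation above concrete, the following is a minimal sketch of a NAS task cast as a multiobjective problem over a discrete architecture encoding. It does not use the actual EvoXBench API; the search-space sizes, objective names, and surrogate formulas are illustrative placeholders standing in for the tabular or surrogate benchmarks such a pipeline would provide.

import random
from typing import List, Tuple

SEARCH_SPACE_SIZE = 10  # hypothetical number of decision variables
CHOICES_PER_VAR = 4     # hypothetical number of options per variable

def sample_architecture() -> List[int]:
    """Draw a random architecture encoding from the discrete search space."""
    return [random.randrange(CHOICES_PER_VAR) for _ in range(SEARCH_SPACE_SIZE)]

def evaluate(arch: List[int]) -> Tuple[float, float, float]:
    """Return an objective vector (error, #params, latency in ms), all minimized.
    In a real benchmark these values would come from lookup tables or surrogate
    models rather than the toy formulas below."""
    complexity = sum(arch) / (SEARCH_SPACE_SIZE * (CHOICES_PER_VAR - 1))
    error = 0.3 * (1.0 - complexity) + 0.05 * random.random()  # toy trade-off
    params = 1e6 * (0.5 + complexity)
    latency_ms = 5.0 + 20.0 * complexity
    return error, params, latency_ms

population = [sample_architecture() for _ in range(8)]
for arch in population:
    print(arch, evaluate(arch))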

    A self-organizing weighted optimization based framework for large-scale multi-objective optimization

    The solving of large-scale multi-objective optimization problems (LSMOPs) has become a hot research topic in evolutionary computation. To better address such problems, this paper proposes a self-organizing weighted-optimization-based framework, denoted S-WOF. Compared to the original framework, there are two main improvements in our work. Firstly, S-WOF simplifies the evolutionary process into a single stage, in which the evaluation budgets of the weighted optimization and the normal optimization approaches are adaptively adjusted according to the current evolutionary state. Specifically, the evaluation budget for weighted optimization (i.e., t1) is larger when the population is in the exploitation state, which aims to accelerate convergence, and shrinks when the population switches to the exploration state, where more attention is paid to diversity maintenance. The evaluation budget for the original optimization (i.e., t2) shows the opposite trend: it is small during the exploitation stage and gradually increases later. In this way, a dynamic trade-off between convergence and diversity is achieved in S-WOF. Secondly, to further improve the search ability in the large-scale decision space, an efficient competitive swarm optimizer (CSO) is implemented in S-WOF, which is effective for solving LSMOPs. Finally, the experimental results validate the superiority of S-WOF over several state-of-the-art large-scale evolutionary algorithms.
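    A minimal sketch of the budget-splitting idea described above, under stated assumptions: the names t1 and t2 follow the abstract, but the improvement-rate test and the 0.8/0.3 split are illustrative stand-ins for the self-organizing rule, not the authors' actual schedule.

from typing import Tuple

def split_budget(total_evals: int, improvement_rate: float,
                 threshold: float = 0.01) -> Tuple[int, int]:
    """Return (t1, t2): evaluations assigned to weighted optimization vs.
    normal full-dimensional optimization for the next round."""
    if improvement_rate > threshold:   # exploitation state: push convergence
        t1 = int(0.8 * total_evals)    # favour the weighted stage
    else:                              # exploration state: preserve diversity
        t1 = int(0.3 * total_evals)    # favour normal optimization
    return t1, total_evals - t1

print(split_budget(10_000, 0.05))    # -> (8000, 2000)
print(split_budget(10_000, 0.001))   # -> (3000, 7000)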

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program: the scientific program, both in overview form and in full detail, together with information on the social program, the venue, special meetings, and more.

    A Survey on Multi-Objective Neural Architecture Search

    Recently, expert-crafted neural architectures have increasingly been overtaken by neural architecture search (NAS), the automatic generation (and tuning) of network structures, which is closely related to hyperparameter optimization and automated machine learning (AutoML). While early NAS attempts optimized only prediction accuracy, multi-objective neural architecture search (MONAS) has been attracting attention; it considers additional goals such as computational complexity, power consumption, and network size, reaching a trade-off between accuracy and other criteria such as computational cost. In this paper, we present an overview of principal and state-of-the-art works in the field of MONAS. Starting from a well-categorized taxonomy and formulation for NAS, we address and correct some miscategorizations in previous surveys of the NAS field. We also provide a list of all known objectives used in MONAS, add a number of new ones, and elaborate their specifications. We analyze the most important objectives and show that the stochastic properties of some of them should be treated differently from deterministic ones in the multi-objective optimization procedure of NAS. We conclude with a number of future directions and topics in the field of MONAS.
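    One point above, that stochastic objectives should be handled differently from deterministic ones, can be illustrated with a small sketch (the helper names are illustrative and do not come from any particular MONAS method): noisy objectives such as measured latency are averaged over repeated evaluations before a Pareto-dominance comparison, while deterministic objectives such as parameter count are evaluated once.

import random
from statistics import mean
from typing import Callable, List, Sequence

def estimate_objectives(arch,
                        stochastic_objs: Sequence[Callable],
                        deterministic_objs: Sequence[Callable],
                        repeats: int = 5) -> List[float]:
    """Average noisy objectives over several runs; evaluate exact ones once."""
    noisy = [mean(f(arch) for _ in range(repeats)) for f in stochastic_objs]
    exact = [f(arch) for f in deterministic_objs]
    return noisy + exact

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Placeholder usage: latency is treated as noisy, parameter count as exact.
objs = estimate_objectives("arch-42",
                           stochastic_objs=[lambda a: 10.0 + random.random()],
                           deterministic_objs=[lambda a: 1.2e6])
print(objs, dominates([0.1, 1.0e6], [0.2, 2.0e6]))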

    Multi-Guide Particle Swarm Optimization for Large-Scale Multi-Objective Optimization Problems

    Multi-guide particle swarm optimization (MGPSO) is a novel metaheuristic for multi-objective optimization based on particle swarm optimization (PSO). MGPSO has been shown to be competitive with other state-of-the-art multi-objective optimization algorithms on low-dimensional problems. However, to the best of the author's knowledge, the suitability of MGPSO for high-dimensional multi-objective optimization problems has not been studied. One goal of this thesis is to provide a scalability study of MGPSO in order to evaluate its efficacy for high-dimensional multi-objective optimization problems. It is observed that while MGPSO has performance comparable to state-of-the-art multi-objective optimization algorithms, it experiences a performance drop as the problem dimensionality increases. Therefore, a main contribution of this work is a new scalable MGPSO-based algorithm, termed cooperative co-evolutionary multi-guide particle swarm optimization (CCMGPSO), that incorporates ideas from cooperative PSOs. A detailed empirical study on well-known benchmark problems, comparing the proposed approach with various state-of-the-art multi-objective optimization algorithms, is conducted. Results show that CCMGPSO is highly competitive for high-dimensional problems.
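    As a rough illustration of the MGPSO building block the thesis extends (the coefficient values and the per-dimension update below are illustrative defaults, not the settings used in the study): each particle's velocity blends its personal best, its sub-swarm's best, and a guide drawn from the external archive, with a per-particle lambda trading off the social and archive attractions.

import random
from typing import List

def update_velocity(v: List[float], x: List[float], pbest: List[float],
                    sbest: List[float], archive_guide: List[float],
                    w: float = 0.7, c1: float = 1.5, c2: float = 1.5,
                    c3: float = 1.5, lam: float = 0.5) -> List[float]:
    """One MGPSO-style velocity update per dimension: inertia + cognitive term
    + lambda-weighted social term + (1 - lambda)-weighted archive term."""
    new_v = []
    for i in range(len(x)):
        r1, r2, r3 = random.random(), random.random(), random.random()
        cognitive = c1 * r1 * (pbest[i] - x[i])
        social = lam * c2 * r2 * (sbest[i] - x[i])
        archive = (1.0 - lam) * c3 * r3 * (archive_guide[i] - x[i])
        new_v.append(w * v[i] + cognitive + social + archive)
    return new_v

    In a cooperative co-evolutionary variant along the lines described above, the decision vector would additionally be partitioned across cooperating sub-populations, each applying such an update to its own subset of dimensions.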

    APPLICATION OF EVOLUTIONARY MULTI-OBJECTIVE OPTIMIZATION ALGORITHM TO DYNAMIC BINARY NEURAL NETWORKS

    This paper studies the application of an evolutionary multi-objective optimization algorithm to dynamic binary neural networks. A dynamic binary neural network is characterized by the signum activation function and ternary connection parameters. Depending on the connection parameters, the network can generate various binary periodic orbits. In order to evaluate performance, we consider a bi-objective problem corresponding to the stability of the binary periodic orbits and the sparsity of the connection parameters. Whereas uni-objective optimization problems require the optimization of only one objective, multi-objective optimization problems require the simultaneous optimization of multiple objectives. To optimize the bi-objective problem, we present a multi-objective evolutionary algorithm based on decomposition, which decomposes the bi-objective problem into multiple scalar subproblems and can optimize the problem effectively. Elementary numerical experiments on typical examples of binary periodic orbits confirm that the algorithm realizes both strong orbit stability and appropriate connection sparsity, and that it outperforms another algorithm based on Lasso regularization.
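    A minimal sketch of the decomposition idea referred to above, under stated assumptions: the Tchebycheff scalarization and uniform weight vectors are standard decomposition ingredients, and the objective values below are placeholders rather than the paper's actual stability and sparsity measures.

from typing import Sequence

def tchebycheff(objs: Sequence[float], weights: Sequence[float],
                ideal: Sequence[float]) -> float:
    """Scalarize an objective vector for one subproblem (both objectives minimized)."""
    return max(w * abs(f - z) for f, w, z in zip(objs, weights, ideal))

# Decompose the bi-objective problem into N scalar subproblems via uniform weights.
N = 11
weight_vectors = [(i / (N - 1), 1.0 - i / (N - 1)) for i in range(N)]

# Placeholder candidate: (orbit instability, connection density), both to be minimized.
ideal_point = (0.0, 0.0)
candidate_objs = (0.2, 0.6)
print([round(tchebycheff(candidate_objs, w, ideal_point), 3) for w in weight_vectors])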