183 research outputs found

    Brief history of natural sciences for nature-inspired computing in engineering

    The goal of the authors is the adroit integration of three mainstream disciplines of the natural sciences (physics, chemistry and biology) to create novel problem-solving paradigms. This paper presents a brief history of the development of the natural sciences and highlights some milestones that subsequently influenced many branches of science, engineering and computing, as a prelude to nature-inspired computing, which has captured the imagination of computing researchers in the past three decades. The idea is to summarize this massive body of knowledge succinctly in a single paper. The paper is organised into three main sections: developments in physics, developments in chemistry, and developments in biology. Examples of recently proposed computing approaches inspired by the three branches of the natural sciences are provided.

    Structural optimization in steel structures, algorithms and applications

    The abstract is in the attachment.

    Problem Decomposition and Adaptation in Cooperative Neuro-Evolution

    One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution, a method that decomposes the network's learnable parameters into subsets called subcomponents. Cooperative coevolution gains an advantage over other methods by evolving particular subcomponents independently from the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis proposes new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks on pattern classification tasks and recurrent networks on grammatical inference problems. Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and provide an analysis of the nature of the neural network optimization problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that generalizes to neural networks with more than a single hidden layer. We then incorporate local search into cooperative neuro-evolution, presenting a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations. The optimization process changes during evolution in terms of diversity and interacting variables; to address this, we examine adapting the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimization time, scalability and robustness. As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods to training recurrent neural networks on chaotic time series problems. The proposed methods show better performance in terms of accuracy and robustness.
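    The cooperative coevolution scheme described in the abstract — one subpopulation per subcomponent, each evolved against the current best members of the others — can be sketched as follows. This is a minimal illustrative implementation, not the thesis's actual method: the XOR task, the per-hidden-neuron decomposition, the Gaussian mutation and the "best collaborator" credit assignment are all simplifying assumptions made here for illustration.

    ```python
    import math
    import random

    random.seed(1)

    # XOR as a stand-in for the thesis's pattern-classification benchmarks.
    DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
            ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

    H = 3       # hidden neurons in a 2-H-1 feedforward network
    POP = 10    # individuals per subpopulation
    GENS = 200  # cooperative rounds

    def forward(hidden_parts, out_part, x):
        # Each hidden neuron's incoming weights and bias form one
        # subcomponent [w1, w2, b]; the output layer is a further
        # subcomponent of H weights plus a bias.
        h = [math.tanh(p[0] * x[0] + p[1] * x[1] + p[2]) for p in hidden_parts]
        s = sum(w * a for w, a in zip(out_part[:H], h)) + out_part[H]
        return 1.0 / (1.0 + math.exp(-s))  # sigmoid output

    def error(hidden_parts, out_part):
        return sum((forward(hidden_parts, out_part, x) - y) ** 2
                   for x, y in DATA)

    # One subpopulation per subcomponent: H hidden-neuron subcomponents
    # of size 3, plus one output subcomponent of size H + 1.
    sizes = [3] * H + [H + 1]
    subpops = [[[random.uniform(-1, 1) for _ in range(n)] for _ in range(POP)]
               for n in sizes]
    best = [sp[0] for sp in subpops]  # current best collaborator per subcomponent

    def fitness(i, cand):
        # A candidate for subcomponent i is scored by assembling a full
        # network from it and the best members of the other subcomponents.
        parts = best[:i] + [cand] + best[i + 1:]
        return error(parts[:H], parts[H])

    for _ in range(GENS):
        for i, sp in enumerate(subpops):  # round-robin over subcomponents
            for j, ind in enumerate(sp):
                child = [w + random.gauss(0, 0.3) for w in ind]  # Gaussian mutation
                if fitness(i, child) < fitness(i, ind):
                    sp[j] = child
            best[i] = min(sp, key=lambda ind: fitness(i, ind))

    final_err = error(best[:H], best[H])
    ```

    The key design point the abstract raises is visible in `sizes`: the choice of decomposition (here, one subcomponent per hidden neuron) determines which interacting variables are evolved together, and the thesis's contribution is precisely in choosing and adapting that grouping.
    
    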