97 research outputs found

    Scalable parallel evolutionary optimisation based on high performance computing

    Evolutionary algorithms (EAs) have been successfully applied to solve various challenging optimisation problems. Due to their stochastic nature, EAs typically require considerable time to find desirable solutions, especially for increasingly complex and large-scale problems. As a result, many works have studied implementing EAs on parallel computing facilities to accelerate these time-consuming processes. Recently, the rapid development of modern parallel computing facilities such as high performance computing (HPC) brings not only unprecedented computational capabilities but also challenges in designing parallel algorithms. This thesis focuses on designing scalable parallel evolutionary optimisation (SPEO) frameworks that run efficiently on HPC. Motivated by the observation that many EAs now employ increasingly large population sizes, this thesis first studies the effect of a large population size through comprehensive experiments. Numerical results indicate that a large population benefits the solving of complex problems but requires a large budget of fitness evaluations (FEs). Since sequential EAs usually require considerable computing time to perform that many FEs, we propose a scalable parallel evolutionary optimisation framework that can efficiently deploy parallel EAs over many CPU cores on CPU-only HPC. Furthermore, since EAs using a large number of FEs produce a wealth of useful information in the course of evolution, we design a surrogate-based approach that learns from this historical information to better solve complex problems; this approach is then parallelised on the proposed framework to achieve remarkable speedups. Because demanding great computing power on CPU-only HPC is usually very expensive, we also design a framework based on GPU-enabled HPC to improve the cost-effectiveness of parallel EAs. The proposed framework can efficiently accelerate parallel EAs using many GPUs and achieves superior cost-effectiveness. However, since correctly implementing parallel EAs on the GPU is very challenging, we propose a set of guidelines to verify the correctness of GPU-based EAs; these guidelines are then used to verify a GPU-based brain storm optimisation that is also proposed in this thesis. In conclusion, a comprehensive experimental study is first conducted to investigate the impacts of a large population; a SPEO framework based on CPU-only HPC is then proposed and employed to accelerate a time-consuming EA implementation; finally, the correctness verification of EAs implemented on a single GPU is discussed and the SPEO framework is extended to GPU-enabled HPC.
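    The abstract gives no implementation details, so the following is only a rough, hypothetical sketch of one common way to spread an EA over many CPU cores, namely an island model with ring migration; the benchmark fitness, function names, and parameters below are invented for illustration and are not the thesis's SPEO framework.

```python
# Illustrative island-model parallel EA (not the thesis's SPEO framework).
# Each island evolves its own population on a separate CPU core; after every
# epoch the best individual of each island migrates to its ring neighbour.
import numpy as np
from multiprocessing import Pool

DIM = 30

def sphere(x):
    """Benchmark fitness: smaller is better."""
    return float(np.sum(x * x))

def evolve_island(args):
    """Run a simple (mu + lambda) evolution strategy for one epoch."""
    population, generations, seed = args
    rng = np.random.default_rng(seed)
    mu = len(population)
    for _ in range(generations):
        # Each parent produces one Gaussian-mutated offspring.
        offspring = population + rng.normal(0.0, 0.1, population.shape)
        merged = np.vstack([population, offspring])
        fitness = np.apply_along_axis(sphere, 1, merged)
        population = merged[np.argsort(fitness)[:mu]]  # survivor selection
    return population  # sorted: population[0] is the island's best

def run_parallel_ea(n_islands=8, island_size=50, epochs=10, gens_per_epoch=20):
    rng = np.random.default_rng(0)
    islands = [rng.uniform(-5, 5, (island_size, DIM)) for _ in range(n_islands)]
    with Pool(processes=n_islands) as pool:
        for epoch in range(epochs):
            args = [(isl, gens_per_epoch, epoch * n_islands + i)
                    for i, isl in enumerate(islands)]
            islands = pool.map(evolve_island, args)
            # Ring migration: island i receives the best of island i-1,
            # replacing its own worst individual.
            bests = [isl[0].copy() for isl in islands]
            for i, isl in enumerate(islands):
                isl[-1] = bests[(i - 1) % n_islands]
    return min(sphere(isl[0]) for isl in islands)

if __name__ == "__main__":
    print("best fitness:", run_parallel_ea())
```

    Keeping communication down to one migrant per island per epoch is one simple way to let the islands scale over many cores without synchronisation becoming the bottleneck.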

    Radar Target Classification Using an Evolutionary Extreme Learning Machine Based on Improved Quantum-Behaved Particle Swarm Optimization

    A novel evolutionary extreme learning machine (ELM) based on improved quantum-behaved particle swarm optimization (IQPSO) for radar target classification is presented in this paper. Quantum-behaved particle swarm optimization (QPSO) has been used with ELM to address the problem that ELM needs more hidden nodes than conventional tuning-based learning algorithms because of its randomly assigned input weights and hidden biases. However, the method QPSO uses to calculate the characteristic length of the Delta potential well may reduce the global search ability of the algorithm. To address this issue, a new method for calculating the characteristic length of the Delta potential well is proposed in this paper. Experimental results on benchmark functions show that IQPSO outperforms QPSO in most cases. The new algorithm is also evaluated on real-world datasets and radar data; the results indicate that the proposed algorithm is more effective than BP, SVM, ELM, QPSO-ELM, and other methods in terms of real-time performance and accuracy.
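    For context, the minimal sketch below shows where the characteristic length L of the Delta potential well enters a standard QPSO position update. The paper's improved way of computing L is not described in the abstract and is not reproduced here; the mean-best formulation and all parameter names are assumptions.

```python
# One standard QPSO position update, using the common mean-best (mbest)
# formulation of the characteristic length L (not the paper's improved one).
import numpy as np

def qpso_step(positions, pbest, gbest, beta, rng):
    """positions, pbest: (n_particles, dim) arrays; gbest: (dim,) array."""
    n, dim = positions.shape
    mbest = pbest.mean(axis=0)                        # mean of personal bests
    phi = rng.random((n, dim))
    local_attractor = phi * pbest + (1 - phi) * gbest
    L = 2.0 * beta * np.abs(mbest - positions)        # characteristic length
    u = 1.0 - rng.random((n, dim))                    # uniform in (0, 1]
    sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
    return local_attractor + sign * (L / 2.0) * np.log(1.0 / u)
```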

    Design and implementation of machine learning techniques for modeling and managing battery energy storage systems

    The fast technological evolution and industrialization that have affected humankind since the 1950s have caused a progressive and exponential increase in CO2 emissions and in the Earth's temperature. The research community and political authorities have therefore recognized the need for a deep technological revolution in both transportation and energy distribution systems to counter climate change. Pure and hybrid electric powertrains, smart grids, and microgrids are thus key technologies for achieving the expected goals. Nevertheless, the development of these technologies requires very effective and high-performing Battery Energy Storage Systems (BESSs), and even more effective Battery Management Systems (BMSs). Against this background, this Ph.D. thesis has focused on the development of an innovative and advanced BMS that uses machine learning techniques to improve BESS effectiveness and efficiency. Particular attention has been paid to the State of Charge (SoC) estimation problem, investigating solutions for more accurate and reliable estimates. To this end, the main contribution has been the development of accurate and flexible models of electrochemical cells. Three main modeling requirements have been pursued to ensure accurate SoC estimation: insight into the cell physics, nonlinear approximation capability, and flexible system identification procedures. The research activity has addressed these requirements by developing and investigating three different modeling approaches, namely black, white, and gray box techniques. Extreme Learning Machines, Radial Basis Function Neural Networks, and Wavelet Neural Networks were considered among the black box models, but none of them achieved satisfactory SoC estimation performance. The white box Equivalent Circuit Models (ECMs) achieved better results, proving the benefit that insight into the cell physics brings to the SoC estimation task. Nevertheless, it became clear that the linearity of ECMs limits their effectiveness for SoC estimation. Thus, the gray box Neural Networks Ensemble (NNE) and the white box Equivalent Neural Networks Circuit (ENNC) models have been developed to exploit neural network theory to obtain accurate models, while ensuring very flexible system identification procedures together with nonlinear approximation capabilities. The performances of NNE and ENNC have been compelling. In particular, the white box ENNC has achieved the most effective performance, delivering accurate SoC estimation together with a simple architecture and a flexible system identification procedure. The outcome of this thesis enables an interesting scenario in which a suitable cloud framework provides remote assistance to several BMSs, adapting the management algorithms to the aging of BESSs, even across different and distinct applications.
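    As a point of reference for the ECM baseline mentioned above, the sketch below implements a minimal first-order Thevenin equivalent circuit model with coulomb-counting SoC. All parameter values and the OCV curve are placeholders, not values identified from the thesis's cell data.

```python
# Minimal first-order Thevenin equivalent circuit model (ECM) of a cell:
# coulomb counting for SoC plus one RC branch for the dynamic voltage response.
import numpy as np

R0, R1, C1 = 0.015, 0.020, 2500.0     # placeholder ohmic / polarisation values
CAPACITY_AH = 2.5                     # placeholder nominal cell capacity

def ocv(soc):
    """Placeholder open-circuit-voltage curve (V) as a function of SoC."""
    return 3.0 + 1.2 * soc - 0.3 * (1 - soc) ** 2

def simulate(current, dt, soc0=1.0):
    """current: discharge-positive current samples (A); dt: time step (s)."""
    soc, v_rc = soc0, 0.0
    a = np.exp(-dt / (R1 * C1))                       # RC discretisation factor
    socs, voltages = [], []
    for i in current:
        soc -= i * dt / (3600.0 * CAPACITY_AH)        # coulomb counting
        v_rc = a * v_rc + R1 * (1 - a) * i            # RC branch update
        voltages.append(ocv(soc) - v_rc - R0 * i)     # terminal voltage
        socs.append(soc)
    return np.array(socs), np.array(voltages)

socs, volts = simulate(current=np.full(600, 1.25), dt=1.0)
print(f"SoC after 10 min at 1.25 A: {socs[-1]:.3f}")
```

    The model is linear in its parameters apart from the OCV curve, which is exactly the limitation the thesis attributes to ECMs for SoC estimation.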

    Evolutionary Computation

    This book presents several recent advances in Evolutionary Computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. The book also presents new algorithms based on several analogies and metaphors, one of which is drawn from philosophy, specifically the philosophy of praxis and dialectics. It also presents interesting applications in bioinformatics, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. The book therefore features representative work in the field of evolutionary computation and applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.

    Voting based double-weighted deterministic extreme learning machine model and its application

    This study introduces an intelligent learning model for classification tasks, termed the voting-based Double Pseudo-inverse Extreme Learning Machine (V-DPELM) model. Because the traditional method is affected by the randomly assigned input-layer weights and hidden-layer biases, it needs an excessively large number of hidden-layer neurons and its performance is unstable. The V-DPELM model proposed in this paper greatly alleviates these limitations of traditional models through its direct determination of the weight structure and its voting-mechanism strategy. Through extensive simulations on various real-world classification datasets, we observe a marked improvement in classification accuracy when comparing the V-DPELM algorithm to traditional V-ELM methods. Notably, when applied to machine recognition of breast tumors, the V-DPELM method demonstrates superior classification accuracy, positioning it as a valuable tool in machine-assisted breast tumor diagnosis models.
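    For orientation, the sketch below implements a plain voting ELM (V-ELM) baseline of the kind V-DPELM is compared against: each ensemble member solves its output weights with a single pseudo-inverse, and the class label is decided by majority vote. The double pseudo-inverse weight determination of V-DPELM itself is not described in the abstract and is not reproduced here; all names and hyperparameters are illustrative.

```python
# Baseline voting ELM (V-ELM): random hidden weights, pseudo-inverse output
# weights, majority vote over ensemble members.  X and y are NumPy arrays.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class VotingELM:
    def __init__(self, n_members=7, n_hidden=50, seed=0):
        self.n_members, self.n_hidden = n_members, n_hidden
        self.rng = np.random.default_rng(seed)
        self.members = []

    def fit(self, X, y):
        classes = np.unique(y)
        T = (y[:, None] == classes[None, :]).astype(float)    # one-hot targets
        for _ in range(self.n_members):
            W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            b = self.rng.normal(size=self.n_hidden)
            H = sigmoid(X @ W + b)                 # random hidden-layer output
            beta = np.linalg.pinv(H) @ T           # least-squares output weights
            self.members.append((W, b, beta))
        self.classes = classes
        return self

    def predict(self, X):
        votes = np.zeros((X.shape[0], len(self.classes)))
        for W, b, beta in self.members:
            scores = sigmoid(X @ W + b) @ beta
            votes[np.arange(X.shape[0]), scores.argmax(axis=1)] += 1
        return self.classes[votes.argmax(axis=1)]

# Toy usage on random two-class data.
X = np.random.default_rng(1).normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print("train accuracy:", (VotingELM().fit(X, y).predict(X) == y).mean())
```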

    Energy Efficient Neocortex-Inspired Systems with On-Device Learning

    Shifting compute workloads from the cloud toward edge devices can significantly improve the overall latency for inference and learning. On the other hand, this paradigm shift exacerbates the resource constraints on the edge devices. Neuromorphic computing architectures, inspired by neural processes, are natural substrates for edge devices. They offer co-located memory, in-situ training, energy efficiency, high memory density, and compute capacity in a small form factor. Owing to these features, there has recently been a rapid proliferation of hybrid CMOS/memristor neuromorphic computing systems. However, most of these systems offer limited plasticity, target either spatial or temporal input streams, and have not been demonstrated on large-scale heterogeneous tasks. There is a critical knowledge gap in designing scalable neuromorphic systems that can support hybrid plasticity for spatio-temporal input streams on edge devices. This research proposes Pyragrid, a low-latency and energy-efficient neuromorphic computing system for processing spatio-temporal information natively on the edge. Pyragrid is a full-scale custom hybrid CMOS/memristor architecture with analog computational modules and an underlying digital communication scheme. Pyragrid is designed for hierarchical temporal memory, a biomimetic sequence memory algorithm inspired by the neocortex. It features a novel synthetic synapse representation that enables dynamic synaptic pathways with reduced memory usage and interconnects. The dynamic growth of synaptic pathways is emulated in the physical behavior of the memristor device, while synaptic modulation is enabled through a custom training scheme optimized for area and power. Pyragrid features data reuse, in-memory computing, and event-driven sparse local computing to reduce data movement by ~44x and to improve system throughput and power efficiency by ~3x and ~161x over a custom CMOS digital design. The innate sparsity in Pyragrid results in overall robustness to noise and device failure, particularly when processing visual input and predicting time-series sequences. Porting the proposed system to edge devices can enhance their computational capability, response time, and battery life.
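    As a toy, software-only illustration of the sparse, event-driven sequence-memory idea behind hierarchical temporal memory: inputs are sparse binary vectors, only active-bit transitions are updated, and prediction recalls the most frequent successors. This is far removed from the memristor hardware described above; the encoder, sizes, and learning rule are invented for illustration.

```python
# Toy sparse sequence memory in the spirit of HTM (not the Pyragrid design).
import numpy as np

N_BITS, ACTIVE_BITS = 256, 10

def encode(symbol):
    """Toy encoder: a sparse random binary vector (SDR), stable within a run."""
    rng = np.random.default_rng(abs(hash(symbol)) % (2**32))
    sdr = np.zeros(N_BITS, dtype=bool)
    sdr[rng.choice(N_BITS, ACTIVE_BITS, replace=False)] = True
    return sdr

class ToySequenceMemory:
    def __init__(self):
        # transition[i, j] counts how often bit j was active right after bit i.
        self.transition = np.zeros((N_BITS, N_BITS), dtype=np.int32)

    def learn(self, sequence):
        sdrs = [encode(s) for s in sequence]
        for prev, nxt in zip(sdrs, sdrs[1:]):
            # Event-driven update: only active-bit pairs are touched.
            self.transition[np.ix_(prev, nxt)] += 1

    def predict_bits(self, symbol):
        votes = self.transition[encode(symbol)].sum(axis=0)
        return np.sort(np.argsort(votes)[-ACTIVE_BITS:])  # most-voted next bits

memory = ToySequenceMemory()
memory.learn(list("ABCDABCD"))
print("predicted active bits after 'A':", memory.predict_bits("A"))
```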