    DEFEG: deep ensemble with weighted feature generation.

    With the significant breakthroughs of Deep Neural Networks in recent years, multi-layer architectures have influenced other sub-fields of machine learning, including ensemble learning. In 2017, Zhou and Feng introduced a deep random forest called gcForest that involves several layers of Random Forest-based classifiers. Although gcForest has outperformed several benchmark algorithms on specific datasets in terms of classification accuracy and model complexity, its input features do not ensure better performance when going deeper through the layer-by-layer architecture. We address this limitation by introducing a deep ensemble model with a novel feature generation module. Unlike gcForest, where the original features are concatenated to the outputs of classifiers to generate the input features for the subsequent layer, we integrate weights on the classifiers' outputs as augmented features to grow the deep model. The use of weights in the feature generation process can adjust the input data of each layer, leading to better results for the deep model. We encode the weights using variable-length encoding and develop a variable-length Particle Swarm Optimisation method to search for the optimal values of the weights by maximising the classification accuracy on the validation data. Experiments on a number of UCI datasets confirm the benefit of the proposed method compared to some well-known benchmark algorithms.
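    As a rough illustration of the idea (not the authors' implementation), the sketch below contrasts gcForest-style concatenation with weighted feature generation; the classifier outputs, weight values, and dimensions are all placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one layer's classifier outputs: class-probability
# vectors from two hypothetical base classifiers (purely illustrative).
n_samples, n_classes = 5, 3
original_features = rng.random((n_samples, 4))
clf_outputs = [rng.dirichlet(np.ones(n_classes), size=n_samples) for _ in range(2)]

# gcForest-style concatenation: raw outputs appended to the original features.
gcforest_input = np.hstack([original_features] + clf_outputs)

# DEFEG-style generation: each classifier's output is scaled by a weight
# before concatenation; in the paper the weights are tuned by variable-length
# PSO to maximise validation accuracy (the values below are placeholders).
weights = np.array([0.8, 0.3])
defeg_input = np.hstack([original_features] + [w * o for w, o in zip(weights, clf_outputs)])

print(gcforest_input.shape, defeg_input.shape)  # → (5, 10) (5, 10)
```

    Both schemes produce input of the same shape for the next layer; only the scaling of the augmented features differs.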

    A Survey on Evolutionary Computation for Computer Vision and Image Analysis: Past, Present, and Future Trends

    Computer vision (CV) is a large and important field in artificial intelligence covering a wide range of applications. Image analysis is a major task in CV, aiming to extract, analyse and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variations across images, high dimensionality, domain expertise requirements, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant achievements. However, there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this paper provides a comprehensive survey covering all essential EC approaches to important image analysis tasks, including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. This survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research field are also discussed and summarised to provide further guidelines and opportunities for future research.

    Simulation, optimization and instrumentation of agricultural biogas plants

    During the last two decades, the production of renewable energy by anaerobic digestion (AD) in biogas plants has become increasingly popular due to its applicability to a great variety of organic material, from energy crops and animal waste to the organic fraction of Municipal Solid Waste (MSW), and to the relative simplicity of AD plant designs. Thus, a whole new biogas market emerged in Europe, which is strongly supported by European and national funding and remuneration schemes. Nevertheless, stable and efficient operation and control of biogas plants can be challenging due to the high complexity of the biochemical AD process, varying substrate quality and a lack of reliable online instrumentation. In addition, governmental support for biogas plants will decrease in the long run and the substrate market will become highly competitive. The principal aim of the research presented in this thesis is to achieve a substantial improvement in the operation of biogas plants. First, a methodology for substrate inflow optimization of full-scale biogas plants is developed based on commonly measured process variables, using dynamic simulation models as well as computational intelligence (CI) methods. This methodology, which is applicable to a broad range of different biogas plants, is then followed by an evaluation of existing online instrumentation for biogas plants and the development of a novel UV/vis spectroscopic online measurement system for volatile fatty acids (VFA). This new measurement system, which uses powerful machine learning techniques, provides a substantial improvement in online process monitoring for biogas plants. The methodologies developed and results achieved in the areas of simulation and optimization were validated at a full-scale agricultural biogas plant, showing that global optimization of the substrate inflow based on dynamic simulation models is able to improve the yearly profit of a biogas plant by up to 70%. Furthermore, the validation of the newly developed online measurement of VFA concentration at an industrial biogas plant showed that a measurement accuracy of 88% is possible using UV/vis spectroscopic probes.
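    As a loose illustration of spectroscopic calibration (not the system developed in the thesis, whose machine learning techniques are not specified here), the toy sketch below fits a ridge-regularised linear model mapping synthetic absorbance spectra to VFA concentration; all data, dimensions, and the accuracy measure are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for UV/vis absorbance spectra: 40 samples x 50
# wavelengths, with VFA concentration encoded linearly in the spectrum
# (a toy assumption; real spectra need chemometric preprocessing).
vfa_true = rng.uniform(0.5, 5.0, 40)        # g/L, illustrative range
loadings = rng.random(50)
spectra = np.outer(vfa_true, loadings) + 0.05 * rng.standard_normal((40, 50))

# Ridge-regularised least squares as a minimal calibration model.
X, y = spectra, vfa_true
w = np.linalg.solve(X.T @ X + 0.1 * np.eye(50), X.T @ y)
pred = X @ w

# Report accuracy as 100% minus the mean relative error (one of several
# possible definitions; the thesis's 88% figure may use a different one).
accuracy = 100 * (1 - np.mean(np.abs(pred - y) / y))
print(f"{accuracy:.1f}% calibration accuracy")
```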

    Design synthesis of complex ship structures


    Adaptive swarm optimisation assisted surrogate model for pipeline leak detection and characterisation.

    Pipelines are often subject to leakage due to ageing, corrosion and weld defects. It is difficult to avoid pipeline leakage as the sources of leaks are diverse. Various pipeline leakage detection methods, including fibre optics, pressure point analysis and numerical modelling, have been proposed in recent decades. One major issue with these methods is distinguishing the leak signal without giving false alarms. Considering that the data obtained by these traditional methods are digital in nature, machine learning models have been adopted to improve the accuracy of pipeline leakage detection. However, most of these methods rely on a large training dataset to train an accurate model, and it is difficult to obtain such experimental data. Some of the reasons include the huge cost of an experimental setup for data collection covering all possible scenarios, poor accessibility to remote pipelines, and labour-intensive experiments. Moreover, datasets constructed from data acquired in laboratory or field tests are usually imbalanced, as leakage data samples are generated from artificial leaks. Computational fluid dynamics (CFD) offers the benefit of providing detailed and accurate pipeline leakage modelling, which may be difficult to obtain experimentally or with the aid of an analytical approach. However, CFD simulation is typically time-consuming and computationally expensive, limiting its applicability in real-time applications. In order to alleviate the high computational cost of CFD modelling, this study proposed a novel data sampling optimisation algorithm, called the Adaptive Particle Swarm Optimisation Assisted Surrogate Model (PSOASM), to systematically select simulation scenarios in an adaptive and optimised manner. The algorithm was designed to place a new sample in poorly sampled regions of the parameter space of parametrised leakage scenarios, which uniform sampling methods may easily miss.
    This was achieved using two criteria: the population density of the training dataset and the model prediction fitness value. The model prediction fitness value was used to enhance the global exploration capability of the surrogate model, while the population density of training data samples was beneficial to the local accuracy of the surrogate model. The proposed PSOASM was compared with four conventional sequential sampling approaches and tested on six benchmark functions commonly used in the literature. Different machine learning algorithms were explored with the developed model, and the effect of the initial sample size on surrogate model performance was evaluated. Next, pipeline leakage detection analysis - with much emphasis on a multiphase flow system - was investigated in order to find the flow field parameters that provide pertinent indicators for pipeline leakage detection and characterisation. Plausible leak scenarios which may occur in the field were simulated for the gas-liquid pipeline using a three-dimensional RANS CFD model. The perturbation of the pertinent flow field indicators for different leak scenarios is reported, which is expected to help improve the understanding of multiphase flow behaviour induced by leaks. The results of the simulations were validated against the latest experimental and numerical data reported in the literature. The proposed surrogate model was later applied to pipeline leak detection and characterisation. The CFD modelling results showed that fluid flow parameters are pertinent indicators for pipeline leak detection. It was observed that the upstream pipeline pressure could serve as a critical indicator for detecting leakage, even if the leak size is small, whereas the downstream flow rate is the dominant leakage indicator if flow rate monitoring is chosen for leak detection.
    The results also reveal that when two leaks of different sizes co-occur in a single pipe, detecting the small leak becomes difficult if its size is below 25% of the large leak's size. However, in the event of a double leak with equal dimensions, the leak closer to the pipe's upstream end is easier to detect. The results from all the analyses demonstrate the PSOASM algorithm's superiority over the well-known sequential sampling schemes employed for evaluation. The test results show that the PSOASM algorithm can be applied to pipeline leak detection with limited training datasets and provides a general framework for improving computational efficiency using adaptive surrogate modelling in various real-life applications.
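    The two sampling criteria can be sketched roughly as follows (a minimal illustration, not the PSOASM algorithm itself; the equal weighting of the criteria, the candidate pool, and the placeholder fitness values are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Existing training samples in a 2-D parametrised leak-scenario space
# (dimensions are illustrative, e.g. normalised leak size and position).
samples = rng.random((20, 2))
candidates = rng.random((200, 2))

# Criterion 1 (local accuracy): population density, approximated here by
# the distance from each candidate to its nearest existing sample.
dists = np.linalg.norm(candidates[:, None, :] - samples[None, :, :], axis=2)
density_score = dists.min(axis=1)  # large = poorly sampled region

# Criterion 2 (global exploration): model prediction fitness; a random
# placeholder stands in for the real surrogate's error estimate here.
fitness_score = rng.random(200)

# Combine both criteria (equal weighting is an assumption) and pick the
# next scenario to run through the expensive CFD simulation.
score = density_score / density_score.max() + fitness_score
next_sample = candidates[np.argmax(score)]
print(next_sample.shape)  # → (2,)
```

    The chosen point then gets simulated, appended to the training set, and the surrogate is refitted before the next selection round.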

    Ensemble learning based on classifier prediction confidence and comprehensive learning particle swarm optimisation for medical image segmentation.

    Segmentation, a process of partitioning an image into multiple segments to locate objects and boundaries, is considered one of the most essential medical imaging processes. In recent years, Deep Neural Networks (DNNs) have achieved many notable successes in medical image analysis, including image segmentation. Because medical imaging applications require robust, reliable results, it is necessary to devise effective DNN models for medical applications. One solution is to combine multiple DNN models in an ensemble system to obtain better results than any single DNN model. Ensemble learning is a popular machine learning technique in which multiple models are combined to improve the final results, and it has been widely used in medical image analysis. In this paper, we propose to measure the confidence in the prediction of each model in the ensemble system and then use an associated threshold to determine whether the confidence is acceptable. A segmentation model is selected based on the comparison between its confidence and the associated threshold. The optimal threshold for each segmentation model is found by using Comprehensive Learning Particle Swarm Optimisation (CLPSO), a swarm intelligence algorithm. The Dice coefficient, a popular performance metric for image segmentation, is used as the fitness criterion. The experimental results on three medical image segmentation datasets confirm that our ensemble achieves better results than some well-known segmentation models.
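    The selection step can be sketched as below (a toy illustration, not the paper's method: the confidence measure, model names, probability maps, and threshold values are all assumptions; in the paper the thresholds are tuned by CLPSO against the Dice coefficient):

```python
import numpy as np

# Per-pixel foreground probabilities from two hypothetical segmentation
# models on one tiny "image" (names and sizes are illustrative).
prob_a = np.array([[0.9, 0.1], [0.8, 0.95]])
prob_b = np.array([[0.6, 0.4], [0.7, 0.55]])

def confidence(prob):
    # Assumed confidence proxy: mean distance of each pixel's probability
    # from the undecided value 0.5 (the paper's measure may differ).
    return np.abs(prob - 0.5).mean()

# Per-model thresholds that CLPSO would optimise by maximising the Dice
# coefficient on validation data; placeholder values here.
thresholds = {"model_a": 0.35, "model_b": 0.25}

models = [("model_a", prob_a), ("model_b", prob_b)]
# Select the first model whose confidence clears its threshold,
# falling back to the last model if none does.
chosen = next((name for name, p in models if confidence(p) >= thresholds[name]),
              models[-1][0])
print(chosen)  # → model_a
```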

    Evolving Deep DenseBlock Architecture Ensembles for Image Classification

    Automatic deep architecture generation is a challenging task, owing to the large number of controlling parameters inherent in the construction of deep networks. The combination of these parameters leads to large, complex search spaces that are practically impossible to navigate properly without a huge amount of resources for parallelisation. To deal with such challenges, in this research we propose a Swarm Optimised DenseBlock Architecture Ensemble (SODBAE) method, a joint optimisation and training process that explores a constrained search space over a skeleton DenseBlock Convolutional Neural Network (CNN) architecture. Specifically, we employ novel weight inheritance learning mechanisms, a DenseBlock skeleton architecture, and adaptive Particle Swarm Optimisation (PSO) with cosine search coefficients to devise networks whilst maintaining practical computational costs. Moreover, the architecture design takes advantage of recent advancements in residual connections and dense connectivity in order to yield CNN models with a much wider variety of structural variations. The proposed weight inheritance learning schemes perform joint optimisation and training of the architectures to reduce computational costs. Evaluated on the CIFAR-10 dataset, the proposed model shows great superiority in classification performance over other state-of-the-art methods while illustrating greater versatility in architecture generation.
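    One common way to realise cosine-varying PSO search coefficients is sketched below; this is a generic formulation with assumed bounds (0.5 to 2.5), and the exact schedule used in SODBAE may differ. The cognitive coefficient c1 decays while the social coefficient c2 grows, shifting the swarm from exploration toward exploitation:

```python
import math

def cosine_coefficients(t, t_max, c_min=0.5, c_max=2.5):
    # Cosine anneal c1 from c_max down to c_min over t_max iterations;
    # c2 mirrors it so that c1 + c2 stays constant (an assumption).
    c1 = c_min + 0.5 * (c_max - c_min) * (1 + math.cos(math.pi * t / t_max))
    c2 = c_max + c_min - c1
    return c1, c2

for t in (0, 50, 100):
    print(tuple(round(c, 2) for c in cosine_coefficients(t, 100)))
# → (2.5, 0.5)
#   (1.5, 1.5)
#   (0.5, 2.5)
```

    In a PSO velocity update, c1 weights the pull toward each particle's personal best and c2 the pull toward the global best, so this schedule front-loads individual exploration.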

    Surrogate model for real time signal control: theories and applications

    Traffic signal controls play a vital role in urban road traffic networks. Compared with fixed-time signal control, which is based solely on historical data, real time signal control is flexible and responsive to varying traffic conditions, and hence promises better performance and robustness in managing traffic congestion. Real time signal control can be divided into model-based and model-free approaches. The former requires a traffic model (analytical or simulation-based) in the generation, optimisation and evaluation of signal control plans, which means that its efficacy in real-world deployment depends on the validity and accuracy of the underlying traffic model. Model-free real time signal control, on the other hand, is constructed from expert experience and empirical observations. Most existing model-free real time signal controls, however, focus on learning-based and rule-based approaches, and either lack interpretability or are non-optimised. This thesis proposes a surrogate-based real time signal control and optimisation framework that can determine signal decisions in a centralised manner without the use of any traffic model. Surrogate models offer analytical and efficient approximations of complex models or black-box processes by fitting their input-output structures with appropriate mathematical tools. Current research on surrogate-based optimisation is limited to strategic and off-line optimisation, which only approximates the relationship between decisions and outputs under highly specific conditions based on certain traffic simulation models, and has yet to be attempted for real time optimisation. This thesis proposes a framework for surrogate-based real time signal control by constructing a response surface that simultaneously encompasses (1) traffic states, (2) control parameters, and (3) network performance indicators.
    A series of comprehensive evaluations is conducted to assess the effectiveness, robustness and computational efficiency of the surrogate-based real time signal control. In the numerical test, the Kriging model is selected to approximate the traffic dynamics of the test network. The results show that this Kriging-based real time signal control can increase the total throughput by 5.3% and reduce the average delay by 8.1% compared with the fixed-time baseline signal plan. In addition, the optimisation time can be reduced by more than 99% if the simulation model is replaced by a Kriging model. The proposed signal controller is further investigated via multi-scenario analyses involving different levels of information availability, network saturation and traffic uncertainty, which show the robustness and reliability of the controller. Moreover, the influence of the baseline signal on the Kriging-based signal control can be eliminated by a series of off-line updates. By virtue of the model-free nature and the adaptive learning capability of the surrogate model, the Kriging-based real time signal control can adapt to systematic network changes (such as seasonal variations in traffic demand): it can update the response surface according to feedback from the actual traffic environment. The test results show that the adaptive Kriging-based real time signal control maintains signal control performance in response to systematic network changes better than either fixed-time signal control or non-adaptive Kriging-based signal control.
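    The core surrogate idea can be sketched with a minimal Kriging (Gaussian-process regression) model: fit a response surface over (state, control) pairs, then evaluate candidate controls on the surrogate instead of running the traffic simulation. This is a toy illustration with synthetic data, a fixed RBF kernel, and made-up dimensions; the thesis's actual response surface, kernel choice, and hyperparameter fitting are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic observations: inputs are (traffic state, green-split control),
# output is average delay, with a quadratic ground truth plus noise.
X = rng.random((30, 2))
y = (X[:, 0] - 0.5) ** 2 + (X[:, 1] - 0.3) ** 2 + 0.01 * rng.standard_normal(30)

def rbf(a, b, length=0.3):
    # Squared-exponential kernel between two point sets.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

# Simple Kriging predictor with a fixed kernel and a small nugget; real
# implementations fit the kernel hyperparameters to the data.
K = rbf(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def predict(x_new):
    return rbf(x_new, X) @ alpha

# Real-time decision: for the current state, scan candidate control
# settings on the cheap surrogate rather than the expensive simulator.
state = 0.5
controls = np.linspace(0, 1, 101)
cand = np.column_stack([np.full_like(controls, state), controls])
best_control = controls[np.argmin(predict(cand))]
print(round(best_control, 2))
```

    Adaptive operation would append each observed (state, control, delay) triple to X and y and refit, which is how the response surface can track systematic network changes.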