
    Analysis of physiological signals using machine learning methods

    Technological advances in data collection enable scientists to suggest novel approaches, such as machine learning algorithms, to process and make sense of this information. However, during collection, data loss and damage can occur for reasons such as faulty device sensors or miscommunication. In the context of time-series data such as multi-channel bio-signals, there is a possibility of losing a whole channel. In such cases, existing research suggests imputing the missing parts when the majority of the data is available. One way of understanding and classifying complex signals is with deep neural networks. The hyper-parameters of such models have traditionally been optimised using back propagation, and improvements to this algorithm have been suggested over time. However, an essential drawback of back propagation is its sensitivity to noisy data. This thesis proposes two novel approaches to address the missing-data challenge and the drawbacks of back propagation. First, it suggests a gradient-free model for discovering the optimal hyper-parameters of a deep neural network. The complexity of deep networks and the high dimensionality of the optimisation space make it challenging to find a suitable network structure and hyper-parameter configuration. This thesis proposes the use of a minimalist swarm optimiser, Dispersive Flies Optimisation (DFO), to enable the selected model to achieve better results than the traditional back propagation algorithm in certain conditions, such as a limited number of training samples. The DFO algorithm offers a robust search process for finding and determining hyper-parameter configurations. Second, it imputes whole missing bio-signals within a multi-channel sample. This approach comprises two experiments, namely the two-signal and five-signal imputation models. The first experiment implements and evaluates a model mapping bio-signals from A to B and vice versa; conceptually, this is an extension of transfer learning using Cycle Generative Adversarial Networks (CycleGANs). The second experiment suggests a mechanism for imputing missing signals in instances where multiple data channels are available for each sample. The capability to map to a target signal through multiple source domains achieves a more accurate estimate of the target domain. The results of the experiments indicate that in certain circumstances, such as having a limited number of samples, finding the optimal hyper-parameters of a neural network using gradient-free algorithms outperforms traditional gradient-based algorithms, leading to more accurate classification results. In addition, Generative Adversarial Networks can be used to impute missing data channels in multi-channel bio-signals, and the generated data can be used for further analysis and classification tasks.
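    The gradient-free optimiser named above is Dispersive Flies Optimisation. As a rough illustration only, the sketch below implements the commonly published DFO update (each fly moves towards its best ring-topology neighbour, pulled towards the swarm best, with a small disturbance probability that resets individual dimensions) on a toy objective; the bounds, disturbance threshold, and objective are illustrative placeholders, not the thesis's actual hyper-parameter search space.

```python
import numpy as np

def dfo_minimise(objective, bounds, n_flies=20, n_iters=200, delta=0.001, seed=0):
    """Minimise `objective` with a basic Dispersive Flies Optimisation loop.

    bounds: array of shape (dim, 2) holding [low, high] per dimension.
    delta:  disturbance threshold (probability of resetting a dimension).
    """
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    flies = rng.uniform(low, high, size=(n_flies, dim))

    for _ in range(n_iters):
        fitness = np.array([objective(f) for f in flies])
        best = flies[np.argmin(fitness)].copy()
        new_flies = flies.copy()
        for i in range(n_flies):
            # Best ring-topology neighbour (left or right fly).
            left, right = (i - 1) % n_flies, (i + 1) % n_flies
            nb = flies[left] if fitness[left] < fitness[right] else flies[right]
            u = rng.uniform(size=dim)
            new_flies[i] = nb + u * (best - flies[i])
            # Disturbance: occasionally reset a dimension to keep diversity.
            reset = rng.uniform(size=dim) < delta
            new_flies[i][reset] = rng.uniform(low, high)[reset]
        flies = np.clip(new_flies, low, high)

    fitness = np.array([objective(f) for f in flies])
    return flies[np.argmin(fitness)], fitness.min()

# Toy usage: minimise the sphere function in 5 dimensions.
best_x, best_f = dfo_minimise(lambda x: float(np.sum(x ** 2)),
                              np.array([[-5.0, 5.0]] * 5))
```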

    Improvements on the bees algorithm for continuous optimisation problems

    This work focuses on improvements to the Bees Algorithm that enhance its performance, especially in terms of convergence rate. For the first enhancement, a pseudo-gradient Bees Algorithm (PG-BA) compares the fitness as well as the position of previous and current bees so that the best bees in each patch are appropriately guided towards a better search direction after each consecutive cycle. Unlike typical gradient search methods, this eliminates the need to differentiate the objective function. The improved algorithm is evaluated on several numerical benchmark test functions as well as the training of a neural network, and the results are compared to the standard variant of the Bees Algorithm and other swarm intelligence procedures. The data analysis generally confirmed that the PG-BA is effective at speeding up convergence to the optimum. Next, an approach to avoid the formation of overlapping patches is proposed. The Patch Overlap Avoidance Bees Algorithm (POA-BA) is designed to avoid redundancy in the search area, especially if a site is deemed unprofitable. The method is similar to Tabu Search (TS) in that the POA-BA forbids the exact exploitation of previously visited solutions along with their corresponding neighbourhoods. Patches are not allowed to intersect, not only in the next generation but also in the current cycle. This reduces the number of patches that materialise on the same peak (maximisation) or in the same valley (minimisation), which ensures a thorough search of the problem landscape as bees are distributed around the scaled-down area. The same benchmark problems as for the PG-BA were applied to this modified strategy with reasonable success. Finally, the Bees Algorithm is revised to locate all of the global optima as well as the substantial local peaks in a single run. These multiple solutions of comparable fitness offer alternatives for decision makers to choose from. In this so-called Extended Bees Algorithm (EBA), patches are formed only if the bees are the fittest from different peaks, identified using a hill-valley mechanism. This permits the maintenance of diversified solutions throughout the search process in addition to minimising the chances of getting trapped. This version proved beneficial when tested on numerous multimodal optimisation problems.
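    The pseudo-gradient idea described above (inferring a search direction from how a patch's best bee moved and how its fitness changed between cycles, without differentiating the objective) could be sketched roughly as follows; the step size, normalisation, and fallback random move are illustrative assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def pseudo_gradient_step(prev_pos, prev_fit, curr_pos, curr_fit, step=0.1, rng=None):
    """Propose the next position of a patch's best bee from its last two visits.

    If fitness improved (minimisation), keep moving along the displacement that
    produced the improvement; otherwise fall back to a small random perturbation.
    """
    rng = rng or np.random.default_rng()
    direction = curr_pos - prev_pos
    norm = np.linalg.norm(direction)
    if curr_fit < prev_fit and norm > 0:
        return curr_pos + step * direction / norm              # follow the pseudo-gradient
    return curr_pos + step * rng.standard_normal(curr_pos.shape)  # local random move

# Toy usage on the sphere function.
f = lambda x: float(np.sum(x ** 2))
prev, curr = np.array([1.0, 1.0]), np.array([0.8, 0.9])
nxt = pseudo_gradient_step(prev, f(prev), curr, f(curr))
```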

    Real time tracking using nature-inspired algorithms

    This thesis investigates the core difficulties in the tracking field of computer vision. The aim is to develop a suitable tuning-free optimisation strategy so that real-time tracking can be achieved. Population- and multi-solution-based approaches are first applied to analyse convergence behaviour on evolutionary test cases, with the aim of identifying core misconceptions in the way the search characteristics of particles are defined in the literature. A general perception in the scientific community is that particle-based methods are not suitable for real-time applications. This thesis improves the convergence properties of particles through a novel scale-free correlation approach. By altering the fundamental definition of a particle and by avoiding the nostalgic operations, tracking was expedited to a rate of 250 FPS. There is a reasonable amount of similarity between the tracking landscapes and those generated by three-dimensional evolutionary test cases. Several experimental studies are conducted that compare the performance of the novel optimisation approach with that of established swarming methods. It is concluded that the modified particle behaviour outperformed the traditional approaches by a large margin in almost every test scenario.
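    In particle swarm optimisation, the "nostalgic" operation is the cognitive pull of a particle back towards its own best remembered position. The fragment below contrasts the standard velocity update with a memory-free variant of the kind alluded to when the abstract speaks of avoiding nostalgic operations; the coefficients and the redefinition of the particle here are illustrative, not the thesis's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
w, c1, c2 = 0.72, 1.49, 1.49   # inertia, cognitive (nostalgic), social coefficients

def pso_velocity(v, x, p_best, g_best):
    """Standard PSO update: the c1 term pulls a particle back to its own memory."""
    r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)

def memoryless_velocity(v, x, g_best):
    """A memory-free variant: the nostalgic (personal-best) term is dropped,
    so each particle reacts only to the current swarm best."""
    r2 = rng.uniform(size=x.shape)
    return w * v + c2 * r2 * (g_best - x)
```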

    The Bees Algorithm: a novel optimisation tool

    This work introduces the Bees Algorithm, a new optimisation algorithm inspired by the foraging behaviour of honey bees. In its basic version, the Bees Algorithm performs a kind of neighbourhood search combined with global random search and can be used for both continuous and discrete optimisation problems. An improved version was developed that replaces global random search with interpolation and extrapolation, shrinks the neighbourhood size, and abandons sites yielding no new information; it could solve benchmark function optimisation problems with less sampling of the search space. The Bees Algorithm has been applied to mechanical design optimisation: two standard mechanical design problems, the design of a welded beam structure and the design of coil springs, were used to benchmark it against other optimisation techniques. Computer-aided preliminary design can be regarded as a special case of optimisation, where the goal is to generate as many solutions as possible above a predefined performance threshold. The higher the number of solutions satisfying the preliminary selection criteria, the greater the chance of producing a good final solution. An adapted version of the Bees Algorithm for discrete function optimisation was developed and tested on a simple machine design task, preliminary gearbox design. The test consists of finding alternative gearbox configurations that approximately produce the required output speeds using one of the available input speeds. Experimental results show that the Bees Algorithm outperforms random search and a genetic optimisation algorithm. A modified version of the Bees Algorithm was used to search for multiple Pareto-optimal solutions in a multi-objective design optimisation problem; compared to two non-dominated sorting genetic algorithms, the Bees Algorithm was able to find more trade-off solutions. Finally, the Bees Algorithm was employed to train Radial Basis Function (RBF) neural networks for two different problems. Despite the high dimensionality of the problems - each bee represented 2345 parameters in the control chart pattern recognition case and 1581 parameters in the wood defect classification case - the algorithm successfully trained very accurate classifiers. Although the accuracies achieved were marginally lower than those obtained with conventional RBF training methods, the total output errors were less than those of conventionally trained RBF networks with the same number of hidden neurons.
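    The basic version described above (scout bees sample the space at random, the best sites recruit forager bees for a local neighbourhood search, elite sites recruit more foragers, and the remaining scouts continue the global random search) can be sketched roughly as below; the population sizes, neighbourhood radius, and toy objective are illustrative defaults, not the parameter values used in this work.

```python
import numpy as np

def bees_algorithm(objective, bounds, n_scouts=30, m_sites=10, e_elite=3,
                   nep=7, nsp=3, ngh=0.5, n_iters=100, seed=0):
    """Basic Bees Algorithm (minimisation): neighbourhood search around the best
    sites combined with global random search by the remaining scouts."""
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    scouts = rng.uniform(low, high, size=(n_scouts, dim))

    for _ in range(n_iters):
        fitness = np.array([objective(s) for s in scouts])
        new_scouts = scouts[np.argsort(fitness)].copy()   # best sites first
        for rank in range(m_sites):
            recruits = nep if rank < e_elite else nsp     # elite sites get more foragers
            site = new_scouts[rank]
            cands = np.clip(site + rng.uniform(-ngh, ngh, size=(recruits, dim)), low, high)
            cand_fit = np.array([objective(c) for c in cands])
            if cand_fit.min() < objective(site):           # keep the best forager
                new_scouts[rank] = cands[np.argmin(cand_fit)]
        # Remaining scouts are re-assigned to global random search.
        new_scouts[m_sites:] = rng.uniform(low, high, size=(n_scouts - m_sites, dim))
        scouts = new_scouts

    fitness = np.array([objective(s) for s in scouts])
    return scouts[np.argmin(fitness)], fitness.min()

# Toy usage: real design problems would plug in their own objective; here, the sphere.
best_x, best_f = bees_algorithm(lambda x: float(np.sum(x ** 2)),
                                np.array([[-5.0, 5.0]] * 4))
```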

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanisms of computational intelligence: refining a suitable feature model, designing an effective optimization algorithm, and then obtaining an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severe nonlinear problem in one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvements and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.

    Evolving machine learning and deep learning models using evolutionary algorithms

    Despite their great success in data mining, machine learning and deep learning models are still subject to material obstacles when tackling real-life challenges, such as feature selection, initialization sensitivity, and hyperparameter optimization. The prevalence of these obstacles has severely constrained conventional machine learning and deep learning methods from fulfilling their potential. In this research, three evolving machine learning models and one evolving deep learning model are proposed to eliminate the above bottlenecks, i.e. improving model initialization, enhancing feature representation, and optimizing model configuration, respectively, through hybridization of advanced evolutionary algorithms with conventional ML and DL methods. Specifically, two Firefly Algorithm based evolutionary clustering models are proposed to optimize the cluster centroids in K-means and overcome initialization sensitivity as well as local stagnation. Secondly, a Particle Swarm Optimization based evolving feature selection model is developed for automatic identification of the most effective feature subset and reduction of feature dimensionality in classification problems. Lastly, a Grey Wolf Optimizer based evolving Convolutional Neural Network-Long Short-Term Memory method is devised for automatic generation of optimal topological and learning configurations for Convolutional Neural Network-Long Short-Term Memory networks to undertake multivariate time series prediction problems. Moreover, a variety of tailored search strategies are proposed to eliminate the intrinsic limitations embedded in the search mechanisms of the three employed evolutionary algorithms, i.e. the dictation of the global best signal in Particle Swarm Optimization, the constraint of diagonal movement in the Firefly Algorithm, and the acute contraction of the search territory in the Grey Wolf Optimizer, respectively. The remedy strategies include the diversification of guiding signals, adaptive nonlinear search parameters, hybrid position updating mechanisms, and the enhancement of population leaders. As such, the enhanced Particle Swarm Optimization, Firefly Algorithm, and Grey Wolf Optimizer variants are more likely to attain global optimality on the complex search landscapes embedded in data mining problems, owing to elevated search diversity and an improved trade-off between exploration and exploitation.
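    As a rough illustration of the PSO-based feature selection idea mentioned above (particles encode candidate feature subsets, and fitness is the cross-validated accuracy of a classifier restricted to those features), a minimal binary-PSO sketch is shown below. The sigmoid transfer function, the k-nearest-neighbour evaluator, and the synthetic data are illustrative assumptions, not the configuration developed in this research.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a k-NN classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, dim, w, c1, c2 = 15, X.shape[1], 0.72, 1.49, 1.49
vel = rng.uniform(-1, 1, (n_particles, dim))
pos = (rng.uniform(size=(n_particles, dim)) > 0.5).astype(int)   # binary feature masks
p_best, p_fit = pos.copy(), np.array([fitness(p) for p in pos])
g_best = p_best[np.argmax(p_fit)].copy()

for _ in range(20):
    r1, r2 = rng.uniform(size=vel.shape), rng.uniform(size=vel.shape)
    vel = w * vel + c1 * r1 * (p_best - pos) + c2 * r2 * (g_best - pos)
    pos = (rng.uniform(size=vel.shape) < 1 / (1 + np.exp(-vel))).astype(int)  # sigmoid transfer
    fit = np.array([fitness(p) for p in pos])
    improved = fit > p_fit
    p_best[improved], p_fit[improved] = pos[improved], fit[improved]
    g_best = p_best[np.argmax(p_fit)].copy()

print("selected features:", np.flatnonzero(g_best), "accuracy:", p_fit.max())
```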

    Enhancing the bees algorithm using the traplining metaphor

    This work aims to improve the performance of the Bees Algorithm (BA), particularly in terms of simplicity, accuracy, and convergence. Three improvements, inspired by the traplining behaviour of bees, were made in this study. The first is a parameter reduction of the Bees Algorithm: the strategy recruits and assigns worker bees to exploit and explore all patches, with both searching processes assigned using a triangular-distribution random number generator so that the most promising patches receive more workers and are exploited more intensively than the less productive patches. This technique reduces the original parameters to two. The results show that the resulting Bi-BA is just as efficient as the basic BA, although it has fewer parameters. Next, another improvement was proposed to increase the diversification performance of the Combinatorial Bees Algorithm (CBA). The technique employs a novel constructive heuristic that considers both the distance and the turning angle of the bees' flight: when foraging, bees generally avoid making sharp turns, and including the turning angle as a second criterion controls the diversity of the CBA's initial solutions. Third, the CBA is strengthened with an intensification strategy that avoids falling into local optima. The approach is based on the behaviour of bees when confronted with threats: they avoid re-visiting those flowers during the next bout for reasons such as predators, rivals, or the honey running out. The approach temporarily removes threatened flowers from the whole tour, eliminating the sharp turn, and reintroduces them at the nearest edge of the habitual tour. The technique effectively achieves an equilibrium between the exploration and exploitation mechanisms. The results show that the strategy is very competitive compared with other population-based nature-inspired algorithms. Finally, the enhanced Bees Algorithms are demonstrated on two real-world engineering problems, namely printed circuit board insertion sequencing and the vehicle routing problem.
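    The first improvement above assigns workers to ranked patches via a triangular distribution, so that better-ranked patches receive more bees. A minimal sketch of how such an allocation might look is given below, using Python's standard random.triangular with the mode placed at the best-ranked patch; the exact distribution parameters and recruitment rule in the thesis may differ.

```python
import random
from collections import Counter

def allocate_workers(n_patches, n_workers, seed=0):
    """Assign each worker bee to a patch index drawn from a triangular distribution.

    Patch 0 is the best-ranked patch; placing the mode there means promising
    patches receive more workers than less productive ones.
    """
    random.seed(seed)
    draws = [int(random.triangular(0, n_patches, 0)) for _ in range(n_workers)]
    return Counter(min(d, n_patches - 1) for d in draws)

# Toy usage: 50 workers over 10 ranked patches; counts decrease with patch rank.
print(allocate_workers(n_patches=10, n_workers=50))
```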

    Handling Class Imbalance Using Swarm Intelligence Techniques, Hybrid Data and Algorithmic Level Solutions

    This research focuses mainly on the binary class imbalance problem in data mining. It investigates combined data-level and algorithmic-level solutions and examines the use of swarm intelligence and population-based techniques to combat the class imbalance problem at all levels, including the data, algorithmic, and feature levels. It also introduces various solutions to the class imbalance problem in which swarm intelligence techniques such as Stochastic Diffusion Search (SDS) and Dispersive Flies Optimisation (DFO) are used. The algorithms were evaluated in experiments on imbalanced datasets, with the Support Vector Machine (SVM) used as the classifier. SDS was used to perform informed undersampling of the majority class to balance the dataset; the results indicate that this algorithm improves classifier performance and can be used on imbalanced datasets. SDS was then extended to perform feature selection on high-dimensional datasets, and experimental results show that it can improve classifier performance on imbalanced datasets in this role as well. Further experiments evaluated DFO as an algorithmic-level solution that optimises the SVM kernel parameters when learning from imbalanced datasets. Based on the promising results of DFO in these experiments, the approach was extended into a novel hybrid algorithm that simultaneously optimises the kernel parameters and performs feature selection.
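    Stochastic Diffusion Search, named above for informed undersampling, alternates a test phase (each agent partially evaluates its hypothesis) with a diffusion phase (inactive agents copy hypotheses from active agents or restart at random). The sketch below wires that generic loop to a hypothetical "keep majority samples close to the minority class" test purely for illustration; the actual test function and undersampling rule used in the research are not reproduced here.

```python
import numpy as np

def sds_select_majority(X_maj, X_min, n_agents=50, n_iters=30, radius=1.0, seed=0):
    """Generic Stochastic Diffusion Search over majority-class indices.

    Hypothesis: the index of a majority sample. Test (illustrative assumption):
    the sample lies within `radius` of a randomly chosen minority sample, i.e.
    it sits near the class boundary and is worth keeping when undersampling.
    """
    rng = np.random.default_rng(seed)
    hyps = rng.integers(len(X_maj), size=n_agents)          # initial hypotheses
    active = np.zeros(n_agents, dtype=bool)

    for _ in range(n_iters):
        # Test phase: partial evaluation of each agent's hypothesis.
        probes = X_min[rng.integers(len(X_min), size=n_agents)]
        dists = np.linalg.norm(X_maj[hyps] - probes, axis=1)
        active = dists < radius
        # Diffusion phase: inactive agents copy an active agent's hypothesis,
        # otherwise pick a fresh random hypothesis.
        for i in np.flatnonzero(~active):
            j = rng.integers(n_agents)
            hyps[i] = hyps[j] if active[j] else rng.integers(len(X_maj))

    return np.unique(hyps[active])   # indices of majority samples to retain

# Toy usage with random clusters standing in for a real imbalanced dataset.
rng = np.random.default_rng(1)
keep = sds_select_majority(rng.normal(0, 1, (200, 2)), rng.normal(1.5, 0.5, (20, 2)))
print("majority samples retained:", len(keep))
```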