
    Scalable parallel evolutionary optimisation based on high performance computing

    Evolutionary algorithms (EAs) have been successfully applied to solve various challenging optimisation problems. Due to their stochastic nature, EAs typically require considerable time to find desirable solutions, especially for increasingly complex and large-scale problems. As a result, many works have studied implementing EAs on parallel computing facilities to accelerate these time-consuming processes. Recently, the rapid development of modern parallel computing facilities such as high performance computing (HPC) has brought not only unprecedented computational capability but also new challenges in designing parallel algorithms. This thesis focuses on designing scalable parallel evolutionary optimisation (SPEO) frameworks that run efficiently on HPC. Motivated by the observation that many EAs now employ increasingly large population sizes, the thesis first studies the effect of a large population through comprehensive experiments. Numerical results indicate that a large population benefits the solving of complex problems but requires a large number of fitness evaluations (FEs). Since sequential EAs usually require considerable computing time to perform extensive FEs, we propose a scalable parallel evolutionary optimisation framework that efficiently deploys parallel EAs over many CPU cores on CPU-only HPC. In addition, since EAs run with a large number of FEs produce massive useful information in the course of evolution, we design a surrogate-based approach that learns from this historical information to better solve complex problems; the approach is then parallelised within the proposed framework to achieve remarkable speedups. Because demanding great computing power from CPU-only HPC is usually very expensive, we further design a framework based on GPU-enabled HPC to improve the cost-effectiveness of parallel EAs. The proposed framework efficiently accelerates parallel EAs using many GPUs and achieves superior cost-effectiveness. However, since correctly implementing parallel EAs on the GPU is very challenging, we propose a set of guidelines for verifying the correctness of GPU-based EAs; to examine these guidelines, they are applied to a GPU-based brain storm optimisation that is also proposed in this thesis. In conclusion, a comprehensive experimental study is first conducted to investigate the impact of a large population; a SPEO framework based on CPU-only HPC is then proposed and employed to accelerate a time-consuming EA implementation; finally, the correctness verification of EAs implemented on a single GPU is discussed, and the SPEO framework is extended for deployment on GPU-enabled HPC.
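
    The island model is one standard way to deploy an EA over many cores. The following is a minimal sketch of that idea using Python's multiprocessing; it is an illustrative stand-in only, since the thesis's SPEO framework, migration topology and MPI/GPU machinery are not reproduced here, and the sphere fitness and all parameter values are assumptions.

        # Illustrative island-model sketch only: the thesis's SPEO framework and
        # its HPC machinery are not reproduced; fitness and parameters are toys.
        import random
        from multiprocessing import Pool

        def sphere(x):                       # assumed toy fitness: minimise sum x^2
            return sum(v * v for v in x)

        def evolve_island(args):             # one island evolves independently
            pop, gens = args
            for _ in range(gens):
                p1, p2 = random.sample(pop, 2)
                child = [(a + b) / 2 + random.gauss(0, 0.1) for a, b in zip(p1, p2)]
                worst = max(range(len(pop)), key=lambda i: sphere(pop[i]))
                if sphere(child) < sphere(pop[worst]):
                    pop[worst] = child       # replace worst if child is better
            return pop

        if __name__ == "__main__":
            dim, islands = 10, 4
            pops = [[[random.uniform(-5, 5) for _ in range(dim)]
                     for _ in range(20)] for _ in range(islands)]
            with Pool(islands) as pool:
                for _ in range(5):           # evolve, then migrate around a ring
                    pops = pool.map(evolve_island, [(p, 50) for p in pops])
                    for i in range(islands):
                        best = min(pops[i], key=sphere)
                        pops[(i + 1) % islands].append(list(best))
            print(min(sphere(ind) for p in pops for ind in p))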

    Spiky RBN: A Sub-symbolic Artificial Chemistry

    We design and build a sub-symbolic artificial chemistry based on random boolean networks (RBNs). We show the expressive richness of the RBN in terms of system design and the behavioural range of the overall system. This is done by first generating reference sets of RBNs and then comparing their behaviour as mass conservation and energetics are added to the system. The comparison is facilitated by an activity measure based on information theory and reaction graphs but tailored to our system. The system is used to reason about methods of designing complex systems and directing them towards specific tasks.
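
    For readers unfamiliar with the substrate, here is a minimal sketch of a plain RBN (N nodes, K random inputs per node, random boolean update tables); the spiking, mass-conservation and energetics layers the paper adds on top are not shown, and the sizes chosen are arbitrary.

        # Plain RBN substrate only; the paper's spikes, mass conservation and
        # energetics are built on top of this and are not shown here.
        import random

        def make_rbn(n, k, seed=0):
            rng = random.Random(seed)
            inputs = [rng.sample(range(n), k) for _ in range(n)]    # random wiring
            tables = [[rng.randint(0, 1) for _ in range(2 ** k)]    # random boolean
                      for _ in range(n)]                            # update tables
            return inputs, tables

        def step(state, inputs, tables):
            nxt = []
            for node in range(len(state)):
                idx = 0
                for src in inputs[node]:         # pack input bits into a table index
                    idx = (idx << 1) | state[src]
                nxt.append(tables[node][idx])
            return nxt

        inputs, tables = make_rbn(n=8, k=2)
        state = [random.randint(0, 1) for _ in range(8)]
        for _ in range(5):                       # iterate the network dynamics
            state = step(state, inputs, tables)
            print(state)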

    Bayesian belief networks for dementia diagnosis and other applications: a comparison of hand-crafting and construction using a novel data driven technique

    The Bayesian network (BN) formalism is a powerful representation for encoding domains characterised by uncertainty. However, before it can be used it must first be constructed, which is a major challenge for any real-life problem. There are two broad approaches: the hand-crafted approach, which relies on a human expert, and the data-driven approach, which relies on data. The former approach is useful, but issues such as human bias can introduce errors into the model. We have conducted a literature review of the expert-driven approach, selected a number of common methods, and engineered a framework to assist non-BN experts with expert-driven construction of BNs. The latter approach uses algorithms to construct the model from a data set. However, construction from data is provably NP-hard. To address this, approximate, heuristic algorithms have been proposed; in particular, algorithms that assume an order between the nodes, thereby reducing the search space. Traditionally, this approach relies on an expert providing the order among the variables, but an expert may not always be available, or may be unable to provide the order. Nevertheless, if a good order is available, these order-based algorithms have demonstrated good performance. More recent approaches attempt to "learn" a good order and then use an order-based algorithm to discover the structure. To eliminate the need for order information during construction, we propose a search in the entire space of Bayesian network structures; we present a novel approach for carrying out this task, and we demonstrate its performance against existing algorithms that search in the entire space and in the space of orders. Finally, we employ the hand-crafting framework to construct models for the task of diagnosis in a "real-life" medical domain, dementia diagnosis. We collect real dementia data from clinical practice, and we apply the data-driven algorithms developed to assess the concordance between the reference models developed by hand and the models derived from real clinical data.
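
    To make the order-based idea concrete, here is a minimal K2-style greedy parent search over binary data; this illustrates the family of algorithms the thesis builds on, not the thesis's own full-structure-space search, and the BIC score, toy data and parent limit are all assumptions.

        # K2-style order-based greedy search: parents of each node may only come
        # from nodes earlier in the given order, shrinking the search space.
        import math

        def bic(data, child, parents):
            counts = {}
            for row in data:                     # tally child values per parent config
                c = counts.setdefault(tuple(row[p] for p in parents), [0, 0])
                c[row[child]] += 1
            ll = sum(c * math.log(c / (c0 + c1))
                     for c0, c1 in counts.values() for c in (c0, c1) if c)
            return ll - 0.5 * math.log(len(data)) * (2 ** len(parents))

        def k2_greedy(data, order, max_parents=2):
            structure = {}
            for i, child in enumerate(order):    # parents only from earlier nodes
                parents, score = [], bic(data, child, [])
                candidates = list(order[:i])
                while candidates and len(parents) < max_parents:
                    best_score, best = max((bic(data, child, parents + [c]), c)
                                           for c in candidates)
                    if best_score <= score:      # stop when no candidate helps
                        break
                    score, parents = best_score, parents + [best]
                    candidates.remove(best)
                structure[child] = parents
            return structure

        data = [(0, 0, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1), (1, 1, 1), (0, 0, 0)]
        print(k2_greedy(data, order=[0, 1, 2]))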

    Shipboard electrification: emission reduction and energy control

    The application of green technology to marine transport is high on the sector's agenda, both for environmental reasons and for its potential to reduce ship operators' running costs. In this thesis, electrical technologies and systems that enable green vessels were examined as means of reducing emissions and fuel consumption in a number of case studies, using computer-based models and simulations coupled with real operational data. Bidirectional auxiliary drives were analysed while providing propulsion during low-speed manoeuvring, coupling an electrical machine with a power electronic converter and feeding power to the propulsion system from the auxiliary generators. Models were built to quantify losses in various topologies and machine setups, showing how permanent magnet machines compare with induction machines and how losses differ between topologies. Topologies were also examined for onshore power supply systems, where a number of different network configurations were modelled and assessed against the visiting profile of a particular port. A Particle Swarm Optimisation algorithm was developed to identify optimal configurations considering both capital costs and operational efficiency, and was further coupled with shore-based LNG generation to give a hybrid onshore power supply configuration. Hybrid systems on vessels are more complex in terms of energy management, particularly with on-board energy storage. Particle Swarm Optimisation was applied to a model of a hybrid shipboard power system, continuously optimising for the greenest configuration during the ship's voyage. This was developed into a generic and scalable Energy Management System, with the objective of minimising fuel consumption, and applied to a case study.
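
    Since Particle Swarm Optimisation is the workhorse of several of these studies, the following is a minimal PSO sketch on a toy quadratic cost; the shipboard cost models, constraints and every coefficient below are placeholders, not the thesis's formulation.

        # Minimal PSO sketch; the toy cost stands in for the capex + fuel models
        # of the thesis, and all coefficients are assumed values.
        import random

        def cost(x):                             # stand-in objective to minimise
            return sum((v - 3.0) ** 2 for v in x)

        def pso(dim=4, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(swarm)]
            vel = [[0.0] * dim for _ in range(swarm)]
            pbest = [list(p) for p in pos]       # personal bests
            gbest = list(min(pbest, key=cost))   # global best
            for _ in range(iters):
                for i in range(swarm):
                    for d in range(dim):         # classic velocity update
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                     + c2 * random.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    if cost(pos[i]) < cost(pbest[i]):
                        pbest[i] = list(pos[i])
                        if cost(pbest[i]) < cost(gbest):
                            gbest = list(pbest[i])
            return gbest

        print(cost(pso()))                       # should approach 0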

    Analysis of physiological signals using machine learning methods

    Technological advances in data collection enable scientists to suggest novel approaches, such as machine learning algorithms, to process and make sense of this information. However, during collection, data loss and damage can occur for reasons such as faulty device sensors or miscommunication. In the context of time-series data such as multi-channel bio-signals, a whole channel may be lost. In such cases, existing research suggests imputing the missing parts when the majority of the data is available. One way of understanding and classifying complex signals is with deep neural networks, whose parameters are typically optimised using back propagation. Over time, improvements have been suggested to enhance this algorithm; however, an essential drawback of back propagation is its sensitivity to noisy data. This thesis proposes two novel approaches to address the missing-data challenge and the drawbacks of back propagation. First, it suggests a gradient-free model to discover the optimal hyper-parameters of a deep neural network. The complexity of deep networks and their high-dimensional optimisation parameters make it challenging to find a suitable network structure and hyper-parameter configuration. This thesis proposes the use of a minimalist swarm optimiser, Dispersive Flies Optimisation (DFO), to enable the selected model to achieve better results than the traditional back propagation algorithm in certain conditions, such as a limited number of training samples. The DFO algorithm offers a robust search process for finding and determining hyper-parameter configurations. Second, it imputes whole missing bio-signals within a multi-channel sample. This approach comprises two experiments, namely the two-signal and five-signal imputation models. The first experiment implements and evaluates a model mapping bio-signals from A to B and vice versa; conceptually, this is an extension of transfer learning using Cycle Generative Adversarial Networks (CycleGANs). The second experiment suggests a mechanism for imputing missing signals in instances where multiple data channels are available for each sample; the capability to map to a target signal through multiple source domains achieves a more accurate estimate for the target domain. The results indicate that in certain circumstances, such as having a limited number of samples, finding the optimal hyper-parameters of a neural network using gradient-free algorithms outperforms traditional gradient-based algorithms, leading to more accurate classification results. In addition, Generative Adversarial Networks can be used to impute the missing data channels in multi-channel bio-signals, and the generated data can be used for further analysis and classification tasks.
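
    The following is a compact sketch of DFO in one commonly published formulation (ring neighbourhood plus a disturbance threshold dt), minimising a toy sphere function; the thesis's application of DFO to network hyper-parameters is not reproduced, and all parameter values below are assumptions.

        # DFO sketch: each fly moves towards its best ring neighbour, pulled by
        # the swarm best; with probability dt a dimension restarts uniformly.
        import random

        def sphere(x):
            return sum(v * v for v in x)

        def dfo(dim=5, flies=10, iters=200, dt=0.1, lo=-5.0, hi=5.0):
            X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(flies)]
            for _ in range(iters):
                sbest = min(X, key=sphere)               # swarm best this iteration
                for i in range(flies):
                    left, right = X[(i - 1) % flies], X[(i + 1) % flies]
                    nbest = left if sphere(left) < sphere(right) else right
                    for d in range(dim):
                        if random.random() < dt:         # disturbance: restart dim
                            X[i][d] = random.uniform(lo, hi)
                        else:                            # move towards neighbours
                            X[i][d] = nbest[d] + random.random() * (sbest[d] - X[i][d])
            return min(X, key=sphere)

        print(sphere(dfo()))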

    Ontology Alignment using Biologically-inspired Optimisation Algorithms

    This work investigates how biologically-inspired optimisation methods can be used to compute alignments between ontologies. Independent of the particular similarity metrics used, the developed techniques demonstrate anytime behaviour and high scalability. Owing to the inherent parallelisability of these population-based algorithms, it is possible to exploit dynamically scalable cloud infrastructures, a step towards the provisioning of Alignment-as-a-Service solutions for future semantic applications.
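
    As a toy illustration of the problem encoding, the sketch below scores candidate alignments between two tiny ontologies with an exchangeable string-similarity metric and improves them by simple stochastic search, a stand-in for the work's population-based algorithms; the ontologies, metric and search loop are all invented for illustration.

        # Alignment-as-optimisation toy: candidates are injective O1 -> O2 maps
        # scored by an exchangeable similarity metric.
        import random
        from difflib import SequenceMatcher

        O1 = ["Author", "Paper", "writes"]
        O2 = ["Writer", "Article", "authorOf", "Journal"]

        def fitness(alignment):                  # exchangeable similarity metric
            return sum(SequenceMatcher(None, a, b).ratio()
                       for a, b in alignment.items())

        def random_alignment():
            return dict(zip(O1, random.sample(O2, len(O1))))

        best = random_alignment()
        for _ in range(500):                     # anytime: best-so-far is always
            cand = random_alignment()            # available if interrupted
            if fitness(cand) > fitness(best):
                best = cand
        print(best, round(fitness(best), 3))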

    Swarm intelligence and its applications to wireless ad hoc and sensor networks.

    Swarm intelligence, as inspired by natural biological swarms, has numerous powerful properties for distributed problem solving in complex real-world applications such as optimisation and control. Swarm intelligence can be observed in natural systems such as ants, bees and birds, whereby unsophisticated agents interacting locally with their environment give rise to collective problem solving without centralised control. Recent advances in wireless communication and digital electronics have instigated important changes in distributed computing. Pervasive computing environments have emerged, such as large-scale communication networks and wireless ad hoc and sensor networks, that are extremely dynamic and unreliable. Network management and control must be based on distributed principles, as centralised approaches may not be suitable for exploiting the enormous potential of these environments. In this thesis, we focus on applying swarm intelligence to optimisation and control problems in wireless ad hoc and sensor networks. Firstly, an analysis of particle swarm optimisation, a swarm intelligence technique, is presented. Previous stability analyses of particle swarm optimisation were restricted to the assumption that all of the parameters are non-random, since theoretical analysis with random parameters is difficult. We analyse the stability of the particle dynamics without these restrictive assumptions using Lyapunov stability and passive systems concepts. Particle swarm optimisation is then used to solve the sink node placement problem in sensor networks. Secondly, swarm intelligence based routing methods for mobile ad hoc networks are investigated. Two protocols based on the foraging behaviour of biological ants have been proposed and implemented in the NS2 network simulator. The first protocol allows each node in the network to choose the next node to which packets are forwarded on the basis of a mobility-influenced routing table: since mobility is one of the most important causes of route changes in mobile ad hoc networks, each neighbour node's mobility is predicted using HELLO packets and then translated into a pheromone decay rate, as found in natural biological systems. The second protocol uses the same mechanism, but uses the neighbour node's remaining energy level and drain rate instead of mobility. The thesis clearly shows that swarm intelligence methods have a very useful role to play in the management and control problems associated with wireless ad hoc and sensor networks. It presents a number of example applications and demonstrates improved performance over existing methods.
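
    The pheromone-table mechanism shared by both protocols can be sketched as follows: routes are reinforced by ant packets and evaporate at a per-neighbour rate. In the thesis that rate is derived from predicted mobility (or remaining energy and drain rate); here it is reduced to a hand-set decay parameter, so all names and values are illustrative only.

        # Pheromone routing-table sketch; the per-neighbour decay rate stands in
        # for the thesis's mobility- or energy-derived decay.
        import random

        class AntRoutingTable:
            def __init__(self):
                self.pheromone = {}   # (destination, next_hop) -> pheromone level
                self.decay = {}       # next_hop -> evaporation rate in (0, 1)

            def reinforce(self, dest, hop, amount=1.0):   # ant found a route
                self.pheromone[(dest, hop)] = self.pheromone.get((dest, hop), 0.0) + amount

            def evaporate(self):      # mobile or weak neighbours fade faster
                for (dest, hop), level in self.pheromone.items():
                    self.pheromone[(dest, hop)] = level * (1.0 - self.decay.get(hop, 0.1))

            def choose_next_hop(self, dest):              # roulette-wheel choice
                options = [(h, p) for (d, h), p in self.pheromone.items() if d == dest]
                if not options:
                    return None
                r = random.uniform(0, sum(p for _, p in options))
                for hop, p in options:
                    r -= p
                    if r <= 0:
                        return hop
                return options[-1][0]

        table = AntRoutingTable()
        table.decay["B"] = 0.5        # highly mobile neighbour -> fast evaporation
        table.reinforce("D", "B"); table.reinforce("D", "C")
        table.evaporate()
        print(table.choose_next_hop("D"))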

    The multiple pheromone ant clustering algorithm

    Ant Colony Optimisation algorithms mimic the way ants use pheromones to mark paths to important locations. Pheromone traces are followed and reinforced by other ants, but also evaporate over time. As a consequence, optimal paths attract more pheromone, whilst less useful paths fade away. In the Multiple Pheromone Ant Clustering Algorithm (MPACA), ants detect features of objects represented as nodes within graph space. Each node has one or more ants assigned to each feature. Ants attempt to locate nodes with matching feature values, depositing pheromone traces on the way. This use of multiple pheromone values is a key innovation. Ants record encounters with other ants, keeping track of their features and colony membership. The recorded values determine when ants should combine their features to look for conjunctions, and whether they should merge into colonies. This ability to detect and deposit pheromone representative of feature combinations, and the resulting colony formation, renders the algorithm a powerful clustering tool. The MPACA operates as follows: (i) initially each node has ants assigned to each feature; (ii) ants roam the graph space searching for nodes with matching features; (iii) when departing matching nodes, ants deposit pheromones to inform other ants that the path leads to a node with the associated feature values; (iv) ant feature encounters are counted each time an ant arrives at a node; (v) if the feature encounters exceed a threshold value, feature combination occurs; (vi) a similar mechanism is used for colony merging. The model varies from traditional ACO in that: (i) a modified pheromone-driven movement mechanism is used; (ii) ants learn feature combinations and deposit multiple pheromone scents accordingly; (iii) ants merge into colonies, the basis of cluster formation. The MPACA is evaluated over synthetic and real-world datasets and its performance compares favourably with alternative approaches.
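
    The central "one pheromone channel per feature" idea of steps (i)-(iii) can be sketched as below; the toy graph, deposit and evaporation rates are invented for illustration, and the feature combination, encounter thresholds and colony merging of steps (iv)-(vi) are omitted.

        # Toy multi-pheromone sketch of MPACA steps (i)-(iii): one pheromone
        # channel per feature, deposited when an ant leaves a matching node.
        import random

        nodes = {0: {"colour": 1, "size": 0}, 1: {"colour": 1, "size": 1},
                 2: {"colour": 0, "size": 1}, 3: {"colour": 1, "size": 0}}
        edges = {(0, 1), (1, 2), (2, 3), (3, 0)}
        edges |= {(b, a) for a, b in edges}              # make edges undirected
        pheromone = {e: {"colour": 0.0, "size": 0.0} for e in edges}

        def ant_walk(start, feature, value, steps=10, deposit=1.0, rho=0.1):
            here = start
            for _ in range(steps):
                nxt = random.choice([b for a, b in edges if a == here])
                if nodes[here][feature] == value:        # leaving a matching node:
                    pheromone[(here, nxt)][feature] += deposit   # mark this channel
                here = nxt
            for e in pheromone:                          # per-channel evaporation
                pheromone[e][feature] *= (1.0 - rho)

        for node, feats in nodes.items():                # one ant per node feature
            for feat, val in feats.items():
                ant_walk(node, feat, val)
        print(pheromone[(0, 1)])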

    Holistic, data-driven, service and supply chain optimisation: linked optimisation.

    The intensity of competition and technological advancement in the business environment has made companies collaborate and cooperate as a means of survival, creating chains of companies and business components with unified business objectives. However, managing the decision-making processes (such as scheduling, ordering, delivering and allocating) at the various business components while maintaining a holistic objective is a huge business challenge, as these operations are complex and dynamic. This is because the overall chain of business processes is widely distributed across all the supply chain participants, so no individual collaborator has a complete overview of the processes. Increasingly, such decisions are automated and strongly supported by optimisation algorithms: manufacturing optimisation, B2B ordering, financial trading, transportation scheduling and allocation. However, most of these algorithms do not incorporate the complexity associated with interacting decision-making systems like supply chains. It is well known that decisions made at one point in a supply chain can have significant consequences that ripple through linked production and transportation systems. Recently, global shocks to supply chains (COVID-19, climate change, the blockage of the Suez Canal) have demonstrated the importance of these interdependencies and the need to create supply chains that are more resilient and have significantly reduced environmental impact. Such interacting decision-making systems need to be considered through an optimisation process; however, the interactions between them are not currently modelled. We therefore believe that modelling such interactions is an opportunity to provide computational extensions to current optimisation paradigms. This research study aims to develop a general framework for formulating and solving holistic, data-driven optimisation problems in service and supply chains. The research achieves this aim and contributes to scholarship in four ways. First, it considers the complexities of supply chain problems from a linked-problem perspective, leading to a formalism for characterising linked optimisation problems as a model for supply chains. Second, it adopts a method for creating a linked optimisation problem benchmark by linking existing classical benchmark sets, using a mix of classical optimisation problems, typically relating to supply chain decision problems, to describe different modes of linkage. Third, several techniques for linking fragmented supply chain data have been proposed in the literature to identify data relationships; this thesis explores some of these techniques and combines them in specific ways to improve the data discovery process. Lastly, the research investigates state-of-the-art optimisation algorithms presented in the literature and designs suitable algorithmic approaches, inspired by existing algorithms and the nature of the problem linkages, to address different linkages in supply chains. Considering the research findings and future perspectives, the study demonstrates the suitability of algorithms to different linked structures involving two sub-problems, suggesting further investigation of issues such as the suitability of algorithms on more complex structures, benchmark methodologies, holistic goals and evaluation, process mining, game theory and dependency analysis.
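
    A minimal example of what "linked" means here: the solution chosen for one classical sub-problem fixes the instance of the next, so the sub-problems cannot be optimised in isolation. The sketch below links a toy ordering decision to a toy delivery-routing problem; the items, costs and revenues are invented purely to show the structure, not drawn from the thesis's benchmarks.

        # Invented two-problem linkage: the ordering decision fixes the routing
        # instance, so the sub-problems must be evaluated jointly.
        from itertools import permutations

        items = {"a": 4, "b": 2, "c": 5}                 # order cost per item
        revenue = {"a": 8, "b": 5, "c": 9}
        travel = {("depot", "a"): 3, ("a", "b"): 1, ("b", "c"): 2,
                  ("depot", "b"): 2, ("b", "a"): 1, ("a", "c"): 4,
                  ("depot", "c"): 5, ("c", "a"): 4, ("c", "b"): 2}

        def routing_cost(chosen):                        # sub-problem 2: deliver
            best = float("inf")                          # what sub-problem 1 ordered
            for tour in permutations(chosen):
                stops = ("depot",) + tour
                best = min(best, sum(travel[leg] for leg in zip(stops, stops[1:])))
            return best

        def profit(chosen):                              # joint, "linked" objective
            return (sum(revenue[i] - items[i] for i in chosen)
                    - routing_cost(chosen))

        candidates = [("a",), ("b",), ("a", "b"), ("a", "b", "c")]
        print(max(candidates, key=profit))               # -> ('a', 'b', 'c')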