27 research outputs found

    Intelligent Threat-Aware Response System in Software-Defined Networks

    Software-defined networks decouple the control plane from the data plane, enabling researchers to evaluate protocols and network configurations through a centralized point of control, the controller. They provide easier management and automation, scalability, and flexibility than traditional computer networks. In spite of these advantages, software-defined networks fall prey to various denial-of-service attacks that target specific network protocols and applications. There is a need to implement intelligence in the controller as a countermeasure not only for the various types of denial-of-service attacks but also for their increasing sophistication. In this paper, an intelligent threat-aware response system based on reinforcement learning is proposed for defending against such attacks. Reinforcement learning can acquire intelligence for detection and reactive actions through experience with various attacks, obtained from interactions with the computer network through the controller. By combining reinforcement learning with the software-defined networking controller, the goal of an autonomous threat response system can be achieved.
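
    Below is a minimal sketch of the kind of reinforcement-learning loop the abstract describes, using tabular Q-learning against a toy environment. The SimpleSDNEnv class, its three traffic states, and the reward values are illustrative assumptions, not the authors' implementation; a real system would derive states and rewards from controller statistics.

```python
# Hedged sketch: tabular Q-learning for choosing a response action per traffic state.
# SimpleSDNEnv is a toy stand-in for the SDN controller, not the paper's system.
import random

class SimpleSDNEnv:
    """Toy environment: states are coarse traffic levels, actions are responses."""
    STATES = ["normal", "suspicious", "attack"]
    ACTIONS = ["allow", "rate_limit", "block"]

    def reset(self):
        self.state = "normal"
        return self.state

    def step(self, action):
        # Reward appropriate responses: block attacks, allow normal traffic.
        if self.state == "attack":
            reward = 1.0 if action == "block" else -1.0
        elif self.state == "suspicious":
            reward = 0.5 if action == "rate_limit" else -0.2
        else:
            reward = 0.5 if action == "allow" else -0.5
        self.state = random.choice(self.STATES)   # traffic evolves randomly here
        return self.state, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    env = SimpleSDNEnv()
    q = {(s, a): 0.0 for s in env.STATES for a in env.ACTIONS}
    state = env.reset()
    for _ in range(episodes):
        if random.random() < eps:                 # epsilon-greedy exploration
            action = random.choice(env.ACTIONS)
        else:
            action = max(env.ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = env.step(action)
        best_next = max(q[(next_state, a)] for a in env.ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

if __name__ == "__main__":
    policy = train()
    for s in SimpleSDNEnv.STATES:
        print(s, "->", max(SimpleSDNEnv.ACTIONS, key=lambda a: policy[(s, a)]))
```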

    FAULT DETECTION AND ISOLATION FOR WIND TURBINE DYNAMIC SYSTEMS

    This work presents two fault detection and isolation (FDI) approaches for wind turbine systems (WTS). First, a non-linear mathematical model of wind turbine (WT) dynamics is developed. Based on this model, a robust fault detection observer is designed to estimate system faults and generate residuals. The observer is designed to be robust to system disturbance and sensitive to system faults. A WT blade pitch system fault, a drive-train gearbox fault, and three sensor faults are simulated in the nominal system model, and the designed observer is then used to detect these faults while the system is subjected to disturbance. The simulation results show that the simulated faults are successfully detected. In addition, a neural network (NN) method is proposed for WTS fault detection and isolation. Two radial basis function (RBF) networks are employed in this method. The first NN generates the residual from system input/output data; the second NN acts as a classifier to isolate the faults. The classifier is trained to the following target: all outputs are "0" in the no-fault case, while the corresponding output is "1" if a fault occurs. The performance of the developed neural network FDI method was evaluated using the three simulated sensor faults. The simulation results demonstrate that these faults are successfully detected and isolated by the NN classifier.
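
    The sketch below illustrates the RBF-network classifier idea on synthetic residual data: Gaussian hidden units with fixed centres and a linear output layer fitted by least squares, trained so that one output per fault class approaches "1" when that fault is present. The centres, widths, and synthetic data are assumptions for demonstration, not the thesis configuration.

```python
# Hedged sketch of an RBF-network classifier for fault isolation.
import numpy as np

def rbf_features(X, centres, width):
    """Map inputs to Gaussian radial-basis activations."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf_classifier(X, Y, n_centres=20, width=0.5, ridge=1e-6):
    """Least-squares fit of output weights; Y is one-hot (one column per class)."""
    rng = np.random.default_rng(0)
    centres = X[rng.choice(len(X), n_centres, replace=False)]
    Phi = rbf_features(X, centres, width)
    W = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centres), Phi.T @ Y)
    return centres, width, W

def predict(X, centres, width, W):
    return rbf_features(X, centres, width) @ W

if __name__ == "__main__":
    # Synthetic residual vectors: no-fault class plus three sensor-fault classes.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc=m, scale=0.2, size=(50, 3))
                   for m in ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1])])
    Y = np.kron(np.eye(4), np.ones((50, 1)))      # one-hot targets per class
    centres, width, W = fit_rbf_classifier(X, Y)
    acc = (predict(X, centres, width, W).argmax(1) == Y.argmax(1)).mean()
    print(f"training accuracy: {acc:.2f}")
```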

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
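
    As a concrete reference point, the following is a minimal sketch of the canonical global-best PSO with an inertia weight and cognitive/social coefficients; the sphere objective and the parameter values are illustrative assumptions rather than recommendations from the survey.

```python
# Minimal global-best PSO sketch with inertia weight w and coefficients c1, c2.
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))            # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()                                       # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()               # global best
    gbest_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if pbest_val.min() < gbest_val:
            gbest, gbest_val = pbest[pbest_val.argmin()].copy(), pbest_val.min()
    return gbest, gbest_val

if __name__ == "__main__":
    best_x, best_f = pso(lambda p: float(np.sum(p ** 2)))  # sphere function
    print(best_x, best_f)
```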

    Artificial immune systems based committee machine for classification application

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A new adaptive learning Artificial Immune System (AIS) based committee machine is developed in this thesis. The proposed approach efficiently tackles the general problem of clustering high-dimensional data and helps derive useful decisions and results in related application domains such as classification and prediction. The Artificial Immune System (AIS) is a branch of the computational intelligence field inspired by the biological immune system, and it has gained increasing interest among researchers developing immune-based models and techniques to solve diverse complex computational or engineering problems. This work presents applications of AIS techniques to health problems and a thorough survey of existing AIS models and algorithms. The main focus of this research is devoted to building an ensemble model that integrates different AIS techniques (i.e. Artificial Immune Networks, Clonal Selection, and Negative Selection) for classification applications in order to achieve better classification results. A new AIS-based ensemble architecture with adaptive learning features is proposed by integrating different learning and adaptation techniques to overcome individual limitations and to achieve synergetic effects through their combination. Various techniques related to the design and enhancement of the new adaptive learning architecture are studied, including a neuro-fuzzy based detector and an optimizer using the particle swarm optimization method to achieve enhanced classification performance. An evaluation study was conducted to show the performance of the proposed adaptive learning ensemble and to compare it to alternative combining techniques. Several experiments are presented using different medical datasets for the classification problem, and the findings and outcomes are discussed. The new adaptive learning architecture improves the accuracy of the ensemble and offers an improvement over existing aggregation techniques. The outcomes, assumptions and limitations of the proposed methods, together with their implications for further research in this area, draw this research to its conclusion.
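
    To make the committee-machine idea concrete, the sketch below combines the predictions of several base detectors with an accuracy-weighted vote. The base predictions are placeholders; the thesis's AIS detectors and its adaptive neuro-fuzzy combiner are not reproduced here.

```python
# Illustrative committee machine: accuracy-weighted voting over base detectors.
import numpy as np

def weighted_vote(predictions, weights):
    """predictions: (n_models, n_samples) class labels; returns combined labels."""
    classes = np.unique(predictions)
    scores = np.zeros((len(classes), predictions.shape[1]))
    for k, c in enumerate(classes):
        scores[k] = (weights[:, None] * (predictions == c)).sum(axis=0)
    return classes[scores.argmax(axis=0)]

if __name__ == "__main__":
    y_true = np.array([0, 1, 1, 0, 1, 0])
    # Predictions from three hypothetical base detectors on six samples.
    preds = np.array([[0, 1, 1, 0, 0, 0],
                      [0, 1, 0, 0, 1, 1],
                      [1, 1, 1, 0, 1, 0]])
    # Weight each detector by its (validation-set) accuracy.
    weights = (preds == y_true).mean(axis=1)
    print(weighted_vote(preds, weights))
```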

    Advances in independent component analysis and nonnegative matrix factorization

    A fundamental problem in machine learning research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data; in other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis (PCA), factor analysis, and projection pursuit. In this thesis, we consider two popular and widely used techniques: independent component analysis (ICA) and nonnegative matrix factorization (NMF). ICA is a statistical method whose goal is to find a linear representation of non-Gaussian data such that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. Starting from ICA, several methods for estimating the latent structure in different problem settings are derived and presented in this thesis. FastICA, one of the most efficient and popular ICA algorithms, is reviewed and discussed, and its local and global convergence and statistical behavior are studied further. A nonnegative FastICA algorithm is also given. Nonnegative matrix factorization is a more recently developed technique for finding parts-based, linear representations of non-negative data. It is a method for dimensionality reduction that respects the non-negativity of the input data while constructing a low-dimensional approximation. The non-negativity constraints make the representation purely additive (allowing no subtractions), in contrast to many other linear representations such as principal component analysis and independent component analysis. A literature survey of nonnegative matrix factorization is given in this thesis, and a novel method called Projective Nonnegative Matrix Factorization (P-NMF) and its applications are presented.
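
    As an illustration of the basic NMF technique surveyed (not the proposed P-NMF variant), the following sketch implements the standard multiplicative updates for the Frobenius-norm objective V ≈ WH with non-negative factors; the matrix sizes and the rank are arbitrary examples.

```python
# Hedged sketch of standard NMF with multiplicative updates (Frobenius objective).
import numpy as np

def nmf(V, rank, iters=500, eps=1e-10, seed=0):
    """Factor non-negative V (n x m) into W (n x rank) and H (rank x m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # multiplicative update for W
    return W, H

if __name__ == "__main__":
    V = np.abs(np.random.default_rng(1).random((20, 15)))
    W, H = nmf(V, rank=4)
    print("reconstruction error:", np.linalg.norm(V - W @ H))
```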

    Text Mining for Big Data Analysis in Financial Sector: A Literature Review

    Big data technologies have had a strong impact on many industries over the last decade, a trend that continues today with the tendency to become omnipresent. The financial sector, like most other sectors, has concentrated its operating activities mostly on the investigation of structured data. However, with the support of big data technologies, information stored in diverse sources of semi-structured and unstructured data can be harvested. Recent research and practice indicate that such information can be valuable for the decision-making process. Questions about how and to what extent research on data mining in the financial sector has developed, and which tools are used for these purposes, remain largely unexplored. This study aims to answer three research questions: (i) What is the intellectual core of the field? (ii) Which techniques are used in the financial sector for text mining, especially in the era of the Internet, big data, and social media? (iii) Which data sources are most often used for text mining in the financial sector, and for which purposes? To answer these questions, a qualitative analysis of the literature is carried out using a systematic literature review, citation analysis, and co-citation analysis.

    Control system design using fuzzy gain scheduling of PD with Kalman filter for railway automatic train operation

    The development of train control systems has progressed to keep pace with the rapid growth of railway transport demands. To further increase the capacity of railway systems, Automatic Train Operation (ATO) systems have been widely adopted in metros and are gradually being applied to mainline railways to replace drivers in controlling the movement of trains along optimised running trajectories for punctuality and energy saving. Many controller design methods have been studied and applied to ATO systems. However, most researchers have paid little attention to measurement noise in the development of ATO control systems, even though such noise exists in every instrumentation device and disturbs the process output of the ATO. This thesis attempts to address these issues. To overcome measurement error, the author develops Fuzzy gain scheduling of PD (proportional and derivative) control assisted by a Kalman filter, which is able to maintain the train speed within the specified trajectory and stability criteria in both normal conditions and noisy conditions caused by measurement noise. The Docklands Light Railway (DLR) in London is selected as a case study for implementing the proposed idea. The MRes project work is summarised as follows: (1) reviewing the literature, (2) modelling the train dynamics mathematically, (3) designing the PD controller and Fuzzy gain scheduling, (4) adding Gaussian white noise as measurement error, (5) implementing a Kalman filter to improve the controllers, (6) examining the entire system on an artificial trajectory and a real case study, i.e. the DLR, and (7) evaluating all of the above against strict objectives, i.e. a ±3% allowable error limit, a punctuality limit of no more than 30 seconds late or early, and Integrated Absolute Error (IAE) and Integrated Squared Error (ISE) performance measures. The results show that Fuzzy gain scheduling of PD control copes well in normal situations but not in noisy conditions. After the introduction of the Kalman filter, however, all control objectives are satisfied in both normal and noisy conditions. The case study implemented using DLR data, including the route from Stratford International to Woolwich Arsenal, indicates a satisfactory performance of the designed controller for ATO systems.
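
    The sketch below illustrates the control idea in its simplest form: a PD speed controller fed by a scalar Kalman filter that smooths noisy speed measurements of a point-mass train model. The gains, mass, resistance, and noise levels are assumptions for demonstration and are unrelated to the DLR case-study parameters or the fuzzy gain-scheduling layer.

```python
# Hedged sketch: noisy speed measurement -> scalar Kalman filter -> PD controller.
import random

def simulate(t_end=60.0, dt=0.1, v_ref=15.0,
             kp=2000.0, kd=400.0,            # PD gains (illustrative)
             mass=50000.0, resistance=50.0,  # kg, N per (m/s) running resistance
             meas_std=0.5, q=0.01, r=0.25):  # measurement noise std, filter variances
    v_true, v_est, p_est = 0.0, 0.0, 1.0     # plant state, filter estimate, covariance
    prev_err, force = v_ref, 0.0
    for _ in range(int(t_end / dt)):
        # Plant: point-mass train with linear running resistance.
        v_true += dt * (force - resistance * v_true) / mass
        z = v_true + random.gauss(0.0, meas_std)    # noisy speed measurement

        # Scalar Kalman filter with a random-walk speed model.
        p_pred = p_est + q
        k = p_pred / (p_pred + r)
        v_est = v_est + k * (z - v_est)
        p_est = (1.0 - k) * p_pred

        # PD controller acting on the filtered speed error.
        err = v_ref - v_est
        force = kp * err + kd * (err - prev_err) / dt
        prev_err = err
    return v_true, v_est

if __name__ == "__main__":
    print(simulate())   # final true and estimated speeds after 60 s
```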

    Traveling Salesman Problem

    The idea behind the TSP was conceived by the Austrian mathematician Karl Menger in the mid-1930s, who invited the research community to consider a problem from everyday life from a mathematical point of view. A traveling salesman has to visit each of a list of m cities exactly once and then return to the home city. He knows the cost of traveling from any city i to any other city j. Which tour of least possible cost can the salesman take? This book considers the problem of finding algorithmic techniques that lead to good or optimal solutions for the TSP (or for some closely related problems). The TSP is a very attractive problem for the research community because it arises as a natural subproblem in many applications concerning everyday life. Indeed, any application in which an optimal ordering of a number of items has to be chosen such that the total cost of a solution is obtained by adding up the costs arising from pairs of successive items can be modelled as a TSP instance. Thus, studying the TSP can never be considered abstract research with no real importance.
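
    As one simple example of an algorithmic technique for the TSP, the sketch below implements the nearest-neighbour construction heuristic: from the current city, always travel to the closest unvisited city, then return home. The random city coordinates are an illustrative assumption, and the heuristic produces a feasible tour, not an optimal one.

```python
# Nearest-neighbour construction heuristic for the TSP (Euclidean costs).
import math
import random

def nearest_neighbour_tour(cities, start=0):
    """cities: list of (x, y) points; returns a tour (index list) and its length."""
    unvisited = set(range(len(cities))) - {start}
    tour, current, length = [start], start, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(cities[current], cities[j]))
        length += math.dist(cities[current], cities[nxt])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    length += math.dist(cities[current], cities[start])   # return to the home city
    return tour, length

if __name__ == "__main__":
    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(12)]
    tour, length = nearest_neighbour_tour(cities)
    print(tour, round(length, 3))
```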