261 research outputs found

    Improved hybrid teaching learning based optimization-jaya and support vector machine for intrusion detection systems

    Most currently existing intrusion detection systems (IDS) use machine learning algorithms to detect network intrusions, and such algorithms have been widely adopted in recent years to enhance IDS performance. While the effectiveness of some machine learning algorithms in detecting certain types of network intrusion has been established, no single method currently achieves consistent results when employed for the detection of multiple attack types. Hence, the detection of network attacks on computer systems has remained a relevant field of research. The support vector machine (SVM) is one of the most powerful machine learning algorithms, with excellent learning performance characteristics. However, SVM suffers from problems that degrade its performance, such as high rates of false-positive alerts and low detection rates for rare but dangerous attacks; feature selection and parameter optimization are therefore important operations for increasing the performance of SVM. The aim of this work is to develop an improved optimization method for IDS that is efficient and effective in feature subset selection and parameter optimization. To achieve this goal, an improved Teaching Learning-Based Optimization (ITLBO) algorithm was proposed for feature subset selection, while an improved parallel Jaya (IPJAYA) algorithm was proposed for searching for the best SVM parameter values (C, gamma). The resulting hybrid classifier, ITLBO-IPJAYA-SVM, was developed to improve the efficiency of network intrusion detection on datasets that contain multiple types of attacks. The performance of the proposed approach was evaluated on the NSL-KDD and CICIDS intrusion detection datasets, where it exhibited excellent performance in processing large datasets. The results also showed that the optimized SVM achieved accuracy values of 0.9823 on the NSL-KDD dataset and 0.9817 on the CICIDS dataset, higher than the accuracy of most existing paradigms for classifying network intrusion detection datasets. In conclusion, this work has presented an improved optimization algorithm that can improve the accuracy of IDSs in detecting various types of network attacks.
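A distinguishing feature of the Jaya step underlying IPJAYA is that it has no algorithm-specific tuning parameters: each candidate moves toward the best solution found so far and away from the worst. The sketch below searches (log10 C, log10 gamma) against a simple quadratic surrogate standing in for SVM cross-validation error; the surrogate, bounds, and population settings are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def jaya(objective, bounds, pop_size=20, iters=100, seed=0):
    """Basic Jaya search: move each candidate toward the best solution
    and away from the worst, keeping a move only if it improves."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    cost = np.array([objective(x) for x in pop])
    for _ in range(iters):
        best, worst = pop[cost.argmin()], pop[cost.argmax()]
        r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
        # Canonical Jaya update rule
        cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
        cand = np.clip(cand, lo, hi)
        cand_cost = np.array([objective(x) for x in cand])
        improved = cand_cost < cost
        pop[improved], cost[improved] = cand[improved], cand_cost[improved]
    return pop[cost.argmin()], cost.min()

# Surrogate standing in for SVM cross-validation error, with a known
# optimum at (log10 C, log10 gamma) = (1, -1), i.e. C = 10, gamma = 0.1.
def surrogate_cv_error(x):
    log_c, log_gamma = x
    return (log_c - 1.0) ** 2 + (log_gamma + 1.0) ** 2

best, err = jaya(surrogate_cv_error, bounds=[(-2, 4), (-4, 1)])
print(best, err)  # best lands near [1, -1] in log10 space
```

In practice the surrogate would be replaced by an actual cross-validated SVM training run, which is why population-based methods like Jaya (few evaluations, no gradients) are attractive here.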

    Structural damage identification using improved Jaya algorithm based on sparse regularization and Bayesian inference

    Structural damage identification can be considered an optimization problem, by defining an appropriate objective function, relevant to the structural parameters to be identified, and solving it with optimization techniques. This paper proposes a new heuristic algorithm, named the improved Jaya (I-Jaya) algorithm, for structural damage identification with a modified objective function based on sparse regularization and Bayesian inference. To improve the global optimization capacity and robustness of the original Jaya algorithm, a clustering strategy is employed to replace solutions with low-quality objective values, and a new update equation is used for the best-so-far solution. The objective function, made sensitive and robust for effective and reliable damage identification through sparse regularization and Bayesian inference, is used for optimization analysis with the proposed I-Jaya algorithm. Benchmark tests are conducted to verify the improvement in the developed algorithm. Numerical studies on a truss structure and experimental validations on a reinforced concrete bridge model are performed to verify the developed approach. A limited quantity of modal data, significantly fewer than the number of unknown system parameters, is used for structural damage identification. Significant measurement noise effects and modelling errors are considered. Damage identification results demonstrate that the proposed method, based on the I-Jaya algorithm and the modified objective function with sparse regularization and Bayesian inference, can provide accurate and reliable damage identification, indicating that it is a promising approach for structural damage detection using data with significant uncertainties and limited measurement information
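The general shape of such an objective can be sketched in a few lines: a modal residual term plus an l1 penalty that encodes the prior that damage is sparse (localized to a few elements). The linear sensitivity model, penalty weight, and problem sizes below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical sparse-regularized damage-identification objective:
# alpha is a vector of elemental stiffness reductions, the residual
# compares measured and model-predicted modal quantities, and the
# l1 penalty promotes sparse (localized) damage. The weight lam and
# the linear sensitivity matrix S are illustrative assumptions.
def objective(alpha, freq_measured, freq_model, S, lam=0.01):
    residual = freq_measured - (freq_model + S @ alpha)
    return float(residual @ residual + lam * np.abs(alpha).sum())

rng = np.random.default_rng(1)
n_modes, n_elems = 5, 20        # fewer measured modes than unknowns, as in the paper
S = rng.normal(size=(n_modes, n_elems))
alpha_true = np.zeros(n_elems)
alpha_true[3] = 0.2             # one damaged element (20% stiffness loss)
freq_model = rng.normal(size=n_modes)
freq_measured = freq_model + S @ alpha_true

print(objective(alpha_true, freq_measured, freq_model, S))        # only the l1 term remains
print(objective(np.zeros(n_elems), freq_measured, freq_model, S)) # larger: residual unexplained
```

With fewer equations than unknowns the residual term alone is under-determined; the l1 term is what makes the sparse (true) solution preferable to dense ones, which is why a global optimizer such as I-Jaya is paired with it.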

    High frequency signal injection method for sensorless permanent magnet synchronous motor drives

    The objective of this project is to design a high frequency signal injection method for sensorless control of permanent magnet synchronous motor (PMSM) drives. Generally, PMSM drive control requires speed and position sensors to measure the motor speed and feed this information back for variable speed drive operation. The use of such sensors increases size and cost and requires extra wiring and feedback devices. There is therefore motivation to eliminate this type of sensor by injecting a high frequency signal and utilizing the electrical parameters of the motor, so that the speed and position of the rotor can be estimated. The proposed position and speed sensorless control method using high frequency signal injection, together with all the power electronic circuits, is modelled using Simulink. The PMSM sensorless drive is simulated and the results are analyzed in terms of speed, torque and stator current response without load disturbance, under varying speed, forward-to-reverse operation, reverse-to-forward operation and step changes in reference speed. The results show that the signal injection method performs well during start-up and low speed operation
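The core of such a scheme, once the injected carrier has been demodulated, is a tracking observer: for a salient PMSM the demodulated high-frequency current yields an error signal proportional to sin(2(θ − θ̂)), which a PI loop drives to zero. The sketch below assumes an ideal, already-demodulated error signal and illustrative gains; a real drive must first extract this error from the carrier-frequency current response.

```python
import math

# Minimal sketch of the position-tracking loop used with HF signal
# injection. The demodulated error model sin(2*(theta - theta_hat)),
# the gains, and the time step are illustrative assumptions.
def track_position(theta, steps=5000, dt=1e-4, kp=200.0, ki=5000.0):
    theta_hat, integ = 0.0, 0.0
    for _ in range(steps):
        err = math.sin(2.0 * (theta - theta_hat))  # demodulated HF error signal
        integ += ki * err * dt                     # integral action (speed estimate)
        omega_hat = kp * err + integ               # PI tracking observer
        theta_hat += omega_hat * dt                # integrate estimated speed
    return theta_hat

estimate = track_position(theta=0.8)
print(estimate)  # converges toward 0.8 rad
```

Note the factor of 2 inside the sine: saliency-based methods see the rotor twice per electrical revolution, so the raw estimate has a 180-degree ambiguity that practical drives resolve with an initial polarity test.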

    Smouldering combustion of peat in wildfires: Inverse modelling of the drying and the thermal and oxidative decomposition kinetics

    Smouldering combustion is the driving phenomenon of wildfire in peatlands, like those causing haze episodes in Southeast Asia and Northeast Europe. These are the largest fires on Earth and an extensive source of greenhouse gases, but they are poorly understood and have become an emerging research topic in climate-change mitigation. In this work, a series of multistep heterogeneous kinetics are investigated to describe the drying and decomposition in smouldering combustion of peat. The decomposition schemes cover a range of complexity, including 2-, 3- and 4-step schemes, and up to 4 solid pseudo-species. The schemes aim to describe the simultaneous pyrolysis and oxidation reactions in smouldering fires. The reaction rates are expressed by the Arrhenius law, and a lumped model of mass loss is used to simulate the degradation behaviour seen during thermogravimetric (TG) experiments in both nitrogen and air atmospheres. A genetic algorithm is applied to solve the corresponding inverse problem using TG data from the literature and find the best kinetic and stoichiometric parameters for four types of boreal peat from different geographical locations (North China, Scotland and Siberia). The results show that at the TG level all proposed schemes seem to perform well, with a high degree of agreement resulting from the forced optimization in the inverse problem approach. The chemical validity of the schemes is then investigated outside the TG realm by incorporating them into a 1-D plug-flow model to study the reaction and species distribution inside a peat smouldering front. Both lateral and in-depth spread modes are considered. The results show that the drying sub-front is essential, and that the best kinetic scheme is the 4-step decomposition (one pyrolysis and three oxidations) plus 1-step drying, with 5 condensed species (water, peat, α-char, β-char, and ash). This is the first time that the smouldering kinetics and the reaction-zone structure of a peat fire are explained and predicted, thus helping to understand this important natural and widespread phenomenon
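The lumped TG mass-loss model reduces, for a single step, to an Arrhenius rate applied under a linear heating programme. The sketch below integrates dm/dt = −A·exp(−E/RT)·m for one such step; the kinetic parameters are placeholders, not the multistep values the genetic algorithm fits in the paper, but the structure of the rate law is the point.

```python
import math

R = 8.314              # gas constant, J/(mol K)
A, E = 1.0e10, 1.5e5   # pre-exponential factor (1/s) and activation
                       # energy (J/mol): illustrative placeholders

def tg_mass_loss(heating_rate=10 / 60, t_end=3600, dt=0.1, T0=300.0):
    """Single-step lumped mass-loss model under a linear heating
    programme (10 K/min), as in a thermogravimetric experiment."""
    m, t = 1.0, 0.0
    history = []
    while t < t_end:
        T = T0 + heating_rate * t            # linear temperature ramp
        k = A * math.exp(-E / (R * T))       # Arrhenius rate constant
        m *= math.exp(-k * dt)               # exact step for first-order decay
        t += dt
        history.append((T, m))
    return history

curve = tg_mass_loss()
print(curve[-1])  # near-complete mass loss by ~900 K
```

The inverse problem the paper solves runs this forward model (with several competing pyrolysis and oxidation steps) inside a genetic algorithm and scores candidate parameter sets against measured TG curves.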

    How can blockchain make the food supply chain more sustainable? A case study of Norwegian fishing supply chain


    FinBook: literary content as digital commodity

    This short essay explains the significance of the FinBook intervention, and invites the reader to participate. We have associated each chapter within this book with a financial robot (FinBot), and created a market whereby book content will be traded with financial securities. As human labour increasingly consists of unstable and uncertain work practices, and as algorithms replace people on the virtual trading floors of the world's markets, we see members of society taking advantage of FinBots to invest and make extra funds. Bots of all kinds are making financial decisions for us, searching online on our behalf to help us invest, and consuming products and services. Our contribution to this compilation is to turn the collection of chapters in this book into a dynamic investment portfolio, and thereby play out what might happen to the process of buying and consuming literature in the not-so-distant future. By attaching identities (through QR codes) to each chapter, we create a market in which the chapter can 'perform'. Our FinBots will trade based on features extracted from the authors' words in this book: the political, ethical and cultural values embedded in the work, and the extent to which the FinBots share authors' concerns; and the performance of chapters amongst those human and non-human actors that make up the market and readership. In short, the FinBook model turns our work and the work of our co-authors into an investment portfolio, mediated by the market and the attention of readers. By creating a digital economy specifically around the content of online texts, our chapter and the FinBook platform aim to challenge the reader to consider how their personal values align them with individual articles, and how these become contested as they perform different value judgements about the financial performance of each chapter and the book as a whole. At the same time, by introducing 'autonomous' trading bots, we also explore the different 'network' affordances of paper-based books, whose scarcity derives from their analogue form, and digital books, whose uniqueness is achieved through encryption. We thereby speak to wider questions about the conditions of an aggressive market in which algorithms subject cultural and intellectual items – books – to economic parameters, and the increasing ubiquity of data bots as actors in our social, political, economic and cultural lives. We understand that our marketization of literature may be an uncomfortable juxtaposition against the conventionally imagined way a book is created, enjoyed and shared: it is intended to be.