141 research outputs found

    A deep learning approach for intrusion detection in Internet of Things using bi-directional long short-term memory recurrent neural network

    The Internet of Things (IoT) connects every ‘thing’ to the Internet and allows these ‘things’ to communicate with each other. IoT comprises innumerable interconnected devices of diverse complexities and trends. This fundamental nature of the IoT structure increases the number of attack targets, which might affect the sustainable growth of IoT. Thus, security issues become a crucial factor to be addressed. A novel deep learning approach is proposed in this thesis for performing real-time detection of security threats in IoT systems using a Bi-directional Long Short-Term Memory Recurrent Neural Network (BLSTM RNN). The proposed approach has been implemented with the Google TensorFlow framework and the Python programming language. To train and test the proposed approach, the UNSW-NB15 dataset has been employed; it is the most up-to-date benchmark dataset, with sequential samples and contemporary attack patterns. This thesis work employs binary classification of attack and normal patterns. The experimental results demonstrate the proficiency of the introduced model with respect to recall, precision, false alarm rate (FAR), and F1 score. The model attains over 97% detection accuracy. The test results demonstrate that the BLSTM RNN is profoundly effective for building a highly efficient intrusion detection model and offers a novel research methodology.
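    As a rough sketch of the kind of model described, the following TensorFlow/Keras snippet builds a bidirectional LSTM binary classifier. The window length, feature count, and layer sizes are illustrative assumptions rather than the thesis's actual configuration, and the UNSW-NB15 records are assumed to be numerically encoded into fixed-length sequences.

```python
# Minimal sketch of a BLSTM binary classifier in TensorFlow/Keras.
# TIMESTEPS, N_FEATURES, and layer sizes are illustrative assumptions.
import tensorflow as tf

TIMESTEPS, N_FEATURES = 10, 42  # hypothetical window length and feature count

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    # The Bidirectional wrapper runs the LSTM forward and backward over each sequence
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # attack vs. normal
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall")],
)
# model.fit(x_train, y_train, validation_split=0.1, epochs=10)
```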

    Network Threat Detection Using Machine/Deep Learning in SDN-Based Platforms: A Comprehensive Analysis of State-of-the-Art Solutions, Discussion, Challenges, and Future Research Direction

    A revolution in network technology has been ushered in by software-defined networking (SDN), which makes it possible to control the network from a central location and provides an overview of the network’s security. Despite this, SDN has a single point of failure that increases the risk of potential threats. Network intrusion detection systems (NIDS) prevent intrusions into a network and preserve the network’s integrity, availability, and confidentiality. Much work has been done on NIDS, but improvements are still needed in reducing false alarms and increasing threat detection accuracy. Recently, advanced approaches such as deep learning (DL) and machine learning (ML) have been implemented in SDN-based NIDS to overcome the security issues within a network. In the first part of this survey paper, we offer an introduction to NIDS theory, as well as recent research that has been conducted on the topic. After that, we conduct a thorough analysis of the most recent ML- and DL-based NIDS approaches to ensure reliable identification of potential security risks. Finally, we focus on the opportunities and difficulties that lie ahead for future research on SDN-based ML and DL for NIDS.

    Metric Selection and Metric Learning for Matching Tasks

    A quarter of a century after the World Wide Web was born, we have grown accustomed to having easy access to a wealth of data sets and open-source software. The value of these resources is restricted if they are not properly integrated and maintained. A lot of this work boils down to matching: finding existing records about entities and enriching them with information from a new data source. In the realm of code, this means integrating new code snippets into a code base while avoiding duplication. In this thesis, we address two different such matching problems. First, we leverage the diverse and mature set of string similarity measures in an iterative semisupervised learning approach to string matching. It is designed to query a user to make a sequence of decisions on specific cases of string matching. We show that we can find almost optimal solutions after only a small amount of such input. The low labelling complexity of our algorithm is due to addressing the cold start problem that is inherent to Active Learning: we rank queries by variance before enough supervision information has arrived, and a self-regulating mechanism counteracts initial biases. Second, we address the matching of code fragments for deduplication. Programming code is not only a tool, but also a resource that itself demands maintenance. Code duplication is a frequent problem arising especially from modern development practice. There are many reasons to detect and address code duplicates, for example to keep a clean and maintainable codebase. For such more complex data structures, string similarity measures are inadequate. In their stead, we study a modern supervised Metric Learning approach to model code similarity with Neural Networks. We find that, in such a model, representing the elementary tokens with a pretrained word embedding is the most important ingredient. Our results show both qualitatively (by visualization) that relatedness is modelled well by the embeddings and quantitatively (by ablation) that the encoded information is useful for the downstream matching task. As a non-technical contribution, we unify the common challenges arising in supervised learning approaches to Record Matching, Code Clone Detection, and generic Metric Learning tasks. We give a novel account of string similarity measures from a psychological standpoint and point out and document one longstanding naming conflict in string similarity measures. Finally, we point out the overlap of the latest research in Code Clone Detection with the field of Natural Language Processing.
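    As a rough illustration of the variance-ranking idea described above, the following sketch scores candidate string pairs with several similarity measures and queries first the pairs on which the measures disagree the most. The three measures and the pair format are stand-ins, not the thesis's actual measure set.

```python
# Sketch of variance-based query ranking over several string similarity
# measures; the measures themselves are illustrative choices.
from difflib import SequenceMatcher
from statistics import pvariance

def jaccard(a, b):
    """Jaccard similarity over character trigrams."""
    grams = lambda s: {s[i:i + 3] for i in range(max(len(s) - 2, 1))}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

def ratio(a, b):
    """Edit-based similarity from difflib."""
    return SequenceMatcher(None, a, b).ratio()

def prefix(a, b):
    """Normalized common-prefix length."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n / max(len(a), len(b), 1)

MEASURES = [jaccard, ratio, prefix]

def rank_queries(pairs):
    """Order pairs by disagreement (variance) of the measures, so the most
    ambiguous pairs are shown to the user first."""
    scored = [(pvariance([m(a, b) for m in MEASURES]), (a, b)) for a, b in pairs]
    return [pair for _, pair in sorted(scored, reverse=True)]

pairs = [("J. Smith", "John Smith"), ("IBM", "I.B.M."), ("apple", "orange")]
for a, b in rank_queries(pairs):
    print(a, "|", b)
```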

    Data Fusion in Internet of Things

    This dissertation reviews Internet of Things concepts and implementations, state-of-the-art technology with practical examples, as well as data fusion methods applied to different problems. The purpose of this study is to review different data fusion methods and develop a system that provides recognition of human activity, which can be applied in day-care homes and in hospitals to monitor patients. The system’s objective is to study human activity recognition based on data recovered by sensors such as accelerometers and gyroscopes. To transform these data into useful information and practical results for monitoring patients with accuracy and high performance, two different neural networks were implemented. To conclude, the results from the two neural networks are compared with each other and with systems from other authors. It is hoped this study will inform other authors and developers about the performance of neural networks when managing human activity recognition systems.
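    Since the abstract does not name the two architectures, the following is a purely illustrative sketch of one plausible network for this task: a small 1-D convolutional classifier over windowed accelerometer and gyroscope channels in TensorFlow/Keras. The window size, channel count, and number of activity classes are assumptions.

```python
# Hypothetical 1-D CNN for human activity recognition from windowed
# accelerometer/gyroscope streams; all shapes and sizes are assumptions.
import tensorflow as tf

WINDOW, CHANNELS, N_ACTIVITIES = 128, 6, 5  # e.g. 3-axis accel + 3-axis gyro

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_ACTIVITIES, activation="softmax"),  # one score per activity
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```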

    Overflow control in sewer networks using different modeling techniques and the Internet of Things

    Increased urbanization and extreme rainfall events are causing more frequent instances of sewer overflow, leading to the pollution of water resources and negative environmental, health, and fiscal impacts. At the same time, the treatment capacity of wastewater treatment plants is seriously affected. The main aim of this Ph.D. thesis is to use the Internet of Things and various modeling techniques to investigate the use of real-time control on existing sewer systems to mitigate overflow. The role of the Internet of Things is to provide continuous monitoring and real-time control of sewer systems. Data collected by the Internet of Things are also useful for model development and calibration. Models are useful for various purposes in real-time control, and they can be distinguished as those suitable for simulation and those suitable for prediction. Models suitable for simulation, which describe the important phenomena of a system in a deterministic way, are useful for developing and analyzing different control strategies. Meanwhile, models suitable for prediction are usually employed to predict future system states. They use measurement information about the system and must have a high computational speed. To demonstrate how real-time control can be used to manage sewer systems, a case study was conducted for this thesis in Drammen, Norway. In this study, a hydraulic model was used as a model suitable for simulation to test the feasibility of different control strategies. Considering the recent advances in artificial intelligence and the large amount of data collected through the Internet of Things, the study also explored the possibility of using artificial intelligence as a model suitable for prediction. A summary of the results of this work is presented through five papers. Paper I demonstrates that one mainstream artificial intelligence technique, long short-term memory, can precisely predict the time series data from the Internet of Things. Indeed, the Internet of Things and long short-term memory can be powerful tools for sewer system managers or engineers, who can take advantage of real-time data and predictions to improve decision-making. In Paper II, a hydraulic model and artificial intelligence are used to investigate an optimal in-line storage control strategy that uses the temporal storage volumes in pipes to reduce overflow. Simulation results indicate that during heavy rainfall events, the response behavior of the sewer system differs with respect to location. Overflows at a wastewater treatment plant under different control scenarios were simulated and compared. The results from the hydraulic model show that overflows were reduced dramatically through the intentional control of pipes with in-line storage capacity. To determine available in-line storage capacity, recurrent neural networks were employed to predict the upcoming flow into the pipes that were to be controlled. Paper III and Paper IV describe a novel inter-catchment wastewater transfer solution. The inter-catchment wastewater transfer method aims at redistributing spatially mismatched sewer flows by transferring wastewater from a wastewater treatment plant to its neighboring catchment. In Paper III, the hydraulic behaviors of the sewer system under different control scenarios are assessed using the hydraulic model. Based on the simulations, inter-catchment wastewater transfer could efficiently reduce total overflow from a sewer system and wastewater treatment plant. Artificial intelligence was used to predict inflow to the wastewater treatment plant to improve inter-catchment wastewater transfer functioning. The results from Paper IV indicate that inter-catchment wastewater transfer might result in an extra burden for a pump station. To enhance the operation of the pump station, long short-term memory was employed to provide multi-step-ahead water level predictions. Paper V proposes a DeepCSO model based on large and high-resolution sensor data and multi-task learning techniques. Experiments demonstrated that the multi-task approach is generally better than single-task approaches. Furthermore, the gated recurrent unit and long short-term memory-based multi-task learning models are especially suitable for capturing the temporal and spatial evolution of combined sewer overflow events and are superior to other methods. The DeepCSO model could help guide the real-time operation of sewer systems at a citywide level.
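    A minimal sketch of the multi-step-ahead prediction setup used in Papers I and IV might look like the following, with an LSTM mapping a window of past sensor readings directly to several future values. The lookback length, horizon, layer size, and the synthetic stand-in series are illustrative assumptions.

```python
# Sketch of direct multi-step-ahead time-series prediction with an LSTM;
# lookback, horizon, and layer sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf

LOOKBACK, HORIZON = 48, 6  # hypothetical: 48 past readings in, 6 future values out

def make_windows(series, lookback=LOOKBACK, horizon=HORIZON):
    """Slice a 1-D sensor series into (input window, future target) pairs."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.array(X)[..., None], np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LOOKBACK, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(HORIZON),  # one output per step ahead
])
model.compile(optimizer="adam", loss="mse")

levels = np.sin(np.linspace(0, 60, 600))  # synthetic stand-in for measured water levels
X, y = make_windows(levels)
model.fit(X, y, epochs=2, verbose=0)
```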

    Exploring the Landscape of Ubiquitous In-home Health Monitoring: A Comprehensive Survey

    Ubiquitous in-home health monitoring systems have become popular in recent years due to the rise of digital health technologies and the growing demand for remote health monitoring. These systems enable individuals to increase their independence by allowing them to monitor their health from home and by giving them more control over their well-being. In this study, we perform a comprehensive survey of this topic by reviewing a large body of literature in the area. We investigate these systems from various aspects, namely sensing technologies, communication technologies, intelligent and computing systems, and application areas. Specifically, we provide an overview of in-home health monitoring systems and identify their main components. We then present each component and discuss its role within in-home health monitoring systems. In addition, we provide an overview of the practical use of ubiquitous technologies in the home for health monitoring. Finally, we identify the main challenges and limitations based on the existing literature and provide eight recommendations for potential future research directions toward the development of in-home health monitoring systems. We conclude that despite extensive research on the various components needed for effective in-home health monitoring systems, their development still requires further investigation.

    A review of deep learning applications for the next generation of cognitive networks

    Intelligence capabilities will be the cornerstone in the development of next-generation cognitive networks. These capabilities allow them to observe network conditions, learn from them, and then, using the prior knowledge gained, respond to their operating environment to optimize network performance. This study aims to offer an overview of the current state of the art related to the use of deep learning in applications for intelligent cognitive networks, which can serve as a reference for future initiatives in this field. To this end, a systematic literature review was carried out across three databases, and eligible articles that focused on using deep learning to solve challenges presented by current cognitive networks were selected. As a result, 14 articles were analyzed. The results showed that applying algorithms based on deep learning to optimize cognitive data networks has been approached from different perspectives in recent years, and in an experimental way, to test its technological feasibility. In addition, its implications for solving fundamental challenges in current wireless networks are discussed.

    Towards Misleading Connection Mining

    This study introduces a new Natural Language Generation (NLG) task: Unit Claim Identification. The task aims to extract every piece of verifiable information from a headline. Unit claim identification has applications in other domains, such as fact-checking, where identifying each piece of verifiable information in a check-worthy statement can lead to an effective fact-check. Moreover, extracting the unit claims from headlines can help identify a misleading news article by mapping evidence from its contents. To address the unit claim identification problem, we outlined a set of guidelines for data annotation, arranged in-house training for the annotators, and obtained a small dataset. We explored two potential approaches, 1) a rule-based approach and 2) a deep learning-based approach, and compared their performances. Although the performance of the deep learning-based approach was not very effective due to the small number of training instances, the rule-based approach showed a promising result in terms of precision (65.85%).
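    To make the rule-based direction concrete, here is a deliberately simple, hypothetical sketch: it splits a headline into clauses and keeps those containing checkable content such as numbers or capitalized entities. The splitting and filtering rules are invented for illustration and do not reproduce the paper's annotation guidelines or actual system.

```python
# Hypothetical rule-based sketch for extracting candidate unit claims
# from a headline; the rules are illustrative, not the paper's.
import re

SPLIT = re.compile(r",|;|\band\b|\bwhile\b|\bas\b")   # crude clause boundaries
VERIFIABLE = re.compile(r"\d|%|\b[A-Z][a-z]+\b")      # numbers or capitalized entities

def unit_claims(headline):
    """Split a headline into clauses and keep those with checkable content."""
    clauses = [c.strip() for c in SPLIT.split(headline)]
    return [c for c in clauses if c and VERIFIABLE.search(c)]

print(unit_claims("Unemployment falls to 3.5% and Congress passes new budget"))
# -> ['Unemployment falls to 3.5%', 'Congress passes new budget']
```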

    A Lite Distributed Semantic Communication System for Internet of Things

    The rapid development of deep learning (DL) and the widespread applications of the Internet of Things (IoT) have made devices smarter than before and enabled them to perform more intelligent tasks. However, it is challenging for any IoT device to train and run DL models independently due to its limited computing capability. In this paper, we consider an IoT network where the cloud/edge platform performs the DL-based semantic communication (DeepSC) model training and updating, while IoT devices perform data collection and transmission based on the trained model. To make it affordable for IoT devices, we propose a lite distributed semantic communication system based on DL, named L-DeepSC, for text transmission with low complexity, where the data transmission from the IoT devices to the cloud/edge works at the semantic level to improve transmission efficiency. Particularly, by pruning the model redundancy and lowering the weight resolution, L-DeepSC becomes affordable for IoT devices, and the bandwidth required for model weight transmission between IoT devices and the cloud/edge is reduced significantly. By analyzing the effects of fading channels on forward-propagation and back-propagation during the training of L-DeepSC, we develop a channel state information (CSI) aided training process to decrease the effects of fading channels on transmission. Meanwhile, we tailor the semantic constellation to make it implementable on capacity-limited IoT devices. Simulations demonstrate that the proposed L-DeepSC achieves competitive performance compared with traditional methods, especially in the low signal-to-noise ratio (SNR) region. In particular, it can reach a compression ratio as large as 40x without performance degradation.
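    The two compression steps named above, pruning model redundancy and lowering weight resolution, can be sketched generically in NumPy as below; the 90% sparsity target and 8-bit width are illustrative assumptions, not the paper's settings.

```python
# Generic NumPy sketch of magnitude pruning and uniform weight quantization;
# sparsity and bit width are illustrative assumptions.
import numpy as np

def prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(weights, bits=8):
    """Symmetric uniform quantization onto signed levels; zero stays exactly zero."""
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
w_lite = quantize(prune(w))
print("nonzero fraction after compression:", np.count_nonzero(w_lite) / w_lite.size)
```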