538 research outputs found

    Abnormal traffic detection system in SDN based on deep learning hybrid models

    Full text link
    Software-defined networking (SDN) provides technical support for network construction in smart cities. However, the openness of SDN also exposes it to more network attacks. Traditional abnormal traffic detection methods rely on complex algorithms and struggle to detect network anomalies promptly, so they cannot meet the demand for anomaly detection in the SDN environment. Therefore, we propose an abnormal traffic detection system based on a deep learning hybrid model. The system adopts a hierarchical detection technique: it first performs rough detection of abnormal traffic based on port information, then uses wavelet transform and deep learning techniques for fine detection of all traffic data flowing through suspicious switches. The experimental results show that the proposed port-information-based detection method can quickly localize the approximate source of abnormal traffic, and that the accuracy, precision, and recall of the fine detection are significantly improved compared with traditional methods of abnormal traffic detection in SDN.
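The two-stage idea in this abstract can be sketched in a few lines: a cheap coarse filter over per-port statistics, followed by wavelet features on the traffic of flagged switches. This is a minimal illustration, not the paper's system; the thresholds, port names, and the use of a hand-rolled Haar transform (instead of the paper's deep learning models) are all assumptions.

```python
# Hypothetical sketch of hierarchical abnormal-traffic detection:
# stage 1 flags suspicious ports cheaply, stage 2 extracts wavelet
# features for a finer classifier (not shown).

def coarse_detect(port_packet_rates, threshold=1000.0):
    """Stage 1: flag switch ports whose packet rate exceeds a threshold."""
    return [port for port, rate in port_packet_rates.items() if rate > threshold]

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy_features(traffic_series, levels=3):
    """Stage 2 features: detail-band energies of a flagged switch's traffic
    series.  Bursty abnormal traffic concentrates energy in detail bands."""
    features = []
    approx = list(traffic_series)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append(sum(d * d for d in detail))
    return features
```

A smooth series yields zero detail energy at every level, while an oscillating one concentrates energy in the first detail band, which is the kind of separation a downstream classifier can exploit.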

    Call Center Capacity Planning

    Get PDF

    Tools for efficient Deep Learning

    Get PDF
    In the era of Deep Learning (DL), there is a fast-growing demand for building and deploying Deep Neural Networks (DNNs) on various platforms. This thesis proposes five tools to address the challenges of designing DNNs that are efficient in time, resources, and power consumption. We first present Aegis and SPGC to address the challenges of improving the memory efficiency of DL training and inference. Aegis makes mixed-precision training (MPT) more stable through layer-wise gradient scaling. Empirical experiments show that Aegis can improve MPT accuracy by up to 4%. SPGC focuses on structured pruning: replacing standard convolution with group convolution (GConv) to avoid irregular sparsity. SPGC formulates GConv pruning as a channel permutation problem and proposes a novel heuristic polynomial-time algorithm. Common DNNs pruned by SPGC have up to 1% higher accuracy than prior work. This thesis also addresses the challenges lying in the gap between DNN descriptions and executables, with Polygeist for software and POLSCA for hardware. Many novel techniques, e.g. statement splitting and memory partitioning, are explored and used to expand polyhedral optimisation. Polygeist speeds up sequential and parallel software execution by 2.53 and 9.47 times, respectively, on Polybench/C. POLSCA achieves a 1.5 times speedup over hardware designs directly generated from high-level synthesis on Polybench/C. Moreover, this thesis presents Deacon, a framework that generates FPGA-based DNN accelerators with streaming architectures and advanced pipelining techniques to address the challenges of heterogeneous convolution and residual connections. Deacon provides fine-grained pipelining, graph-level optimisation, and heuristic exploration by graph colouring. Compared with prior designs, Deacon improves resource/power consumption efficiency by 1.2x/3.5x for MobileNets and 1.0x/2.8x for SqueezeNets. All these tools are open source, and some have already gained public engagement. We believe they can make efficient deep learning applications easier to build and deploy.
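The channel-permutation view of GConv pruning mentioned above can be illustrated with a toy greedy assignment: input channels are permuted into groups so that the block-diagonal weights a group convolution keeps carry as much importance as possible. This is a simplified stand-in for SPGC's heuristic, under assumed round-robin output-channel grouping; all names here are hypothetical.

```python
# Toy illustration of GConv pruning as channel permutation (a greedy
# simplification, not SPGC's actual algorithm).

def group_inputs_greedy(importance, n_groups):
    """importance[o][i]: importance of the weight from input channel i to
    output channel o.  Output channels are assumed pre-assigned to groups
    round-robin; each input channel is greedily placed in the group whose
    outputs it serves with the largest total importance, subject to equal
    group sizes.  Returns a group index per input channel."""
    n_out = len(importance)
    n_in = len(importance[0])
    cap = n_in // n_groups
    out_group = [o % n_groups for o in range(n_out)]
    # Total importance of input channel i towards each group's outputs.
    score = [[sum(importance[o][i] for o in range(n_out) if out_group[o] == g)
              for g in range(n_groups)] for i in range(n_in)]
    assign = [-1] * n_in
    load = [0] * n_groups
    # Place channels in order of their best achievable score (greedy).
    order = sorted(range(n_in), key=lambda i: -max(score[i]))
    for i in order:
        for g in sorted(range(n_groups), key=lambda g: -score[i][g]):
            if load[g] < cap:
                assign[i] = g
                load[g] += 1
                break
    return assign
```

On an importance matrix with a clear two-block structure, the greedy pass recovers the block assignment, which is the permutation a group convolution would then keep as its dense blocks.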

    Teletraffic engineering and network planning

    Get PDF

    Neuromorphic deep convolutional neural network learning systems for FPGA in real time

    Get PDF
    Deep learning algorithms have become one of the best approaches for pattern recognition in several fields, including computer vision, speech recognition, natural language processing, and audio recognition, among others. In computer vision, convolutional neural networks stand out due to their relatively simple supervised training and their efficiency at extracting features from a scene. Nowadays, several implementations of convolutional neural network accelerators manage to run these networks in real time. However, the number of operations and the power consumption of these implementations can be reduced using a different processing paradigm: neuromorphic engineering. Neuromorphic engineering studies the behavior of biological systems and the inner workings of human neural processing, with the purpose of designing analog, digital, or mixed-signal systems that solve problems inspired by how the human brain performs complex tasks, replicating the behavior and properties of biological neurons. Neuromorphic engineering tries to answer how our brain is capable of learning and performing complex tasks with high efficiency under the paradigm of spike-based computation. This thesis explores both frame-based and spike-based processing paradigms for the development of hardware architectures for visual pattern recognition based on convolutional neural networks. Two FPGA implementations of convolutional neural network accelerator architectures for frame-based processing, using OpenCL and SoC technologies, are presented, followed by a novel neuromorphic convolution processor for the spike-based processing paradigm, which implements the behaviour of the leaky integrate-and-fire neuron model. Furthermore, it reads the data in rows, making it able to perform multiple layers on the same chip. Finally, a novel FPGA implementation of the Hierarchy of Time Surfaces algorithm and a new memory model for spike-based systems are proposed.
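The leaky integrate-and-fire model named in this abstract is simple enough to state in a few lines: the membrane potential decays each step, integrates incoming current, and emits a spike with a reset when it crosses a threshold. The parameter values below are illustrative, not taken from the thesis, and this software sketch only mirrors the behaviour the hardware implements.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch with assumed
# leak and threshold parameters.

class LIFNeuron:
    def __init__(self, leak=0.9, threshold=1.0):
        self.leak = leak            # membrane potential decay per step
        self.threshold = threshold  # firing threshold
        self.v = 0.0                # membrane potential

    def step(self, input_current):
        """Integrate one input after leaking; emit 1 if the neuron spikes."""
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:
            self.v = 0.0            # reset after spike
            return 1
        return 0
```

Driving the neuron with a constant sub-threshold current shows the integrate-then-fire pattern: the potential builds over several steps before the first spike, after which it resets and the cycle repeats.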

    End-to-end anomaly detection in stream data

    Get PDF
    Nowadays, huge volumes of data are generated with increasing velocity through various systems, applications, and activities. This increases the demand for stream and time series analysis to react to changing conditions in real time, for enhanced efficiency and quality of service delivery as well as upgraded safety and security in private and public sectors. Despite its very rich history, time series anomaly detection is still one of the vital topics in machine learning research and is receiving increasing attention. Identifying hidden patterns and selecting an appropriate model that fits the observed data well and also carries over to unobserved data is not a trivial task. Due to the increasing diversity of data sources and associated stochastic processes, this pivotal data analysis topic is loaded with various challenges, like complex latent patterns, concept drift, and overfitting, that may mislead the model and cause a high false alarm rate. Handling these challenges leads advanced anomaly detection methods to develop sophisticated decision logic, which turns them into mysterious and inexplicable black boxes. Contrary to this trend, end-users expect transparency and verifiability to trust a model and the outcomes it produces. Also, pointing users to the most anomalous/malicious areas of a time series and its causal features could save them time, energy, and money. For these reasons, this thesis addresses the crucial challenges in an end-to-end pipeline of stream-based anomaly detection through the three essential phases of behavior prediction, inference, and interpretation. The first step is focused on devising a time series model that leads to high average accuracy as well as small error deviation. On this basis, we propose higher-quality anomaly detection and scoring techniques that utilize the related contexts to reclassify observations and post-prune unjustified events.
    Last but not least, we make the predictive process transparent and verifiable by providing meaningful reasoning behind its generated results, based on concepts understandable to a human. The provided insight can pinpoint the anomalous regions of a time series and explain why the current status of a system has been flagged as anomalous. Stream-based anomaly detection research is a principal area of innovation to support our economy, security, and even the safety and health of societies worldwide. We believe our proposed analysis techniques can contribute to building a situational awareness platform and open new perspectives in a variety of domains such as cybersecurity and health.
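The prediction-then-scoring shape of the pipeline described above can be illustrated with a deliberately plain baseline: score each new observation by its deviation from a rolling window of recent history. This rolling z-score is a stand-in I am supplying for illustration, not one of the thesis's models, and the window size and threshold are arbitrary assumptions.

```python
# Minimal streaming anomaly scorer: rolling z-score over a bounded
# window of recent observations (illustrative baseline only).

from collections import deque

class StreamScorer:
    def __init__(self, window=50, z_threshold=3.0):
        self.buf = deque(maxlen=window)  # recent history, oldest dropped
        self.z_threshold = z_threshold

    def score(self, x):
        """Return (anomaly_score, is_anomaly) for the next observation."""
        if len(self.buf) < 2:
            self.buf.append(x)
            return 0.0, False
        mean = sum(self.buf) / len(self.buf)
        var = sum((v - mean) ** 2 for v in self.buf) / (len(self.buf) - 1)
        std = var ** 0.5
        z = abs(x - mean) / std if std > 0 else 0.0
        self.buf.append(x)
        return z, z > self.z_threshold
```

Even this baseline exhibits the false-alarm problem the thesis targets: any benign but novel pattern scores high, which is exactly where context-aware reclassification and post-pruning of unjustified events come in.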

    Cyber Security and Critical Infrastructures 2nd Volume

    Get PDF
    The second volume of the book contains the manuscripts that were accepted for publication in the MDPI Special Topic "Cyber Security and Critical Infrastructure" after a rigorous peer-review process. Authors from academia, government, and industry contributed their innovative solutions, consistent with the interdisciplinary nature of cybersecurity. The book contains 16 articles: an editorial that explains the current challenges, innovative solutions, and real-world experiences involving critical infrastructure, and 15 original papers that present state-of-the-art innovative solutions to attacks on critical systems.

    Overflow control in sewer networks using different modeling techniques and the Internet of Things

    Get PDF
    Increased urbanization and extreme rainfall events are causing more frequent instances of sewer overflow, leading to the pollution of water resources and negative environmental, health, and fiscal impacts. At the same time, the treatment capacity of wastewater treatment plants is seriously affected. The main aim of this Ph.D. thesis is to use the Internet of Things and various modeling techniques to investigate the use of real-time control on existing sewer systems to mitigate overflow. The role of the Internet of Things is to provide continuous monitoring and real-time control of sewer systems. Data collected by the Internet of Things are also useful for model development and calibration. Models serve various purposes in real-time control, and they can be distinguished as those suitable for simulation and those suitable for prediction. Models suitable for simulation, which describe the important phenomena of a system in a deterministic way, are useful for developing and analyzing different control strategies. Meanwhile, models suitable for prediction are usually employed to predict future system states; they use measurement information about the system and must have a high computational speed. To demonstrate how real-time control can be used to manage sewer systems, a case study was conducted for this thesis in Drammen, Norway. In this study, a hydraulic model was used as a model suitable for simulation to test the feasibility of different control strategies. Considering the recent advances in artificial intelligence and the large amount of data collected through the Internet of Things, the study also explored the possibility of using artificial intelligence as a model suitable for prediction. A summary of the results of this work is presented through five papers. Paper I demonstrates that one mainstream artificial intelligence technique, long short-term memory, can precisely predict the time series data from the Internet of Things.
Indeed, the Internet of Things and long short-term memory can be powerful tools for sewer system managers or engineers, who can take advantage of real-time data and predictions to improve decision-making. In Paper II, a hydraulic model and artificial intelligence are used to investigate an optimal in-line storage control strategy that uses the temporary storage volumes in pipes to reduce overflow. Simulation results indicate that during heavy rainfall events, the response behavior of the sewer system differs with respect to location. Overflows at a wastewater treatment plant under different control scenarios were simulated and compared. The results from the hydraulic model show that overflows were reduced dramatically through the intentional control of pipes with in-line storage capacity. To determine the available in-line storage capacity, recurrent neural networks were employed to predict the upcoming inflow to the pipes that were to be controlled. Paper III and Paper IV describe a novel inter-catchment wastewater transfer solution. The inter-catchment wastewater transfer method aims at redistributing spatially mismatched sewer flows by transferring wastewater from a wastewater treatment plant to its neighboring catchment. In Paper III, the hydraulic behaviors of the sewer system under different control scenarios are assessed using the hydraulic model. Based on the simulations, inter-catchment wastewater transfer could efficiently reduce total overflow from a sewer system and wastewater treatment plant. Artificial intelligence was used to predict inflow to the wastewater treatment plant to improve inter-catchment wastewater transfer functioning. The results from Paper IV indicate that inter-catchment wastewater transfer might result in an extra burden for a pump station. To enhance the operation of the pump station, long short-term memory was employed to provide multi-step-ahead water level predictions.
    Paper V proposes a DeepCSO model based on large-scale, high-resolution sensor data and multi-task learning techniques. Experiments demonstrated that the multi-task approach is generally better than single-task approaches. Furthermore, the gated recurrent unit and long short-term memory-based multi-task learning models are especially suitable for capturing the temporal and spatial evolution of combined sewer overflow events and are superior to other methods. The DeepCSO model could help guide the real-time operation of sewer systems at a citywide level.
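The multi-step-ahead predictions that recur in this abstract (Papers I, IV, and V) are conventionally framed as supervised learning by sliding a window over the sensor series to form (history, future) pairs that a recurrent model such as an LSTM or GRU can be trained on. The framing below is a generic sketch of that setup, not the thesis's code, and the window lengths are illustrative.

```python
# Sliding-window framing for multi-step-ahead time-series prediction:
# each pair maps n_in past observations to the next n_out values.

def make_windows(series, n_in, n_out):
    """Split a series into (input window, multi-step target) pairs."""
    pairs = []
    for t in range(len(series) - n_in - n_out + 1):
        x = series[t : t + n_in]            # history fed to the model
        y = series[t + n_in : t + n_in + n_out]  # future values to predict
        pairs.append((x, y))
    return pairs
```

For example, a water-level series of five readings with a two-step history and two-step horizon yields two training pairs; in practice the same construction is applied per sensor, and a multi-task model shares parameters across sensors.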