4,746 research outputs found

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions about the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in recent years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have appeared recently, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
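
    As a hedged illustration of the kind of task this survey covers, the sketch below trains a classifier to predict whether a lightpath's quality of transmission is acceptable from a few adjustable system parameters. It is not from the paper: the chosen features, the synthetic labelling rule, and the use of scikit-learn are all assumptions.

        # Minimal sketch (not from the paper): classify lightpath quality from
        # adjustable parameters. Features and the labelling rule are assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 1000
        # Synthetic features: path length (km), symbol rate (GBd), QAM order
        X = np.column_stack([
            rng.uniform(50, 2000, n),       # path length
            rng.choice([32, 64], n),        # symbol rate
            rng.choice([2, 4, 16, 64], n),  # modulation (QAM order)
        ])
        # Hypothetical rule: long paths with dense modulation fail the QoT check
        y = (X[:, 0] * np.log2(X[:, 2]) < 4000).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print("QoT-classification accuracy:", clf.score(X_te, y_te))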

    Survey Paper Artificial and Computational Intelligence in the Internet of Things and Wireless Sensor Network

    In this modern age, the Internet of Things (IoT) and its derivative, the Wireless Sensor Network (WSN), have become among the most popular and important technological advancements. In IoT, things and services in the real world are digitalized, and their number continues to grow exponentially every year. This growth in the number of IoT devices has created a tremendous amount of data and new data services, such as big data systems. These new technologies can be harnessed to produce additional value for existing business models; they can also provide forecasting services and decision-making support using computational intelligence methods. In this survey paper, we review in detail research activities concerning the application of Computational Intelligence methods in IoT and WSN. To build a good understanding, we also present the various challenges and issues facing Computational Intelligence in IoT and WSN. Finally, we discuss future directions for Computational Intelligence applications in IoT and WSN, such as the Self-Organizing Network (dynamic network) concept.
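
    As a hedged illustration of the forecasting service mentioned above, the sketch below fits a least-squares autoregressive model to synthetic sensor readings and predicts the next value. The AR order, the data, and the use of NumPy are assumptions, not details from the survey.

        # Minimal sketch: least-squares AR(p) forecast of the next sensor reading.
        import numpy as np

        rng = np.random.default_rng(1)
        readings = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)

        p = 5  # AR order (assumed)
        # Lagged design matrix: predict x[t] from x[t-p .. t-1]
        X = np.column_stack([readings[i:len(readings) - p + i] for i in range(p)])
        y = readings[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)

        next_reading = readings[-p:] @ coef
        print("forecast of next reading:", next_reading)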

    A Review on Various Methods of Intrusion Detection System

    Intrusion detection is an essential business requirement as well as a dynamic area of research and development. Modern intrusion detection systems still suffer from limitations in time sensitivity. The main requirement is to develop a system capable of handling large volumes of network data to detect attacks more accurately and proactively. Research conducted on the KDDCUP99 dataset identified a distinct set of attributes for each of the four major attack types. Without reducing the number of features, detecting attack patterns within the data is more difficult for rule generation, forecasting, or classification. The goal of this research is to present a new method that compares the proportions of correctly and incorrectly categorized instances alongside the features chosen. Data mining is used to clean, classify, and examine large amounts of network data; since a large volume of network traffic requires processing, data mining techniques are a natural fit. Different data mining techniques such as clustering, classification, and association rules are proving useful for analyzing network traffic. This paper presents a survey of data mining techniques applied to intrusion detection systems for the effective identification of both known and unknown attack patterns, thereby helping users to develop secure information systems. Keywords: IDS, Data Mining, Machine Learning, Clustering, Classification. DOI: 10.7176/CEIS/11-1-02. Publication date: January 31st, 2020.
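
    The sketch below illustrates the workflow described above: reduce the feature set, train a classifier, and report the proportions of correctly and incorrectly categorized records. It is only a sketch; the synthetic data (with 41 features, mirroring KDDCUP99's attribute count) and scikit-learn's SelectKBest are assumptions, not the paper's exact method.

        # Minimal sketch: feature reduction + classification, reporting the
        # proportions of correctly and incorrectly categorized instances.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for KDDCUP99 (41 features, binary attack label)
        X, y = make_classification(n_samples=2000, n_features=41,
                                   n_informative=8, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)  # keep 10 features
        clf = RandomForestClassifier(random_state=0).fit(selector.transform(X_tr), y_tr)

        acc = clf.score(selector.transform(X_te), y_te)
        print(f"correctly categorized: {acc:.1%}, incorrectly: {1 - acc:.1%}")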

    Optimal Coverage in Wireless Sensor Network using Augmented Nature-Inspired Algorithm

    One of the difficult problems that must be carefully considered before any network configuration is achieving the best possible network coverage. Optimal network coverage reduces the amount of redundant information that is sensed, which in turn reduces the constrained energy consumption of battery-powered sensors. WSN sensors can sense, receive, and send data concurrently; alongside the energy limitation, accurate sensing and non-redundant data are crucial challenges for WSNs. All of this must be accomplished while maximizing coverage and minimizing waste of the constrained sensor battery lifespan. Augmented nature-inspired algorithms are showing promise as a solution to the crucial problems in Wireless Sensor Networks (WSNs), particularly those related to reduced sensor lifetime. In this research, we focus on augmented nature-inspired algorithms for providing the best coverage in WSNs. The cluster head is chosen using the Diversity-Driven Multi-Parent Evolutionary Algorithm; for data encryption, Improved Identity Based Encryption (IIBE) is used; and for centralized optimization and the reduction of coverage gaps, time-variant Particle Swarm Optimization (PSO) is used, as sketched below. The proposed model's metrics are examined and compared to various traditional algorithms. This model addresses reduced sensor lifetime and redundant information in WSNs and provides real and effective optimal coverage.
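
    The sketch below illustrates the time-variant PSO stage: particles encode candidate sensor positions, the inertia weight decays linearly over iterations (one common time-variant scheme; the paper's exact variant may differ), and fitness is the fraction of grid points covered. Field size, sensing radius, and coefficients are illustrative assumptions.

        # Minimal sketch: time-variant PSO maximizing grid coverage of sensors.
        import numpy as np

        rng = np.random.default_rng(2)
        n_sensors, radius, field = 10, 12.0, 100.0
        grid = np.stack(np.meshgrid(np.arange(0.0, field, 5.0),
                                    np.arange(0.0, field, 5.0)), -1).reshape(-1, 2)

        def coverage(flat_pos):
            # Fraction of grid points within sensing radius of some sensor
            sensors = flat_pos.reshape(n_sensors, 2)
            d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
            return (d.min(axis=1) <= radius).mean()

        n_particles, dim, iters = 20, n_sensors * 2, 100
        pos = rng.uniform(0, field, (n_particles, dim))
        vel = np.zeros((n_particles, dim))
        pbest, pbest_val = pos.copy(), np.array([coverage(p) for p in pos])
        gbest = pbest[pbest_val.argmax()].copy()

        for t in range(iters):
            w = 0.9 - 0.5 * t / iters  # inertia decays linearly (time-variant)
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, field)
            vals = np.array([coverage(p) for p in pos])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmax()].copy()

        print("best coverage fraction:", pbest_val.max())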

    Hybrid Swarm Intelligence Energy Efficient Clustered Routing Algorithm for Wireless Sensor Networks

    Currently, wireless sensor networks (WSNs) are used in many applications, including environment monitoring, disaster management, industrial automation, and medical electronics. Sensor nodes carry many limitations, such as low battery life, small memory space, and limited computing capability. To make a wireless sensor network more energy efficient, swarm intelligence techniques have been applied to resolve many optimization issues in WSNs. In many existing clustering techniques, an artificial bee colony (ABC) algorithm is utilized to collect information from the field periodically; in event-based applications, however, ant colony optimization (ACO) is a good solution for enhancing the network lifespan. In this paper, we combine both algorithms (i.e., ABC and ACO) and propose a new hybrid ABCACO algorithm to solve a Nondeterministic Polynomial (NP)-hard and finite problem of WSNs. The ABCACO algorithm is divided into three main parts: (i) selection of the optimal number of subregions and further subregion parts, (ii) cluster head selection using the ABC algorithm, and (iii) efficient data transmission using the ACO algorithm. We use a hierarchical clustering technique for data transmission: data is transmitted from member nodes to the subcluster heads and then from subcluster heads to the elected cluster heads based on a threshold value. Cluster heads use the ACO algorithm to discover the best route for data transmission to the base station (BS), as sketched below. The proposed approach is very useful in designing a framework for forest fire detection and monitoring. The simulation results show that the ABCACO algorithm enhances the stability period by 60% and improves the goodput by 31% against LEACH and WSNCABC, respectively.
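
    The sketch below illustrates the ACO routing stage in isolation: ants launched from each cluster head choose next hops by weighting pheromone against hop distance, and short routes to the BS are reinforced. The topology and parameters are illustrative assumptions, not the paper's simulation setup.

        # Minimal sketch: ACO route discovery from cluster heads to the BS.
        import numpy as np

        rng = np.random.default_rng(3)
        nodes = rng.uniform(0, 100, (8, 2))            # cluster-head positions
        bs = np.array([[50.0, 110.0]])                 # base station
        pts = np.vstack([nodes, bs])                   # index 8 is the BS
        dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) + np.eye(9)
        tau = np.ones((9, 9))                          # pheromone matrix
        alpha, beta, rho = 1.0, 2.0, 0.1

        best_route, best_len = None, np.inf
        for _ in range(50):                            # colony iterations
            tours = []
            for start in range(8):                     # one ant per cluster head
                route, cur, visited = [start], start, {start}
                while cur != 8:                        # walk until reaching the BS
                    cand = [j for j in range(9) if j not in visited]
                    w = np.array([tau[cur, j] ** alpha / dist[cur, j] ** beta
                                  for j in cand])
                    cur = cand[rng.choice(len(cand), p=w / w.sum())]
                    visited.add(cur)
                    route.append(cur)
                length = sum(dist[a, b] for a, b in zip(route, route[1:]))
                tours.append((route, length))
                if length < best_len:
                    best_route, best_len = route, length
            tau *= 1 - rho                             # pheromone evaporation
            for route, length in tours:                # deposit on traversed edges
                for a, b in zip(route, route[1:]):
                    tau[a, b] += 1.0 / length

        print("best route to BS:", best_route, "length:", round(best_len, 1))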

    Neural network and genetic algorithm techniques for energy efficient relay node placement in smart grid

    The smart grid (SG) is an intelligent combination of computer science and the electricity system whose main characteristics are measurement and real-time monitoring of utility and consumer behavior. An SG is made of three main parts: the Home Area Network (HAN), the Field Area Network (FAN), and the Wide Area Network (WAN). Several techniques are used for monitoring the SG, such as fiber optics, but these are very costly and difficult to maintain. One way to solve the monitoring problem is the use of a Wireless Sensor Network (WSN). WSNs are widely researched because of their easy deployment, low maintenance requirements, small hardware, and low costs. However, the SG is a harsh environment with high levels of magnetic field and background noise, and deploying a WSN in this area is challenging since the environment has a direct effect on WSN link quality. An optimal relay node placement, which has not yet been studied in a smart grid, can improve the link quality significantly. To solve the link quality problem and achieve optimum relay node placement, the network life-time must be calculated, because a longer life-time indicates better relay placement; to calculate this life-time, it is necessary to estimate the packet reception rate (PRR). In this research, to achieve optimal relay node placement, firstly, a mathematical formula to measure link quality of the network in a smart grid environment is proposed. Secondly, an algorithm based on a neural network to estimate the network life-time is developed. Thirdly, an algorithm based on a genetic algorithm for efficient positioning of relay nodes under different conditions to increase the network life-time is developed (a sketch follows below). Simulation results showed that the neural network's life-time prediction has 91% accuracy. In addition, there was an 85% improvement in life-time compared to binary integer linear programming and weighted binary integer linear programming. The research has shown that relay node placement based on the developed genetic algorithm increases the network life-time, addresses the link quality problem, and achieves optimum relay node placement.
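
    The sketch below illustrates the genetic-algorithm stage in isolation. The paper scores candidate placements with its trained neural network; here a simple proxy fitness (the negative of the worst sensor-to-relay distance, standing in for energy cost) is assumed, along with the field size and GA parameters.

        # Minimal sketch: GA evolving relay positions under a proxy fitness.
        import numpy as np

        rng = np.random.default_rng(4)
        sensors = rng.uniform(0, 100, (30, 2))
        n_relays, pop_size, gens = 4, 40, 200

        def fitness(chrom):
            # Proxy for NN-predicted life-time: shorten the worst link
            relays = chrom.reshape(n_relays, 2)
            d = np.linalg.norm(sensors[:, None] - relays[None, :], axis=2)
            return -d.min(axis=1).max()

        pop = rng.uniform(0, 100, (pop_size, n_relays * 2))
        for _ in range(gens):
            scores = np.array([fitness(c) for c in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep best half
            kids = []
            while len(kids) < pop_size - len(parents):
                a, b = parents[rng.choice(len(parents), 2, replace=False)]
                cut = rng.integers(1, a.size)                   # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                # Gaussian mutation on ~10% of genes
                child += rng.normal(0, 2.0, child.size) * (rng.random(child.size) < 0.1)
                kids.append(np.clip(child, 0, 100))
            pop = np.vstack([parents, np.array(kids)])

        best = pop[np.argmax([fitness(c) for c in pop])]
        print("relay positions:\n", best.reshape(n_relays, 2).round(1))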

    Analysis and evaluation of SafeDroid v2.0, a framework for detecting malicious Android applications

    Android smartphones have become a vital component of the daily routine of millions of people, running a plethora of applications available in the official and alternative marketplaces. Although there are many security mechanisms to scan and filter malicious applications, malware is still able to reach the devices of many end-users. In this paper, we introduce the SafeDroid v2.0 framework, a flexible, robust, and versatile open-source solution for statically analysing Android applications, based on machine learning techniques. The main goal of our work, besides the automated production of fully sufficient prediction and classification models in terms of maximum accuracy scores and minimum negative errors, is to offer an out-of-the-box framework that Android security researchers can employ to experiment efficiently and find effective solutions: the SafeDroid v2.0 framework makes it possible to test many different combinations of machine learning classifiers, with a high degree of freedom and flexibility in the choice of features to consider, such as dataset balance and dataset selection. The framework also provides a server for generating experiment reports, and an Android application for verifying the produced models in real-life scenarios. An extensive campaign of experiments is also presented to show how competitive solutions can be found efficiently: the results of our experiments confirm that SafeDroid v2.0 can achieve very good performance, even with highly unbalanced dataset inputs, and always with very limited overhead.
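
    The sketch below illustrates the experiment loop that SafeDroid v2.0 automates: several classifiers are cross-validated on a binary app-by-feature matrix and their scores compared. The synthetic features and labels are assumptions; the framework's actual feature extraction and report server are not shown.

        # Minimal sketch: compare classifiers on a binary feature matrix
        # (e.g., extracted permissions/API calls; synthetic here).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import BernoulliNB
        from sklearn.svm import LinearSVC
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(5)
        X = rng.integers(0, 2, (500, 100))           # app-by-feature matrix
        y = (X[:, :5].sum(axis=1) > 2).astype(int)   # hypothetical malware label

        for clf in (RandomForestClassifier(random_state=0), BernoulliNB(),
                    LinearSVC(dual=False), DecisionTreeClassifier(random_state=0)):
            scores = cross_val_score(clf, X, y, cv=5)
            print(f"{type(clf).__name__:24s} accuracy {scores.mean():.3f}")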