
    Data-driven design of intelligent wireless networks: an overview and tutorial

    Data science or "data-driven research" is a research approach that uses real-life data to gain insight into the behavior of systems. It enables the analysis of systems ranging from small and simple to large and complex, in order to assess whether they function according to the intended design and as observed in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas, such as large-scale social networks and advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware-specific influences. These interactions can lead to a difference between real-world and design-time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; and (v) provides the reader with the necessary datasets and scripts to work through the tutorial steps themselves.
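
    As a hedged sketch of step (iv), device-type identification from traffic features might look like the following; the feature set (packet-size and inter-arrival statistics), the synthetic data, and the random-forest classifier are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch of device-type identification from traffic traces. The
# feature set, the synthetic data and the RandomForest model are
# illustrative assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

def flow_features(pkt_sizes, pkt_times):
    """Summarise one device's raw trace into a fixed-length feature vector."""
    iat = np.diff(pkt_times)  # inter-arrival times
    return np.array([pkt_sizes.mean(), pkt_sizes.std(),
                     iat.mean(), iat.std(), float(len(pkt_sizes))])

# Toy stand-in for a matrix of stacked flow_features() vectors:
# rows = devices, columns = features, y = hypothetical device-type labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```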

    Survey on Various Aspects of Clustering in Wireless Sensor Networks Employing Classical, Optimization, and Machine Learning Techniques

    A wide range of academic scholars, engineers, and scientific and technological communities are interested in the energy utilization of Wireless Sensor Networks (WSNs). Extensive research is ongoing in areas such as scalability, coverage, energy efficiency, data communication, connectivity, load balancing, security, reliability, and network lifespan. Individual researchers are searching for affordable methods to improve solutions to existing problems through unique techniques, protocols, concepts, and algorithms in the desired domain. Review studies typically offer comprehensive, easy access to these problems and their solutions. Taking into account this motivating factor and the effect of clustering on reducing energy consumption, this article focuses on clustering techniques across various aspects of wireless sensor networks. The main contribution of this paper is to give a succinct overview of clustering.
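
    For illustration, one common energy-aware clustering heuristic groups nodes by position and elects the highest-residual-energy member of each cluster as its head; the sketch below uses k-means for the grouping and synthetic node data, and does not correspond to any single protocol surveyed in the paper.

```python
# Illustrative energy-aware clustering heuristic for a WSN: group nodes by
# position with k-means, then elect the highest-residual-energy member of
# each cluster as its head. Field size, energy values and k are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(50, 2))  # 50 nodes in a 100 m x 100 m field
energy = rng.uniform(0.2, 1.0, size=50)        # residual energy per node (J)

k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(positions)

heads = {}
for c in range(k):
    members = np.where(labels == c)[0]
    heads[c] = int(members[np.argmax(energy[members])])  # most-energetic member
print("cluster heads per cluster:", heads)
```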

    Data mining based cyber-attack detection

    Enhanced non-parametric sequence learning scheme for internet of things sensory data in cloud infrastructure

    The Internet of Things (IoT) Cloud is an emerging technology that enables machine-to-machine, human-to-machine and human-to-human interaction through the Internet. IoT sensor devices tend to generate sensory data known for their dynamic and heterogeneous nature, which makes such data difficult for the sensor devices themselves to manage given their limited computation power and storage space. However, Cloud Infrastructure as a Service (IaaS) compensates for the limitations of IoT devices by making its computation power and storage resources available for processing IoT sensory data. In IoT-Cloud IaaS, resource allocation is the process of distributing optimal resources to execute data request tasks that comprise data filtering operations. Recently, machine learning, non-heuristic, multi-objective, and hybrid algorithms have been applied for efficient resource allocation to execute IoT sensory data filtering request tasks in IoT-enabled Cloud IaaS. However, the filtering task is still prone to several challenges: global search entrapment of event and error outlier detection as the dimension of the dataset increases, the inability to recover missing data for effective redundant data elimination, and local search entrapment that leads to unbalanced workloads on the resources required for task execution. In this thesis, enhancements of the Non-Parametric Sequence Learning (NPSL), Perceptually Important Point (PIP) and Efficient Energy Resource Ranking-Virtual Machine Selection (ERVS) algorithms are proposed. The Non-Parametric Sequence-based Agglomerative Gaussian Mixture Model (NPSAGMM) technique was first utilized to improve the detection of event and error outliers in the global space as the dimension of the dataset increases. Then, the Perceptually Important Points K-means-enabled Cosine and Manhattan (PIP-KCM) technique was employed to recover missing data and thereby improve the elimination of duplicate sensed-data records. Finally, an Efficient Resource Balance Ranking-based Glowworm Swarm Optimization (ERBV-GSO) technique was used to resolve local search entrapment for near-optimal solutions and to reduce workload imbalance on the resources available for task execution in the IoT-Cloud IaaS platform. Experiments were carried out using the NetworkX simulator, and the results of the NPSAGMM, PIP-KCM and ERBV-GSO techniques were compared with the NPSL, PIP, ERVS and Resource Fragmentation Aware (RF-Aware) algorithms. The experimental results showed that the proposed NPSAGMM, PIP-KCM, and ERBV-GSO techniques yielded improvement rates of 3.602%/6.74% in precision, 9.724%/8.77% in recall, and 5.350%/4.42% in area under the curve for the detection of event and error outliers. Furthermore, the results indicated a 94.273% F1-score, a 0.143 reduction ratio, and a minimum 0.149% root mean squared error for redundant data elimination, as well as a minimum of 608 virtual machine migrations, 47.62% resource utilization and a 41.13% load-balancing degree for the allocation of the resources deployed to execute sensory data filtering tasks. Therefore, the proposed techniques have proven effective at improving the load-balanced allocation of resources for efficient outlier (event and error) detection and redundant-data elimination in the IoT-based Cloud IaaS infrastructure.
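
    The thesis's NPSAGMM algorithm is not reproduced here, but the general idea of Gaussian-mixture-based outlier flagging on sensory readings can be sketched as below; the component count, the synthetic readings, and the 2% likelihood threshold are all illustrative assumptions.

```python
# Generic Gaussian-mixture outlier flagging on synthetic sensory readings,
# illustrating only the idea behind the thesis's NPSAGMM step. The component
# count and the 2% likelihood threshold are assumptions, not the thesis's values.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
normal = rng.normal(25.0, 1.0, size=(500, 1))  # e.g. temperature readings
errors = rng.normal(60.0, 5.0, size=(10, 1))   # injected error outliers
readings = np.vstack([normal, errors])

gmm = GaussianMixture(n_components=2, random_state=2).fit(readings)
log_lik = gmm.score_samples(readings)          # per-sample log-likelihood
threshold = np.percentile(log_lik, 2)          # flag the least likely 2%
outliers = np.where(log_lik < threshold)[0]
print(f"flagged {len(outliers)} of {len(readings)} readings as outliers")
```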

    Evaluation of Classification Algorithms for Intrusion Detection System: A Review

    Intrusion detection is one of the most critical network security problems in the technology world. Machine learning techniques are being implemented to improve the Intrusion Detection System (IDS). In order to enhance the performance of IDS, different classification algorithms are applied to detect various types of attacks. Choosing a suitable classification algorithm for building an IDS is not an easy task; the best approach is to test the performance of the different classification algorithms. This paper presents the results of evaluating different classification algorithms for building an IDS model, in terms of confusion matrix, accuracy, recall, precision, F-score, specificity and sensitivity. Nevertheless, most researchers have focused on the confusion matrix and the accuracy metric as measurements of classification performance. The paper also provides a detailed comparison of the datasets, data preprocessing, number of features selected, feature selection techniques, classification algorithms, and evaluation performance of the algorithms described in the intrusion detection literature.
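
    As a concrete illustration of the metrics compared in such evaluations, the sketch below derives accuracy, precision, recall (sensitivity), F-score and specificity from a binary confusion matrix using scikit-learn; the labels are synthetic stand-ins for IDS output, not results from the paper.

```python
# Metrics discussed in the paper, derived from a binary confusion matrix with
# scikit-learn. The labels are synthetic stand-ins for IDS output, not the
# paper's results.
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # 1 = attack, 0 = normal traffic
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy           :", accuracy_score(y_true, y_pred))
print("precision          :", precision_score(y_true, y_pred))
print("recall/sensitivity :", recall_score(y_true, y_pred))  # sensitivity = recall
print("F-score            :", f1_score(y_true, y_pred))
print("specificity        :", tn / (tn + fp))                # true-negative rate
```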

    Resilient routing mechanism for wireless sensor networks with deep learning link reliability prediction

    Wireless sensor networks play an important role in Internet of Things systems and services but are vulnerable to poor communication-channel quality and network attacks. In this paper we propose resilient routing algorithms for wireless sensor networks. The main idea is to exploit link reliability along with other traditional routing metrics in routing-algorithm design. We first propose a novel deep-learning-based link prediction model that jointly exploits the Weisfeiler-Lehman kernel and a Dual Convolutional Neural Network (WL-DCNN) for lightweight subgraph extraction and labelling. It is leveraged to enhance the self-learning ability of mining topological features with strong generality. Experimental results demonstrate that WL-DCNN outperforms all nine studied baseline schemes over six open complex-network datasets, improving AUC (Area Under the receiver operating characteristic Curve) by 16% on average. Furthermore, we apply the WL-DCNN model to the design of resilient routing for wireless sensor networks, which can adaptively capture topological features to determine the reliability of target links, especially when the routing table suffers attacks with varying degrees of damage to the local link community. It is observed that, compared with other classical routing baselines, the proposed routing algorithm with the link-reliability prediction module can effectively improve the resilience of sensor networks while preserving high energy efficiency.
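
    As a rough illustration of how a predicted link reliability could feed a routing decision, the sketch below assigns each edge a cost of -log(p) so that a shortest path maximises the product of link reliabilities; the random reliability scores stand in for the paper's WL-DCNN predictions, and the topology is arbitrary.

```python
# Folding a predicted link reliability into route selection: an edge cost of
# -log(p) makes a shortest path maximise the product of link reliabilities.
# The random reliability scores stand in for WL-DCNN predictions, and the
# topology is an arbitrary connected small-world graph.
import math
import random
import networkx as nx

random.seed(3)
G = nx.connected_watts_strogatz_graph(12, 4, 0.3, seed=3)
for u, v in G.edges:
    p = random.uniform(0.5, 0.99)      # stand-in for a predicted reliability
    G[u][v]["reliability"] = p
    G[u][v]["cost"] = -math.log(p)     # additive surrogate cost

path = nx.shortest_path(G, source=0, target=11, weight="cost")
path_rel = math.prod(G[u][v]["reliability"] for u, v in zip(path, path[1:]))
print("most reliable path:", path, f"(end-to-end reliability {path_rel:.3f})")
```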

    Adaptive Boltzmann Medical Dataset Machine Learning

    The Restricted Boltzmann Machine (RBM) is a stochastic, energy-based unsupervised neural network model and a key pre-training component in deep learning. The structure of an RBM comprises weights and coefficients (biases) for its neurons, and a better network structure allows the data to be examined more thoroughly. We studied the variance of parameters during learning on demand to address this problem; to determine why the RBM's energy function fluctuates, we examine its parameter variance. A neuron generation and annihilation algorithm is combined with an adaptive RBM learning method to determine the optimal number of hidden neurons for attribute imputation during training. When the energy function has not converged and the parameter variance is high, a hidden neuron is generated. If a generated neuron does not contribute to learning, the hidden neuron is annihilated. In this study, the benchmark PIMA datasets were tested.
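
    The generation/annihilation rule described above can be caricatured in a few lines: grow a hidden unit while the gradient variance stays high (a proxy for a non-converged energy function), and remove units whose weights barely contribute. Everything below (the stubbed gradient, the thresholds, and the weight-norm annihilation test) is an assumption for illustration, not the paper's algorithm.

```python
# Caricature of the generation/annihilation rule: grow a hidden unit while
# gradient variance stays high (a proxy for a non-converged energy function),
# and drop units whose weights barely contribute. The stubbed gradient, the
# thresholds and the weight-norm test are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(4)
n_visible = 8                                # PIMA has 8 attributes
W = rng.normal(0, 0.1, size=(n_visible, 4))  # start with 4 hidden neurons

GEN_VAR, KILL_NORM, MAX_HIDDEN = 0.002, 1e-3, 16

for step in range(200):
    scale = 0.05 * (1.0 - step / 200)          # gradients shrink as learning settles
    grad = rng.normal(0, scale, size=W.shape)  # stand-in for a CD-1 gradient
    W += 0.01 * grad

    if grad.var() > GEN_VAR and W.shape[1] < MAX_HIDDEN:  # high variance: generate
        W = np.hstack([W, rng.normal(0, 0.1, size=(n_visible, 1))])

    norms = np.linalg.norm(W, axis=0)          # annihilate near-inert hidden units
    W = W[:, norms > KILL_NORM]

print("final number of hidden neurons:", W.shape[1])
```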

    A Review on Cybersecurity based on Machine Learning and Deep Learning Algorithms

    Machine learning (ML) and deep learning (DL) techniques have been widely applied to areas such as image processing and speech recognition. Likewise, ML and DL play a critical role in detecting and preventing attacks in the field of cybersecurity. In this review, we focus on recent ML and DL algorithms that have been proposed for cybersecurity, network intrusion detection, and malware detection. We also discuss key elements of cybersecurity, the main principles of information security, and the most common methods used to threaten cybersecurity. Finally, concluding remarks are given, including possible research topics that can be taken into consideration to enhance various cybersecurity applications using DL and ML algorithms.