44 research outputs found

    Identifying Malicious Nodes in Multihop IoT Networks using Dual Link Technologies and Unsupervised Learning

    Packet manipulation attacks are among the most challenging threats in cyber-physical systems (CPSs) and the Internet of Things (IoT): information packets are corrupted during transmission by compromised devices. These attacks consume network resources, delay decision making, and could trigger wrong actions that disrupt an overall system's operation. Such malicious attacks, as well as unintentional faults, are difficult to locate and identify in a large-scale mesh-like multihop network, which is the typical topology suggested by most IoT standards. In this paper, first, we propose a novel network architecture that utilizes powerful nodes supporting two distinct communication link technologies to identify malicious networked devices (which typically use a single link technology). Such powerful nodes equipped with dual-link technologies can reveal hidden information within meshed connections that is otherwise hard to detect. By applying machine intelligence at the dual-link nodes, malicious networked devices in an IoT network can be accurately identified. Second, we propose two techniques based on unsupervised machine learning, namely hard detection and soft detection, that enable dual-link nodes to identify malicious networked devices. Our techniques exploit network diversity as well as the statistical information computed by dual-link nodes to assess the trustworthiness of resource-constrained devices. Simulation results show that the detection accuracy of our algorithms is superior to the conventional watchdog scheme, in which nodes passively listen to neighboring transmissions to detect corrupted packets. The results also show that as the density of dual-link nodes increases, the detection accuracy improves and the false alarm rate decreases.
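    The hard-detection idea can be illustrated with a toy sketch (the function names, data layout, and fixed threshold below are hypothetical, not the paper's actual detectors): a dual-link node compares packet copies received over its two link technologies, accumulates per-neighbor corruption rates, and flags nodes whose rate crosses a decision boundary.

```python
def corruption_rates(observations):
    """Fraction of each neighbor's forwarded packets seen corrupted.

    `observations` maps a node ID to a list of 0/1 flags, where 1 means
    the copy received over one link technology disagreed with the copy
    received over the other (i.e., the packet was tampered with en route).
    """
    return {node: sum(flags) / len(flags) for node, flags in observations.items()}


def hard_detect(rates, threshold=0.5):
    """'Hard' decision rule: flag any node whose corruption rate exceeds
    a fixed threshold. (Illustrative only; the paper derives its decision
    statistics via unsupervised learning rather than a hand-set cutoff.)
    """
    return {node for node, rate in rates.items() if rate > threshold}


# Simulated observations at one dual-link node: node C tampers with
# most packets it relays, while A and B corrupt almost none.
obs = {
    "A": [0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
    "B": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
    "C": [1, 1, 0, 1, 1, 1, 1, 0, 1, 1],
}
print(hard_detect(corruption_rates(obs)))  # {'C'}
```

    The soft-detection variant described in the paper would instead propagate the rates themselves as continuous trust scores rather than a binary verdict.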

    IoT Crawler with Behavior Analyzer at Fog layer for Detecting Malicious Nodes

    The limitations in terms of power and processing make IoT (Internet of Things) nodes easy prey for malicious attacks, thus threatening business and industry. Detecting malicious nodes before they trigger an attack is highly recommended. This paper introduces a special-purpose IoT crawler that works as an inspector to catch malicious nodes. The crawler is deployed in the fog layer to inherit its capabilities and to act as an intermediate connection between the things and the cloud computing nodes. The crawler collects data streams from IoT nodes according to a priority criterion. A behavior analyzer with a machine learning core detects malicious nodes based on node behavior extracted from the crawler's collected data streams. The performance of the behavior analyzer was investigated using three machine learning algorithms: AdaBoost, Random Forest, and Extra Trees. The behavior analyzer produced better testing accuracy on the tested data with Extra Trees than with AdaBoost or Random Forest, achieving 98.3% testing accuracy with Extra Trees.
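    The classification step could be sketched with scikit-learn's ExtraTreesClassifier; the behavioral features below (packet rate, error rate, mean inter-arrival time) and the synthetic node profiles are assumptions for illustration, not the features the paper's crawler actually extracts.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-node behavioral features: [packet rate, error rate,
# mean inter-arrival time]. Benign nodes cluster around one profile,
# malicious nodes around another.
benign = rng.normal(loc=[10.0, 0.02, 1.0], scale=0.5, size=(200, 3))
malicious = rng.normal(loc=[40.0, 0.30, 0.1], scale=0.5, size=(200, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Extra Trees: an ensemble of extremely randomized decision trees.
clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(f"testing accuracy: {clf.score(X_te, y_te):.3f}")
```

    On such cleanly separated synthetic profiles the classifier is near-perfect; the paper's 98.3% figure comes from real crawler-collected streams, where the classes overlap more.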

    Relaying in the Internet of Things (IoT): A Survey

    The deployment of relays between Internet of Things (IoT) end devices and gateways can improve link quality. In cellular-based IoT, relays have the potential to reduce base station overload. The energy expended in single-hop long-range communication can be reduced if relays listen to transmissions of end devices and forward these observations to gateways. However, incorporating relays into IoT networks faces some challenges. IoT end devices are designed primarily for uplink communication of small-sized observations toward the network; hence, opportunistically using end devices as relays requires a redesign of the medium access control (MAC) layer protocol of such end devices and possibly the addition of new communication interfaces. Additionally, the wake-up time of IoT end devices needs to be synchronized with that of the relays. For cellular-based IoT, the possibility of using infrastructure relays exists, and noncellular IoT networks can leverage the presence of mobile devices for relaying, for example, in remote healthcare. However, the latter presents problems of incentivizing relay participation and managing the mobility of relays. Furthermore, although relays can increase the lifetime of IoT networks, deploying relays implies the need for additional batteries to power them. This can erode the energy efficiency gain that relays offer. Therefore, designing relay-assisted IoT networks that provide acceptable trade-offs is key, and this goes beyond adding an extra transmit RF chain to a relay-enabled IoT end device. There has been increasing research interest in IoT relaying, as demonstrated in the available literature. Works that consider these issues are surveyed in this paper to provide insight into the state of the art, offer design guidance for network designers, and motivate future research directions.

    Recent Advances in Cellular D2D Communications

    Device-to-device (D2D) communications have attracted a great deal of attention from researchers in recent years. D2D is a promising technique for offloading local traffic from cellular base stations by allowing devices in physical proximity to communicate directly with each other. Furthermore, through relaying, D2D is also a promising approach to enhancing service coverage at cell edges or in black spots. However, there are many challenges to realizing the full benefits of D2D. For one, minimizing the interference between legacy cellular and D2D users operating in underlay mode is still an active research issue. With 5th generation (5G) communication systems expected to be the main data carrier for the Internet-of-Things (IoT) paradigm, the potential role of D2D and its scalability to support massive IoT devices and their machine-centric (as opposed to human-centric) communications need to be investigated. New challenges have also arisen from new enabling technologies for D2D communications, such as non-orthogonal multiple access (NOMA) and blockchain technologies, which call for new solutions to be proposed. This edited book presents a collection of ten chapters, including one review and nine original research works, addressing many of the aforementioned challenges and beyond.

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. (Accepted for publication in IEEE Communications Surveys and Tutorials.)

    Providing Secure and Reliable Communication for Next Generation Networks in Smart Cities

    Finding a framework that provides continuous, reliable, secure and sustainable diversified smart city services proves to be challenging in today's traditional cloud-centralized solutions. This article envisions a Mobile Edge Computing (MEC) solution that enables node collaboration among IoT devices to provide reliable and secure communication between devices and the fog layer on one hand, and the fog layer and the cloud layer on the other hand. The solution assumes that collaboration is determined based on nodes' resource capabilities and cooperation willingness. Resource capabilities are defined using ontologies, while willingness to cooperate is described using three-factor node criteria, namely: nature, attitude and awareness. A learning method is adopted to identify candidates for the service composition and delivery process. We show that the system does not require extensive training for services to be delivered correctly and accurately. The proposed solution reduces the amount of unnecessary traffic flow to and from the edge by relying on node-to-node communication protocols. Communication to the fog and cloud layers is used for more data- and compute-intensive applications, hence ensuring secure communication protocols to the cloud. Preliminary simulations are conducted to showcase the effectiveness of adopting the proposed framework to achieve smart city sustainability through service reliability and security. Results show that the proposed solution outperforms other semi-cooperative and non-cooperative service composition techniques in terms of efficient service delivery and composition delay, service hit ratio, and suspicious node identification.

    Detecting Distributed Denial of Service attacks using Recurrent Neural Network

    As the internet grows and diversifies, attackers use various attacks to crash servers and take down specific sites. Distributed denial of service (DDoS) attacks harness multiple computers and multiple Internet connections to target victims. The aim of this paper is to identify the best algorithm among the selected algorithms (i.e., gradient descent with momentum, scaled conjugate gradient, and variable learning rate gradient descent). In this study, a recurrent neural network was trained to check the accuracy of DDoS attack detection. The intention of this training was to allow the system to learn and classify input traffic into the correct category. The proposed system's training was composed of three separate algorithms utilizing recurrent neural networks. The MATLAB 2018a simulator was used for training. Moreover, the Knowledge Discovery Dataset (KDD) was cleaned during design to include the values of protocols, attacks, and flags. The neural network model was subsequently developed, and the KDD was trained using an Artificial Neural Network (ANN). The results of DDoS attack detection were analyzed using MATLAB's ANN toolbox. The variable learning rate gradient descent algorithm achieved 99.9% accuracy with the shortest training time of 2 minutes and 29 seconds, giving better results than the gradient descent with momentum and scaled conjugate gradient algorithms. In the state of the art, different algorithms have been trained on different neural networks and different KDD datasets using selective DDoS attacks, but in this research a recurrent neural network was used for all three algorithms. A total of 22 attack types was used to evaluate DDoS detection accuracy.
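    The preprocessing step described, cleaning KDD records and numerically encoding the protocol and flag fields before feeding them to the network, might look like the following sketch. The field values shown are a small subset of the real KDD'99 categories, and the ordinal encoding scheme is an assumption for illustration, not necessarily what the authors used in MATLAB.

```python
# Small subset of categorical values that appear in KDD'99 records;
# the full dataset has more protocols, flags, and attack labels.
PROTOCOLS = ["tcp", "udp", "icmp"]
FLAGS = ["SF", "S0", "REJ"]


def encode_record(record):
    """Map a raw (protocol, flag, duration, src_bytes) tuple to a
    numeric feature vector, replacing each categorical field with its
    index in the known-value list (a simple ordinal encoding)."""
    protocol, flag, duration, src_bytes = record
    return [
        PROTOCOLS.index(protocol),
        FLAGS.index(flag),
        float(duration),
        float(src_bytes),
    ]


raw = [("tcp", "SF", 0, 181), ("udp", "S0", 2, 146), ("icmp", "REJ", 0, 0)]
features = [encode_record(r) for r in raw]
print(features)
# [[0, 0, 0.0, 181.0], [1, 1, 2.0, 146.0], [2, 2, 0.0, 0.0]]
```

    Vectors of this form, paired with attack/normal labels, are what the recurrent network would then be trained on.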