    Hunting IoT Cyberattacks With AI-Powered Intrusion Detection

    The rapid progression of the Internet of Things (IoT) allows the seamless integration of cyber and physical environments, creating a hyper-connected ecosystem. This new reality provides several capabilities and benefits, such as real-time decision-making and increased efficiency and productivity. However, it also raises crucial cybersecurity issues that can lead to disastrous consequences, owing to the vulnerable nature of the Internet model and the new cyber risks introduced by the multiple, heterogeneous technologies involved in the IoT. Intrusion detection and prevention are therefore valuable and necessary mechanisms in the IoT security arsenal. In light of these remarks, in this paper we introduce an Artificial Intelligence (AI)-powered Intrusion Detection and Prevention System (IDPS) that can detect and mitigate potential IoT cyberattacks. Deep Neural Networks (DNNs) are used for the detection process, while Software Defined Networking (SDN) and Q-Learning are combined for the mitigation procedure. The evaluation analysis demonstrates the detection efficiency of the proposed IDPS, while Q-Learning converges successfully in terms of selecting the appropriate mitigation action.
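    The Q-Learning mitigation step described above can be illustrated with a minimal sketch. The states, actions, reward values, and hyperparameters below are hypothetical stand-ins, not taken from the paper.

```python
import random

# Hypothetical mitigation setup: states are detected attack types,
# actions are SDN mitigation rules. All names and values are illustrative.
STATES = ["ddos", "scan", "benign"]
ACTIONS = ["drop_flow", "rate_limit", "allow"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table initialized to zero for every (state, action) pair
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    # epsilon-greedy policy over mitigation actions
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state):
    # standard Q-Learning update rule:
    # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

# one illustrative transition: reward +1 for dropping a DDoS flow,
# after which traffic is observed as benign again
update("ddos", "drop_flow", 1.0, "benign")
```

    Repeated over many detected attacks, updates of this form are what allow the Q-table to converge toward the appropriate mitigation action per attack type.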

    Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art

    Software-Defined Networking (SDN) is an evolutionary networking paradigm that has been adopted by large network and cloud providers, among them the tech giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires considerable time, financial resources, and technical expertise. Consequently, many enterprises cannot afford it. A compromise solution is a hybrid networking environment (a.k.a. Hybrid SDN (hSDN)), in which SDN functionalities are leveraged while existing traditional network infrastructures are retained. Recently, hSDN has come to be seen as a viable networking solution for a diverse range of businesses and organizations, and the body of literature on hSDN research has grown remarkably. On this account, we present this paper as a comprehensive state-of-the-art survey that examines hSDN from many different perspectives.

    MPLS & QoS in Virtual Environments

    The rise of high-performance computing has seen a shift of services from locally managed data centers to centralized, globally redundant data centers (cloud computing). The scale of operation and churn required for cloud computing has in turn led to the rise of faster, programmable network pathing via SDN and NFV. Cloud compute resources are accessible to individual researchers as well as larger organizations. Cloud computing relies heavily on virtualization and abstraction of resources, and the interconnect between these resources is more complex than ever, due to the need to move seamlessly among virtual, physical, and hybrid networks and resources. MPLS is a robust technology that has been used as a transport for decades with a good track record, and QoS is available within most protocols to ensure service levels are maintained. The integration of MPLS, QoS, and virtual environments is a space of increasing interest: it would allow the seamless movement of traffic from end to end without specialized hardware or vendor lock-in. In this thesis, the performance gains of IP/MPLS networks utilizing QoS on commercially available virtual environments have been investigated. Latency was captured via round-trip-time metrics and tabulated for voice, video, and data, with QoS and congestion as the primary differentiators. The study discusses the approach taken and the common thinking, and finally analyzes the results of a simulation, in order to show that MPLS and QoS benefits are viable in virtualized environments.
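    The per-class round-trip-time tabulation described above can be sketched as a simple aggregation. The traffic classes follow the abstract, but the sample values and the structure below are invented for illustration.

```python
from statistics import mean

# Illustrative RTT samples (milliseconds) grouped by traffic class,
# mirroring the thesis's voice/video/data comparison. Values are made up.
samples_ms = {
    "voice": [18.2, 19.1, 17.8],
    "video": [24.5, 26.0, 25.2],
    "data":  [31.0, 29.4, 30.6],
}

# tabulate mean round-trip time per class, rounded to 0.01 ms
avg_rtt = {cls: round(mean(vals), 2) for cls, vals in samples_ms.items()}
```

    With QoS enabled versus disabled, two such tables side by side are enough to show whether latency-sensitive classes (voice, video) actually benefit under congestion.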

    Control Plane in Software Defined Networks and Stateful Data Planes

    The abstract is provided in the attachment.

    An LSTM-based Network Slicing Classification Future Predictive Framework for Optimized Resource Allocation in C-V2X

    With the advent of 5G communication networks, many novel areas of research have emerged and the spectrum of communicating objects has diversified. Network Function Virtualization (NFV) and Software Defined Networking (SDN) are two broad areas being explored extensively to optimize network performance parameters. Cellular Vehicle-to-Everything (C-V2X) is one example in which end-to-end communication is built with the aid of intervening network slices. Adopting these technologies enables a shift towards Ultra-Reliable Low-Latency Communication (URLLC) across various domains, including autonomous vehicles that demand one hundred percent Quality of Service (QoS) and extremely low latency. Because resources are too limited to guarantee such communication requirements everywhere, telecom operators are intensively researching software solutions for optimal network resource allocation. The concept of Network Slicing (NS) emerged from such end-to-end resource allocation, in which connecting devices are routed toward the resources that meet their requirements. Nevertheless, bias in selecting the best slice renders a non-optimal distribution of resources. To address such issues, a Deep Learning approach is developed in this paper. Incoming traffic is allocated network slices based on data-driven decisions as well as predictive analysis of future network state. A Long Short-Term Memory (LSTM) time-series prediction approach is adopted that yields optimal resource utilization, lower latency, and high reliability across the network. The model further ensures packet prioritization and retains a resource margin for crucial packets.
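    The LSTM at the heart of such a predictor can be illustrated with a single-cell forward pass in NumPy. The dimensions, random weights, and input window below are purely illustrative, not the paper's configuration.

```python
import numpy as np

# Minimal sketch of one LSTM cell step, the building block of an
# LSTM-based slice-load predictor. All dimensions are illustrative.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4  # e.g. 3 per-slice load features, 4 hidden units

# one weight matrix and bias per gate:
# input (i), forget (f), candidate (g), output (o)
W = {g: rng.standard_normal((n_hid, n_in + n_hid)) * 0.1 for g in "ifgo"}
b = {g: np.zeros(n_hid) for g in "ifgo"}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])            # current input joined with prior state
    i = sigmoid(W["i"] @ z + b["i"])      # input gate
    f = sigmoid(W["f"] @ z + b["f"])      # forget gate
    g = np.tanh(W["g"] @ z + b["g"])      # candidate cell state
    o = sigmoid(W["o"] @ z + b["o"])      # output gate
    c = f * c + i * g                     # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

# run a short window of (synthetic) per-slice load measurements through the cell
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):
    h, c = lstm_step(x, h, c)
```

    In a full framework, the final hidden state `h` would feed a dense layer that predicts near-future per-slice load, and the allocator would route incoming traffic accordingly.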