    Improving Data Transmission Rate with Self Healing Activation Model for Intrusion Detection with Enhanced Quality of Service

    Get PDF
    Several types of attacks can easily compromise a Wireless Sensor Network (WSN). Although not all intrusions can be predicted, they may cause significant damage to the network and its nodes before being discovered. Due to its explosive growth and the infinite scope of applications and processing brought about by 5G, WSN is becoming more and more deeply embedded in daily life. Security breaches, downed services, faulty hardware, and buggy software can all cripple these enormous systems. As a result, the platform becomes unmaintainable when there are a million or more interconnected devices. When it comes to network security, intrusion detection technology plays a crucial role: its primary function is to constantly monitor the health of a network and, if any aberrant behavior is detected, to issue a timely warning to network administrators. The availability and dependability of current networks are directly tied to the efficacy and timeliness of the Intrusion Detection System (IDS). An intrusion-tolerant system would incorporate self-healing mechanisms to restore compromised data. Attributes such as readiness for correct service, supply of identical and correct data, confidentiality, and availability are necessary for a system to merit trust. This research considers self-healing methods that can detect and remove intrusions using intelligent strategies, making a system fully autonomous in fixing any problems it encounters. A new architecture, the Intrusion Tolerant Self Healing Activation Model for Improved Data Transmission Rate (ITSHAM-IDTR), is proposed for accurate detection of intrusions and self-repair of the network, which boosts the server's performance quality and enables it to mend itself without any intervention from the administrator. When compared to the existing paradigm, the proposed model performs better in both self-healing and data transmission rate.
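    The abstract does not detail ITSHAM-IDTR's internals, but its core idea is a detect-then-heal control loop. Below is a minimal Python sketch of that loop, assuming a hypothetical per-node drop-rate signal and a placeholder heal action; none of the names or thresholds come from the paper.

```python
import random
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    packet_drop_rate: float = 0.0  # fraction of packets lost, 0..1 (assumed signal)
    healthy: bool = True

def detect_intrusion(node: Node, threshold: float = 0.4) -> bool:
    """Flag a node whose drop rate exceeds the threshold (illustrative rule)."""
    return node.packet_drop_rate > threshold

def self_heal(node: Node) -> None:
    """Restore the node to a clean state; stands in for re-imaging or rekeying."""
    node.packet_drop_rate = 0.0
    node.healthy = True

def monitor(nodes: list[Node]) -> None:
    """Detect-then-heal loop: no administrator intervention required."""
    for node in nodes:
        if detect_intrusion(node):
            node.healthy = False
            self_heal(node)
            print(f"node {node.node_id}: intrusion detected, self-healed")

monitor([Node(i, packet_drop_rate=random.random()) for i in range(5)])
```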

    Network anomaly detection research: a survey

    Get PDF
    Analyzing data to identify attacks or anomalies is a crucial task in anomaly detection, and network anomaly detection itself is an important issue in network security. Researchers have developed methods and algorithms to improve anomaly detection systems, and survey papers on anomaly detection research are available. Nevertheless, this paper attempts to analyze further and to provide an alternative taxonomy of anomaly detection research, focusing on methods, types of anomalies, data repositories, outlier identity, and the most used data types. In addition, this paper summarizes information on the application network categories of the existing studies.

    Outlier detection in wireless sensor network based on time series approach

    Get PDF
    Sensory data in a Wireless Sensor Network (WSN) is not always reliable because of open environmental factors such as noise, weak received signal strength, or intrusion attacks. The process of detecting highly noisy data and noisy sensor nodes is called outlier detection. Outlier detection is one of the fundamental tasks of time series analysis, related to predictive modeling, cluster analysis, and association analysis, and it has been widely researched in various disciplines besides WSN. The challenge of noise detection in WSN is that it has to be done inside a sensor with limited computational and communication capabilities. Furthermore, there are only a few outlier detection techniques in WSNs, and there are no algorithms that detect outliers on real data with a high level of accuracy locally while selecting the most effective neighbors for collaborative detection globally. Hence, this research designed local and global time series outlier detection for WSN. The Local Outlier Detection Algorithm (LODA), a decentralized noise detection algorithm that runs on each sensor node, was developed; it identifies intrinsic features, determines the memory size of the data histogram to make effective use of available memory, and performs classification to predict outlier data. Next, the Global Outlier Detection Algorithm (GODA) was developed using adaptive Gray coding and entropy techniques to select the best neighbors for spatial correlation among sensor nodes; GODA also adopts an adaptive Random Forest algorithm for best results. Finally, this research developed a Compromised Sensor Node Detection Algorithm (CSDA), a centralized algorithm processed at the base station for detecting compromised sensor nodes regardless of the specific cause of the anomalies. To measure the effectiveness and accuracy of these algorithms, a comprehensive scenario was simulated, with noisy data injected randomly into the data and the sensor nodes. The results showed that LODA achieved 89% accuracy in predicting outliers, GODA detected anomalies with up to 99% accuracy, and CSDA accurately identified up to 80% of the compromised sensor nodes. In conclusion, the proposed algorithms have demonstrated local and global anomaly detection, as well as compromised sensor node detection, in WSN.
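    LODA's histogram sizing and classification steps are not specified in the abstract; as a stand-in, the sketch below shows the general shape of a local, memory-bounded outlier check on a sensor node, using a rolling z-score over a fixed window. The window size and threshold are illustrative assumptions, not the thesis's parameters.

```python
from collections import deque

class LocalOutlierDetector:
    """Rolling-window outlier check sized for a memory-constrained sensor node.

    A z-score stand-in for LODA's histogram/classification step, which the
    abstract does not specify in detail.
    """
    def __init__(self, window_size: int = 32, z_threshold: float = 3.0):
        self.window = deque(maxlen=window_size)  # bounded memory, as on a node
        self.z_threshold = z_threshold

    def is_outlier(self, reading: float) -> bool:
        outlier = False
        if len(self.window) >= 8:                # wait for a minimal history
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5 or 1e-9             # avoid division by zero
            outlier = abs(reading - mean) / std > self.z_threshold
        self.window.append(reading)
        return outlier

det = LocalOutlierDetector()
stream = [20.1, 20.3, 19.8, 20.0, 20.2, 19.9, 20.1, 20.0, 55.0, 20.1]
print([det.is_outlier(v) for v in stream])  # the 55.0 spike gets flagged
```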

    Big data analytics for large-scale wireless networks: Challenges and opportunities

    Full text link
    The wide proliferation of various wireless communication systems and wireless devices has led to the arrival of the big data era in large-scale wireless networks. Big data of large-scale wireless networks has the key features of wide variety, high volume, real-time velocity, and huge value, leading to unique research challenges that are different from those of existing computing systems. In this article, we present a survey of state-of-the-art big data analytics (BDA) approaches for large-scale wireless networks. In particular, we categorize the life cycle of BDA into four consecutive stages: Data Acquisition, Data Preprocessing, Data Storage, and Data Analytics. We then present a detailed survey of the technical solutions to the challenges in BDA for large-scale wireless networks according to each stage in the life cycle of BDA. Moreover, we discuss the open research issues and outline the future directions in this promising area.
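    To make the four-stage life cycle concrete, here is a toy end-to-end pass through Data Acquisition, Data Preprocessing, Data Storage, and Data Analytics. The record fields and the in-memory "store" are illustrative assumptions; a real deployment would use streaming ingestion and a distributed store.

```python
def acquire() -> list[dict]:
    """Data Acquisition: collect raw measurements from the network (mocked)."""
    return [{"rssi": -70, "bytes": 1200}, {"rssi": None, "bytes": 800},
            {"rssi": -95, "bytes": 40_000}]

def preprocess(records: list[dict]) -> list[dict]:
    """Data Preprocessing: drop incomplete records, normalize units."""
    return [{"rssi": r["rssi"], "kbytes": r["bytes"] / 1000}
            for r in records if r["rssi"] is not None]

STORE: list[dict] = []  # Data Storage: stands in for a distributed store

def store(records: list[dict]) -> None:
    STORE.extend(records)

def analyze() -> float:
    """Data Analytics: a trivial aggregate over the stored records."""
    return sum(r["kbytes"] for r in STORE) / len(STORE)

store(preprocess(acquire()))
print(f"mean traffic per record: {analyze():.1f} kB")
```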

    Unsupervised Machine Learning for Networking: Techniques, Applications and Research Challenges

    Get PDF
    While machine learning and artificial intelligence have long been applied in networking research, the bulk of such work has focused on supervised learning. Recently, there has been a rising trend of employing unsupervised machine learning on unstructured raw network data to improve network performance and provide services such as traffic engineering, anomaly detection, Internet traffic classification, and quality of service optimization. The interest in applying unsupervised learning techniques in networking stems from their great success in other fields, such as computer vision, natural language processing, speech recognition, and optimal control (e.g., for developing autonomous self-driving cars). Unsupervised learning is attractive because it frees us from the need for labeled data and manual handcrafted feature engineering, thereby facilitating flexible, general, and automated methods of machine learning. The focus of this survey paper is to provide an overview of the applications of unsupervised learning in the domain of networking. We provide a comprehensive survey highlighting recent advancements in unsupervised learning techniques and describe their applications in various learning tasks in the context of networking. We also discuss future directions and open research issues, while identifying potential pitfalls. While a few survey papers focusing on the applications of machine learning in networking have previously been published, a survey of similar scope and breadth is missing from the literature. Through this paper, we advance the state of knowledge by carefully synthesizing the insights from these survey papers while also providing contemporary coverage of recent advances.
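    As an illustration of the anomaly-detection use case the survey covers, the sketch below scores unlabeled flow features with scikit-learn's IsolationForest, a common unsupervised detector. The synthetic features and contamination rate are assumptions made for the example, not methods taken from the survey.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic per-flow features: [packets/s, loss fraction] (assumed for the demo).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[500.0, 0.02], scale=[50.0, 0.005], size=(200, 2))
attack = np.array([[5000.0, 0.30]])   # one volumetric outlier
flows = np.vstack([normal, attack])   # no labels anywhere: purely unsupervised

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(flows)     # -1 marks isolated (anomalous) flows
print("anomalous flow indices:", np.where(labels == -1)[0])
```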

    Security in 5G-Enabled Internet of Things Communication: Issues, Challenges, and Future Research Roadmap

    Get PDF
    5G mobile communication systems extend the mobile network to interconnect not only people but also machines and other devices. The 5G-enabled Internet of Things (IoT) communication environment supports a wide variety of applications, such as remote surgery, self-driving cars, virtual reality, flying IoT drones, security and surveillance, and many more. These applications help and assist the routine work of the community. In such a communication environment, all devices and users communicate through the Internet. Therefore, this communication suffers from different types of security and privacy issues and is vulnerable to various possible attacks (for example, replay, impersonation, password guessing, physical device stealing, session key computation, privileged-insider, malware, man-in-the-middle, malicious routing, and so on). It is therefore crucial to protect the infrastructure of the 5G-enabled IoT communication environment against these attacks. This necessitates that researchers working in this domain propose various types of security protocols under different categories, such as key management, user/device authentication, access control, and intrusion detection. In this survey paper, the details of the various system models (i.e., network model and threat model) required for the 5G-enabled IoT communication environment are provided. Details of the security requirements and possible attacks in this communication environment are then added. The different types of security protocols are also presented, and the existing security protocols in the 5G-enabled IoT communication environment are analyzed and compared. Some of the future research challenges and directions in the security of the 5G-enabled IoT environment are highlighted. The motivation of this work is to bring the details of the different types of security protocols in 5G-enabled IoT under one roof so that future researchers can benefit from the conducted work.
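    As a concrete instance of the device-authentication category the survey describes, here is a minimal challenge-response sketch over a pre-shared key, built on Python's standard hmac and secrets modules. Real 5G/IoT protocols add key agreement, replay windows, and mutual authentication; the message flow here is an illustrative assumption.

```python
import hmac
import hashlib
import secrets

PSK = secrets.token_bytes(32)        # pre-shared key provisioned on the device

def server_challenge() -> bytes:
    return secrets.token_bytes(16)   # fresh nonce defeats replay of old answers

def device_response(psk: bytes, challenge: bytes) -> bytes:
    """Device proves key possession by keyed-hashing the server's nonce."""
    return hmac.new(psk, challenge, hashlib.sha256).digest()

def server_verify(psk: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(psk, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

nonce = server_challenge()
print("device authenticated:",
      server_verify(PSK, nonce, device_response(PSK, nonce)))
```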

    Formal Modeling of Intrusion Detection Systems

    Get PDF
    The cybersecurity ecosystem continuously evolves in the number, diversity, and complexity of cyber attacks, and detection tools consequently become ineffective against certain attacks. There are generally three types of Intrusion Detection System (IDS): anomaly-based detection, signature-based detection, and hybrid detection. Anomaly-based detection characterizes the usual behavior of the system, typically in a statistical manner; it can detect known or unknown attacks, but also generates a very large number of false positives. Signature-based detection detects known attacks by defining rules that describe known attacker behavior; this requires good knowledge of the attacker's behavior. Hybrid detection relies on several detection methods, including the preceding ones, and has the advantage of being more precise during detection. Tools like Snort and Zeek offer low-level languages for expressing attack-recognition rules. Since the number of potential attacks is very large, these rule bases quickly become hard to manage and maintain. Moreover, expressing stateful rules to recognize a sequence of events is particularly arduous. In this thesis, we propose a stateful approach based on algebraic state-transition diagrams (ASTDs) to identify complex attacks. ASTDs allow a graphical and modular representation of a specification, which facilitates the maintenance and understanding of rules. We extend the ASTD notation with new features to represent complex attacks. Next, we specify several attacks with the extended notation and run the resulting specifications on event streams using an interpreter to identify attacks. We also evaluate the performance of the interpreter against industrial tools such as Snort and Zeek. Then, we build a compiler that generates executable code from an ASTD specification, able to efficiently identify sequences of events.
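    The thesis's central observation is that recognizing a sequence of events requires state that plain per-event rules lack. The sketch below is not the ASTD notation, just a plain state-machine illustration of a stateful rule: flag a source that logs in successfully right after several failures (the limit and event names are assumptions for the example).

```python
from collections import defaultdict

FAIL_LIMIT = 3
failures: defaultdict[str, int] = defaultdict(int)  # per-source state across events

def on_event(src: str, action: str) -> bool:
    """Return True when the stateful rule fires for this event."""
    if action == "login_fail":
        failures[src] += 1
    elif action == "login_ok":
        hit = failures[src] >= FAIL_LIMIT   # success right after repeated failures
        failures[src] = 0                   # reset state for this source
        return hit
    return False

events = [("10.0.0.9", "login_fail")] * 3 + [("10.0.0.9", "login_ok")]
for src, action in events:
    if on_event(src, action):
        print(f"possible brute-force from {src}")
```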

    A review of the use of artificial intelligence methods in infrastructure systems

    Get PDF
    The artificial intelligence (AI) revolution offers significant opportunities to capitalise on the growth of digitalisation and has the potential to enable the ‘system of systems’ approach required in increasingly complex infrastructure systems. This paper reviews the extent to which research in economic infrastructure sectors has engaged with fields of AI, to investigate the specific AI methods chosen and the purposes to which they have been applied both within and across sectors. Machine learning is found to dominate the research in this field, with methods such as artificial neural networks, support vector machines, and random forests among the most popular. The automated reasoning technique of fuzzy logic has also seen widespread use, due to its ability to incorporate uncertainties in input variables. Across the infrastructure sectors of energy, water and wastewater, transport, and telecommunications, the main purposes to which AI has been applied are network provision, forecasting, routing, maintenance and security, and network quality management. The data-driven nature of AI offers significant flexibility, and work has been conducted across a range of network sizes and at different temporal and geographic scales. However, there remains a lack of integration of planning and policy concerns, such as stakeholder engagement and quantitative feasibility assessment, and the majority of research focuses on a specific type of infrastructure, with an absence of work beyond individual economic sectors. To enable solutions to be implemented in real-world infrastructure systems, research will need to move away from a siloed approach and adopt a more interdisciplinary perspective that considers the increasing interconnectedness of these systems.
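    As a small illustration of why the review singles out fuzzy logic for uncertain inputs, the sketch below computes graded set memberships for an asset's age; the set names and breakpoints are illustrative assumptions, and an asset can belong partly to two sets at once rather than crossing a hard threshold.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Degree of membership in a triangular fuzzy set that peaks at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

pipe_age = 42  # years (hypothetical asset attribute)
memberships = {
    "new":    triangular(pipe_age, -1, 0, 25),
    "ageing": triangular(pipe_age, 15, 40, 65),
    "old":    triangular(pipe_age, 50, 80, 120),
}
print(memberships)  # partly 'ageing', not yet 'old': uncertainty is graded
```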