
    A New Incremental Decision Tree Learning for Cyber Security based on ILDA and Mahalanobis Distance

    Cyber-attack detection is essential for computer network protection. Effective protection requires detecting cyber-attacks reliably, combating them in various ways, and learning continuously from data such as internet traffic, so that each attack can be memorized and defended against at any time. This research presents Incremental Decision Tree Learning (IDTL), a cyber-attack detection procedure that combines Incremental Linear Discriminant Analysis (ILDA) with the Mahalanobis distance to classify a hierarchical tree, reducing data features and thereby improving the classification of a variety of malicious data. The proposed model can learn each new incoming datum without revisiting previously learned data, and it discards the datum once it has been learned. The experiments revealed that the proposed method improves classification accuracy, achieving the highest accuracy among the compared methods. A per-class comparison showed that the proposed method classifies both intrusion datasets and other datasets efficiently.
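
    As a concrete illustration of the distance-based classification step, the sketch below assigns a sample to the class whose mean is nearest in Mahalanobis distance. This is a minimal sketch, not the paper's implementation: the toy data and class labels are invented, a pooled covariance is assumed, and the incremental ILDA projection that precedes this step in the proposed method is omitted.

    ```python
    import numpy as np

    class MahalanobisClassifier:
        """Nearest-class-mean classifier under the Mahalanobis metric."""

        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
            # Pooled covariance, regularised so the inverse always exists.
            cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.inv_cov_ = np.linalg.inv(cov)
            return self

        def predict(self, X):
            # Squared Mahalanobis distance from every sample to every class mean.
            dists = np.stack(
                [np.einsum("ij,jk,ik->i", X - self.means_[c], self.inv_cov_,
                           X - self.means_[c]) for c in self.classes_],
                axis=1,
            )
            return self.classes_[dists.argmin(axis=1)]

    # Toy usage: two Gaussian blobs standing in for "normal" vs "attack" traffic.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
    y = np.array([0] * 50 + [1] * 50)
    print(MahalanobisClassifier().fit(X, y).predict(X[:5]))
    ```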

    Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey

    Digital twin (DT) refers to a promising technique for digitally and accurately representing actual physical entities. One typical advantage of DT is that it can not only virtually replicate a system's detailed operations but also analyze its current condition, predict future behaviour, and refine control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For human-centric systems, a novel concept called the human digital twin (HDT) has thus been proposed. In particular, HDT allows an in silico representation of the individual human body that can dynamically reflect molecular, physiological, emotional, and psychological status, as well as lifestyle evolution. These capabilities motivate the application of HDT in personalized healthcare (PH), where it can facilitate remote monitoring, diagnosis, prescription, surgery, and rehabilitation. Despite this large potential, however, HDT faces substantial research challenges in different aspects, and it has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including the data acquisition, data communication, computation, data management, and data analysis and decision-making layers. Besides reviewing the key technologies for implementing such a networking architecture in detail, we conclude this survey by presenting future research directions for HDT.

    Deep Learning-Based Intrusion Detection Systems: A Systematic Review

    Nowadays, the ever-increasing complexity and severity of security attacks on computer networks have inspired security researchers to apply various machine learning methods to protect organizations' data and reputation. Deep learning is one such technique, and it has recently been widely employed by intrusion detection systems (IDSs) to improve their performance in securing computer networks and hosts. This survey article focuses on deep learning-based intrusion detection schemes and puts forward an in-depth survey and classification of them. It first presents the primary background concepts of IDS architecture and various deep learning techniques. It then classifies the schemes according to the type of deep learning method each utilizes and describes how deep learning networks are used in the intrusion detection process to recognize intrusions accurately. Finally, a complete analysis of the investigated IDS frameworks is provided, and concluding remarks and future directions are highlighted.

    A Review of Rule Learning Based Intrusion Detection Systems and Their Prospects in Smart Grids

    Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges

    Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human-like cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas. This fascination extends particularly to the Internet of Things (IoT), a landscape characterized by the interconnection of countless devices, sensors, and systems that collectively gather and share data to enable intelligent decision-making and automation. This research explores the opportunities and challenges of achieving AGI in the context of the IoT. Specifically, it starts by outlining the fundamental principles of IoT and the critical role of Artificial Intelligence (AI) in IoT systems. Subsequently, it delves into AGI fundamentals, culminating in the formulation of a conceptual framework for AGI's seamless integration within IoT. The application spectrum for AGI-infused IoT is broad, encompassing domains ranging from smart grids, residential environments, manufacturing, and transportation to environmental monitoring, agriculture, healthcare, and education. However, adapting AGI to resource-constrained IoT settings necessitates dedicated research efforts. The paper therefore also addresses the constraints imposed by limited computing resources, the intricacies of large-scale IoT communication, and the critical concerns pertaining to security and privacy.

    Blockchain-enabled cybersecurity provision for scalable heterogeneous network: A comprehensive survey

    Blockchain-enabled cybersecurity systems that secure and strengthen decentralized digital transactions are gradually gaining popularity in the digital era across areas such as finance, transportation, healthcare, education, and supply chain management. Blockchain interactions in heterogeneous networks have attracted increasing attention because they authenticate digital application exchanges. However, the exponential growth of storage requirements across blockchain-based heterogeneous networks has become an important obstacle to blockchain distribution and the extension of blockchain nodes. Data integrity and scalability pose the biggest challenges, including significant computational complexity and unacceptable latency across diverse regional networks, operating systems, bandwidths, nodes, etc., for decision-making on data transactions in blockchain-based heterogeneous networks. Data security and privacy are also central concerns for building smart IoT ecosystems on heterogeneous networks. To address these issues, researchers have explored how heterogeneous network devices can perform data transactions while integrating reliably and securely with blockchain. The key goal of this paper is to conduct a state-of-the-art, comprehensive survey of cybersecurity enhancement using blockchain in heterogeneous networks. The paper proposes a full-fledged taxonomy identifying the main obstacles, research gaps, future research directions, effective solutions, and the most relevant blockchain-enabled cybersecurity systems. In addition, it proposes a blockchain-based heterogeneous network framework with cybersecurity that aims to maintain optimal performance for data transactions among organizations. Overall, the paper provides an in-depth, critical analysis of gaps in existing work and presents a potential cybersecurity design with the key requirements of blockchain across a heterogeneous network.

    Machine Learning Meets Communication Networks: Current Trends and Future Challenges

    The growing network density and the unprecedented increase in network traffic, caused by the massively expanding number of connected devices and online services, require intelligent network operations. Machine Learning (ML) has been applied in this regard to different types of networks and networking technologies to meet the requirements of future communicating devices and services. In this article, we provide a detailed account of current research on the application of ML in communication networks and shed light on future research challenges. Research on the application of ML in communication networks is described across: i) the three layers, i.e., the physical, access, and network layers; and ii) novel computing and networking concepts such as Multi-access Edge Computing (MEC), Software Defined Networking (SDN), and Network Functions Virtualization (NFV), together with a brief overview of ML-based network security. Important future research challenges are identified and presented to help spur further research in key areas in this direction.

    Intrusion detection in IoT networks using machine learning

    The exponential growth of Internet of Things (IoT) infrastructure has introduced significant security challenges due to the large-scale deployment of interconnected devices. IoT devices are present in every aspect of modern life; they are essential components of Industry 4.0, smart cities, and critical infrastructures. Detecting attacks on this platform therefore requires Intrusion Detection Systems (IDSs): dedicated hardware devices or software that monitor a network and automatically alert on the presence of malicious activity. This study assessed the viability of machine learning models for IDSs within IoT infrastructures. Five classifiers were analysed, spanning linear models (Logistic Regression), tree algorithms (Decision Trees), probabilistic models (Gaussian Naïve Bayes), ensembles (Random Forest), and artificial neural networks (Multi-Layer Perceptron). These models were trained with supervised methods on a public IoT attack dataset, on three tasks: binary classification (determining whether a sample was part of an attack), multiclass classification over 8 attack categories, and multiclass classification over 33 individual attacks. Various metrics were considered, from performance to execution times, and all models were trained and tuned using 10-fold cross-validation. Across the three classification tasks, Random Forest showed the best performance, at the expense of computation time. Gaussian Naïve Bayes was the fastest algorithm in all classification tasks but detected attacks with lower performance, whereas Decision Trees struck a good balance between performance and processing speed. When classifying among the 8 attack categories, most models showed vulnerabilities to specific attack types, especially those in minority classes, due to dataset imbalance. In the more granular 33-class task, all models generally faced challenges, but Random Forest remained the most reliable despite its vulnerabilities. In conclusion, machine learning algorithms prove effective for IDSs in IoT infrastructure, with the Random Forest model being the most robust and Decision Trees offering a good balance between speed and performance.
    Sustainable Development Goals :: 9 - Industry, Innovation and Infrastructure
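
    To make the evaluation protocol concrete, the sketch below runs the same five classifier families through 10-fold cross-validation and reports mean accuracy and fit time. It is a minimal sketch under stated assumptions: the synthetic data stands in for the public IoT attack dataset (not named here), and scikit-learn defaults replace the paper's tuned hyperparameters.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_validate
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for the labelled IoT traffic (binary task).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    models = {
        "LogisticRegression": LogisticRegression(max_iter=1000),
        "DecisionTree": DecisionTreeClassifier(random_state=0),
        "GaussianNB": GaussianNB(),
        "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
        "MLP": MLPClassifier(max_iter=500, random_state=0),
    }

    # 10-fold cross-validation, tracking both accuracy and training cost.
    for name, model in models.items():
        cv = cross_validate(model, X, y, cv=10, scoring="accuracy")
        print(f"{name:>18}: accuracy={cv['test_score'].mean():.3f}, "
              f"fit_time={cv['fit_time'].mean():.2f}s")
    ```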

    Incremental learning algorithms and applications

    Incremental learning refers to learning from streaming data that arrive over time, with limited memory resources and, ideally, without sacrificing model accuracy. This setting fits application scenarios where lifelong learning is relevant, e.g. due to changing environments, and it offers an elegant scheme for big data processing by means of its sequential treatment. In this contribution, we formalise the concept of incremental learning, discuss particular challenges which arise in this setting, and give an overview of popular approaches, their theoretical foundations, and applications which have emerged in recent years.
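
    A minimal sketch of this setting is shown below: a linear model is updated on a stream of mini-batches via scikit-learn's partial_fit, so each batch can be discarded once learned and memory stays bounded. The stream here is synthetic, and the choice of SGDClassifier is illustrative, not prescribed by the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier()            # any estimator with partial_fit works
    classes = np.array([0, 1])         # all labels must be declared up front

    for t in range(100):               # 100 mini-batches arriving over time
        X_batch = rng.normal(size=(32, 5))
        y_batch = (X_batch.sum(axis=1) > 0).astype(int)
        model.partial_fit(X_batch, y_batch, classes=classes)
        # The batch goes out of scope here: no replay buffer, bounded memory.

    print(model.predict(rng.normal(size=(3, 5))))
    ```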