
    Root Cause Analysis for Autonomous Optical Network Security Management

    The ongoing evolution of optical networks towards autonomous systems supporting high-performance services beyond 5G requires advanced functionalities for automated security management. To cope with the evolving threat landscape, security diagnostic approaches should be able to detect and identify the nature not only of existing attack techniques, but also of those hitherto unknown or insufficiently represented. Machine Learning (ML)-based algorithms perform well when identifying known attack types, but cannot guarantee precise identification of unknown attacks. This makes Root Cause Analysis (RCA) crucial for enabling timely attack response when human intervention is unavoidable. We address these challenges by establishing an ML-based framework for security assessment and analyzing RCA alternatives for physical-layer attacks. We first scrutinize different Network Management System (NMS) architectures and the corresponding security assessment capabilities. We then investigate the applicability of supervised and unsupervised learning (SL and UL) approaches for RCA and propose a novel UL-based RCA algorithm called Distance-Based Root Cause Analysis (DB-RCA). The framework’s applicability and performance for autonomous optical network security management is validated on an experimental physical-layer security dataset, assessing the benefits and drawbacks of SL- and UL-based RCA. Besides confirming that SL-based approaches can provide precise RCA output for known attack types upon training, we show that the proposed UL-based RCA approach offers meaningful insight into the anomalies caused by novel attack types, thus supporting human security officers in advancing physical-layer security diagnostics.
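
    As a rough illustration of the distance-based idea (not the authors' DB-RCA implementation; the monitoring features, clustering step, and threshold below are assumptions), an unsupervised RCA step could attribute an anomaly to the nearest known-attack cluster and escalate samples that lie far from every cluster:

```python
# Illustrative sketch: cluster samples of known attack types, then judge a new
# anomaly by its distance to the nearest cluster centroid. Distant anomalies
# are flagged as potentially novel attacks for the security officer.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical monitoring features (e.g. BER, OSNR, received power) for two known attack types.
attack_a = rng.normal(loc=(0.0, 0.0, 0.0), scale=0.3, size=(50, 3))
attack_b = rng.normal(loc=(5.0, 5.0, 5.0), scale=0.3, size=(50, 3))
known = np.vstack([attack_a, attack_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(known)
centroids = kmeans.cluster_centers_

def distance_based_rca(sample, centroids, threshold=2.0):
    """Attribute an anomaly to the nearest known attack cluster, or flag it
    as a potentially novel attack if it is too far from every centroid."""
    dists = np.linalg.norm(centroids - sample, axis=1)
    nearest = int(np.argmin(dists))
    if dists[nearest] > threshold:
        return "unknown attack type (escalate to security officer)"
    return f"resembles known attack cluster {nearest} (distance {dists[nearest]:.2f})"

print(distance_based_rca(np.array([0.2, -0.1, 0.1]), centroids))   # near cluster A
print(distance_based_rca(np.array([12.0, 9.0, 11.0]), centroids))  # far from both clusters
```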

    On the Intersection of Explainable and Reliable AI for physical fatigue prediction

    In the era of Industry 4.0, the use of Artificial Intelligence (AI) is widespread in occupational settings. Since human safety is at stake, the explainability and trustworthiness of AI are even more important than achieving high accuracy. In this paper, eXplainable AI (XAI) is investigated to detect physical fatigue during a manual material handling task simulation. Besides comparing global rule-based XAI models (LLM and DT) to black-box models (NN, SVM, XGBoost) in terms of performance, we also compare global models with local ones (LIME over XGBoost). Surprisingly, global and local approaches reach similar conclusions in terms of feature importance. Moreover, an expansion from local rules to global rules is designed for Anchors by posing an appropriate optimization method (Anchors coverage is enlarged from an original low value, 11%, up to 43%). As far as trustworthiness is concerned, rule sensitivity analysis drives the identification of optimized regions in the feature space where physical fatigue is predicted with zero statistical error. The discovery of such “non-fatigue regions” helps certify organizational and clinical decision making.
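
    For intuition on the global-versus-local comparison, here is a minimal sketch with synthetic data and hypothetical fatigue-related feature names (not the paper's dataset or exact models) that contrasts XGBoost's global feature importances with a LIME explanation of a single prediction:

```python
# Illustrative sketch: global importances from an XGBoost fatigue classifier
# versus a local LIME explanation for one instance.
import numpy as np
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
features = ["heart_rate", "posture_angle", "lift_frequency", "load_mass"]  # hypothetical
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # synthetic fatigue label

model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
print("global importances:", dict(zip(features, model.feature_importances_.round(3))))

explainer = LimeTabularExplainer(X, feature_names=features,
                                 class_names=["no fatigue", "fatigue"],
                                 discretize_continuous=True)
local_exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print("local explanation:", local_exp.as_list())
```

    In this toy setting the two views tend to agree on which features dominate, mirroring the agreement between global and local approaches reported above.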

    Explainable Artificial Intelligence in communication networks: A use case for failure identification in microwave networks

    Artificial Intelligence (AI) has demonstrated superhuman capabilities in solving a significant number of tasks, leading to widespread industrial adoption. For in-field network-management applications, however, AI-based solutions have often raised skepticism among practitioners, as their internal reasoning is not exposed and their decisions cannot be easily explained, preventing humans from trusting and even understanding them. To address this shortcoming, a new area in AI, called Explainable AI (XAI), is attracting the attention of both academic and industrial researchers. XAI is concerned with explaining and interpreting the internal reasoning and the outcome of AI-based models to achieve more trustworthy and practical deployment. In this work, we investigate the application of XAI for network management, focusing on the problem of automated failure-cause identification in microwave networks. We first introduce the concept of XAI, highlighting its advantages in the context of network management, and we discuss in detail the concept behind Shapley Additive Explanations (SHAP), the XAI framework considered in our analysis. Then, we propose a framework for XAI-assisted, ML-based automated failure-cause identification in microwave networks, spanning the model's development and deployment phases. For the development phase, we show how to exploit SHAP for feature selection, how to leverage SHAP to inspect misclassified instances during model development, and how to describe the model's global behavior based on SHAP's global explanations. For the deployment phase, we propose a framework based on prediction uncertainty to detect possibly wrong predictions that will be inspected through XAI.
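
    A minimal sketch of the SHAP-based steps described above, using synthetic data and hypothetical microwave-link features rather than the paper's dataset: rank features by mean absolute SHAP value for feature selection, then inspect one misclassified instance through its local contributions:

```python
# Illustrative sketch: SHAP for feature ranking and for inspecting a misclassified sample.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
features = ["rx_power_drop", "ber_spikes", "wind_speed", "link_distance"]  # hypothetical
X = rng.normal(size=(400, 4))
y = (X[:, 1] + 0.3 * X[:, 2] > 0).astype(int)      # synthetic binary failure-cause label

model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)             # (n_samples, n_features) for a binary model

# Global view for feature selection: mean absolute SHAP value per feature.
ranking = np.abs(shap_values).mean(axis=0)
print(sorted(zip(features, ranking.round(3)), key=lambda t: -t[1]))

# Local view: inspect one misclassified instance through its SHAP contributions.
wrong = np.flatnonzero(model.predict(X) != y)
if wrong.size:
    i = int(wrong[0])
    print("misclassified sample", i, dict(zip(features, shap_values[i].round(3))))
```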

    Learning-based NLOS Detection and Uncertainty Prediction of GNSS Observations with Transformer-Enhanced LSTM Network

    Global navigation satellite systems (GNSS) play a vital role in transport systems for accurate and consistent vehicle localization. However, GNSS observations can be distorted due to multipath effects and non-line-of-sight (NLOS) receptions in challenging environments such as urban canyons. In such cases, traditional methods to classify and exclude faulty GNSS observations may fail, leading to unreliable state estimation and unsafe system operations. This work proposes a deep-learning-based method to detect NLOS receptions and predict GNSS pseudorange errors by analyzing GNSS observations as a spatio-temporal modeling problem. Compared to previous works, we construct a transformer-like attention mechanism to enhance long short-term memory (LSTM) networks, improving model performance and generalization. For the training and evaluation of the proposed network, we used labeled datasets from the cities of Hong Kong and Aachen. We also introduce a dataset generation process to label the GNSS observations using lidar maps. In experimental studies, we compare the proposed network with a deep-learning-based model and classical machine-learning models. Furthermore, we conduct ablation studies of our network components and integrate the NLOS detection with out-of-distribution data handling in a state estimator. As a result, our network presents improved precision and recall ratios compared to other models. Additionally, we show that the proposed method avoids trajectory divergence in real-world vehicle localization by classifying and excluding NLOS observations. Comment: Accepted for the IEEE ITSC202
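
    A minimal PyTorch sketch of the general idea, an LSTM encoder enhanced with a transformer-like self-attention layer for per-sequence NLOS classification; the layer sizes and input features are illustrative assumptions, not the paper's exact architecture:

```python
# Illustrative sketch: LSTM temporal encoding followed by multi-head self-attention,
# classifying a sequence of GNSS observation epochs as LOS or NLOS.
import torch
import torch.nn as nn

class AttentionLSTMClassifier(nn.Module):
    def __init__(self, n_features=4, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=heads, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # LOS vs. NLOS

    def forward(self, x):                          # x: (batch, time, features)
        h, _ = self.lstm(x)                        # temporal encoding
        a, _ = self.attn(h, h, h)                  # transformer-like self-attention
        return self.head(a[:, -1])                 # classify from the last time step

model = AttentionLSTMClassifier()
epochs = torch.randn(8, 10, 4)                     # 8 sequences of 10 GNSS epochs, 4 features each
logits = model(epochs)
print(logits.shape)                                # torch.Size([8, 2])
```

    The attention layer lets every time step attend to all others, which is one plausible way to realise the "transformer-enhanced LSTM" described in the abstract.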

    Multi-Level Data-Driven Battery Management: From Internal Sensing to Big Data Utilization

    The battery management system (BMS) is essential for the safety and longevity of lithium-ion battery (LIB) utilization. With the rapid development of new sensing techniques and artificial intelligence, and the availability of huge amounts of battery operational data, data-driven battery management has attracted ever-widening attention as a promising solution. This review article overviews the recent progress and future trends of data-driven battery management from a multi-level perspective. The widely explored data-driven methods relying on routine measurements of current, voltage, and surface temperature are reviewed first. Moving to a deeper understanding at the microscopic level, emerging management strategies with multi-dimensional battery data assisted by new sensing techniques are then reviewed. Enabled by the fast growth of big data technologies and platforms, the efficient use of battery big data for enhanced battery management is further overviewed; this corresponds to the upper, macroscopic level of the data-driven BMS framework. With this endeavor, we aim to motivate new insights into the future development of next-generation data-driven battery management.

    Machine learning for optical fiber communication systems: An introduction and overview

    Optical networks generate a vast amount of diagnostic, control and performance monitoring data. When information is extracted from this data, reconfigurable network elements and reconfigurable transceivers allow the network to adapt both to changes in the physical infrastructure and to changing traffic conditions. Machine learning is emerging as a disruptive technology for extracting useful information from this raw data to enable enhanced planning, monitoring and dynamic control. We provide a survey of the recent literature and highlight numerous promising avenues for machine learning applied to optical networks, including explainable machine learning, digital twins, and approaches in which we embed our knowledge into the machine learning, such as physics-informed machine learning for the physical layer and graph-based machine learning for the networking layer.

    Topological changes in data-driven dynamic security assessment for power system control

    The integration of renewable energy sources into the power system requires new operating paradigms. The higher uncertainty in generation and demand makes operations much more dynamic than in the past. Novel operating approaches that consider these new dynamics are needed to operate the system close to its physical limits and fully utilise the existing grid assets; otherwise, expensive investments in redundant grid infrastructure become necessary. This thesis reviews the key role of digitalisation in the shift toward a decarbonised and decentralised power system. Algorithms based on advanced data analytics and machine learning are investigated to operate the system assets at full capacity while continuously assessing and controlling security. The impact of topological changes on the performance of these data-driven approaches is studied, and algorithms to mitigate this impact are proposed. The relevance of this study resides in the increasingly high frequency of topological changes in modern power systems and in the need to improve the reliability of digitalised approaches against such changes, reducing the risks of relying on them. A novel physics-informed approach to select the variables (or features) most relevant to the dynamic security of the system is first proposed and then used in two different three-stage workflows. In the first workflow, the proposed feature selection approach makes it possible to train classification models from machine learning (or classifiers) close to real-time operation, improving their accuracy and robustness against uncertainty. In the second workflow, the selected features are used to define a new metric that detects high-impact topological changes and to train new classifiers in response to such changes. Subsequently, the potential of corrective control for dynamically secure operation is investigated. By using a neural network to learn safety certificates for the post-fault system, corrective control is combined with preventive control strategies to maintain system security while reducing operational costs and carbon emissions. Finally, exemplary changes in the assumptions behind data-driven dynamic security assessment when moving from high-inertia to low-inertia systems are examined, confirming that machine-learning-based models will become significantly more relevant in future systems. Future research directions concerning data generation and model reliability of advanced digitalised approaches for dynamic security assessment and control are finally indicated.
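
    As a loose illustration of the second workflow's idea (the drift metric, features, threshold, and classifier below are assumptions, not the thesis' method), one could flag a high-impact topological change when the distribution of the selected security features drifts, and retrain the security classifier in response:

```python
# Illustrative sketch: detect feature-distribution drift after a topology change
# and retrain the dynamic-security classifier when the drift is large.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def drift_score(ref, new):
    """Mean per-feature shift in standardized units - a stand-in drift metric."""
    return float(np.mean(np.abs(ref.mean(axis=0) - new.mean(axis=0)) / (ref.std(axis=0) + 1e-9)))

# Hypothetical selected features (e.g. line flows, inertia proxy) with secure/insecure labels.
X_ref = rng.normal(size=(1000, 5))
y_ref = (X_ref[:, 0] + X_ref[:, 3] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_ref, y_ref)

# New operating snapshots observed after a topology change (shifted distribution).
X_new = rng.normal(loc=0.8, size=(1000, 5))
y_new = (X_new[:, 0] + X_new[:, 3] > 0).astype(int)

if drift_score(X_ref, X_new) > 0.5:                # threshold chosen purely for illustration
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_new, y_new)
    print("high-impact change detected: classifier retrained")
```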