41 research outputs found

    Big Data and Neural Networks in Smart Grid - Part 2: The Impact of Piecewise Monotonic Data Approximation Methods on the Performance of Neural Network Identification Methodology for the Distribution Line and Branch Line Length Approximation of Overhead Low-Voltage Broadband over Powerlines Networks

    The impact of measurement differences that follow continuous uniform distributions (CUDs) of different intensities on the performance of the neural network identification methodology for the distribution line and branch line length approximation (NNIM-LLA) of overhead low-voltage broadband over powerlines (OV LV BPL) topologies has been assessed in [1]. When the αCUD values of the applied CUD measurement differences remain low, below 5 dB, NNIM-LLA can internally and satisfactorily cope with the CUD measurement differences. However, when the αCUD values of the CUD measurement differences exceed approximately 5 dB, external countermeasure techniques against the measurement differences must be applied to the contaminated data before they are handled by NNIM-LLA. In this companion paper, the impact of piecewise monotonic data approximation methods from the literature, such as L1PMA and L2WPMA, on the performance of NNIM-LLA for OV LV BPL topologies is assessed when CUD measurement differences of various αCUD values are applied. The key findings discussed in this companion paper are: (i) the crucial role of the applied numbers of monotonic sections of L1PMA and L2WPMA in the overall performance improvement of NNIM-LLA approximations, as well as the dependence of the applied numbers of monotonic sections on the complexity of the examined OV LV BPL topology classes; and (ii) the performance comparison of the piecewise monotonic data approximation methods of this paper against that of more elaborate versions of the default operation settings, in order to reveal the most suitable countermeasure technique against CUD measurement differences in OV LV BPL topologies.
    Citation: Lazaropoulos, A. G., & Leligou, H. C. (2024). Big Data and Neural Networks in Smart Grid - Part 2: The Impact of Piecewise Monotonic Data Approximation Methods on the Performance of Neural Network Identification Methodology for the Distribution Line and Branch Line Length Approximation of Overhead Low-Voltage Broadband over Powerlines Networks. Trends in Renewable Energy, 10, 67-97. doi: https://doi.org/10.17737/tre.2024.10.1.0016
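    The L1PMA and L2WPMA algorithms themselves are involved, but the core idea, replacing a noisy attenuation trace with a best fit that is monotonic over a small number of sections, can be sketched compactly. The snippet below is a minimal illustration using the pool-adjacent-violators routine for a single increasing section, on synthetic data with an assumed αCUD of 5 dB; it is a stand-in for, not an implementation of, the paper's L1PMA/L2WPMA methods.

```python
import numpy as np

def pav_increasing(y):
    """Least-squares increasing fit via pool-adjacent-violators.

    Illustrative single-monotonic-section smoother; the paper's L1PMA and
    L2WPMA generalize this idea to several monotonic sections and other norms.
    """
    blocks = [[float(v), 1] for v in y]           # [sum, count] per block
    merged = []
    for b in blocks:
        merged.append(b)
        # Merge backwards while block means violate the increasing order.
        while len(merged) > 1 and merged[-2][0] * merged[-1][1] > merged[-1][0] * merged[-2][1]:
            s, c = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += c
    out = []
    for s, c in merged:
        out.extend([s / c] * c)
    return np.array(out)

rng = np.random.default_rng(0)
freq = np.linspace(1, 30, 100)                    # MHz
clean = 0.8 * freq + 5                            # idealized rising attenuation, dB
noisy = clean + rng.uniform(-5, 5, 100)           # CUD differences, aCUD = 5 dB
fit = pav_increasing(noisy)
rmsd = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"RMSD vs clean: noisy {rmsd(noisy, clean):.2f} dB -> fitted {rmsd(fit, clean):.2f} dB")
```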

    Artificial Intelligence, Machine Learning and Neural Networks for Tomography in Smart Grid – Performance Comparison between Topology Identification Methodology and Neural Network Identification Methodology for the Distribution Line and Branch Line Length Approximation of Overhead Low-Voltage Broadband over Power Lines Network Topologies

    Until now, the neural network identification methodology for the branch number identification (NNIM-BNI) has identified the number of branches for a given overhead low-voltage broadband over powerlines (OV LV BPL) topology channel attenuation behavior [1]. In this extension paper, NNIM-BNI is extended so that the lengths of the distribution lines and branch lines for a given OV LV BPL topology channel attenuation behavior can be approximated; that is, the tomography of the OV LV BPL topology. NNIM exploits the Deterministic Hybrid Model (DHM) and the OV LV BPL topology database of the Topology Identification Methodology (TIM). By following the same methodology as the original paper, the results of the neural network identification methodology for the distribution line and branch line length approximation (NNIM-LLA) are compared against those of the newly proposed TIM-based methodology, denoted as TIM-LLA.
    Citation: Lazaropoulos, A. G., & Leligou, H. C. (2023). Artificial Intelligence, Machine Learning and Neural Networks for Tomography in Smart Grid – Performance Comparison between Topology Identification Methodology and Neural Network Identification Methodology for the Distribution Line and Branch Line Length Approximation of Overhead Low-Voltage Broadband over Power Lines Network Topologies. Trends in Renewable Energy, 9, 34-77. doi: https://doi.org/10.17737/tre.2023.9.1.0014
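    As a rough illustration of the NNIM idea, learning a mapping from a channel attenuation signature to line lengths, the sketch below trains a small feedforward regressor on synthetic data. The toy channel model, line counts, and network shape are all assumptions made for the example; they stand in for, and do not reproduce, the DHM or the paper's settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
freqs = np.linspace(1, 30, 50)                    # MHz
# Fixed toy mixing matrix: how each line length shapes per-frequency attenuation.
W = rng.uniform(0.005, 0.02, (3, freqs.size))

def synthetic_attenuation(lengths_m):
    """Toy stand-in for a deterministic channel model: attenuation (dB) grows
    with line length, with a frequency-dependent slope per line."""
    return lengths_m @ W * np.sqrt(freqs)

lengths = rng.uniform(50, 1000, (500, 3))         # distribution + 2 branch lines, m
X = synthetic_attenuation(lengths)                # per-topology attenuation signatures
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
)
model.fit(X, lengths)                             # learn signature -> line lengths
test = rng.uniform(50, 1000, (1, 3))
print("true lengths:", test.round(0))
print("approximated:", model.predict(synthetic_attenuation(test)).round(0))
```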

    Big Data and Neural Networks in Smart Grid - Part 1: The Impact of Measurement Differences on the Performance of Neural Network Identification Methodologies of Overhead Low-Voltage Broadband over Power Lines Networks

    Until now, the neural network identification methodology for the branch number identification (NNIM-BNI) and the neural network identification methodology for the distribution line and branch line length approximation (NNIM-LLA) have approximated the number of branches and the distribution line and branch line lengths given the theoretical channel attenuation behavior of the examined overhead low-voltage broadband over powerlines (OV LV BPL) topologies [1], [2]. The impact of measurement differences that follow continuous uniform distributions (CUDs) of different intensities on the performance of NNIM-BNI and NNIM-LLA is assessed in this paper. The countermeasure of applying OV LV BPL topology databases of higher accuracy is investigated here in the case of NNIM-LLA. The strong inherent mitigation efficiency of NNIM-BNI and NNIM-LLA against CUD measurement differences, especially those of low intensities, is the key finding of this paper. The other two findings discussed in this paper are: (i) the dependence of the approximation Root-Mean-Square Deviation (RMSD) stability of NNIM-BNI and NNIM-LLA on the applied default operation settings; and (ii) the proposal of more elaborate countermeasure techniques from the literature against CUD measurement differences, aiming at improving NNIM-LLA approximations.
    Citation: Lazaropoulos, A. G., & Leligou, H. C. (2024). Big Data and Neural Networks in Smart Grid - Part 1: The Impact of Measurement Differences on the Performance of Neural Network Identification Methodologies of Overhead Low-Voltage Broadband over Power Lines Networks. Trends in Renewable Energy, 10, 30-66. doi: https://doi.org/10.17737/tre.2024.10.1.0016
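    The evaluation loop implied here is straightforward to sketch: contaminate a trace with uniform measurement differences of amplitude αCUD and track the resulting Root-Mean-Square Deviation (RMSD). A minimal sketch, with synthetic data and assumed αCUD values:

```python
import numpy as np

def add_cud_differences(attenuation_db, a_cud, rng):
    """Contaminate a channel attenuation trace with measurement differences
    drawn from a continuous uniform distribution on [-a_cud, +a_cud] dB."""
    return attenuation_db + rng.uniform(-a_cud, a_cud, attenuation_db.shape)

def rmsd(estimate, truth):
    """Root-Mean-Square Deviation between an estimate and the ground truth."""
    return np.sqrt(np.mean((np.asarray(estimate) - np.asarray(truth)) ** 2))

rng = np.random.default_rng(2)
clean = np.linspace(10, 40, 100)                  # idealized attenuation trace, dB
for a_cud in (1, 3, 5, 10):                       # assumed aCUD intensities, dB
    noisy = add_cud_differences(clean, a_cud, rng)
    print(f"aCUD = {a_cud:2d} dB -> trace RMSD {rmsd(noisy, clean):5.2f} dB")
```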

    Rule-based with machine learning IDS for DDoS attack detection in cyber-physical production systems (CPPS)

    Recent advancements in communication technology have transformed the way industrial systems work. This digitalization has improved communication between the different actors involved in cyber-physical production systems (CPPS), such as users, suppliers, and manufacturers, thus making the whole process transparent. The utilization of emerging new technologies in CPPS can create vulnerable spots that attackers can exploit to launch sophisticated distributed denial of service (DDoS) attacks, threatening the availability of the production systems. Existing machine learning-based intrusion detection systems (IDS) often rely on unrealistic datasets for training and validation, thus missing the crucial testing phase with real-time scenarios. The results generated by the ML models are predictions at the level of individual flows and cannot provide summarized information about malicious entities. To address this limitation, this study proposes an efficient IDS that combines rule-based detection with ML-based approaches to detect DDoS attacks damaging the infrastructure of CPPS. For training and validation of the system, we use real-time network traffic extracted from a real industrial scenario, referred to as the Farm-to-Fork (F2F) supply chain system. Both attack and normal traffic were captured, and bidirectional features were extracted with CICFlowMeter. We use eight supervised and unsupervised ML approaches to detect the malicious flows; a rule-based detection mechanism then calculates the frequency of the malicious flows and assigns severity levels based on the computed frequency. The overall results show that supervised models outperform unsupervised approaches, achieving an accuracy of 99.97% and a TPR of 99.96%. The weighted accuracy when tested and deployed in a real-time scenario is around 98.71%. The results prove that the system works better when real-time scenarios are considered, and it provides comprehensive information about the detected results that can be used to take different mitigation actions.
    This work was supported in part by the European Union's Horizon Europe programme (PHOENi2X) under Grant 101070586, in part by the Spanish Ministry of Science and Innovation, funded by MCIN/AEI/10.13039/501100011033, under Grant PID2021-124463OB-I00, in part by ERDF "a way of making Europe", and in part by the Catalan Government under Contract 2021 SGR 00326.
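    The two-stage design described above, per-flow ML verdicts followed by a rule-based frequency and severity stage, can be sketched as follows. The frequency thresholds and severity labels are illustrative assumptions, not the tuned values of the proposed system.

```python
from collections import Counter

def assign_severity(flow_verdicts, window_s=60):
    """Aggregate per-flow ML verdicts into per-source severity levels.

    flow_verdicts: iterable of (src_ip, is_malicious) pairs produced by the
    classifier stage. The thresholds below are illustrative assumptions.
    """
    counts = Counter(src for src, malicious in flow_verdicts if malicious)
    severity = {}
    for src, n in counts.items():
        rate = n / window_s                       # malicious flows per second
        if rate >= 10:
            severity[src] = "critical"
        elif rate >= 1:
            severity[src] = "high"
        else:
            severity[src] = "low"
    return severity

# Hypothetical verdicts: a flood from one source, sporadic hits from another.
verdicts = ([("10.0.0.5", True)] * 700
            + [("10.0.0.9", True)] * 30
            + [("10.0.0.7", False)] * 50)
print(assign_severity(verdicts))   # {'10.0.0.5': 'critical', '10.0.0.9': 'low'}
```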

    Evaluation of a blockchain-enabled resource management mechanism for NGNs

    A new era in ICT has begun with the evolution of Next Generation Networks (NGNs) and the development of human-centric applications. Ultra-low latency, high throughput, and high availability are a few of the main characteristics of modern networks. Network Providers (NPs) are responsible for the development and maintenance of network infrastructures ready to support the most demanding applications, which should be available not only in urban areas but in every corner of the earth. NPs must collaborate to offer high-quality services while keeping their overall cost low. The collaboration among competitive entities can in principle be regulated by a trusted third party or by a distributed approach/technology that can guarantee integrity, security, and trust. This paper examines the use of blockchain technology for resource management and negotiation among NPs and presents the results of experiments conducted on a dedicated real testbed. The resource management mechanism is implemented as a Smart Contract (SC), and the testbeds use the Raft and IBFT consensus mechanisms, respectively. The goal of this paper is twofold: to assess the mechanism's performance in terms of transaction throughput and latency, so as to determine the granularity at which the solution can operate (e.g., whether it can support resource re-allocation among NPs at the micro-service level), and to define implementation-specific parameters, such as the consensus mechanism best suited to this use case, based on performance metrics.
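    A minimal sketch of the kind of benchmark described here, measuring transaction throughput and confirmation latency against an SC endpoint, is shown below. The send_transaction callable is a hypothetical stand-in for the real client of the Raft or IBFT testbed.

```python
import time
import statistics

def benchmark(send_transaction, n_tx=100):
    """Measure confirmation latency and throughput of a blockchain endpoint.

    send_transaction is a hypothetical blocking call that submits one
    resource-negotiation transaction to the Smart Contract and returns once
    it is confirmed; swap in the real client for the Raft or IBFT testbed.
    """
    latencies = []
    start = time.perf_counter()
    for _ in range(n_tx):
        t0 = time.perf_counter()
        send_transaction()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "throughput_tps": n_tx / elapsed,
        "latency_avg_s": statistics.mean(latencies),
        "latency_p95_s": sorted(latencies)[int(0.95 * n_tx) - 1],
    }

# Dummy endpoint standing in for the SC call, so the sketch is self-contained.
print(benchmark(lambda: time.sleep(0.01)))
```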

    5G technologies boosting efficient mobile learning

    The needs for education, learning, and training proliferate, primarily because the economy is becoming increasingly knowledge-based (mandating continuous lifelong learning) and because people migrate among countries, which introduces the need to learn other languages, train on different skills, and learn about the new cultural and societal framework. Since, in parallel, time schedules become ever tighter, learning through mobile devices keeps gaining popularity, as it allows learning anytime, anywhere. To increase learning efficiency, personalisation (in terms of selecting the learning content, type, and presentation) and real-time adaptation of the learning experience based on the learner's experienced affect state are key instruments. All these user requirements challenge current network architectures and technologies. In this paper, we investigate the requirements implied by efficient mobile learning scenarios and explore how the 5G technologies currently under design/testing/validation and standardisation meet these requirements.

    Extreme Level Crossing Rate: A New Performance Indicator for URLLC Systems

    Level crossing rate (LCR) is a well-known statistical tool related to the duration of a random stationary fading process on average. As such, LCR cannot capture the behavior of extremely rare random events. Yet it is precisely these events, rather than their average (expectation) counterparts, that play a key role in the performance of ultra-reliable and low-latency communication systems. In this paper, for the first time, we extend the notion of LCR to address this issue and sufficiently characterize the statistical behavior of extreme maxima or minima. This new indicator, termed extreme LCR (ELCR), is analytically introduced and evaluated by resorting to extreme value theory and risk assessment. Capitalizing on ELCR, some key performance metrics emerge, i.e., the maximum outage duration, minimum effective duration, maximum packet error rate, and maximum transmission delay. They are all derived in simple closed-form expressions. The theoretical results are cross-compared and verified via extensive simulations, and some useful engineering insights are manifested.
    Comment: Accepted for publication in IEEE TV
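    To make the baseline concrete: the classical LCR, and the outage-duration quantity whose extremes ELCR-based metrics target, can be estimated empirically from a simulated fading envelope. The sketch below uses a sum-of-sinusoids Rayleigh simulator and an assumed deep-fade threshold; it illustrates only the classical LCR and the maximum outage duration, not the paper's extreme-value machinery.

```python
import numpy as np

rng = np.random.default_rng(3)
n, M = 100_000, 16
t = np.arange(n)
# Sum-of-sinusoids (Jakes-like) Rayleigh fading simulator.
fd = 0.005 * np.cos(np.pi * (np.arange(M) + 0.5) / (2 * M))   # normalized Dopplers
ph = rng.uniform(0, 2 * np.pi, (2, M))
x = np.cos(2 * np.pi * np.outer(t, fd) + ph[0]).sum(axis=1)
y = np.cos(2 * np.pi * np.outer(t, fd) + ph[1]).sum(axis=1)
env = np.hypot(x, y) / np.sqrt(M)                             # fading envelope

level = 0.1 * np.median(env)                                  # assumed deep-fade level
below = env < level
down_crossings = np.count_nonzero(~below[:-1] & below[1:])
lcr = down_crossings / n                                      # crossings per sample

# Longest run below the level = empirical maximum outage duration.
edges = np.diff(np.concatenate(([0], below.astype(np.int8), [0])))
starts, ends = np.flatnonzero(edges == 1), np.flatnonzero(edges == -1)
max_outage = int((ends - starts).max()) if starts.size else 0
print(f"LCR: {lcr:.2e} crossings/sample, max outage: {max_outage} samples")
```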

    Impact of Inter-Operator Interference via Reconfigurable Intelligent Surfaces

    A wireless communication system is studied that operates in the presence of multiple reconfigurable intelligent surfaces (RISs). In particular, a multi-operator environment is considered where each operator utilizes an RIS to enhance its communication quality. Although out-of-band interference does not exist (since each operator uses isolated spectrum resources), RISs controlled by different operators do affect one another's system performance, due to the rapid phase-shift adjustments that each RIS performs independently. The system performance of such a communication scenario is analytically studied for the practical case where only discrete phase shifts are available at the RISs. The proposed framework is quite general, since it is valid under arbitrary channel fading conditions as well as in the presence (or absence) of the transceiver's direct link. Finally, the derived analytical results are verified via numerical and simulation trials, and some novel and useful engineering outcomes are manifested.
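    A small Monte Carlo sketch of the studied effect: the own operator's RIS aligns its discrete phase shifts to the cascaded channel, while a second operator's RIS applies independently chosen discrete phases that perturb the effective channel at the receiver. The element count, 2-bit quantizer, and Rayleigh channel statistics are all illustrative assumptions, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(4)
N, bits, trials = 64, 2, 2_000
levels = 2 * np.pi * np.arange(2 ** bits) / 2 ** bits          # discrete phase set

def quantize(phase):
    """Snap continuous phases to the nearest discrete RIS phase level."""
    diffs = np.angle(np.exp(1j * (phase[:, None] - levels)))   # wrapped differences
    return levels[np.argmin(np.abs(diffs), axis=1)]

def rayleigh(n):
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

snr_gain = []
for _ in range(trials):
    h, g = rayleigh(N), rayleigh(N)      # BS -> own RIS, own RIS -> UE
    h2, g2 = rayleigh(N), rayleigh(N)    # BS -> other RIS, other RIS -> UE
    theta = quantize(-np.angle(h * g))   # own RIS: align cascades, discretized
    phi = rng.choice(levels, N)          # other operator's RIS: independent phases
    eff = np.sum(h * g * np.exp(1j * theta)) + np.sum(h2 * g2 * np.exp(1j * phi))
    snr_gain.append(np.abs(eff) ** 2)
print(f"mean |h_eff|^2, 2-bit phases + foreign RIS: {np.mean(snr_gain):.1f}")
```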

    Cybersecurity in supply chain systems: the Farm-to-Fork use case

    Modern supply chains comprise an increasing number of actors which deploy different information technology systems that capture information of a diverse nature and from diverse sources (from sensors to order information). While the benefits of the automatic exchange of information between these systems have been recognized and have led to their interconnection, protecting the whole supply chain from potential attacks is a challenging issue, given the attack proliferation reported in the literature. In this paper, we present the FISHY platform, which aims to protect the whole supply chain from potential attacks by (a) adopting novel technologies and approaches, including machine learning-based tools, to detect security threats and recommend mitigation policies and (b) employing blockchain-based tools to provide evidence of the captured events and suggested policies. The platform is also easily expandable to protect against additional attacks in the future. We experiment with this platform in the farm-to-fork supply chain to demonstrate its operation and capabilities. The results show that the FISHY platform can effectively be used to protect the supply chain and offers high flexibility to its users.
    This article has been partially supported by the EU-funded H2020 FISHY project (Grant Agreement ID: 952644).