136 research outputs found

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    An investigation into 88 kV surge arrester failures in the Eskom East Grid Traction Network

    The Eskom East Grid Traction Network (EGTN), which supplies traction loads and distribution networks, has experienced at least one surge arrester failure over the past ten years. Such failures result in poor network reliability and customer dissatisfaction, yet they are often overlooked because the reliability indices used to evaluate transmission and distribution networks differ. It is suspected that fast transient faults in this network initiate system faults, leading to surge arrester design parameters being exceeded and to poor network insulation coordination. Preliminary investigations suggest that transient studies were not done during the network planning and design stages. Surge arrester parameters may therefore never have been evaluated under transient conditions, leading to improper surge arresters being selected and installed in this network and to the failures now evident. These failures may also have been exacerbated by the dynamic nature of traction loads, which are highly unbalanced, have poor power factors, and emit high voltage distortions. Poor in-service conditions such as defects, insulation partial discharges and overheating, bolted faults in the network, and quality of supply emissions can also contribute to surge arrester failures.

    To address the problems arising from differing reliability indices, the reliability of the EGTN is evaluated by computing common distribution reliability indices using both analytic and simulation methods. The analytic method is applied through a failure modes and effects analysis (FMEA) of the network when a surge arrester fails; the simulation method is applied by adapting the MATLAB code proposed by Shavuka et al. [1]. The resulting reliability indices are then compared with transmission reliability indices over the same period, in an attempt to standardize reliability evaluations across these networks. To assess the impact of transient faults on surge arrester parameter evaluation, the EGTN is modelled and simulated by initiating transient faults sequentially at different nodes and under different loading conditions, using the Power System Blockset (PSB), Power System Analysis Toolbox (PSAT) and Alternative Transients Program (ATP) simulation tools. Important surge arrester parameters, i.e. continuous operating voltage, rated voltage, discharge current and energy absorption capability (EAC), are computed and evaluated against the parameters provided by manufacturers, the Eskom 88 kV surge arrester specification, and the parameters recommended in IEC 60099-4. To assess the contribution of in-service conditions, faults and quality of supply emissions to surge arrester failures, these factors are investigated by examining infra-red scans, fault analysis reports, results from a sampled failed surge arrester in this network, and quality of supply parameters around the time of failures.

    This study found that Eskom transmission and distribution network reliability indices can be standardized, as the distribution reliability indices (SAIDI, SAIFI, CAIDI, ASAI and ASUI) have counterparts among the Eskom transmission indices (SM, NOI, circuit availability index and circuit unavailability index). Transient simulations showed that certain surge arresters in the EGTN had their rated parameters exceeded under certain transient and loading conditions; these surge arresters failed as their discharge currents and EACs were exceeded under both heavy and light network loading. The study concluded that surge arresters whose discharge currents and EACs were exceeded had been improperly evaluated and selected prior to installation in the EGTN, and found the EAC to be the most important parameter in surge arrester performance evaluations. The Eskom 88 kV surge arrester specification was found to be inadequate, inaccurate and ambiguous, as a number of inconsistencies in the use of IEEE and IEC classified-systems terminology were found; these inconsistencies may have confused manufacturers during surge arrester design and selection for the EGTN. The evaluation of fault reports showed that two surge arrester failures in this network were caused by hardware failures, such as a conductor failure, and by poor network operating practice, as a line was repeatedly closed onto a fault. There was no evidence that poor in-service conditions or quality of supply emissions contributed to surge arrester failures in this network. The PSB, PSAT and ATP simulation tools were all found adequate for modelling and simulating the EGTN; however, the PSB tool became slow as the network model grew, and PSAT required user-defined surge arrester models based on detailed manufacturer data sheets that are not readily available. ATP proved superior to PSB and PSAT in both speed and accuracy. The MATLAB code proposed by Shavuka et al. [1] was found suitable and accurate for assessing transmission networks, as the EGTN reliability indices computed from this code were comparable to benchmarked Eskom distribution reliability indices. The work carried out in this research will assist in improving surge arrester performance evaluations, the current surge arrester specification and surge arrester selection; the simulation tools utilized show great potential in achieving this. The reliability studies conducted will assist in standardizing reliability indices between Eskom's transmission and distribution divisions, and the in-service condition assessments will improve surge arrester condition monitoring and preventive maintenance practices.
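
    The analytic step described above reduces to aggregating per-load-point failure statistics into the standard IEEE 1366 distribution indices. The Python sketch below shows that aggregation; the load-point data are hypothetical placeholders, and the code is not the Shavuka et al. [1] MATLAB implementation.

```python
# Sketch of the analytic reliability-evaluation step: aggregating per-load-point
# failure statistics into the standard IEEE 1366 distribution indices (SAIDI,
# SAIFI, CAIDI, ASAI, ASUI). The load-point data are hypothetical placeholders;
# this is not the Shavuka et al. [1] MATLAB implementation.

HOURS_PER_YEAR = 8760.0

def reliability_indices(load_points):
    """load_points: dicts with 'rate' (failures/yr), 'duration' (hours/failure)
    and 'customers' (customers served at the load point)."""
    n_total = sum(lp["customers"] for lp in load_points)
    # Customer interruptions and customer-hours of interruption per year
    ci = sum(lp["rate"] * lp["customers"] for lp in load_points)
    chi = sum(lp["rate"] * lp["duration"] * lp["customers"] for lp in load_points)

    saifi = ci / n_total                       # interruptions/customer/yr
    saidi = chi / n_total                      # hours/customer/yr
    caidi = saidi / saifi if saifi else 0.0    # hours per interruption
    asai = 1.0 - chi / (n_total * HOURS_PER_YEAR)
    return {"SAIFI": saifi, "SAIDI": saidi, "CAIDI": caidi,
            "ASAI": asai, "ASUI": 1.0 - asai}

# Hypothetical three-load-point feeder
feeder = [
    {"rate": 0.20, "duration": 4.0, "customers": 1200},
    {"rate": 0.35, "duration": 2.5, "customers": 800},
    {"rate": 0.10, "duration": 6.0, "customers": 500},
]
print(reliability_indices(feeder))
```

    The same aggregation applies whether the per-load-point rates come from the FMEA (analytic method) or from Monte Carlo sampling of component states (simulation method).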

    Flexible Hardware-based Security-aware Mechanisms and Architectures

    For decades, software security has been the primary focus in securing our computing platforms. Hardware was always assumed trusted and inherently served as the foundation, and thus the root of trust, of our systems. This trust has been further leveraged to develop hardware-based dedicated security extensions and architectures that protect software from attacks exploiting software vulnerabilities such as memory corruption. However, the recent outbreak of microarchitectural attacks has shaken these long-established trust assumptions in hardware entirely, threatening the security of all of our computing platforms and bringing hardware and microarchitectural security under scrutiny. These attacks have revealed the grave consequences of hardware/microarchitectural security flaws for the security of the entire platform, and how they can even subvert the security guarantees promised by dedicated security architectures. Furthermore, they shed light on the challenges particular to hardware/microarchitectural security: it is more critical, and more challenging, to analyze the hardware extensively for security flaws prior to production, since hardware, unlike software, cannot be patched or updated once fabricated. Hardware cannot reliably serve as the root of trust anymore unless we develop and adopt new design paradigms in which security is proactively addressed and scrutinized across the full stack of our computing platforms, at all hardware design and implementation layers. Furthermore, novel flexible security-aware design mechanisms must be incorporated into processor microarchitecture and hardware-assisted security architectures to practically address the inherent conflict between performance and security, by allowing the trade-off to be configured to the desired requirements.

    In this thesis, we investigate the prospects and implications at the intersection of hardware and security across the full stack of our computing platforms and Systems-on-Chip (SoCs). On one front, we investigate how we can leverage hardware and its advantages over software to build more efficient and effective security extensions that serve security architectures, e.g., by providing execution attestation and enforcement, to protect software from attacks exploiting software vulnerabilities. We further propose that these extensions be microarchitecturally configurable at runtime to provide different types of security services, thus adapting flexibly to different deployment requirements. On another front, we investigate how we can protect these hardware-assisted security architectures and extensions themselves from microarchitectural and software attacks that exploit design flaws originating in the hardware, e.g., insecure resource sharing in SoCs. In particular, we focus on cache-based side-channel attacks, proposing cache designs that fundamentally mitigate these attacks while preserving performance by enabling the performance/security trade-off to be configured by design. We also investigate how these designs can be incorporated into flexible and customizable security architectures, complementing them to support a wide spectrum of emerging applications with different performance/security requirements. Lastly, we inspect our computing platforms further beneath the design layer, scrutinizing how the actual implementation of these mechanisms is yet another potential attack surface. We explore how the security of hardware designs and implementations is currently analyzed prior to fabrication, shedding light on how state-of-the-art hardware security analysis techniques are fundamentally limited, and on the potential for improved and scalable approaches.
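
    To make the configurable performance/security trade-off concrete, the toy Python model below way-partitions a set-associative cache between two security domains so that one domain can never evict another's lines, closing the contention channel that cache side-channel attacks exploit. The class, sizes, and partitioning policy here are illustrative assumptions, not the specific cache architectures proposed in the thesis.

```python
# Toy model of one flexible mitigation idea discussed above: way-partitioning a
# set-associative cache between security domains so that one domain cannot evict
# another's lines (the eviction/contention side channel). All names, sizes, and
# the partitioning policy are illustrative assumptions.

class PartitionedCache:
    LINE = 64  # bytes per cache line

    def __init__(self, num_sets=64, ways_per_domain=None):
        # Each domain owns a private group of ways in every set; resizing this
        # split is the configuration knob trading isolation against capacity.
        self.ways = ways_per_domain or {"trusted": 4, "untrusted": 4}
        self.num_sets = num_sets
        # sets[i][domain] -> tags currently cached, LRU order (front = oldest)
        self.sets = [{d: [] for d in self.ways} for _ in range(num_sets)]

    def access(self, domain, address):
        index = (address // self.LINE) % self.num_sets
        tag = address // (self.LINE * self.num_sets)
        lines = self.sets[index][domain]
        if tag in lines:                     # hit: refresh LRU position
            lines.remove(tag)
            lines.append(tag)
            return "hit"
        if len(lines) >= self.ways[domain]:  # miss: evict only inside the
            lines.pop(0)                     # accessing domain's own partition
        lines.append(tag)
        return "miss"

cache = PartitionedCache()
cache.access("trusted", 0x1000)
# A conflicting untrusted access cannot displace the trusted line, so the
# attacker's own hit/miss timing reveals nothing about the victim's accesses:
cache.access("untrusted", 0x1000)
print(cache.access("trusted", 0x1000))  # -> hit
```

    Shrinking or growing a domain's way allocation is the configuration point: more isolated ways for security-critical workloads, more shared capacity where performance dominates.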

    Structured parallelism discovery with hybrid static-dynamic analysis and evaluation technique

    Parallel computer architectures have dominated the computing landscape for the past two decades, a trend that is only expected to continue and intensify with increasing specialization and heterogeneity. This creates huge pressure across the software stack to produce programming languages, libraries, frameworks and tools which will efficiently exploit the capabilities of parallel computers, not only for new software, but also to revitalize existing sequential code. Automatic parallelization, despite decades of research, has had limited success in transforming sequential software to take advantage of efficient parallel execution. This thesis investigates three approaches that use commutativity analysis as the enabler for parallelization, which has the potential to overcome limitations of traditional techniques. We introduce the concept of liveness-based commutativity for sequential loops. We examine the use of a practical analysis utilizing liveness-based commutativity in a symbolic execution framework. Symbolic execution represents input values as groups of constraints, thereby deriving the output as a function of the input and enabling the identification of further program properties. We employ this feature to develop an analysis that discerns commutativity properties between loop iterations. We study the application of this approach on loops taken from real-world programs in the OLDEN and NAS Parallel Benchmark (NPB) suites, and identify its limitations and related overheads. Informed by these findings, we develop Dynamic Commutativity Analysis (DCA), a new technique that leverages profiling information from program execution with specific input sets. Using profiling information, we track liveness information and detect loop commutativity by examining the code's live-out values. We evaluate DCA against almost 1400 loops of the NPB suite, finding 86% of them to be parallelizable. Comparing our results against dependence-based methods, we match the detection efficacy of two dynamic approaches and outperform three static ones. Additionally, DCA is able to automatically detect parallelism in loops which iterate over Pointer-Linked Data Structures (PLDSs), taken from a wide range of benchmarks used in the literature, where all other techniques we considered failed. Parallelizing the discovered loops, our methodology achieves an average speedup of 3.6× across NPB (and up to 55×) and up to 36.9× for the PLDS-based loops on a 72-core host. We also demonstrate that our methodology, despite relying on specific input values for profiling each program, is able to correctly identify parallelism that is valid for all potential input sets. Lastly, we develop a methodology that utilizes liveness-based commutativity, as implemented in DCA, to detect latent loop parallelism in the form of patterns. Our approach applies a series of transformations which subsequently enable multiple applications of DCA over the generated multi-loop code section and matches its loop commutativity outcomes against the expected criteria for each pattern. Applying our methodology to sets of sequential loops, we are able to identify well-known parallel patterns (i.e., maps, reductions and scans). This extends the scope of parallelism detection to loops, such as those performing scan operations, which cannot be determined to be parallelizable by simply evaluating liveness-based commutativity conditions on their original form.
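
    The liveness-based commutativity test at the heart of this approach can be pictured as executing pairs of iterations in both orders and comparing only the live-out values. The Python sketch below illustrates that idea; the helper names, dict-based state representation, and example loop are illustrative assumptions rather than the thesis implementation, which works on profiled executions of real programs.

```python
# Minimal sketch of the dynamic, liveness-based commutativity test underlying
# DCA: execute two loop iterations in both orders on identical starting states
# and compare only the live-out values. Helper names and the dict-based state
# are illustrative assumptions.

import copy

def iterations_commute(loop_body, state, i, j, live_outs):
    """loop_body(state, k) runs iteration k in place; live_outs is the set of
    state keys that are still live after the loop."""
    s1, s2 = copy.deepcopy(state), copy.deepcopy(state)
    loop_body(s1, i); loop_body(s1, j)   # original iteration order
    loop_body(s2, j); loop_body(s2, i)   # swapped iteration order
    # Liveness-based commutativity: only live-out values must agree; dead
    # intermediate state is allowed to differ.
    return all(s1[k] == s2[k] for k in live_outs)

# A reduction loop commutes even though every iteration touches shared state.
def body(state, k):
    state["acc"] += state["data"][k]

state = {"data": [3, 1, 4, 1, 5], "acc": 0}
print(iterations_commute(body, state, 0, 3, live_outs={"acc"}))  # True
```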

    Optimal coordination of energy sources for microgrid incorporating concepts of locational marginal pricing and energy storage

    This research aims to coordinate energy sources for a standalone microgrid (MG), incorporating locational marginal pricing (LMP) and energy storage. Two approaches are suggested for the optimal energy management of the MG. First, the energy management of a standalone MG is performed utilising the concept of LMP, with the objective of minimising the average LMP to reduce network congestion and power loss costs. Second, energy management is performed using a dual-stage approach. A battery energy storage system (BESS) model is formulated considering charging and discharging characteristics and utilised for the dual-stage energy management. The impact of the battery state of charge (SOC) is assessed in the optimal day-ahead operation, and an incremental cost factor is included with the battery SOC when calculating the system operating cost. A new binary jellyfish search algorithm (BJSA) is developed to solve the energy management problems. The suggested BJSA technique is implemented in solving the optimal energy management of the MG considering LMP, with simulations conducted on the IEEE 14- and 30-bus test systems. Results show that the BJSA technique is more consistent than the binary particle swarm optimisation (BPSO) technique in determining the optimal solution. In addition, the BJSA technique is employed to solve the dual-stage energy management of the MG considering the BESS, again simulated on the IEEE 14- and 30-bus systems. Results also show that the BJSA technique is superior to the BPSO technique in minimising the operating cost in real-time economic dispatch (ED). For the unit commitment (UC) schedule with and without the BESS, the BJSA and BPSO techniques perform identically on the IEEE 30-bus system, as they do on the IEEE 14-bus system. The BJSA technique reduces operating costs by up to 5% relative to the BPSO technique for the UC schedule with power loss, and by up to 5% for real-time ED with the BESS. However, the BPSO technique is inconsistent and fails to obtain the same results for the IEEE 30-bus system. Overall, the findings confirm the superiority of the suggested BJSA technique and the suggested optimisation approaches in optimising the energy management of the MG.
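
    The BJSA belongs to the family of binary metaheuristics that map a continuous position update onto 0/1 decisions through a transfer function. The Python sketch below shows that mechanism on a toy unit-commitment problem; the update rule, parameters, and cost model are generic illustrations (closer to a binary PSO skeleton) and not the paper's BJSA formulation.

```python
# Heavily simplified sketch of a binary metaheuristic for unit commitment: a
# continuous per-bit score is pushed toward the best-known solution and an
# S-shaped transfer function turns it into a 0/1 on/off decision. This generic
# skeleton (and the toy cost model) is an assumption, not the paper's BJSA.

import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_opt(cost, n_units, pop=20, iters=200):
    X = [[random.randint(0, 1) for _ in range(n_units)] for _ in range(pop)]
    V = [[0.0] * n_units for _ in range(pop)]       # continuous score per bit
    best = list(min(X, key=cost))
    best_cost = cost(best)
    for _ in range(iters):
        for ind, vel in zip(X, V):
            for u in range(n_units):
                # Drift toward the best-known commitment (a stand-in for the
                # ocean-current/swarm motions of the jellyfish search).
                vel[u] = 0.7 * vel[u] + 2.0 * random.random() * (best[u] - ind[u])
                ind[u] = 1 if random.random() < sigmoid(vel[u]) else 0
            c = cost(ind)
            if c < best_cost:
                best, best_cost = list(ind), c
    return best, best_cost

# Toy cost: fixed cost of committed units plus a penalty for any shortfall
# against a 300 MW demand. All numbers are hypothetical.
CAP = [100, 150, 200, 80]    # MW per unit
FIX = [500, 700, 900, 400]   # fixed cost per committed unit
def cost(x):
    capacity = sum(c for c, on in zip(CAP, x) if on)
    fixed = sum(f for f, on in zip(FIX, x) if on)
    return fixed + 50 * max(0, 300 - capacity)   # 50 $/MW unserved penalty

print(binary_opt(cost, n_units=4))
```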

    Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico

    Conference proceedings info: ICICT 2023: The 6th International Conference on Information and Computer Technologies, Raleigh, NC, United States, March 24-26, 2023, pages 529-542.

    We provide a model for the systematic implementation of telemedicine within a large COVID-19 evaluation center in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable, data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID-19 testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created in cooperation with different institutions, including the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time to inform supply chains and resource allocation in anticipation of a surge in COVID-19 cases.

    https://doi.org/10.1007/978-981-99-3236-

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.