
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Software Fault Tolerance in Real-Time Systems: Identifying the Future Research Questions

    Tolerating hardware faults in modern architectures is becoming a prominent problem due to the miniaturization of hardware components, their increasing complexity, and the necessity to reduce costs. Software-Implemented Hardware Fault Tolerance approaches have been developed to improve system dependability against hardware faults without resorting to custom hardware solutions. However, they come at the expense of making the timing constraints of the applications/activities harder to satisfy from a scheduling standpoint. This paper surveys the current state of the art of fault tolerance approaches used in the context of real-time systems, identifying the main challenges and the cross-links between these two topics. We propose a joint scheduling-failure analysis model that highlights the formal interactions between software fault tolerance mechanisms and timing properties. This model allows us to present and discuss many open research questions, with the final aim of spurring future research activities.
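    The joint model itself is not given in the abstract, but the scheduling/fault-tolerance interaction it refers to can be illustrated with classic fixed-priority response-time analysis extended by a re-execution term for recovering from transient faults. A minimal sketch follows; the task set and the fault budget F are invented for illustration and are not from the paper.

        # Sketch: fixed-priority response-time analysis with a re-execution
        # term: up to F transient faults are recovered by re-running the
        # longest affected task. Task set below is illustrative only.

        def response_time(tasks, i, F):
            """tasks: list of (C, T) pairs, highest priority first.
            Returns the worst-case response time of task i, or None if
            its deadline (assumed equal to T) is missed."""
            C_i, T_i = tasks[i]
            hp = tasks[:i]                                   # higher-priority tasks
            recovery = F * max(C for C, _ in tasks[:i + 1])  # worst-case re-execution
            R = C_i + recovery
            while True:
                interference = sum(-(-R // T) * C for C, T in hp)  # ceil(R/T) * C
                R_next = C_i + recovery + interference
                if R_next > T_i:
                    return None          # timing constraint violated
                if R_next == R:
                    return R
                R = R_next

        tasks = [(1, 5), (2, 10), (3, 20)]   # (C, T), rate-monotonic order
        for F in range(3):
            print(F, [response_time(tasks, i, F) for i in range(len(tasks))])

    Even this toy analysis shows the cross-link the survey highlights: each additional tolerated fault inflates every response time, so a fault-tolerance budget is also a schedulability decision.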

    Analysis of Embedded Controllers Subject to Computational Overruns

    Microcontrollers have become an integral part of modern everyday embedded systems, such as smart bikes, cars, and drones. Typically, microcontrollers operate under real-time constraints, which require the timely execution of programs on resource-constrained hardware. As embedded systems become increasingly complex, microcontrollers run the risk of violating their timing constraints, i.e., overrunning the program deadlines. Breaking these constraints can cause severe damage to both the embedded system and the humans interacting with the device. Therefore, it is crucial to analyse embedded systems properly to ensure that they do not pose any significant danger if the microcontroller overruns a few deadlines. However, there are very few tools available for assessing the safety and performance of embedded control systems when the microcontroller's implementation is taken into account. This thesis aims to fill this gap in the literature by presenting five papers on the analysis of embedded controllers subject to computational overruns. Details of the real-time operating system's implementation are included in the analysis, such as what happens to the controller's internal state representation when the timing constraints are violated. The contribution includes theoretical and computational tools for analysing the embedded system's stability, performance, and real-time properties. The embedded controller is analysed under three different types of timing violations: blackout events (when no control computation is completed during long periods), weakly-hard constraints (when the number of deadline overruns is constrained over a window), and stochastic overruns (when violations of timing constraints are governed by a probabilistic process). These scenarios are combined with different implementation policies to reduce the gap between the analysis and its practical applicability. The analyses are further validated with a comprehensive experimental campaign performed on both a set of physical processes and multiple simulations. In conclusion, the findings of this thesis reveal that the effect deadline overruns have on the embedded system heavily depends on the implementation details and the system's dynamics. Additionally, the stability analysis of embedded controllers subject to deadline overruns is typically conservative, implying that additional insights can be gained by also analysing the system's performance.
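    Of the three timing models above, the weakly-hard one is the easiest to make concrete: an (m, k) constraint requires that at least m of any k consecutive jobs meet their deadlines. A minimal checker follows; the constraint values and the hit/miss trace are invented for illustration.

        from collections import deque

        def satisfies_m_k(hits, m, k):
            """Weakly-hard (m, k) check: every window of k consecutive
            jobs must contain at least m deadline hits.
            hits: iterable of booleans (True = deadline met)."""
            window = deque(maxlen=k)
            for hit in hits:
                window.append(hit)
                if len(window) == k and sum(window) < m:
                    return False
            return True

        # Illustrative trace: True = deadline met, False = overrun.
        trace = [True, True, False, True, True, False, True, True]
        print(satisfies_m_k(trace, m=2, k=3))   # True: every 3-window has >= 2 hits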

    A review of commercialisation mechanisms for carbon dioxide removal

    The deployment of carbon dioxide removal (CDR) needs to be scaled up to achieve net zero emission pledges. In this paper we survey the policy mechanisms currently in place globally to incentivise CDR, together with an estimate of what different mechanisms are paying per tonne of CDR and how those costs are currently distributed. Incentive structures are grouped into three categories: market-based, public procurement, and fiscal mechanisms. We find that the majority of mechanisms currently in operation are under-resourced and pay too little to enable a portfolio of CDR that could support the achievement of net zero. The majority of mechanisms are concentrated in market-based and fiscal structures, specifically carbon markets and subsidies. While not primarily motivated by CDR, these mechanisms tend to support established afforestation and soil carbon sequestration methods. Mechanisms for geological CDR remain largely underdeveloped relative to the requirements of modelled net zero scenarios. Commercialisation pathways for CDR require suitable policies and markets throughout a project's development cycle. Discussion and investment in CDR have tended to focus on technology development. Our findings suggest that an equal or greater emphasis on policy innovation may be required if future requirements for CDR are to be met. This study can further support research and policy on the identification of incentive gaps and the realistic potential for CDR globally.

    A sense of self for power side-channel signatures: instruction set disassembly and integrity monitoring of a microcontroller system

    Cyber-attacks are on the rise, costing billions of dollars in damages, response, and investment annually. Critical United States National Security and Department of Defense weapons systems are no exception; however, the stakes go well beyond financial. Dependence upon a global supply chain without sufficient insight or control poses a significant issue. Additionally, systems are often designed with a presumption of trust, despite their microelectronics and software foundations being inherently untrustworthy. Achieving cybersecurity requires coordinated and holistic action across disciplines commensurate with the specific systems, mission, and threat. This dissertation explores an existing gap in low-level cybersecurity while proposing a side-channel-based security monitor to support attack detection and the establishment of trusted foundations for critical embedded systems. Background on side-channel origins, the more typical side-channel attacks, and microarchitectural exploits is provided. A survey of related side-channel efforts is organised through a set of side-channel organizing principles, which enable comparison of dissimilar works across the side-channel spectrum. We find that the maturity of existing side-channel security monitors is insufficient, as key transition-to-practice considerations are often not accounted for or resolved. We then document the development, maturation, and assessment of a power side-channel disassembler, the Time-series Side-channel Disassembler (TSD), and extend it for use as a security monitor, the TSD-Integrity Monitor (TSD-IM). We also introduce a prototype microcontroller power side-channel collection fixture, with benefits for experimentation and transition to practice. TSD-IM is finally applied to a notional Point of Sale (PoS) application for a proof-of-concept evaluation. We find that TSD and TSD-IM advance the state of the art for side-channel disassembly and security monitoring in the open literature. In addition to our TSD and TSD-IM research on microcontroller signals, we explore beneficial side-channel measurement abstractions as well as the characterization of the underlying microelectronic circuits through Impulse Signal Analysis (ISA). While some positive results were obtained, we find that further research in these areas is necessary. Although the need for a non-invasive, on-demand microelectronics-integrity capability is supported, other methods may provide suitable near-term alternatives to ISA.
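    The abstract does not describe TSD's internals, but the core idea of power side-channel disassembly can be sketched as template matching: classify each per-instruction trace segment by its nearest per-opcode template. The templates and the observed trace below are synthetic stand-ins, not TSD's actual method or data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic per-opcode power templates; in practice these would be
        # averaged profiling traces captured per instruction on the target MCU.
        templates = {"ADD": np.sin(np.linspace(0, 3, 50)),
                     "MUL": np.sin(np.linspace(0, 6, 50)) * 1.4,
                     "LDR": np.cos(np.linspace(0, 3, 50))}

        def disassemble(segment):
            """Return the opcode whose template is closest (Euclidean
            distance) to the observed power segment."""
            return min(templates, key=lambda op: np.linalg.norm(segment - templates[op]))

        # A noisy observation of 'MUL' should still match the MUL template.
        observed = templates["MUL"] + rng.normal(0, 0.1, 50)
        print(disassemble(observed))   # expected: MUL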

    Research on the Analysis, Detection, and Evaluation of Defects in Soft-Error-Tolerant Latches

    The development of modern integrated circuits (ICs) has greatly changed the life of humankind. Nowadays, ICs are also indispensable to mission-critical applications, such as medical devices, autonomous cars, aircraft navigation systems, and satellites. The reliability of these mission-critical applications is a major concern. A soft-error occurring in an IC is a severe threat to its reliability, especially for mission-critical applications. The continuous trend of shrinking technology feature sizes makes modern ICs more and more vulnerable to soft-errors. Soft-errors are caused by radiation particles striking an IC and generating current pulses that disturb its functionality. A soft-error can cause data corruption and may eventually lead to system failure. If a soft-error occurs in an operational medical device during surgery, it may cause a malfunction of this device and interrupt the surgery. A soft-error may change the control data of an autonomous car, which may lead to an accident. A soft-error may corrupt an aircraft navigation system. No one would take the chance of letting this happen, even though malfunctions caused by soft-errors can be resolved by resetting these devices, because a reset takes time and severe consequences may occur while the device is resetting. If a soft-error causes a malfunction in the control system of a satellite, the satellite may not be able to maintain its altitude and may eventually burn up as it falls into the Earth's atmosphere. Hence, it is important to protect ICs from soft-errors. Many soft-error tolerance methods have been proposed to protect ICs against soft-errors. In an IC, memory elements and storage elements (e.g., latches and flip-flops) are the most vulnerable to soft-errors, and the data stored in them are crucial to the operation of a circuit. Error correction codes (ECCs) can be used to protect memories. Register-level soft-error tolerance methods can detect soft-errors in latches by parity checking and correct them by resetting. Hardened designs protect latches against soft-errors by using redundant feedback loops to store the same input data and a voter to select the correct output. The advantage of hardened designs is that they can prevent soft-errors from reaching the outputs, whereas ECCs and register-level soft-error tolerance methods must first detect soft-errors and then correct them by restoring the data. For protecting storage elements in mission-critical applications, hardened latch design is the best option because it offers high reliability and avoids the resetting time. Many state-of-the-art hardened latch designs have been proposed to tolerate soft-errors, and they are believed to have good soft-error tolerability. Defects (physical flaws due to imperfect production (production defects) and physical changes caused by aging effects after a long operation time (aging-related defects)) can also cause a malfunction of a circuit and eventually a system failure. Unlike the temporary state change caused by soft-errors, defects are permanent damage to a circuit and can make its behavior deviate from the desired manner. Defects in storage elements should be detected to make sure a system or device operates correctly and stably. Scan test is a commonly used defect detection method, in which reconfigured storage elements are connected to form a shift register with external access so that their internal states can be easily controlled and checked.
    However, the impact of defects on existing state-of-the-art hardened latch designs has not been considered. This impact requires consideration because the added redundancy in hardened latch designs can mask not only soft-errors but also the effects of defects, which leads to two serious problems. Problem-1 (Low Testability): Production defects in hardened latch designs are difficult to detect with conventional scan tests, in which the observability (an important metric for evaluating a circuit's testability) of defects in hardened latch designs can be greatly reduced. Therefore, existing state-of-the-art hardened latches have low observability and thus low testability. Furthermore, defects that escape the production test (undetected defects) may become more and more serious and eventually cause a system failure. Problem-2 (Low Soft-Error Tolerability): Undetected defects and aging-related defects can make hardened latch designs vulnerable to soft-errors, while defect-free ones are not. The soft-error tolerability of hardened latch designs may be compromised by undetected defects or aging-related defects. This research is the first to consider Problem-1, the low testability of hardened latches, and Problem-2, defects reducing the reliability of hardened latches. Furthermore, this research is the first to propose a comprehensive solution to these two problems, with the following five major contributions. Contribution-1: A first-of-its-kind metric for quantifying the impact of defects on hardened latches, called the Post-Test Vulnerability Factor (PTVF). It is used to analyze the residual soft-error tolerability of hardened latches after testing; Problem-2 is solved by this first major contribution. Contribution-2: A novel design called the Scan-Test-Aware Hardened Latch (STAHL), which provides the highest defect coverage in comparison with all existing hardened latches. Problem-1 is solved by using STAHL to build a scan cell for performing a scan test. Contribution-3: A novel scan test procedure that solves Problem-1 by fully testing the STAHL-based scan cell. Contribution-4: A novel High-Performance Scan-Test-Aware Hardened Latch (HP-STAHL) design that also solves Problem-1 and has defect coverage similar to STAHL but lower power consumption and higher propagation speed. Contribution-5: A novel scan test procedure to fully test the HP-STAHL-based scan cell to solve Problem-1. Comprehensive simulation results demonstrate the accuracy of the PTVF metric and the effectiveness of the STAHL-based and HP-STAHL-based scan tests. As the first comprehensive study bridging the gap between hardened latch designs and IC testing, the findings of this research are expected to significantly improve the soft-error-related reliability of IC designs for mission-critical applications. Furthermore, the two proposed hardened latches and the scan test procedures can be used not only to detect defects after production but also to detect aging-related defects in the field through built-in self-test (BIST). In Chapter 1, an example is introduced to illustrate Problem-1 and Problem-2. Chapter 2 presents background information on soft-errors and defects. Chapter 3 describes typical soft-error mitigation methods and the details of a scan test. Chapter 4 describes the PTVF metric (Contribution-1). Chapter 5 presents the structure of STAHL (Contribution-2), and Chapter 6 presents the scan test procedure for testing the STAHL-based scan cell (Contribution-3).
    Chapter 7 presents the structure of HP-STAHL (Contribution-4), and Chapter 8 presents the scan test procedure for testing the HP-STAHL-based scan cell (Contribution-5). Chapter 9 presents the experimental results comparing STAHL and HP-STAHL with state-of-the-art hardened latch designs. Chapter 10 concludes this thesis. (Kyushu Institute of Technology doctoral dissertation; degree number: 情工博甲第371号; degree conferred: September 26, 2022.)
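    The testability problem described above can be made concrete with a toy majority voter standing in for a hardened latch's redundant feedback loops (a generic illustration, not the STAHL circuit): a stuck-at defect in one redundant copy never reaches the voted output, so a test that only observes the output cannot detect it.

        def voter(a, b, c):
            """Majority vote over three redundant copies of a stored bit."""
            return (a & b) | (a & c) | (b & c)

        def latch_output(data, stuck_at=None):
            """Three redundant loops hold `data`; `stuck_at` forces one
            copy to a defective constant value (stuck-at fault)."""
            copies = [data, data, data]
            if stuck_at is not None:
                copies[0] = stuck_at     # one loop is defective
            return voter(*copies)

        for data in (0, 1):
            for defect in (None, 0, 1):
                print(f"data={data} defect={defect} -> out={latch_output(data, defect)}")
        # The output equals `data` in every case: the defect is masked
        # (Problem-1, low observability), yet the defective latch can no
        # longer out-vote a later soft-error in a second copy (Problem-2).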

    Assuming Data Integrity and Empirical Evidence to The Contrary

    Background: Not all respondents to surveys apply their minds or understand the posed questions, and as such provide answers which lack coherence, and this threatens the integrity of the research. Casual inspection and limited research on the 10-item Big Five Inventory (BFI-10), included in the dataset of the World Values Survey (WVS), suggested that random responses may be common. Objective: To specify the percentage of cases in the BFI-10 which include incoherent or contradictory responses, and to test the extent to which the removal of these cases improves the quality of the dataset. Method: The WVS data on the BFI-10, measuring the Big Five Personality (B5P) traits, in South Africa (N = 3,531) was used. Incoherent or contradictory responses were removed. Then the cases from the cleaned-up dataset were analysed for their theoretical validity. Results: Only 1,612 (45.7%) cases were identified as free of incoherent or contradictory responses. The cleaned-up data did not mirror the B5P structure, as was envisaged. The test for common method bias was negative. Conclusion: In most cases the responses were incoherent. Cleaning up the data did not improve the psychometric properties of the BFI-10. This raises concerns about the quality of the WVS data, the BFI-10, and the universality of B5P theory. Given these results, it would be unwise to use the BFI-10 in South Africa. Researchers are alerted to properly assess the psychometric properties of instruments before they use them, particularly in a cross-cultural setting.
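    The paper's exact screening rule is not reproduced in the abstract, but the idea of flagging contradictory BFI-10 responses can be sketched: each Big Five trait is measured by one regular and one reverse-keyed item, so extreme agreement with both poles is incoherent. The item names, pairings, and cutoff below are illustrative assumptions, not the authors' actual procedure.

        # Sketch: flag contradictory answers on regular/reverse-keyed item
        # pairs of a 5-point scale. Pairings and cutoff are hypothetical.

        PAIRS = [("bfi_extra", "bfi_extra_rev"),     # hypothetical item names
                 ("bfi_agree", "bfi_agree_rev")]

        def contradictory(resp, cutoff=4):
            """True if, for any trait pair, the respondent gives an extreme
            answer on the same side of both the regular item and its
            reverse-keyed counterpart (endorsing a trait and its negation)."""
            return any(resp[a] >= cutoff and resp[b] >= cutoff for a, b in PAIRS)

        resp = {"bfi_extra": 5, "bfi_extra_rev": 5, "bfi_agree": 2, "bfi_agree_rev": 3}
        print(contradictory(resp))   # True: agrees strongly with both poles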