
    Evaluating Software-based Hardening Techniques for General-Purpose Registers on a GPGPU

    Graphics Processing Units (GPUs) are considered a promising solution for high-performance safety-critical applications, such as self-driving cars. In this application domain, fault tolerance techniques are mandatory to detect or correct faults, since the system must work properly even in their presence. GPUs are designed with aggressive technology scaling, which makes them susceptible to faults caused by radiation interference, such as Single Event Upsets (SEUs), which can cause the system to fail, an unacceptable outcome in safety-critical applications. In this paper, we evaluate different software-based hardening techniques developed to detect SEUs in GPUs' general-purpose registers and propose optimizations to improve performance and memory utilization. The techniques are implemented in three case-study applications and evaluated on a general-purpose soft-core GPU based on the NVIDIA G80 architecture. A fault injection campaign is performed at register-transfer level to assess the fault detection potential of the implemented techniques. Results show that the proposed improvements can be tailored to different scenarios, helping engineers navigate the design space of hardened GPGPU applications.
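
    For illustration, here is a minimal C++ sketch of duplication-with-comparison, the general idea underlying software register-hardening techniques of this kind; it is not the paper's implementation, which operates on the GPU's assembly-level registers, and the volatile qualifiers exist only to stop the compiler from folding the two copies into one.

        #include <cstdio>
        #include <cstdlib>

        // Report a detected upset; a real system would signal a recovery
        // layer rather than abort.
        static void seu_detected(void) {
            fprintf(stderr, "SEU detected: register copies diverged\n");
            abort();
        }

        // Duplication with comparison: every live value is computed into
        // two independent copies, and the copies are compared before the
        // value is consumed. A bit flip in either copy fails the check.
        int axpy_hardened(int a, int x, int y) {
            volatile int r1 = a * x + y;  // primary copy
            volatile int r2 = a * x + y;  // redundant copy
            if (r1 != r2)                 // check at the point of use
                seu_detected();
            return r1;
        }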

    elewexe – Building 172B, California Polytechnic State University – San Luis Obispo

    California Polytechnic State University at San Luis Obispo recently finished constructing a new multimillion-dollar student housing project on campus. One of the complex's six buildings, elewexe (172B), was selected for a full evaluation of its fire and life safety systems using both prescriptive and performance-based approaches. The building is a Type II-B four-story dormitory with an R-2 occupancy classification. For the prescriptive approach, the building was evaluated against the most current editions of the IBC and the relevant NFPA codes for building requirements, sprinkler design, and fire alarm and detection. On pure code compliance, everything complies except for administrative controls: the building is constructed within the specifications of the IBC, the egress is adequate, the sprinkler system has a proper design and a strong water supply without a fire pump, and the fire alarm and detection system meets the requirements of NFPA 72. One aspect of the building, also found in the other buildings in the housing community, is the unenclosed stairwells connecting four stories in the core of each building. This is allowed per an exception in the 2013 CBC because the building is fully sprinklered, an exception that places heavy reliance on the sprinklers operating in the event of a fire. The performance-based analysis comprises three design fires, and a fire model in the report investigates one design fire scenario occurring in the main core with the sprinklers failing to activate, to see whether the building design still meets the intent of the code. The results found that in such an event the available safe egress time (ASET) was less than the required safe egress time (RSET): if the sprinklers fail during a fire in the core of the building, the building design fails the intent of the code. Remedies include increasing administrative controls to limit the amount of fuel in the corridor, increasing testing and maintenance of the sprinkler system to ensure that it functions when needed, an impairment control policy, and adding compartmentation to separate the core stairwell from the rest of the building.
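
    For reference, the acceptance criterion behind this comparison (a standard formulation in performance-based fire engineering, not quoted from the report) is that occupants must be able to escape before conditions become untenable, usually with a safety margin:

        \text{ASET} \geq \text{RSET} + t_{\text{margin}}

    Since the modeled core fire without sprinkler activation gave ASET < RSET, the criterion fails regardless of the margin chosen.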

    Dynamic Information Flow Tracking on Multicores

    Dynamic Information Flow Tracking (DIFT) is a promising technique for detecting software attacks. Due to the computationally intensive nature of the technique, prior efficient implementations [21, 6] rely on specialized hardware support whose only purpose is to enable DIFT. Alternatively, prior software implementations are either too slow [17, 15], increasing execution time by as much as fourfold for SPEC integer programs, or not transparent [31], requiring source code modifications. In this paper, we propose the use of chip multiprocessors (CMPs) to perform DIFT transparently and efficiently. We spawn a helper thread, scheduled on a separate core, that is solely responsible for performing the information flow tracking operations. This entails communicating registers and flags between the main and helper threads; we explore software (shared memory) and hardware (dedicated interconnect) approaches to enable this communication. Finally, we propose a novel application of the DIFT infrastructure in which, beyond detecting the software attack, DIFT helps identify the bug in the code that enabled the exploit in the first place. We conducted detailed simulations to evaluate the overhead of performing DIFT and found it to be 48% for SPEC integer programs.
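
    As a rough illustration of the tracking work such a helper thread performs, here is a minimal, self-contained C++ sketch of DIFT's core propagate-and-check logic; the shadow-state layout, register count, and handler names are invented for the example and are not the paper's interface.

        #include <cstdint>
        #include <stdexcept>
        #include <unordered_map>

        // One taint bit per register and per memory address. A real DIFT
        // system shadows the full address space and hooks every
        // instruction; in a helper-thread design these updates run on a
        // separate core fed with the main thread's registers and flags.
        struct TaintTracker {
            bool reg[16] = {};                        // shadow registers
            std::unordered_map<uintptr_t, bool> mem;  // shadow memory

            // Data arriving from untrusted sources enters tainted.
            void taint_input(uintptr_t addr) { mem[addr] = true; }

            // Propagation rules: a result is tainted if any source is.
            void on_alu(int dst, int s1, int s2) { reg[dst] = reg[s1] || reg[s2]; }
            void on_load(int dst, uintptr_t addr) {
                auto it = mem.find(addr);
                reg[dst] = (it != mem.end()) && it->second;
            }
            void on_store(int src, uintptr_t addr) { mem[addr] = reg[src]; }

            // Security check: a control transfer through attacker-derived
            // (tainted) data indicates an exploited vulnerability.
            void on_indirect_jump(int target_reg) {
                if (reg[target_reg])
                    throw std::runtime_error("DIFT: tainted jump target");
            }
        };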

    An architecture-based dependability modeling framework using AADL

    For efficiency reasons, software system designers want an integrated set of methods and tools to describe specifications and designs, and also to perform analyses such as dependability, schedulability, and performance. AADL (Architecture Analysis and Design Language) has proved efficient for software architecture modeling; in addition, it was designed to accommodate several types of analyses. This paper presents an iterative dependency-driven approach for dependability modeling using AADL, illustrated on a small example. The approach is part of a complete framework that allows the generation of dependability analysis and evaluation models from AADL models, to support the analysis of software and system architectures in critical application domains.

    Rationale and design of the GUIDE-IT study: Guiding Evidence Based Therapy Using Biomarker Intensified Treatment in Heart Failure.

    OBJECTIVES: The GUIDE-IT (Guiding Evidence Based Therapy Using Biomarker Intensified Treatment in Heart Failure) study is designed to determine the safety, efficacy, and cost-effectiveness of a strategy of adjusting therapy with the goal of achieving and maintaining a target N-terminal pro-B-type natriuretic peptide (NT-proBNP) level of <1,000 pg/ml in high-risk patients with systolic heart failure (HF). BACKGROUND: Elevations in natriuretic peptide (NP) levels provide key prognostic information in patients with HF. Therapies proven to improve outcomes in patients with HF are generally associated with decreasing levels of NPs, and observational data show that decreases in NP levels over time are associated with favorable outcomes. Results from smaller prospective, randomized studies of this strategy thus far have been mixed, and current guidelines do not recommend serial measurement of NP levels to guide therapy in patients with HF. METHODS: GUIDE-IT is a prospective, randomized, controlled, unblinded, multicenter clinical trial designed to randomize approximately 1,100 high-risk subjects with systolic HF (left ventricular ejection fraction ≤40%) to either usual care (optimized guideline-recommended therapy) or a strategy of adjusting therapy with the goal of achieving and maintaining a target NT-proBNP level of <1,000 pg/ml. CONCLUSIONS: The GUIDE-IT study is designed to definitively assess the effects of an NP-guided strategy in high-risk patients with systolic HF on clinically relevant endpoints of mortality, hospitalization, quality of life, and medical resource use. (Guiding Evidence Based Therapy Using Biomarker Intensified Treatment in Heart Failure [GUIDE-IT]; NCT01685840)

    High-Integrity Performance Monitoring Units in Automotive Chips for Reliable Timing V&V

    As software continues to control more system-critical functions in cars, its timing is becoming an integral element of functional safety. Timing validation and verification (V&V) assesses software's end-to-end timing measurements against given budgets. The advent of multicore processors with massive resource sharing reduces the significance of end-to-end execution times for timing V&V and requires reasoning about (worst-case) access delays on contention-prone hardware resources. While Performance Monitoring Units (PMUs) support this finer-grained reasoning, their design has never been a prime consideration in the high-performance processors from which automotive-chip PMU implementations descend, since the PMU does not directly affect performance or reliability. To meet the PMU's instrumental importance for timing V&V, we advocate for PMUs in automotive chips that explicitly track activities related to worst-case (rather than average) software behavior, are recognized as an ISO 26262-mandated high-integrity hardware service, and are accompanied by detailed documentation that enables their effective use to derive reliable timing estimates. This work has also been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by MINECO under Ramón y Cajal postdoctoral fellowship RYC-2013-14717. Enrico Mezzetti has been partially supported by MINECO under Juan de la Cierva-Incorporación postdoctoral fellowship IJCI-2016-27396.
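
    To make the software side concrete, the sketch below reads a hardware event counter through Linux's perf_event_open(2) interface. This is a generic PMU access path, not the high-integrity automotive PMU service the paper argues for, and the cache-miss event is only a stand-in for a contention-related counter.

        #include <linux/perf_event.h>
        #include <sys/ioctl.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <cstdint>
        #include <cstdio>

        // Thin wrapper: glibc exposes no perf_event_open() symbol.
        static long perf_event_open(perf_event_attr *attr, pid_t pid, int cpu,
                                    int group_fd, unsigned long flags) {
            return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
        }

        int main() {
            perf_event_attr attr{};
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_CACHE_MISSES;  // stand-in contention event
            attr.disabled = 1;
            attr.exclude_kernel = 1;

            int fd = static_cast<int>(perf_event_open(&attr, 0, -1, -1, 0));
            if (fd < 0) { perror("perf_event_open"); return 1; }

            ioctl(fd, PERF_EVENT_IOC_RESET, 0);
            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
            // ... region whose contention behavior is being measured ...
            ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

            uint64_t count = 0;
            if (read(fd, &count, sizeof(count)) == static_cast<ssize_t>(sizeof(count)))
                printf("event count: %llu\n", static_cast<unsigned long long>(count));
            close(fd);
            return 0;
        }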

    Optimistic Parallel State-Machine Replication

    State-machine replication, a fundamental approach to fault tolerance, requires replicas to execute commands deterministically, which usually results in sequential execution of commands. Sequential execution limits performance and underuses servers, which are increasingly parallel (i.e., multicore). To narrow the gap between state-machine replication requirements and the characteristics of modern servers, researchers have recently proposed alternative execution models. This paper surveys existing approaches to parallel state-machine replication and proposes a novel optimistic protocol that inherits the scalable features of previous techniques. Using a replicated B+-tree service, we demonstrate that our protocol outperforms the most efficient existing techniques by a factor of 2.4.
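
    By way of illustration only (this is not the paper's protocol), the C++ sketch below shows the read/write-set conflict test that parallel state-machine replication schemes build on: commands whose sets are disjoint can execute concurrently on a replica without breaking determinism, while conflicting commands must keep their delivery order.

        #include <cstddef>
        #include <set>
        #include <string>
        #include <vector>

        // A command declares (or is assumed to have) the keys it touches.
        struct Command {
            std::set<std::string> reads, writes;
            // apply() omitted: a real command mutates the replicated state.
        };

        // Two commands conflict if either writes a key the other touches.
        static bool conflicts(const Command &a, const Command &b) {
            for (const auto &k : a.writes)
                if (b.reads.count(k) || b.writes.count(k)) return true;
            for (const auto &k : b.writes)
                if (a.reads.count(k)) return true;
            return false;
        }

        // Pack a delivered batch into groups: members of a group are
        // pairwise conflict-free and may run in parallel; groups run in
        // delivery order, so every replica converges to the same state.
        std::vector<std::vector<Command>> schedule(const std::vector<Command> &batch) {
            std::vector<std::vector<Command>> groups;
            for (const auto &cmd : batch) {
                // Place cmd after the last group holding a conflicting
                // command, preserving delivery order among conflicts.
                std::size_t target = 0;
                for (std::size_t i = 0; i < groups.size(); ++i)
                    for (const auto &other : groups[i])
                        if (conflicts(cmd, other)) target = i + 1;
                if (target == groups.size()) groups.emplace_back();
                groups[target].push_back(cmd);
            }
            return groups;
        }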