165 research outputs found

    Code Transformation Techniques for Enforcing Security Policies for Memory Protection

    Thesis (Ph.D.)--Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, February 2020. Advisor: Yunheung Paek. Computer memory is a critical component in computer systems that needs to be protected to ensure the security of computer systems. It contains security-sensitive data that should not be disclosed to adversaries, as well as data important for operating the system that must not be manipulated by attackers. Thus, many security solutions focus on protecting memory so that sensitive data cannot be leaked out of the computer system, or on preventing illegal access to computer data. In this thesis, I present various code transformation techniques for enforcing security policies for memory protection. First, I present a code transformation technique to track implicit data flows so that security-sensitive data cannot leak through implicit data flow channels (i.e., conditional branches). Then I present a compiler technique to instrument C/C++ programs to mitigate use-after-free errors, a class of vulnerability that allows illegal access to stale memory locations. Finally, I present a code transformation technique for low-end embedded devices to enable execute-only memory, a strong security policy that protects secrets and hardens the computing device against code reuse attacks.

    1 Introduction
    2 Background
    3 A Hardware-based Technique for Efficient Implicit Information Flow Tracking
    3.1 Introduction
    3.2 Related Work
    3.3 Our Approach for Implicit Flow Tracking
    3.3.1 Implicit Flow Tracking Scheme with Program Counter Tag
    3.3.2 tPC Management Technique
    3.3.3 Compensation for the Untaken Path
    3.4 Architecture Design of IFTU
    3.4.1 Overall System
    3.4.2 Tag Computing Core
    3.5 Performance and Area Analysis
    3.6 Security Analysis
    3.7 Summary
    4 CRCount: Pointer Invalidation with Reference Counting to Mitigate Use-after-free in Legacy C/C++
    4.1 Introduction
    4.2 Related Work
    4.3 Threat Model
    4.4 Implicit Pointer Invalidation
    4.4.1 Invalidation with Reference Counting
    4.4.2 Reference Counting in C/C++
    4.5 Design
    4.5.1 Overview
    4.5.2 Pointer Footprinting
    4.5.3 Delayed Object Free
    4.6 Implementation
    4.7 Evaluation
    4.7.1 Statistics
    4.7.2 Performance Overhead
    4.7.3 Memory Overhead
    4.8 Security Analysis
    4.8.1 Attack Prevention
    4.8.2 Security Considerations
    4.9 Limitations
    4.10 Summary
    5 uXOM: Efficient eXecute-Only Memory on ARM Cortex-M
    5.1 Introduction
    5.2 Background
    5.2.1 ARMv7-M Address Map and the Private Peripheral Bus (PPB)
    5.2.2 Memory Protection Unit (MPU)
    5.2.3 Unprivileged Loads/Stores
    5.2.4 Exception Entry and Return
    5.3 Threat Model and Assumptions
    5.4 Approach and Challenges
    5.5 uXOM
    5.5.1 Basic Design
    5.5.2 Solving the Challenges
    5.5.3 Optimizations
    5.5.4 Security Analysis
    5.6 Evaluation
    5.6.1 Runtime Overhead
    5.6.2 Code Size Overhead
    5.6.3 Energy Overhead
    5.6.4 Security and Usability
    5.6.5 Use Cases
    5.7 Discussion
    5.8 Related Work
    5.9 Summary
    6 Conclusion and Future Work
    6.1 Future Work
    Abstract (In Korean)
    Acknowledgement
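
    The first contribution above, tracking implicit flows through conditional branches with a program counter tag (tPC, Chapter 3), can be illustrated in miniature. The C++ sketch below is a rough software model of the idea only, not the thesis's hardware (IFTU) design, and every identifier in it is illustrative:

        #include <cstdio>

        struct Tagged { int value; bool taint; };  // one data word plus a 1-bit taint tag

        int main() {
            Tagged secret{1, true};   // security-sensitive input, tagged as tainted
            Tagged leaked{0, false};

            // Implicit flow: 'leaked' depends on 'secret' only through the branch.
            // While control flow depends on tainted data, a program counter tag
            // (tPC) is raised, and every assignment made under it inherits it.
            bool tPC = secret.taint;               // branch condition is tainted
            if (secret.value != 0) {
                leaked.value = 1;
                leaked.taint = leaked.taint || tPC;
            }
            // A complete scheme must also tag variables assigned only on the
            // untaken path (the "compensation" of Section 3.3.3); that machinery
            // is omitted from this sketch.
            std::printf("leaked=%d taint=%d\n", leaked.value, (int)leaked.taint);
            return 0;
        }

    Without the tPC term, 'leaked' would appear untainted even though it encodes one bit of 'secret'.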

    Bit-vector Support in Z3-str2 Solver and Automated Exploit Synthesis

    Improper string manipulations are an important cause of software defects, which makes them a target of program analysis for hackers and developers alike. Symbolic execution based program analysis techniques that systematically explore paths through string-intensive programs require reasoning about string and bit-vector constraints cohesively. The current state-of-the-art symbolic execution engines for programs written in C/C++ track constraints at the bit level and use a bit-vector solver to reason about the collected path constraints. However, string functions incur heavy performance penalties and lead to path explosion in the symbolic execution engine. The current state-of-the-art string solvers are written primarily for the analysis of web applications, with underlying support for the theory of strings and integers, which limits their use in the analysis of low-level programs. Therefore, we designed a decision procedure for the theory of strings and bit-vectors in Z3-str2, a decision procedure for strings and integers, to efficiently solve word equations and length functions over bit-vectors. The new theory combination has a significant role in the detection of integer overflow and memory corruption vulnerabilities associated with string operations. In addition, we introduced a new search space pruning technique for string lengths based on a binary search approach, which enables our decision procedure to solve constraints involving large strings. We evaluated our decision procedure on a set of real security vulnerabilities collected from the Common Vulnerabilities and Exposures (CVE) database and compared the results against the Z3-str2 string-integer solver. The experiments show that our decision procedure is orders of magnitude faster than Z3-str2's string-integer procedure. The techniques we developed have the potential to dramatically improve the efficiency of symbolic execution for string-intensive programs. In addition to designing and implementing a string bit-vector solver, we also addressed the problem of automated remote exploit construction. In this context, we introduce a practical approach for automating remote exploitation using an information leakage vulnerability, and show that current protection schemes against control-flow hijack attacks are not always effective. To demonstrate the efficacy of our technique, we performed an over-the-network format string exploitation followed by a return-to-libc attack against a pre-forking concurrent server to gain remote access to a shell. Our attack managed to defeat various protections, including ASLR, DEP, PIE, stack canaries, and RELRO.
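
    The binary-search pruning of string lengths can be sketched independently of the solver internals. In the hypothetical C++ sketch below, satWithBound stands in for a query to the string/bit-vector solver asking whether the path constraints are satisfiable with the string length bounded by n; the names and types are illustrative and are not Z3-str2's API:

        #include <cstdint>
        #include <functional>
        #include <optional>

        // SAT with |s| <= bound? Monotone by construction: a solution that
        // fits within n also fits within n + 1.
        using Oracle = std::function<bool(uint64_t)>;

        // Smallest satisfiable length bound below maxLen, if one exists.
        std::optional<uint64_t> minimalSatLength(const Oracle& satWithBound,
                                                 uint64_t maxLen) {
            if (!satWithBound(maxLen)) return std::nullopt;  // UNSAT everywhere
            uint64_t lo = 0, hi = maxLen;                    // invariant: SAT at hi
            while (lo < hi) {
                uint64_t mid = lo + (hi - lo) / 2;
                if (satWithBound(mid)) hi = mid;             // shrink from above
                else                   lo = mid + 1;         // no solution this small
            }
            return hi;  // O(log maxLen) solver calls instead of O(maxLen)
        }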

    Enforcing C++ type integrity with fast dynamic casting, member function protections and an exploration of C++ beneath the surface

    The C++ type system provides a programmer with modular class features and inheritance capabilities. Upholding the integrity of all class types, known as type-safety, is paramount in preventing type vulnerabilities and exploitation. However, type confusion vulnerabilities are all too common in C++ programs. The lack of low-level type-awareness creates an environment where advanced exploits, like counterfeit object-oriented programming (COOP), can flourish. Although type confusion and COOP exist in different research fields, they both take advantage of inadequate enforcement of type-safety. Most type confusion defence research has focused on type inclusion testing, with varying degrees of coverage and performance overheads. COOP defences, on the other hand, have predominantly featured control flow integrity (CFI) defence measures, which, until very recently, were thought to be sound. We investigate both of these topics and challenge prevailing wisdom, arguing that: 1. optimised dynamic casting is better suited to preventing type confusion, and 2. enforcing type integrity may be the only defence against COOP. Type confusion vulnerabilities are often the result of substituting dynamic casting with an inappropriate static casting method. Dynamic casting is often avoided due to its memory consumption and run-time overheads, with some developers turning off run-time type information (RTTI) altogether. However, without RTTI, developers lose not only secure casting but virtual inheritance as well. We argue that improving the performance of dynamic casting can make it a viable option for preventing type confusion vulnerabilities. In this thesis, we present MemCast, a memoising wrapper for the dynamic cast operator that increases its speed to that of a dynamic dispatch. A new variant of the COOP exploit (COOPLUS) has identified a weakness in almost all modern, C++-semantic-aware CFI defences. The weakness is that they allow derived class functions to be invoked using corrupted base class instances, specifically where an attacker replaces the object's virtual pointer with one from a derived-type object. A CFI defence overestimates the set of target functions at a dispatch site to cover all possible control-flow paths of a polymorphic object; COOPLUS thus takes advantage of the lack of type integrity between related types at dispatch sites. In this thesis, we argue that CFI is an unsuitable defence against COOPLUS and that type integrity must be applied. Hence we propose a type integrity defence called Member Function Integrity (MFI) that brings type awareness to member functions and prevents any member function from operating on an invalid object type. To understand the low-level techniques deployed in MemCast and our MFI defence policy, one has to appreciate the memory layout of the objects themselves and the conventions used by the member functions that operate on them. However, in our research we did not find adequate introductory literature specific to modern compilers, so we supply our own self-contained introduction to low-level object-orientation. This thesis therefore makes three contributions: a primer on C++ object layouts, an optimised dynamic casting technique that reduces the casting cost to that of a dynamic dispatch, and a new defence policy proposal (MFI) to mitigate all known COOP exploits.
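
    As a rough sketch of the memoisation idea behind MemCast (not its actual implementation), the following C++ template caches the pointer adjustment computed by dynamic_cast, keyed by the object's vtable pointer. It assumes the common Itanium C++ ABI layout, where the vptr is the first word of a polymorphic object, and it is not thread-safe:

        #include <cstddef>
        #include <cstdint>
        #include <unordered_map>

        // Memoised downcast; From must be a polymorphic class type.
        template <typename To, typename From>
        To* memo_dyn_cast(From* p) {
            if (!p) return nullptr;
            const void* vptr = *reinterpret_cast<void* const*>(p);

            // vptr -> byte adjustment, one cache per (To, From) instantiation.
            static std::unordered_map<const void*, std::ptrdiff_t> cache;
            auto it = cache.find(vptr);
            if (it != cache.end()) {
                if (it->second == PTRDIFF_MIN) return nullptr;  // cached failure
                return reinterpret_cast<To*>(
                    reinterpret_cast<char*>(p) + it->second);
            }
            To* q = dynamic_cast<To*>(p);                       // slow path, once
            cache.emplace(vptr, q ? reinterpret_cast<char*>(q) -
                                    reinterpret_cast<char*>(p)
                                  : PTRDIFF_MIN);               // sentinel: failed
            return q;
        }

    Caching by vptr is sound here because a vptr identifies both the dynamic type and the base subobject it sits in, so the adjustment is identical for every object that shares it.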

    A Solder-Defined Computer Architecture for Backdoor and Malware Resistance

    This research is about securing control of those devices we most depend on for integrity and confidentiality. An emerging concern is that complex integrated circuits may be subject to exploitable defects or backdoors, and measures for inspection and audit of these chips are neither supported nor scalable. One approach for providing a "supply chain firewall" may be to forgo such components, and instead to build central processing units (CPUs) and other complex logic from simple, generic parts. This work investigates the capability and speed ceiling when open-source hardware methodologies are fused with maker-scale assembly tools and visible-scale final inspection. The author has designed, and demonstrated in simulation, a 36-bit CPU and protected memory subsystem that use only synchronous static random access memory (SRAM) and trivial glue logic integrated circuits as components. The design presently lacks preemptive multitasking, the ability to load firmware into the SRAMs used as logic elements, and input/output. Strategies are presented for adding these missing subsystems, again using only SRAM and trivial glue logic. A load-store architecture is employed with four clock cycles per instruction. Simulations indicate that a clock speed of at least 64 MHz is probable, corresponding to 16 million instructions per second (16 MIPS), despite the architecture containing no microprocessors, field programmable gate arrays, programmable logic devices, application specific integrated circuits, or other purchased complex logic. The lower speed, larger size, higher power consumption, and higher cost of an "SRAM minicomputer," compared to traditional microcontrollers, may be offset by the fully open architecture (hardware and firmware), along with more rigorous user control, reliability, transparency, and auditability of the system. SRAM logic is also particularly well suited for building arithmetic logic units, and can implement complex operations such as population count, a hash function for associative arrays, or a pseudorandom number generator with good statistical properties in as few as eight clock cycles per 36-bit word processed. 36-bit unsigned multiplication can be implemented in software in 47 instructions or fewer (188 clock cycles). A general theory is developed for fast SRAM parallel multipliers, should they be needed.
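
    The table-driven computation described above can be mimicked in software. This illustrative C++ sketch (not code from the thesis) computes the population count of a 36-bit word with three lookups into a precomputed 4096-entry table, each lookup standing in for one wide-SRAM read:

        #include <array>
        #include <cstdint>

        // Popcount of every 12-bit value, computed once at startup.
        static const std::array<uint8_t, 1u << 12> POP12 = [] {
            std::array<uint8_t, 1u << 12> t{};
            for (uint32_t i = 0; i < t.size(); ++i)
                for (uint32_t v = i; v != 0; v >>= 1)
                    t[i] += v & 1u;
            return t;
        }();

        // A 36-bit word as three table reads plus two adds.
        uint32_t popcount36(uint64_t w) {
            return POP12[w & 0xFFFu]
                 + POP12[(w >> 12) & 0xFFFu]
                 + POP12[(w >> 24) & 0xFFFu];
        }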

    Protecting Systems From Exploits Using Language-Theoretic Security

    Any computer program processing input from the user or network must validate the input. Input-handling vulnerabilities occur in programs when the software component responsible for filtering malicious input---the parser---does not perform validation adequately. Consequently, parsers are among the most targeted components, since they defend the rest of the program from malicious input. This thesis adopts the Language-Theoretic Security (LangSec) principle to understand what tools and research are needed to prevent exploits that target parsers. LangSec proposes specifying the syntactic structure of the input format as a formal grammar. We then build a recognizer for this formal grammar to validate any input before the rest of the program acts on it. To ensure that these recognizers represent the data format, programmers often rely on parser generator or parser combinator tools to build the parsers. This thesis advances several sub-fields of LangSec by proposing new techniques to find bugs in implementations, novel categorizations of vulnerabilities, and new parsing algorithms and tools to handle practical data formats. To this end, this thesis comprises five parts that tackle various tenets of LangSec. First, I categorize various input-handling vulnerabilities and exploits using two frameworks: the mismorphisms framework, which helps us reason about the root causes leading to various vulnerabilities, and a categorization framework built from various LangSec anti-patterns, such as parser differentials and insufficient input validation. We then built a catalog of more than 30 popular vulnerabilities to demonstrate both frameworks. Second, I built parsers for various Internet of Things and power grid network protocols and for the iccMAX file format using parser combinator libraries. The parsers I built for power grid protocols were deployed and tested on power grid substation networks as an intrusion detection tool. The parser I built for the iccMAX file format led to several corrections and modifications to the iccMAX specifications and reference implementations. Third, I present SPARTA, a novel tool I built that generates Rust code to type check Portable Document Format (PDF) files. The type checker I helped build strictly enforces the constraints in the PDF specification to find deviations. Our checker has contributed to at least four significant clarifications and corrections to the PDF 2.0 specification and various open-source PDF tools. In addition to our checker, we also built a practical tool, PDFFixer, to dynamically patch type errors in PDF files. Fourth, I present ParseSmith, a tool for building verified parsers for real-world data formats. Most parsing tools available for data formats are insufficient to handle practical formats or have not been verified for correctness. I built a verified parsing tool in Dafny that builds on ideas from attribute grammars, data-dependent grammars, and parsing expression grammars to tackle various constructs commonly seen in network formats. I prove that our parsers run in linear time and always terminate for well-formed grammars. Finally, I provide the earliest systematic comparison of various data description languages (DDLs) and their parser generation tools. DDLs are used to describe and parse commonly used data formats, such as image formats. I conducted a qualitative expert-elicitation study to derive the metrics I use to compare the DDLs, and I also systematically compare these DDLs based on the sample data descriptions available with them---checking for correctness and resilience.
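
    The core LangSec pattern, recognizing the whole input against a formal grammar before any business logic runs, fits in a few lines. The toy grammar and names below are illustrative, not from the thesis:

        #include <cctype>
        #include <cstddef>
        #include <string_view>

        // Grammar: msg := 1*DIGIT ":" 1*ALPHA. Accept whole input or reject.
        bool recognize(std::string_view in) {
            std::size_t i = 0;
            if (i == in.size() || !std::isdigit((unsigned char)in[i])) return false;
            while (i < in.size() && std::isdigit((unsigned char)in[i])) ++i;
            if (i == in.size() || in[i] != ':') return false;
            ++i;
            if (i == in.size() || !std::isalpha((unsigned char)in[i])) return false;
            while (i < in.size() && std::isalpha((unsigned char)in[i])) ++i;
            return i == in.size();       // trailing bytes => not in the language
        }

        void handle(std::string_view in) {
            if (!recognize(in)) return;  // malformed input never reaches the logic
            // ... business logic may now assume 'in' is in the language ...
        }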

    Resiliency Mechanisms for In-Memory Column Stores

    The key objective of database systems is to reliably manage data, while high query throughput and low query latency are core requirements. To date, database research activities have mostly concentrated on the second part. However, due to the constant shrinking of transistor feature sizes, integrated circuits are becoming more and more unreliable, and transient hardware errors in the form of multi-bit flips are becoming more and more prominent. A recent study (2013) of a large high-performance cluster with around 8,500 nodes measured a failure rate of 40 FIT per DRAM device. For that system, this means a single- or multi-bit flip occurs every 10 hours, which is unacceptably high for enterprise and HPC scenarios. Causes can be cosmic rays, heat, or electrical crosstalk, with the latter being actively exploited through the RowHammer attack. It has been shown that memory cells are more prone to bit flips than logic gates, and several surveys found multi-bit flip events in the main memory modules of today's data centers. Due to the shift towards in-memory data management systems, where all business-related data and query intermediate results are kept solely in fast main memory, such systems are in great danger of delivering corrupt results to their users. Hardware techniques cannot be scaled to compensate for the exponentially increasing error rates. In other domains there is increasing interest in software-based solutions to this problem, but the proposed methods come with huge runtime and/or storage overheads, which are unacceptable for in-memory data management systems. In this thesis, we investigate how to integrate bit flip detection mechanisms into in-memory data management systems. To achieve this goal, we first build an understanding of bit flip detection techniques and select two error codes, AN codes and XOR checksums, suitable to the requirements of in-memory data management systems. The most important requirement is the effectiveness of the codes in detecting bit flips. We meet this goal through AN codes, which exhibit better and adaptable error detection capabilities compared to those found in today's hardware. The second most important goal is efficiency in terms of coding latency. We meet this by introducing fundamental performance improvements to AN codes and by vectorizing both chosen codes' operations. We integrate bit flip detection mechanisms into the lowest storage layer and the query processing layer in such a way that the rest of the data management system and the user can stay oblivious of any error detection. This covers both base columns and pointer-heavy index structures such as the ubiquitous B-Tree. Additionally, our approach allows adaptable, on-the-fly bit flip detection during query processing, with very little impact on query latency. AN coding allows recoding intermediate results with virtually no performance penalty. We support our claims with exhaustive runtime and throughput measurements throughout the thesis and with an end-to-end evaluation using the Star Schema Benchmark. To the best of our knowledge, we are the first to present such holistic and fast bit flip detection in a large software infrastructure such as an in-memory data management system. Finally, most of the source code fragments used to obtain the results in this thesis are open source and freely available.

    1 INTRODUCTION
    1.1 Contributions of this Thesis
    1.2 Outline
    2 PROBLEM DESCRIPTION AND RELATED WORK
    2.1 Reliable Data Management on Reliable Hardware
    2.2 The Shift Towards Unreliable Hardware
    2.3 Hardware-Based Mitigation of Bit Flips
    2.4 Data Management System Requirements
    2.5 Software-Based Techniques For Handling Bit Flips
    2.5.1 Operating System-Level Techniques
    2.5.2 Compiler-Level Techniques
    2.5.3 Application-Level Techniques
    2.6 Summary and Conclusions
    3 ANALYSIS OF CODING TECHNIQUES
    3.1 Selection of Error Codes
    3.1.1 Hamming Coding
    3.1.2 XOR Checksums
    3.1.3 AN Coding
    3.1.4 Summary and Conclusions
    3.2 Probabilities of Silent Data Corruption
    3.2.1 Probabilities of Hamming Codes
    3.2.2 Probabilities of XOR Checksums
    3.2.3 Probabilities of AN Codes
    3.2.4 Concrete Error Models
    3.2.5 Summary and Conclusions
    3.3 Throughput Considerations
    3.3.1 Test Systems Descriptions
    3.3.2 Vectorizing Hamming Coding
    3.3.3 Vectorizing XOR Checksums
    3.3.4 Vectorizing AN Coding
    3.3.5 Summary and Conclusions
    3.4 Comparison of Error Codes
    3.4.1 Effectiveness
    3.4.2 Efficiency
    3.4.3 Runtime Adaptability
    3.5 Performance Optimizations for AN Coding
    3.5.1 The Modular Multiplicative Inverse
    3.5.2 Faster Softening
    3.5.3 Faster Error Detection
    3.5.4 Comparison to Original AN Coding
    3.5.5 The Multiplicative Inverse Anomaly
    3.6 Summary
    4 BIT FLIP DETECTING STORAGE
    4.1 Column Store Architecture
    4.1.1 Logical Data Types
    4.1.2 Storage Model
    4.1.3 Data Representation
    4.1.4 Data Layout
    4.1.5 Tree Index Structures
    4.1.6 Summary
    4.2 Hardened Data Storage
    4.2.1 Hardened Physical Data Types
    4.2.2 Hardened Lightweight Compression
    4.2.3 Hardened Data Layout
    4.2.4 UDI Operations
    4.2.5 Summary and Conclusions
    4.3 Hardened Tree Index Structures
    4.3.1 B-Tree Verification Techniques
    4.3.2 Justification For Further Techniques
    4.3.3 The Error Detecting B-Tree
    4.4 Summary
    5 BIT FLIP DETECTING QUERY PROCESSING
    5.1 Column Store Query Processing
    5.2 Bit Flip Detection Opportunities
    5.2.1 Early Onetime Detection
    5.2.2 Late Onetime Detection
    5.2.3 Continuous Detection
    5.2.4 Miscellaneous Processing Aspects
    5.2.5 Summary and Conclusions
    5.3 Hardened Intermediate Results
    5.3.1 Materialization of Hardened Intermediates
    5.3.2 Hardened Bitmaps
    5.4 Summary
    6 END-TO-END EVALUATION
    6.1 Prototype Implementation
    6.1.1 AHEAD Architecture
    6.1.2 Diversity of Physical Operators
    6.1.3 One Concrete Operator Realization
    6.1.4 Summary and Conclusions
    6.2 Performance of Individual Operators
    6.2.1 Selection on One Predicate
    6.2.2 Selection on Two Predicates
    6.2.3 Join Operators
    6.2.4 Grouping and Aggregation
    6.2.5 Delta Operator
    6.2.6 Summary and Conclusions
    6.3 Star Schema Benchmark Queries
    6.3.1 Query Runtimes
    6.3.2 Improvements Through Vectorization
    6.3.3 Storage Overhead
    6.3.4 Summary and Conclusions
    6.4 Error Detecting B-Tree
    6.4.1 Single Key Lookup
    6.4.2 Key Value-Pair Insertion
    6.5 Summary
    7 SUMMARY AND CONCLUSIONS
    7.1 Future Work
    A APPENDIX
    A.1 List of Golden As
    A.2 More on Hamming Coding
    A.2.1 Code examples
    A.2.2 Vectorization
    BIBLIOGRAPHY
    LIST OF FIGURES
    LIST OF TABLES
    LIST OF LISTINGS
    LIST OF ACRONYMS
    LIST OF SYMBOLS
    LIST OF DEFINITIONS
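
    As a minimal sketch of one of the two selected codes, the following C++ fragment maintains an XOR checksum over a block of column values and re-verifies it during a scan. The block layout is illustrative, not the thesis's actual hardened storage layout:

        #include <cstdint>
        #include <vector>

        // One checksum word guards a block of values; any single-bit flip in
        // the block (or in the checksum itself) makes verification fail.
        struct GuardedBlock {
            std::vector<uint64_t> values;
            uint64_t checksum = 0;

            void append(uint64_t v) {
                values.push_back(v);
                checksum ^= v;         // maintained incrementally on writes
            }

            bool verify() const {      // e.g. run alongside a column scan
                uint64_t x = 0;
                for (uint64_t v : values) x ^= v;
                return x == checksum;
            }
        };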

    Hardware Error Detection Using AN-Codes

    Due to continuously decreasing feature sizes and the increasing complexity of integrated circuits, commercial off-the-shelf (COTS) hardware is becoming less and less reliable. However, dedicated reliable hardware is expensive and usually slower than commodity hardware, so economic pressure will most likely result in the use of unreliable COTS hardware in safety-critical systems. This creates a need for software-implemented solutions for handling the execution errors caused by such unreliable hardware. In this thesis, we provide techniques for detecting hardware errors that disturb the execution of a program. The detection provided facilitates handling of these errors, for example by retry or graceful degradation. We realize the error detection by transforming unsafe programs, which are not guaranteed to detect execution errors, into safe programs that detect execution errors with high probability. To this end, we use arithmetic AN-, ANB-, ANBD-, and ANBDmem-codes. These codes detect errors that modify data during storage or transport, as well as errors that disturb computations. Furthermore, the error detection provided is independent of the hardware used. We present the following novel encoding approaches:
    - Software Encoded Processing (SEP), which transforms an unsafe binary into a safe execution at runtime by applying an ANB-code, and
    - Compiler Encoded Processing (CEP), which applies encoding at compile time and provides different levels of safety by using different arithmetic codes.
    In contrast to existing encoding solutions, SEP and CEP can encode applications whose data and control flow are not completely predictable at compile time. For encoding, SEP and CEP use our set of encoded operations, also presented in this thesis. To the best of our knowledge, we are the first to present the encoding of a complete RISC instruction set, including Boolean and bitwise logical operations, casts, unaligned loads and stores, shifts, and arithmetic operations. Our evaluations show that encoding with SEP and CEP significantly reduces the amount of erroneous output caused by hardware errors, and that, in contrast to replication-based approaches, arithmetic encoding facilitates the detection of permanent hardware errors. This increased reliability does not come for free; however, unexpectedly, the runtime costs of the different arithmetic codes supported by CEP grow only linearly with the added redundancy, while the gained safety grows exponentially.
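
    A minimal sketch of plain AN coding, the simplest of the codes above, shows how both stored and computed values stay checkable. The constant A is an illustrative odd value, not one chosen as in the thesis, and the signatures and versions that ANB-, ANBD-, and ANBDmem-codes add are omitted:

        #include <cassert>
        #include <cstdint>

        constexpr uint64_t A = 64311;  // illustrative code constant (odd)

        uint64_t enc(uint32_t v) { return A * static_cast<uint64_t>(v); }
        bool valid(uint64_t c)   { return c % A == 0; }
        uint32_t dec(uint64_t c) { return static_cast<uint32_t>(c / A); }

        int main() {
            // Addition is closed under the code: A*x + A*y == A*(x + y).
            uint64_t c = enc(7) + enc(35);
            assert(valid(c) && dec(c) == 42);

            // A single bit flip in storage or during the add leaves a word
            // that is no longer a multiple of A, because A is odd and thus
            // divides no power of two.
            uint64_t faulty = c ^ (1ull << 13);
            assert(!valid(faulty));
            return 0;
        }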

    Data management system study: Final technical report, 15 May 1967 - 31 Mar. 1968

    Get PDF
    Systems design of prototype coherent system to meet data processing requirements of post-Apollo mission