4 research outputs found

    Extending Hardware Based Mandatory Access Controls to Multicore Architectures

    Get PDF
    Memory-based vulnerabilities have plagued the computer industry since the release of the Morris worm twenty years ago. In addition to buffer overflow attacks like the Morris worm, format string, return-to-libc, and heap double-free() exploits have been able to take advantage of pervasive programming errors. A recent example is the unspecified buffer overflow vulnerability present in Mozilla Firefox 3.0. History shows that these coding mistakes are not waning. A solution is needed that can close off these security shortcomings while having minimal impact on the user. Antivirus software makers continuously overestimate the lengths that the everyday user is willing to go to in order to protect his or her system. The ideal protection scheme will be of little or no inconvenience to the user. A technique that fits this niche is one that is built into the hardware. Typical users will never know of the added protection they're receiving because they are getting it by default. Unlike the NX bit technology in modern x86 machines, the correct solution should be mandatory and uncircumventable by user programs. The idea of marking memory as non-executable is maintained, but in this case the granularity is refined to the byte level. The standard memory model is extended by one bit per byte to indicate whether the data stored there is trusted or not. While this design is not unique in the architecture field, the issues that arise from multiple processing units in a single system cause complications. Therefore, the purpose of this work is to investigate hardware-based mandatory access control mechanisms that work in the multicore paradigm. As a proof of concept, a buffer overflow style attack has been crafted that results in an escalation of privileges for a nonroot user. While effective against a standard processor, a CPU modified to include byte-level tainting successfully repels the attack with minimal performance overhead.
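The taint-bit scheme described above can be illustrated with a minimal software simulation. This is a sketch, not the thesis's hardware design: the memory size, function names, and the shellcode bytes are all made up for illustration. The essential idea survives, though: one shadow bit per byte marks data from untrusted sources, and instruction fetch from a tainted byte is refused.

```python
# Minimal simulation of byte-level taint tracking (illustrative names;
# the actual design stores one extra hardware bit per memory byte).

MEM_SIZE = 256
mem = bytearray(MEM_SIZE)
tainted = [False] * MEM_SIZE  # shadow "trusted?" bit per byte

def untrusted_write(dst, data):
    """Copy external input into memory, marking every byte as untrusted."""
    mem[dst:dst + len(data)] = data
    for i in range(dst, dst + len(data)):
        tainted[i] = True

def can_execute(addr):
    """The modified CPU refuses to fetch instructions from tainted bytes."""
    return not tainted[addr]

# A buffer overflow lands attacker-controlled bytes in memory...
untrusted_write(64, b"\x90\x90\xcc")
# ...but control flow cannot be redirected into them:
assert can_execute(0)       # untouched memory stays trusted
assert not can_execute(64)  # injected bytes are refused
```

Because the taint bit is set by the hardware on every untrusted load, not by a library call, user programs cannot opt out — which is the "mandatory" property the abstract contrasts with the per-page, OS-managed NX bit.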

    Software-based Transparent And Comprehensive Control-flow Error Detection

    No full text
    Shrinking microprocessor feature size and growing transistor density may increase soft-error rates to unacceptable levels in the near future. While reliable systems typically employ hardware techniques to address soft errors, software-based techniques can provide a less expensive and more flexible alternative. This paper presents a control-flow error classification and proposes two new software-based comprehensive control-flow error detection techniques. The new techniques improve on previous ones in that they detect errors in all the branch-error categories. We implemented the techniques in our dynamic binary translator so that they can be applied to existing x86 binaries transparently. We compared our new techniques with the previous ones and show that our methods cover more errors while incurring similar performance overhead. © 2006 IEEE.

    Hardware Error Detection Using AN-Codes

    Get PDF
    Due to continuously decreasing feature sizes and the increasing complexity of integrated circuits, commercial off-the-shelf (COTS) hardware is becoming less and less reliable. However, dedicated reliable hardware is expensive and usually slower than commodity hardware. Thus, economic pressure will most likely result in the use of unreliable COTS hardware in safety-critical systems, which creates a need for software-implemented solutions for handling the execution errors this hardware causes. In this thesis, we provide techniques for detecting hardware errors that disturb the execution of a program. The detection provided facilitates handling of these errors, for example, by retry or graceful degradation. We realize the error detection by transforming unsafe programs, which are not guaranteed to detect execution errors, into safe programs that detect execution errors with high probability. To this end, we use arithmetic AN-, ANB-, ANBD-, and ANBDmem-codes. These codes detect errors that modify data during storage or transport, as well as errors that disturb computations. Furthermore, the error detection provided is independent of the hardware used. We present the following novel encoding approaches: - Software Encoded Processing (SEP), which transforms an unsafe binary into a safe execution at runtime by applying an ANB-code, and - Compiler Encoded Processing (CEP), which applies encoding at compile time and provides different levels of safety by using different arithmetic codes. In contrast to existing encoding solutions, SEP and CEP allow the encoding of applications whose data and control flow are not completely predictable at compile time. For encoding, SEP and CEP use our set of encoded operations, also presented in this thesis.
To the best of our knowledge, we are the first to present the encoding of a complete RISC instruction set, including Boolean and bitwise logical operations, casts, unaligned loads and stores, shifts, and arithmetic operations. Our evaluations show that encoding with SEP and CEP significantly reduces the amount of erroneous output caused by hardware errors. Furthermore, our evaluations show that, in contrast to replication-based approaches for detecting errors, arithmetic encoding facilitates the detection of permanent hardware errors. This increased reliability does not come for free. Unexpectedly, however, the runtime costs of the different arithmetic codes supported by CEP increase only linearly compared to redundancy, while the gained safety increases exponentially.
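The core AN-code idea is compact enough to sketch. Every value n is stored as the codeword A*n for a fixed constant A; random corruption of a codeword is, with probability roughly 1 - 1/A, no longer a multiple of A and is therefore detected before the value is used. The constant 59 below is an illustrative choice, not one prescribed by the thesis.

```python
# Sketch of AN-code error detection (A = 59 is illustrative).
A = 59

def encode(n):
    """Store n as the codeword A*n."""
    return A * n

def decode(c):
    """Recover n, detecting corruption first."""
    if c % A != 0:  # corrupted codeword: no longer a multiple of A
        raise RuntimeError("hardware error detected")
    return c // A

x, y = encode(7), encode(5)
s = x + y                  # addition works directly on codewords:
assert decode(s) == 12     # A*7 + A*5 = A*12

flipped = s ^ (1 << 3)     # simulate a bit flip during storage/transport
# decode(flipped) raises: 708 ^ 8 = 716 is not divisible by 59
```

The ANB-, ANBD-, and ANBDmem-codes extend this with per-variable signatures (B) and timestamps (D) to additionally catch operand-exchange and lost-update errors, which a plain AN-code cannot distinguish.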