63 research outputs found

    CAREER: Architectural Support for Parallel Execution as a Continuum of Transactions (ASPECT)

    Issued as final report. National Science Foundation (U.S.).

    Understanding, Alleviating and Exploiting Electro-Magnetic Side-Channel Signals

    Presented on September 16, 2016 at 12:00 p.m. in the Microelectronics Research Center, room 102A/B.

    Milos Prvulovic, Ph.D., is a professor in the School of Computer Science, College of Computing at the Georgia Institute of Technology. His research focuses on hardware and software support for program monitoring, debugging, and security. His research on side-channel emanations and side-channel attacks has led to widespread interest from professional societies, the media, and additional research sponsors, most recently attracting a $9.4 million award from the Defense Advanced Research Projects Agency (DARPA) for continued study. In general, the goal of his research is to make both hardware and software more reliable and secure.

    Runtime: 69:40 minutes

    A side-channel attack is an attack that exploits the low-power electronic signals a device emits even when it is not connected to the Internet or a network. Such signals can leak sensitive data used in a computational task. Among side channels, electromagnetic emanations are particularly interesting because they do not require any contact with the target device in order to read potentially sensitive and private data. While side-channel attacks can be conducted without understanding the relationship between computation and electromagnetic emanations, prevention is usually cost-, overhead-, power-, and/or weight-intensive. In this talk, I will describe our work to understand the execution-emanations relationship, how this research can be used to "surgically" alleviate side-channel vulnerabilities, and even how it enables new beneficial uses of side-channel information.
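    The correlation-style analysis such attacks rely on can be illustrated with a toy model. This is a sketch under stated assumptions, not the talk's actual measurement setup: the substitution box, the Hamming-weight leakage model, the noise level, and the key value below are all illustrative.

```python
import random

# Toy nonlinear substitution box and a secret the "attacker" will recover.
SBOX = list(range(256))
random.Random(0).shuffle(SBOX)
SECRET_KEY = 0x3A

def hamming_weight(v):
    return bin(v).count("1")

def measure(pt, rng):
    # One "EM sample": data-dependent amplitude plus measurement noise.
    return hamming_weight(SBOX[pt ^ SECRET_KEY]) + rng.gauss(0, 0.5)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5 + 1e-12)

def recover_key(traces):
    # Rank every key guess by how well its predicted leakage correlates
    # with the measured amplitudes; the correct guess stands out.
    return max(range(256), key=lambda k: abs(pearson(
        [hamming_weight(SBOX[pt ^ k]) for pt, _ in traces],
        [amp for _, amp in traces])))

rng = random.Random(42)
plaintexts = [rng.randrange(256) for _ in range(2000)]
traces = [(pt, measure(pt, rng)) for pt in plaintexts]
recovered = recover_key(traces)
```

    With 2,000 noisy samples the correct key byte correlates far more strongly than any wrong guess, which is why contactless EM measurements are enough to leak secrets without any model of the exact circuit.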

    Split Temporal/Spatial Cache: A Survey and Reevaluation of Performance

    The purpose of this paper is to reevaluate the performance of the Split Temporal/Spatial (STS) cache. First we briefly survey the split cache designs found in the open literature. Then we propose quantitative definitions for both temporal and spatial locality. These definitions can be used to represent each split cache design (or any other method for optimized locality exploitation) as a line in a temporal-spatial locality plane. Then we explain the particular process used to evaluate the STS cache design, and finally we present the results of that evaluation. We conclude with possible improvements pointed to by our evaluation results.

    Introduction: In recent years, the speed gap between dynamic memories and microprocessors has been steadily increasing. For this reason, a lot of effort is invested into finding ways to reduce or hide memory latency. One of the oldest and most powerful ways of reducing memory latency is through the use of cache memories. Caches exploit the locality of d..
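    A minimal sketch of how trace-based locality metrics of this general kind can be computed follows. The paper's exact quantitative definitions are not reproduced here; the sliding window, the neighborhood size, and the either/or classification rule are illustrative assumptions.

```python
from collections import deque

def locality_profile(trace, window=32, neighborhood=4):
    """Return (temporal, spatial) fractions for a sequence of addresses.

    An access counts as temporal if the same address appears in the
    recent-access window, and as spatial if only a nearby address does.
    """
    recent = deque(maxlen=window)
    temporal = spatial = 0
    for addr in trace:
        if addr in recent:
            temporal += 1        # reuse of the very same address
        elif any(0 < abs(addr - r) <= neighborhood for r in recent):
            spatial += 1         # reuse of a neighboring address
        recent.append(addr)
    n = len(trace)
    return temporal / n, spatial / n

sequential = locality_profile(list(range(100)))  # streaming scan
loop = locality_profile([7] * 100)               # tight loop on one word
```

    A streaming scan scores almost purely spatial and a tight loop almost purely temporal, so each workload (or cache design tuned for it) occupies a distinct region of the temporal-spatial plane the abstract describes.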

    ReVive: Cost-Effective Architectural Support for Rollback Recovery in Shared-Memory Multiprocessors

    This paper presents ReVive, a novel general-purpose rollback recovery mechanism for shared-memory multiprocessors. ReVive carefully balances the conflicting requirements of availability, performance, and hardware cost. ReVive performs checkpointing, logging, and distributed parity protection, all memory-based. It enables recovery from a wide class of errors, including the permanent loss of an entire node. To maintain high performance, ReVive includes specialized hardware that performs frequent operations in the background, such as log and parity updates. To keep the cost low, more complex checkpointing and recovery functions are performed in software, while the hardware modifications are limited to the directory controllers of the machine. Our simulation results on a 16-processor system indicate that the average error-free execution time overhead of using ReVive is only 6.3%, while the achieved availability is better than 99.999% even when the errors occur as often as once per day.
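    The log-on-first-write undo mechanism at the core of such schemes can be sketched in a few lines. This is a single-node toy model; ReVive's distributed parity protection and hardware-assisted background logging are omitted.

```python
class UndoLogMemory:
    """Memory with undo-log rollback to the last checkpoint."""

    def __init__(self):
        self.mem = {}
        self.log = []        # (addr, old_value) pairs for current interval
        self.logged = set()  # addresses already logged this interval

    def checkpoint(self):
        # Commit: discard undo information and start a new interval.
        self.log.clear()
        self.logged.clear()

    def write(self, addr, value):
        if addr not in self.logged:          # log-on-first-write
            self.log.append((addr, self.mem.get(addr)))
            self.logged.add(addr)
        self.mem[addr] = value

    def rollback(self):
        # Restore logged old values in reverse order, then start fresh.
        for addr, old in reversed(self.log):
            if old is None:
                self.mem.pop(addr, None)
            else:
                self.mem[addr] = old
        self.checkpoint()

m = UndoLogMemory()
m.write(0x10, 1)
m.checkpoint()        # state {0x10: 1} becomes the recovery point
m.write(0x10, 2)
m.write(0x20, 3)
m.rollback()          # an error was detected; restore the checkpoint
```

    Logging only the first write to each location per interval is what keeps the error-free overhead small: steady-state writes to already-logged locations cost nothing extra.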

    Architectural Support for Reliable Parallel Computing

    108 p. Thesis (Ph.D.), University of Illinois at Urbana-Champaign, 2003.

    ReEnact improves the reliability of multi-threaded software by providing effective data-race debugging support. ReEnact extends thread-level speculation (TLS) mechanisms to roll back the buggy execution and repeat it as many times as necessary until the bug is fully characterized. These incremental re-executions are deterministic even in multi-threaded codes. The specific implementation of ReEnact detailed and evaluated in this thesis targets data races in multi-threaded programs. Our experiments using SPLASH-2 applications show that ReEnact is very effective at detecting and characterizing data-race bugs automatically on the fly; we consider this the most valuable contribution of ReEnact. Moreover, in many cases, ReEnact also repairs the bug. Last but not least, ReEnact is fully compatible with production runs: the slowdown of race-free execution is on average only 5.8%.
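    The deterministic re-execution idea can be illustrated with a toy model. This is hypothetical: a seeded scheduler stands in for ReEnact's hardware-level TLS replay, and the "threads" are coroutines sharing a non-atomic counter with a classic lost-update race.

```python
import random

def run(seed):
    """Two 'threads' each do counter += 1 non-atomically.

    Interleaving is chosen by a seeded scheduler, so the same seed
    always reproduces the same schedule, and therefore the same
    race outcome, on every re-execution.
    """
    counter = [0]

    def worker():
        tmp = counter[0]      # read
        yield                 # possible preemption point
        counter[0] = tmp + 1  # write (stale if the other thread ran between)
        yield

    rng = random.Random(seed)
    live = [worker(), worker()]
    while live:
        t = rng.choice(live)  # recorded scheduling decision
        try:
            next(t)
        except StopIteration:
            live.remove(t)
    return counter[0]

outcomes = {run(s) for s in range(20)}  # 2 if race-free, 1 on lost update
```

    Because every scheduling decision is reproducible from the seed, a buggy run can be repeated as often as needed to characterize the race, which is the property the thesis builds on.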

    KIMA: Hybrid Checkpointing for Recovery from a Wide Range of Errors and Detection Latencies

    Full system reliability is a problem that spans multiple levels of the software/hardware stack. The normal execution of a program in a system can be disrupted by multiple factors, ranging from transient errors in a processor and software bugs, to permanent hardware failures and human mistakes. A common method for recovering from such errors is the creation of checkpoints during the execution of the program, allowing the system to restore the program to a previous error-free state and resume execution. Different causes of errors, though, have different occurrence frequencies and detection latencies, requiring the creation of multiple checkpoints at different frequencies in order to maximize the availability of the system. In this paper we present KIMA, a novel checkpoint creation and management technique that efficiently combines the existing undo-log and redo-log checkpointing approaches, reducing the overall bandwidth requirements to both the memory and the hard disk. KIMA establishes DRAM-based undo-log checkpoints every 10ms, then leverages the undo-log metadata and checkpointed data to establish redo-log checkpoints every 1 second in non-volatile memory (such as PCM). Our results show that KIMA incurs average overheads of less than 1% while enabling efficient recovery from both transient and hard errors that have a variety of detection latencies.
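    The undo/redo interplay described above can be sketched as follows. This is a hypothetical single-node model: the 10 ms and 1 s intervals are modeled as explicit calls rather than timers, and a dictionary stands in for the non-volatile memory.

```python
class HybridCheckpointer:
    """Frequent undo-log checkpoints whose metadata feeds rarer redo-log ones."""

    def __init__(self):
        self.mem = {}
        self.undo = {}                # addr -> value at last undo checkpoint
        self.dirty_since_redo = set() # reused undo metadata
        self.nvm = {}                 # redo-log image in "non-volatile" memory

    def write(self, addr, value):
        self.undo.setdefault(addr, self.mem.get(addr))  # log-on-first-write
        self.mem[addr] = value

    def undo_checkpoint(self):
        # Every ~10 ms: cheap, DRAM-based; remember which locations changed.
        self.dirty_since_redo.update(self.undo)
        self.undo.clear()

    def redo_checkpoint(self):
        # Every ~1 s: copy only dirty locations to the non-volatile image.
        self.undo_checkpoint()
        for addr in self.dirty_since_redo:
            self.nvm[addr] = self.mem[addr]
        self.dirty_since_redo.clear()

    def recover_transient(self):
        # Short detection latency: roll back to the last undo checkpoint.
        for addr, old in self.undo.items():
            if old is None:
                self.mem.pop(addr, None)
            else:
                self.mem[addr] = old
        self.undo.clear()

    def recover_hard(self):
        # Long latency or lost DRAM: restore from the non-volatile image.
        self.mem = dict(self.nvm)

c = HybridCheckpointer()
c.write(1, "a"); c.redo_checkpoint()
c.write(1, "b"); c.undo_checkpoint()
c.write(2, "c")
c.recover_transient()
after_transient = dict(c.mem)   # back to the last undo checkpoint
c.recover_hard()
after_hard = dict(c.mem)        # back to the last redo checkpoint
```

    Reusing the undo log's dirty-address metadata is what makes the redo checkpoint incremental: only locations written since the last redo checkpoint are copied, which is the bandwidth saving the abstract claims.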