
    Epoch profiles: microarchitecture-based application analysis and optimization

    The performance of data-intensive applications, when running on modern multi- and many-core processors, is largely determined by their memory access behavior. Its most important contributors are the frequency and latency of off-chip accesses and the extent to which long-latency memory accesses can be overlapped with useful computation or with each other. In this paper we present two methods to better understand application and microarchitectural interactions. An epoch profile is an intuitive way to understand the relationships between three important characteristics: the on-chip cache size, the size of the reorder window of an out-of-order processor, and the frequency of processor stalls caused by long-latency, off-chip requests (epochs). By relating these three quantities one can more easily understand an application’s memory reference behavior and thus significantly reduce the design space. While epoch profiles help to provide insight into the behavior of a single application, developing an understanding of a number of applications in the presence of area and core count constraints presents additional challenges. Epoch-based microarchitectural analysis is presented as a better way to understand the trade-offs for memory-bound applications in the presence of these physical constraints. Through epoch profiling and optimization, one can significantly reduce the multidimensional design space for hardware/software optimization through the use of high-level model-driven techniques.
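
    The following is a minimal, hypothetical sketch of how an epoch profile might be gathered from a memory-reference trace; it is not the paper's tooling. It assumes an LRU-managed cache as a stand-in for the on-chip cache, and treats two off-chip misses as overlappable when they fall within a reorder-window's worth of instructions of each other.

```python
# Hypothetical epoch-profile sketch (assumptions: LRU cache model, misses within
# ROB_SIZE instructions of an epoch's first miss are considered overlappable).
from collections import OrderedDict

def count_epochs(trace, cache_lines, rob_size, line_bytes=64):
    """trace: iterable of (instruction_index, address) memory references.
    Returns the number of epochs, i.e. groups of off-chip misses whose
    latencies could not be overlapped within one reorder window."""
    cache = OrderedDict()           # LRU cache keyed by line address
    epochs = 0
    epoch_start_insn = None         # instruction index of the epoch's first miss
    for insn, addr in trace:
        line = addr // line_bytes
        if line in cache:
            cache.move_to_end(line)         # hit: refresh LRU position
            continue
        # Off-chip miss: either it overlaps the current epoch or it opens a new one.
        if epoch_start_insn is None or insn - epoch_start_insn > rob_size:
            epochs += 1
            epoch_start_insn = insn
        cache[line] = True
        if len(cache) > cache_lines:
            cache.popitem(last=False)       # evict least-recently-used line
    return epochs
```

    Sweeping cache_lines and rob_size over the same trace and recording the resulting epoch counts yields the two-dimensional profile relating cache size, reorder-window size, and stall frequency.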

    Enabling a High Throughput Real Time Data Pipeline for a Large Radio Telescope Array with GPUs

    The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5GiB/s, grouped into 8s cadences. This high throughput motivates the development of on-site, real time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8s data must be completely reduced before the next batch arrives. Maintaining real time operation will require a sustained performance of around 2.5TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exascale facilities. Comment: version accepted by Comp. Phys. Comm.
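
    As a rough illustration of the real-time constraint stated above, the sketch below turns the quoted figures (5 GiB/s, 8 s cadences, 2.5 TFLOP/s sustained) into a per-batch budget. The per-GPU sustained throughput is an illustrative assumption, not a number from the paper.

```python
# Back-of-the-envelope real-time budget for the pipeline described above.
# The 5 GiB/s rate, 8 s cadence and 2.5 TFLOP/s target come from the abstract;
# the assumed per-GPU sustained throughput is illustrative only.
GIB = 2**30

raw_rate_bytes = 5 * GIB          # raw data rate, bytes per second
cadence_s = 8.0                   # each batch covers 8 seconds of observation
required_flops = 2.5e12           # sustained FLOP/s needed for real-time operation

batch_bytes = raw_rate_bytes * cadence_s
batch_flop_budget = required_flops * cadence_s   # work that must finish before the next batch

assumed_gpu_sustained = 0.3e12    # assumption: ~0.3 TFLOP/s sustained per GPU on this workload
min_gpus = int(-(-batch_flop_budget // (assumed_gpu_sustained * cadence_s)))  # ceiling division

print(f"Data per 8 s batch: {batch_bytes / GIB:.0f} GiB")
print(f"FLOPs per batch:    {batch_flop_budget:.2e}")
print(f"GPUs needed (min):  {min_gpus}")
```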

    Hypervisor Integrity Measurement Assistant

    An attacker who has gained access to a computer may want to upload or modify configuration files, etc., and run arbitrary programs of his choice. We can severely restrict the power of the attacker by having a white-list of approved file checksums and preventing the kernel from loading any file with a bad checksum. The check may be placed in the kernel, but that requires a kernel that is prepared for it. The check may also be placed in a hypervisor, which intercepts and prevents the kernel from loading a bad file. We describe the implementation of, and give performance results for, two systems. In one, the checksumming (integrity measurement) and the decision are performed by the hypervisor instead of the OS. In the other, only the final integrity decision is made in the hypervisor. By moving the integrity check out of the VM kernel, it becomes harder for the intruder to bypass the check. We conclude that it is technically possible to put file integrity control into the hypervisor, both for kernels without and with pre-compiled support for integrity measurement.
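
    Below is a minimal user-space sketch of the white-list check described above, shown as an ordinary program rather than as hypervisor or kernel code; the whitelist format (one hex SHA-256 digest per line) and function names are assumptions for illustration only.

```python
# Minimal sketch of a checksum white-list check (assumption: SHA-256 digests,
# one lowercase hex digest per line in the whitelist file).
import hashlib

def load_whitelist(path):
    """Read the set of approved checksums."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def measure(path):
    """Compute the integrity measurement (SHA-256) of a file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def may_load(path, whitelist):
    """Return True only if the file's checksum appears in the whitelist;
    a hypervisor-based monitor would deny the load otherwise."""
    return measure(path) in whitelist
```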
