7 research outputs found

    Reinventing Scheduling for Multicore Systems

    Get PDF
    High performance on multicore processors requires that schedulers be reinvented. Traditional schedulers focus on keeping execution units busy by assigning each core a thread to run. Schedulers ought to focus, however, on high utilization of on-chip memory, rather than of execution cores, to reduce the impact of expensive DRAM and remote cache accesses. A challenge in achieving good use of on-chip memory is that the memory is split up among the cores in the form of many small caches. This paper argues for a form of scheduling that assigns each object and its operations to a specific core, moving a thread among the cores as it uses different objects.
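    The core-per-object idea can be illustrated with a small sketch. The types and channel-based dispatch below are illustrative assumptions, not the paper's mechanism: each object has a home worker standing in for a core, every operation on the object is shipped to that worker, and a logical thread hops between workers as it touches different objects, so each object's data stays resident in one cache.

```go
// Minimal sketch (not the paper's system): operations are shipped to the
// worker that "owns" an object, so the object's data stays hot in one
// core's cache instead of bouncing between caches.
package main

import (
	"fmt"
	"sync"
)

// worker models one core: it executes every operation on the objects it owns.
type worker struct {
	ops chan func()
}

func newWorker() *worker {
	w := &worker{ops: make(chan func(), 64)}
	go func() {
		for op := range w.ops {
			op()
		}
	}()
	return w
}

// counter is a toy shared object pinned to a home worker.
type counter struct {
	home *worker
	n    int // only ever touched on the home worker, so no locking is needed
}

// Add ships the operation to the counter's home core and waits for the result.
func (c *counter) Add(delta int) int {
	done := make(chan int, 1)
	c.home.ops <- func() {
		c.n += delta
		done <- c.n
	}
	return <-done
}

func main() {
	cores := []*worker{newWorker(), newWorker()}
	a := &counter{home: cores[0]}
	b := &counter{home: cores[1]}

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // four logical threads hop between a's and b's cores
		wg.Add(1)
		go func() {
			defer wg.Done()
			a.Add(1)
			b.Add(1)
		}()
	}
	wg.Wait()
	fmt.Println(a.Add(0), b.Add(0)) // prints: 4 4
}
```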

    AS-COMA: An adaptive hybrid shared memory Architecture

    Get PDF
    Scalable shared-memory multiprocessors traditionally use either a cache-coherent nonuniform memory access (CC-NUMA) or a simple cache-only memory architecture (S-COMA). Recently, hybrid architectures that combine aspects of both CC-NUMA and S-COMA have emerged. In this paper, we present two improvements over other hybrid architectures. The first improvement is a page allocation algorithm that prefers S-COMA pages at low memory pressure. Once the local free page pool is drained, additional pages are mapped in CC-NUMA mode until they suffer sufficient remote misses to warrant upgrading to S-COMA mode. The second improvement is a page replacement algorithm that dynamically backs off the rate of page remappings from CC-NUMA to S-COMA mode at high memory pressure. This design dramatically reduces kernel overhead and the number of induced cold misses caused by needless thrashing of the page cache. The resulting hybrid architecture is called adaptive S-COMA (AS-COMA). AS-COMA exploits the best of S-COMA and CC-NUMA, performing like an S-COMA machine at low memory pressure and like a CC-NUMA machine under almost all other conditions, and outperforms other hybrid architectures by up to 17% at low memory pressure and up to 90% at high memory pressure.
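    A simplified sketch of the allocation and upgrade policy just described; the type names and thresholds below are assumptions for illustration, not details from the AS-COMA paper. Allocation prefers S-COMA while the local free page pool lasts, CC-NUMA pages are upgraded after enough remote misses, and the upgrade threshold is raised under memory pressure to back off remappings.

```go
// Illustrative sketch of an AS-COMA-style page policy; names and numbers
// are invented, not taken from the paper.
package main

import "fmt"

type mode int

const (
	sCOMA  mode = iota // page replicated into local memory
	ccNUMA             // page fetched remotely on every miss
)

type page struct {
	mode         mode
	remoteMisses int
}

type node struct {
	freeLocalPages int // local free page pool
	upgradeThresh  int // remote misses before a CC-NUMA page is upgraded
}

// allocate prefers S-COMA pages while the local free pool lasts,
// then falls back to CC-NUMA mappings.
func (n *node) allocate() *page {
	if n.freeLocalPages > 0 {
		n.freeLocalPages--
		return &page{mode: sCOMA}
	}
	return &page{mode: ccNUMA}
}

// onRemoteMiss upgrades a CC-NUMA page to S-COMA once it has suffered
// enough remote misses and a local frame is available.
func (n *node) onRemoteMiss(p *page) {
	if p.mode != ccNUMA {
		return
	}
	p.remoteMisses++
	if p.remoteMisses >= n.upgradeThresh && n.freeLocalPages > 0 {
		n.freeLocalPages--
		p.mode = sCOMA
	}
}

// onMemoryPressure backs off remappings by raising the upgrade threshold,
// damping the page-cache thrashing the replacement algorithm targets.
func (n *node) onMemoryPressure() {
	n.upgradeThresh *= 2
}

func main() {
	n := &node{freeLocalPages: 1, upgradeThresh: 4}
	p1 := n.allocate() // takes the last local frame: S-COMA
	p2 := n.allocate() // pool drained: CC-NUMA
	n.freeLocalPages++ // pretend a local frame was later reclaimed
	for i := 0; i < 4; i++ {
		n.onRemoteMiss(p2) // fourth remote miss triggers the upgrade
	}
	n.onMemoryPressure()
	fmt.Println(p1.mode == sCOMA, p2.mode == sCOMA, n.upgradeThresh) // true true 8
}
```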

    Multithreading opportunities for program optimizations

    Get PDF
    The introduction of chip multiprocessors (CMPs) led to a substantial reformulation of Moore's law, now read as stating that the number of cores on a single chip doubles every eighteen months. The technology boom around CMPs gave a strong impulse to parallel program design, narrowing its gap with parallel architectures. Today a leading trend in high-performance products is CMPs whose nodes are multithreaded CPUs. CPU multithreading tries to overcome the underutilization of superscalar processors, caused by the lack of exploitable instruction-level parallelism (ILP), by allowing different programs to be processed simultaneously during the same time slot. In multithreading architectures a thread is a concurrent computational entity supported directly at the firmware level (such threads are usually called hardware threads). Multithreading technology opens a broad range of possible optimizations that can be applied to improve the performance of sequential and parallel applications. This thesis treats four such optimizations targeted at multithreading architectures: Speculative Precomputation (helper threads), Threaded Multipath Execution, Speculative Multithreading, and Communication Threads.
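    The first of those optimizations, speculative precomputation, can be sketched roughly as follows. The helper here is an ordinary goroutine rather than a hardware thread sharing a core, and the workload is invented, but it shows the shape of the technique: a distilled copy of the main loop runs ahead and touches the data the main computation will need, so the main thread's loads are more likely to hit in cache.

```go
// Rough sketch of speculative precomputation with a helper thread.
// Illustrative only: real helper threads are hardware threads on the same core.
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	const n = 1 << 20
	data := make([]int64, n)
	idx := make([]int32, n) // indirect access pattern the hardware prefetcher struggles with
	for i := range data {
		data[i] = int64(i)
		idx[i] = int32((i * 131) % n)
	}

	var sink int64 // keeps the helper's loads from being optimized away

	// Helper thread: a stripped-down "p-slice" that only computes addresses
	// and loads them, running ahead of the main computation to warm the cache.
	go func() {
		var s int64
		for i := 0; i < n; i++ {
			s += data[idx[i]]
		}
		atomic.StoreInt64(&sink, s)
	}()

	// Main thread: the full computation, now more likely to find its data cached.
	var sum int64
	for i := 0; i < n; i++ {
		sum += data[idx[i]] * 3
	}
	fmt.Println(sum, atomic.LoadInt64(&sink))
}
```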

    Studies of Windows NT Performance using Dynamic Execution Traces

    No full text
    We studied two aspects of the performance of Windows NT: processor bandwidth requirements for memory accesses in a uniprocessor system running commercial and benchmark applications, and locking behavior of a commercial database on a small-scale multiprocessor. Our studies are based on full dynamic execution traces of the systems, which include all instructions executed by the operating system and applications over periods of a few seconds (enough time to allow for significant computation). The traces were obtained on Alpha PCs, using a new software tool called PatchWrx that takes advantage of the Alpha architecture's PAL-code layer to implement efficient, comprehensive system tracing. Because the Alpha version of Windows NT uses substantially the same code base as other versions, and therefore executes nearly the same sequence of calls, basic blocks, and data structure accesses, we believe our conclusions are relevant for non-Alpha systems as well. This paper describes our performance studies and interesting aspects of PatchWrx. We conclude from our studies that processor bandwidth can be a first-order bottleneck to achieving good performance.

    Studies of Windows NT performance using dynamic execution traces

    No full text

    Studies of Windows NT Performance using Dynamic Execution Traces

    No full text
    We studied two aspects of the performance of Windows NT: processor bandwidth requirements for memory accesses in a uniprocessor system running commercial and benchmark applications, and locking behavior of a commercial database on a small-scale multiprocessor. Our studies are based on full dynamic execution traces of the systems, which include all instructions executed by the operating system and applications over periods of a few seconds (enough time to allow for significant computation). The traces were obtained on Alpha PCs, using a new software tool called PatchWrx that takes advantage of the Alpha architecture’s PAL-code layer to implement efficient, comprehensive system tracing. Because the Alpha version of Windows NT uses substantially the same code base as other versions, and therefore executes nearly the same sequence of calls, basic blocks, and data structure accesses, we believe our conclusions are relevant for non-Alpha systems as well. This paper describes our performance studies and interesting aspects of PatchWrx. We conclude from our studies that processor bandwidth can be a first-order bottleneck to achieving good performance. This is particularly apparent when studying commercial benchmarks. Operating system code and data structures contribute disproportionately to the memory access load. We also found that operating system software lock contention was a factor preventing the database benchmark from scaling up on the small multiprocessor, and that the cache coherence protocol employed by the machine introduced more cache interference than necessary.
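    To give a flavor of the post-processing such traces support, here is a toy analysis sketch; the record layout and field names are assumptions for illustration, not PatchWrx's actual format. It computes the share of memory references issued from operating-system code, the kind of figure behind the conclusion that OS code and data contribute disproportionately to the memory access load.

```go
// Toy trace analysis; the record format is hypothetical, not PatchWrx's.
package main

import "fmt"

type record struct {
	pc       uint64
	inKernel bool // instruction executed in operating-system code
	isMemRef bool // instruction loads or stores memory
}

// kernelMemShare returns the fraction of memory references issued from
// operating-system code.
func kernelMemShare(trace []record) float64 {
	var mem, kernelMem int
	for _, r := range trace {
		if !r.isMemRef {
			continue
		}
		mem++
		if r.inKernel {
			kernelMem++
		}
	}
	if mem == 0 {
		return 0
	}
	return float64(kernelMem) / float64(mem)
}

func main() {
	trace := []record{
		{pc: 0x1000, inKernel: false, isMemRef: true},
		{pc: 0x1004, inKernel: false, isMemRef: false},
		{pc: 0xffff0000, inKernel: true, isMemRef: true},
		{pc: 0xffff0004, inKernel: true, isMemRef: true},
	}
	fmt.Printf("kernel share of memory references: %.2f\n", kernelMemShare(trace))
}
```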