
    Intelligent Scheduling and Memory Management Techniques for Modern GPU Architectures

    With their massive-multithreading execution model, graphics processing units (GPUs) have been widely deployed to accelerate general-purpose parallel workloads (GPGPU). However, offloading computation to GPUs does not always yield a good performance improvement, mainly because of three inefficiencies in modern GPU and system architectures. First, not all parallel threads carry a uniform amount of work, so the GPU's computation resources are under-utilized, a sub-optimal-performance problem called warp criticality. To mitigate the degree of warp criticality, I propose a Criticality-Aware Warp Acceleration mechanism, called CAWA. CAWA predicts the critical warp and accelerates its execution by allocating it larger execution time slices and additional cache resources. The evaluation shows that with CAWA, GPUs achieve an average speedup of 1.23x. Second, the shared cache storage in GPUs is often insufficient to accommodate the demands of the large number of concurrent threads; as a result, cache thrashing is commonly experienced in GPU cache memories, particularly in the L1 data caches. To alleviate the cache contention and thrashing problem, I develop an instruction-aware Control-Loop-Based Adaptive Bypassing algorithm, called Ctrl-C, which learns cache reuse behavior and bypasses a portion of memory requests with the help of feedback control loops. The evaluation shows that Ctrl-C effectively improves cache utilization in GPUs and achieves an average speedup of 1.42x for cache-sensitive GPGPU workloads. Finally, GPU workloads and the co-located processes running on the host chip multiprocessor (CMP) in a heterogeneous system can contend for memory resources at multiple levels, resulting in significant performance degradation. To maximize system throughput and balance the performance degradation across all co-located applications, I design a scalable performance degradation predictor for heterogeneous systems, called HeteroPDP. HeteroPDP predicts application execution time and schedules OpenCL workloads to run on different devices according to the optimization goal. The evaluation shows that HeteroPDP improves system fairness from 24% to 65% when an OpenCL application is co-located with other processes, and gains an additional 50% speedup compared with always offloading the OpenCL workload to the GPU. In summary, this dissertation aims to inform future microarchitecture and system-architecture designs by identifying, analyzing, and addressing three critical performance problems in modern GPUs.
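    The abstract summarizes three mechanisms without their internals. As one illustration of the first, below is a minimal sketch of criticality-aware warp scheduling in the spirit of CAWA; the criticality weighting, counter names, and greedy issue policy are assumptions made here for illustration, not the dissertation's actual design.

```python
# Minimal sketch (not the dissertation's design): each warp carries a
# criticality estimate; the scheduler issues from the predicted-critical
# warp, in effect granting it larger execution time slices.

from dataclasses import dataclass
import random

@dataclass
class Warp:
    wid: int
    remaining_insts: int     # instructions left to execute
    stall_cycles: int = 0    # accumulated cycles spent waiting to issue

    def criticality(self) -> float:
        # Hypothetical weighting: more remaining work plus more stalls
        # means the warp is predicted to finish last.
        return self.remaining_insts + 2.0 * self.stall_cycles

def pick_warp(warps: list[Warp]) -> Warp:
    """Greedily issue from the most critical ready warp."""
    ready = [w for w in warps if w.remaining_insts > 0]
    return max(ready, key=Warp.criticality)

random.seed(0)
warps = [Warp(i, random.randint(50, 200)) for i in range(4)]  # uneven work
cycles = 0
while any(w.remaining_insts > 0 for w in warps):
    w = pick_warp(warps)
    w.remaining_insts -= 1                 # the chosen warp makes progress
    for other in warps:                    # the rest record a stall cycle
        if other is not w and other.remaining_insts > 0:
            other.stall_cycles += 1
    cycles += 1
print(f"all warps finished after {cycles} issue slots")
```

    Because stalled warps grow more critical over time, the greedy pick naturally shifts issue slots toward laggards, which is the intuition behind criticality-aware acceleration.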

    A survey of near-data processing architectures for neural networks

    Data-intensive workloads and applications, such as machine learning (ML), are fundamentally limited by traditional computing systems based on the von Neumann architecture. As data movement and energy consumption become key bottlenecks in the design of computing systems, interest in unconventional approaches such as Near-Data Processing (NDP), machine learning, and especially neural network (NN) based accelerators has grown significantly. Emerging memory technologies, such as ReRAM and 3D-stacked memories, are promising candidates for efficiently architecting NDP-based accelerators for NNs because they can serve both as high-density, low-energy storage and as in/near-memory computation and search engines. In this paper, we present a survey of techniques for designing NDP architectures for NNs. By classifying the techniques according to the memory technology employed, we underscore their similarities and differences. Finally, we discuss open challenges and future directions that need to be explored to improve and extend the adoption of NDP architectures in future computing platforms. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning. This work has been supported by the CoCoUnit ERC Advanced Grant of the EU's Horizon 2020 program (grant No. 833057), the Spanish State Research Agency (MCIN/AEI) under grant PID2020-113172RB-I00, and the ICREA Academia program.
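    The storage-plus-compute duality the survey highlights can be made concrete with a small numerical sketch of the analog dot product performed by a ReRAM crossbar, the building block of many of the NN accelerators surveyed. The differential weight encoding and ideal (noise-free) device model below are simplifying assumptions, not any specific design from the paper.

```python
# Numerical sketch of the crossbar principle: weights are stored as cell
# conductances G, inputs are applied as word-line voltages V, and each
# bit-line current I_j = sum_i V_i * G_ij realizes one output neuron's
# dot product in a single analog step.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, size=(4, 8))   # 4 inputs -> 8 outputs
inputs = rng.uniform(0, 1, size=4)          # input activations (voltages)

# Signed weights mapped onto two non-negative conductance arrays,
# a common differential-pair encoding in the literature.
g_pos = np.clip(weights, 0, None)
g_neg = np.clip(-weights, 0, None)

# Ohm's and Kirchhoff's laws: bit-line currents accumulate V*G products.
i_pos = inputs @ g_pos
i_neg = inputs @ g_neg
outputs = i_pos - i_neg                     # differential sensing

assert np.allclose(outputs, inputs @ weights)
print(outputs)
```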

    Predicting Critical Warps in Near-Threshold GPGPU Applications Using a Dynamic Choke Point Analysis

    General-purpose graphics processing units (GP-GPUs), owing to their enormous thread-level parallelism, can significantly reduce power consumption in the near-threshold computing (NTC) operating region while offering close to super-threshold performance. However, process variation (PV) can drastically reduce GPU performance at NTC. In this work, choke points, a unique device-level characteristic of PV at NTC that can exacerbate the warp-criticality problem in GPUs, are explored. It is shown that modern warp schedulers cannot tackle choke-point-induced critical warps in an NTC GPU. Additionally, the Choke Point Aware Warp Speculator, a circuit-architectural solution, is proposed to dynamically predict critical warps in GPUs and accelerate them in their respective execution units. The best scheme achieves an average improvement of ∌39% in performance and ∌31% in energy efficiency over a state-of-the-art warp scheduler, across 15 GPGPU applications, while incurring marginal hardware overheads.
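    To see why a single slow device can make a whole warp critical, consider a toy lockstep-execution model, sketched below; the lane-delay distribution, boost factor, and prediction threshold are hypothetical stand-ins for the paper's dynamic choke-point analysis, used only to illustrate the effect.

```python
# Toy model (not the paper's circuit): a warp finishes only when its
# slowest SIMT lane does, so a few PV-induced "choke" lanes dominate
# completion time. A simple speculator boosts warps predicted slow.

import random

random.seed(1)
N_LANES = 32
# Per-lane delay multipliers; the heavy tail models choke points at NTC.
lane_delay = [random.choice([1.0] * 9 + [2.5]) for _ in range(N_LANES)]

def warp_latency(base: float, boosted: bool) -> float:
    worst = max(lane_delay)            # the slowest lane gates the warp
    speedup = 1.4 if boosted else 1.0  # hypothetical acceleration factor
    return base * worst / speedup

def predict_critical(threshold: float = 2.0) -> bool:
    # Flag the warp as critical if any assigned lane looks choked.
    return max(lane_delay) > threshold

base = 100.0
print("baseline:", warp_latency(base, boosted=False))
print("boosted :", warp_latency(base, boosted=predict_critical()))
```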

    Exploring Superpage Promotion Policies for Efficient Address Translation

    Address translation performance for modern applications depends heavily on the number of translation entries cached in the hardware TLB (translation look-aside buffer), so the efficiency of address translation relies directly on the TLB hit rate. Meanwhile, the number of TLB entries continues to fall further behind the growth in memory consumption of modern applications. Superpages, which are pages with larger sizes, can increase TLB efficiency by enabling each translation entry to cover a larger memory region. Without requiring more TLB entries, superpages can therefore increase the TLB hit rate and benefit address translation. However, superpages can also introduce overhead. The TLB marks a page as dirty before it is modified using a single dirty bit per translation entry, so the granularity of dirty tracking matches the coverage of the entry. As a result, the OS (operating system) pays extra I/O cost when it allocates an underutilized superpage or writes one back to disk, and this overhead can easily outweigh the address translation benefits. This thesis examines the performance trade-offs of superpages by exploring the design space of superpage promotion policies in the OS. A data collection infrastructure is built on QEMU, with kernel instrumentation on FreeBSD, to collect both memory accesses and kernel events. The TLB behavior of Intel Skylake x86 processors is then simulated, and the simulation is validated to be faithful to real-world performance. Finally, the thesis evaluates and compares both the TLB performance benefits and the I/O overheads of the promotion policies to characterize the trade-offs in the design space.
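    As a concrete example of one point in this design space, the sketch below models a simple utilization-threshold promotion policy: a 2 MB region is promoted once enough of its 4 KB base pages have been touched. The threshold value and data structures are illustrative assumptions, not the thesis's instrumented FreeBSD policy.

```python
# Sketch of a utilization-threshold superpage promotion policy.
# Promoting early improves TLB reach; promoting late avoids the
# dirty-writeback cost of underutilized superpages.

BASE_PAGE = 4096
SUPERPAGE = 2 * 1024 * 1024
PAGES_PER_REGION = SUPERPAGE // BASE_PAGE          # 512

class PromotionPolicy:
    def __init__(self, threshold: int = 448):      # hypothetical: 7/8 populated
        self.threshold = threshold
        self.touched: dict[int, set[int]] = {}     # region -> touched page idxs
        self.promoted: set[int] = set()

    def access(self, vaddr: int) -> None:
        region = vaddr // SUPERPAGE
        if region in self.promoted:
            return                                 # one TLB entry covers it all
        idx = (vaddr % SUPERPAGE) // BASE_PAGE
        pages = self.touched.setdefault(region, set())
        pages.add(idx)
        if len(pages) >= self.threshold:
            self.promoted.add(region)              # promote to a superpage

policy = PromotionPolicy()
for addr in range(0, 448 * BASE_PAGE, BASE_PAGE):  # touch 448 base pages
    policy.access(addr)
print("promoted regions:", policy.promoted)        # {0}
```

    Raising the threshold trades away TLB hit rate for less I/O overhead on writeback, which is exactly the trade-off the thesis quantifies.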

    Savior: A Reliable Fault Resilient Router Architecture for Network-on-Chip

    The router plays an important role in communication among the processing cores of on-chip networks. Technology scaling has, on the one hand, enabled designers to integrate multiple processing components on a single chip; on the other hand, it has made devices more susceptible to faults. A generic router consists of buffers and pipeline stages, and a single fault may degrade performance or cause the whole chip to stop working, so it is necessary to provide permanent-fault tolerance for all components of the router. In this paper, we propose a mechanism that can tolerate permanent faults occurring in the router. We exploit the fault-tolerance techniques of resource sharing and pairing between components for the input-port unit and routing computation (RC) unit, resource borrowing for the virtual channel allocator (VA), and multiple paths for the switch allocator (SA) and crossbar (XB). The experimental results and analysis show that the proposed mechanism enhances the reliability of the router architecture against permanent faults at the cost of 29% area overhead. The proposed router architecture achieves the highest Silicon Protection Factor (SPF) metric, 24.4, compared with state-of-the-art fault-tolerant architectures, and it incurs only a minimal latency increase over the baseline router for SPLASH-2 and PARSEC benchmark traffic. This work was supported by the Spanish 'Ministerio de Ciencia, Innovacion y Universidades' and the FEDER program in the framework of 'Proyectos de I+D de Generacion de Conocimiento del Programa Estatal de Generacion de Conocimiento y Fortalecimiento Cientifico y Tecnologico del Sistema de I+D+i, Subprograma Estatal de Generacion de Conocimiento' (ref: PGC2018-095747-B-I00).
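    An SPF-style figure of merit can be estimated by Monte Carlo fault injection: inject permanent faults into router components until redundancy is exhausted, then normalize the mean number of tolerated faults by the area overhead. The component inventory and spare capacities below are hypothetical; only the 29% area overhead figure comes from the paper.

```python
# Monte Carlo estimate of an SPF-style reliability metric for a router.
# Each component can absorb a (hypothetical) number of permanent faults
# through sharing/pairing/borrowing before the router fails.

import random

SPARE_CAPACITY = {"input_port": 2, "rc_unit": 2,   # hypothetical values
                  "va": 1, "sa": 1, "crossbar": 1}
AREA_OVERHEAD = 1.29            # 29% extra area, as reported in the paper

def faults_to_failure(rng: random.Random) -> int:
    broken = dict.fromkeys(SPARE_CAPACITY, 0)
    n = 0
    while True:
        c = rng.choice(list(SPARE_CAPACITY))       # random fault location
        broken[c] += 1
        n += 1
        if broken[c] > SPARE_CAPACITY[c]:          # redundancy exhausted
            return n

rng = random.Random(42)
trials = [faults_to_failure(rng) for _ in range(10_000)]
spf = (sum(trials) / len(trials)) / AREA_OVERHEAD  # one common formulation
print(f"estimated SPF-style score: {spf:.1f}")
```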

    Toward Compression at All Levels of the Memory Hierarchy

    Hardware compression techniques are typically simplifications of software compression methods, but they must comply with area, power, and latency constraints. This study unveils the challenges of adopting compression in memory design. The goal of the analysis is not to summarize proposals, but to highlight the solutions they employ to handle those challenges. An in-depth description of the main characteristics of multiple methods is provided, along with criteria that can be used as a basis for assessing such schemes.

    Typically, these schemes are not very efficient, and those that do compress well decompress slowly. This work explores their granularity to redefine their perspectives and improve their efficiency, through a concept called Region-Chunk compression. Its goal is to achieve a low (good) compression ratio and fast decompression latency. The key observation is that by further sub-dividing the chunks of data being compressed, one can reduce data duplication. This concept can be applied to several previously proposed compressors, reducing their average compressed size. In particular, a single-cycle-decompression compressor is boosted to reach a compressibility level competitive with state-of-the-art proposals.

    Finally, to increase the probability of successfully co-allocating compressed lines, Pairwise Space Sharing (PSS) is proposed. PSS can be applied orthogonally to compaction methods at no extra latency penalty and with a cost-effective metadata overhead. The proposed system (Region-Chunk+PSS) further enhances the normalized average cache capacity by 2.7% (geometric mean), while featuring short decompression latency.
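    The Region-Chunk observation, that finer sub-division exposes more duplicate data, can be demonstrated with a toy dictionary compressor; the chunk sizes, cost model (unique chunks plus per-chunk indices), and example cache line below are illustrative only, not the thesis's actual encodings.

```python
# Toy demonstration: splitting a 64 B line into smaller chunks exposes
# more duplicate values, so a dictionary of unique chunks plus per-chunk
# indices gets smaller. Real designs add richer per-chunk encodings.

def compressed_bits(line: bytes, chunk_size: int) -> int:
    chunks = [line[i:i + chunk_size] for i in range(0, len(line), chunk_size)]
    unique = list(dict.fromkeys(chunks))           # dictionary of chunks
    index_bits = max(1, (len(unique) - 1).bit_length())
    return len(unique) * chunk_size * 8 + len(chunks) * index_bits

# A 64 B line where 4 B values repeat but misalign under larger chunks.
line = bytes.fromhex("00000000" + "deadbeef" * 7 + "00000000" * 8)
for size in (16, 8, 4):
    print(f"{size:2d} B chunks -> {compressed_bits(line, size):4d} bits "
          f"(raw {len(line) * 8} bits)")
```

    On this input the 4 B granularity finds only two distinct values where 16 B chunks find three distinct patterns, so the compressed size drops sharply as chunks shrink, at the cost of more index metadata.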