
    GraphR: Accelerating Graph Processing Using ReRAM

    This paper presents GRAPHR, the first ReRAM-based graph processing accelerator. GRAPHR follows the principle of near-data processing and explores the opportunity of performing massively parallel analog operations with low hardware and energy cost. Analog computation is suitable for graph processing because: 1) the algorithms are iterative and can inherently tolerate imprecision; 2) both probability calculations (e.g., PageRank and Collaborative Filtering) and typical graph algorithms involving integers (e.g., BFS/SSSP) are resilient to errors. The key insight of GRAPHR is that if a vertex program of a graph algorithm can be expressed as sparse matrix-vector multiplication (SpMV), it can be performed efficiently by a ReRAM crossbar; we show that this holds for a large set of graph algorithms. GRAPHR is a novel accelerator architecture consisting of two components: memory ReRAM and graph engines (GEs). The core graph computations are performed in sparse matrix format in the GEs (ReRAM crossbars). Vector/matrix-based graph computation is not new, but ReRAM offers a unique opportunity to realize this massive parallelism with unprecedented energy efficiency and low hardware cost. With small subgraphs processed by GEs, the gain from performing parallel operations outweighs the waste due to sparsity, as the sketch below illustrates. The experimental results show that GRAPHR achieves a 16.01x (up to 132.67x) speedup and a 33.82x energy saving on geometric mean compared to a CPU baseline system. Compared to GPU, GRAPHR achieves a 1.69x to 2.19x speedup and consumes 4.77x to 8.91x less energy. GRAPHR gains a speedup of 1.16x to 4.12x, and is 3.67x to 10.96x more energy efficient, compared to a PIM-based architecture. Comment: Accepted to HPCA 2018
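
    The SpMV-as-vertex-program insight can be made concrete with a short sketch. The Python below is our illustration, not the paper's code (all function and variable names are ours): it runs one PageRank iteration as a tiled matrix-vector product, where each small dense tile stands in for the block MVM that a ReRAM crossbar graph engine would perform in a single analog step.

    # Illustrative sketch of GraphR's SpMV formulation: one PageRank step
    # computed tile by tile, each tile modeling one crossbar-sized GE.
    import numpy as np

    def pagerank_spmv_tiled(adj, rank, damping=0.85, tile=4):
        """rank' = damping * M @ rank + (1 - damping) / n, computed in tiles.

        adj  : dense 0/1 adjacency matrix, adj[i, j] = 1 for an edge j -> i
        rank : current rank vector
        tile : crossbar side length; each tile x tile block models one GE
        """
        n = adj.shape[0]
        out_deg = np.maximum(adj.sum(axis=0), 1)   # clamp to avoid divide-by-zero
        M = adj / out_deg                          # column-normalized transition matrix
        new_rank = np.zeros(n)
        for i in range(0, n, tile):                # iterate over subgraph blocks
            for j in range(0, n, tile):
                block = M[i:i + tile, j:j + tile]  # one "crossbar-sized" subgraph
                # A GE would compute this block MVM in one analog step;
                # here we do it digitally and accumulate the partial result.
                new_rank[i:i + tile] += block @ rank[j:j + tile]
        return damping * new_rank + (1 - damping) / n

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = (rng.random((8, 8)) < 0.3).astype(float)
        r = np.full(8, 1 / 8)
        for _ in range(20):
            r = pagerank_spmv_tiled(A, r)
        print(r)                                   # converged rank vector

    Note that zero entries in a tile occupy crossbar cells without contributing useful work; this is the sparsity waste the abstract argues is outweighed by the parallelism of small-subgraph processing.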

    HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array

    With the rise of artificial intelligence in recent years, Deep Neural Networks (DNNs) have been widely used in many domains. To achieve high performance and energy efficiency, hardware acceleration of DNNs (especially inference) is intensively studied in both academia and industry. However, two challenges remain: large DNN models and datasets, which incur frequent off-chip memory accesses, and the training of DNNs, which is not well explored in recent accelerator designs. To truly provide high-throughput and energy-efficient acceleration for training deep and large models, we inevitably need multiple accelerators to exploit coarse-grain parallelism, beyond the fine-grain parallelism inside a layer considered in most existing architectures. This poses the key research question of finding the best organization of computation and dataflow among accelerators. In this paper, we propose HyPar, a solution that determines layer-wise parallelism for deep neural network training with an array of DNN accelerators. HyPar partitions the feature map tensors (input and output), the kernel tensors, the gradient tensors, and the error tensors across the DNN accelerators. A partition constitutes the choice of parallelism for the weighted layers. The optimization target is to find a partition that minimizes the total communication during the training of a complete DNN. To solve this problem, we propose a communication model that explains the source and amount of communication; we then use a hierarchical layer-wise dynamic programming method to search for the partition of each layer, as sketched below. Comment: To appear in the 2019 25th International Symposium on High-Performance Computer Architecture (HPCA 2019)
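
    To illustrate the search procedure (this is our sketch, not the paper's code), the following Python runs a layer-wise dynamic program over two parallelism choices per layer; the cost functions and layer statistics are invented placeholders standing in for the paper's communication model.

    # Layer-wise DP sketch: for each weighted layer pick data parallelism
    # ("D", partition the batch / feature maps) or model parallelism
    # ("M", partition the kernels) to minimize total communication.
    from typing import Dict, List, Tuple

    CHOICES = ("D", "M")

    def intra_cost(layer: Dict[str, float], choice: str) -> float:
        # Communication inside one layer: e.g., weight-gradient all-reduce
        # under data parallelism, activation exchange under model parallelism.
        return layer["weights"] if choice == "D" else layer["activations"]

    def trans_cost(layer: Dict[str, float], prev: str, cur: str) -> float:
        # Cost of re-partitioning tensors between two adjacent layers.
        return 0.0 if prev == cur else layer["activations"]

    def layerwise_dp(layers: List[Dict[str, float]]) -> Tuple[float, List[str]]:
        # best[c] = minimum communication so far, ending with choice c
        best = {c: intra_cost(layers[0], c) for c in CHOICES}
        back: List[Dict[str, str]] = []
        for layer in layers[1:]:
            new_best: Dict[str, float] = {}
            ptr: Dict[str, str] = {}
            for cur in CHOICES:
                prev = min(CHOICES,
                           key=lambda p: best[p] + trans_cost(layer, p, cur))
                new_best[cur] = (best[prev] + trans_cost(layer, prev, cur)
                                 + intra_cost(layer, cur))
                ptr[cur] = prev
            back.append(ptr)
            best = new_best
        # Walk the back-pointers to recover one parallelism choice per layer.
        last = min(CHOICES, key=best.get)
        plan = [last]
        for ptr in reversed(back):
            plan.append(ptr[plan[-1]])
        return best[last], plan[::-1]

    if __name__ == "__main__":
        net = [{"weights": 10.0, "activations": 2.0},   # fc-like layer
               {"weights": 1.0, "activations": 8.0},    # conv-like layer
               {"weights": 12.0, "activations": 3.0}]
        total, plan = layerwise_dp(net)
        print(total, plan)  # total communication and one choice per layer

    The DP is linear in the number of layers because the transition cost only couples adjacent layers; HyPar's hierarchical extension applies the same idea recursively across groups of accelerators.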

    Performance Evaluation and Optimization of HBM-Enabled GPU for Data-Intensive Applications


    Hotspots and difficulties of biliary surgery in older patients

    With the accelerated aging of Chinese society, the incidence of biliary surgical diseases in the elderly has increased significantly. The clinical characteristics of these patients indicate that improving treatment outcomes and realizing healthy aging deserve attention, and how to effectively improve the treatment of geriatric biliary surgical diseases has attracted widespread interest. This paper reviews and comments on the hotspots and difficulties of biliary surgery in older patients from six aspects: (1) the higher morbidity associated with an aging society, (2) prevention and control of pre-operative risks, (3) extending the indications for laparoscopic surgery, (4) the urgent need to standardize minimally invasive surgery, (5) precise technological progress in hepatobiliary surgery, and (6) guarantees of peri-operative safety. Fully understanding the points of controversy, actively making use of favorable factors, and effectively avoiding unfavorable factors are of great significance for further improving the therapeutic effects of geriatric biliary surgical diseases, to the benefit of the many older patients who suffer from them. Accordingly, we recently set a record for the oldest patient, 93 years of age, to undergo laparoscopic transcystic common bile duct exploration.