
    Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design

    Graph neural networks (GNNs) have shown significant accuracy improvements in a variety of graph learning domains, sparking considerable research interest. To translate these accuracy improvements into practical applications, it is essential to develop high-performance and efficient hardware acceleration for GNN models. However, designing GNN accelerators faces two fundamental challenges: the high bandwidth requirement of GNN models and the diversity of GNN models. Previous works have addressed the first challenge by using more expensive memory interfaces to achieve higher bandwidth. For the second challenge, existing works either support specific GNN models or have generic designs with poor hardware utilization. In this work, we tackle both challenges simultaneously. First, we identify a new type of partition-level operator fusion, which we utilize to internally reduce the high bandwidth requirement of GNNs. Next, we introduce partition-level multi-threading to schedule the concurrent processing of graph partitions, utilizing different hardware resources. To further reduce the extra on-chip memory required by multi-threading, we propose fine-grained graph partitioning to generate denser graph partitions. Importantly, these three methods make no assumptions about the targeted GNN models, addressing the challenge of model variety. We implement these methods in a framework called SwitchBlade, consisting of a compiler, a graph partitioner, and a hardware accelerator. Our evaluation demonstrates that SwitchBlade achieves an average speedup of 1.85× and energy savings of 19.03× compared to the NVIDIA V100 GPU. Additionally, SwitchBlade delivers performance comparable to state-of-the-art specialized accelerators.
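
    For intuition, here is a minimal sketch of partition-level operator fusion, assuming a generic message-passing layer; it is an illustration rather than the SwitchBlade implementation, and the names (gnn_layer_fused, partitions) are hypothetical. Each partition's aggregation and update run back to back, so a partition's intermediate features are consumed immediately instead of round-tripping through off-chip memory.

```python
# Illustrative sketch of partition-level operator fusion for a generic GNN layer.
# Not the SwitchBlade implementation; names and shapes are assumptions.
import numpy as np

def gnn_layer_fused(x, adj, partitions, weight):
    """x: [N, F] node features, adj: [N, N] dense adjacency (toy scale),
    partitions: list of node-index arrays, weight: [F, F'] update matrix."""
    out = np.empty((x.shape[0], weight.shape[1]), dtype=x.dtype)
    for part in partitions:                      # process one graph partition at a time
        h = adj[part] @ x                        # aggregate neighbors of this partition's nodes
        out[part] = np.maximum(h @ weight, 0.0)  # fused update (linear + ReLU) consumes h immediately
    return out

# toy usage
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4)).astype(np.float32)
adj = (rng.random((8, 8)) < 0.3).astype(np.float32)
w = rng.standard_normal((4, 3)).astype(np.float32)
parts = [np.arange(0, 4), np.arange(4, 8)]
print(gnn_layer_fused(x, adj, parts, w).shape)   # (8, 3)
```

    In the same loop structure, partition-level multi-threading would overlap the processing of one partition with that of another on different hardware resources.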

    BGGAN: Bokeh-Glass Generative Adversarial Network for Rendering Realistic Bokeh

    In a photo captured with a bokeh effect, objects in focus are sharp while out-of-focus areas are blurred. DSLR cameras can render this effect naturally. However, due to sensor limitations, smartphones cannot directly capture images with depth-of-field effects. In this paper, we propose a novel generator called Glass-Net, which generates bokeh images without relying on complex hardware. Meanwhile, a GAN-based method and a perceptual loss are combined to render a realistic bokeh effect during the finetuning stage. Moreover, Instance Normalization (IN) is reimplemented in our network, which ensures that our TFLite model with IN can be accelerated on smartphone GPUs. Experiments show that our method renders a high-quality bokeh effect and processes one 1024×1536 pixel image in 1.9 seconds on all smartphone chipsets. This approach ranked first in the AIM 2020 Rendering Realistic Bokeh Challenge, Track 1 & Track 2. Comment: accepted by ECCV workshop 2020
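
    As a rough sketch of how an adversarial term and a perceptual term can be combined during such a finetuning stage, consider the snippet below. It is an assumed illustration, not the BGGAN training code: generator and discriminator stand in for Glass-Net and its critic, the loss weights are arbitrary, and weights=None skips downloading pretrained VGG weights (a real setup would use pretrained features for the perceptual loss).

```python
# Hedged sketch: adversarial + perceptual loss for generator finetuning.
import torch
import torch.nn as nn
import torchvision

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature extractor for the perceptual loss; weights=None avoids a download here,
        # but a real setup would load pretrained VGG features.
        vgg = torchvision.models.vgg16(weights=None).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.l1 = nn.L1Loss()

    def forward(self, fake, real):
        return self.l1(self.vgg(fake), self.vgg(real))

def finetune_step(generator, discriminator, perceptual, opt_g, sharp, bokeh_gt):
    fake = generator(sharp)                        # predicted bokeh image
    adv = -discriminator(fake).mean()              # generator's adversarial term (hinge/WGAN style)
    percep = perceptual(fake, bokeh_gt)            # feature-space similarity to ground truth
    recon = nn.functional.l1_loss(fake, bokeh_gt)  # pixel-space reconstruction
    loss = recon + 0.1 * percep + 0.01 * adv       # weights are illustrative
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()
```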

    Efficient Adaptive Activation Rounding for Post-Training Quantization

    Post-training quantization attracts increasing attention due to its convenience in deploying quantized neural networks. Although rounding-to-nearest remains the prevailing method for DNN quantization, prior research has demonstrated its suboptimality when applied to weight quantization, and proposes optimizing weight rounding schemes by leveraging the output error rather than the traditional weight quantization error. Our study reveals that similar rounding challenges also extend to activation quantization. Although the idea generalizes naturally, the challenge lies in the dynamic nature of activations: the rounding scheme must adapt to varying activation values, and doing so at runtime incurs overhead. To tackle this, we propose the AQuant quantization framework, which takes a novel perspective on reducing output error by adjusting the rounding schemes of activations. Instead of using the constant rounding border of 0.5 from the rounding-to-nearest operation, we make the border a function of the activation value, so that the adaptive border changes the activation rounding. To deal with the runtime overhead, we use a coarse-grained version of the border function. Finally, we describe how our framework optimizes the border function. Extensive experiments show that AQuant achieves notable improvements over state-of-the-art works and pushes the accuracy of ResNet-18 up to 60.31% under 2-bit weight and activation quantization.
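
    The adaptive-border idea can be made concrete with a small sketch. This is an assumed illustration, not the AQuant implementation: border_fn is a hypothetical stand-in for the learned (coarse-grained) border function, and the unsigned 2-bit grid is chosen only to keep the example short.

```python
# Hedged sketch: activation quantization with a value-dependent rounding border.
import torch

def quantize_adaptive(x, scale, border_fn, n_bits=2):
    qmax = 2 ** n_bits - 1
    v = x / scale                                # real-valued grid coordinates
    low = torch.floor(v)
    frac = v - low                               # fractional part in [0, 1)
    border = border_fn(x)                        # per-value rounding border, replaces the constant 0.5
    q = (low + (frac >= border).to(x.dtype)).clamp(0, qmax)
    return q * scale                             # dequantized activation

x = torch.rand(4, 8) * 3.0
scale = x.max() / (2 ** 2 - 1)
y_rtn = quantize_adaptive(x, scale, lambda t: torch.full_like(t, 0.5))     # plain rounding-to-nearest
y_adp = quantize_adaptive(x, scale, lambda t: 0.5 + 0.05 * torch.tanh(t))  # illustrative adaptive border
```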

    Weakly-Supervised Action Localization by Hierarchically-structured Latent Attention Modeling

    Weakly-supervised action localization aims to recognize and localize action instances in untrimmed videos with only video-level labels. Most existing models rely on multiple instance learning (MIL), where the predictions of unlabeled instances are supervised by classifying labeled bags. MIL-based methods are relatively well studied and achieve strong performance on classification, but not on localization. Generally, they locate temporal regions via video-level classification but overlook the temporal variations of feature semantics. To address this problem, we propose a novel attention-based hierarchically-structured latent model to learn the temporal variations of feature semantics. Specifically, our model comprises two components: an unsupervised change-point detection module that detects change-points by learning latent representations of video features in a temporal hierarchy based on their rates of change, and an attention-based classification model that selects the change-points of the foreground as the boundaries. To evaluate the effectiveness of our model, we conduct extensive experiments on two benchmark datasets, THUMOS-14 and ActivityNet-v1.3. The experiments show that our method outperforms current state-of-the-art methods and even achieves comparable performance with fully-supervised methods. Comment: Accepted to ICCV 2023. arXiv admin note: text overlap with arXiv:2203.15187, arXiv:2003.12424, arXiv:2104.02967 by other authors
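
    To make the change-point intuition concrete, the following is a deliberately simplified sketch, not the authors' hierarchical latent model: it scores each time step by the rate of change of its feature vector and keeps local maxima above a threshold as candidate boundaries. The function name and threshold are illustrative assumptions.

```python
# Simplified sketch: candidate change-points from feature rate of change.
import numpy as np

def candidate_change_points(features, threshold=0.5):
    """features: [T, D] per-snippet video features."""
    diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)  # rate of change between adjacent steps
    diffs = diffs / (diffs.max() + 1e-8)                       # normalize to [0, 1]
    points = []
    for t in range(1, len(diffs) - 1):
        if diffs[t] > threshold and diffs[t] >= diffs[t - 1] and diffs[t] >= diffs[t + 1]:
            points.append(t + 1)                               # boundary after step t
    return points

# toy usage: features switch value at t=20 and t=35
feats = np.concatenate([np.zeros((20, 8)), np.ones((15, 8)), np.zeros((25, 8))])
print(candidate_change_points(feats))                          # [20, 35]
```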

    A novel low-temperature fabrication approach of composite phase change materials for high temperature thermal energy storage

    Phase change materials (PCMs) are generally integrated into matrix materials to form shape-stabilized composite heat storage materials (HSMs) for high temperature thermal energy storage applications. The conventional fabrication of composite HSMs is typically carried out at very high temperatures, which is energy-intensive and narrows the range of applicable PCMs because of thermal decomposition. Therefore, this paper establishes a novel fabrication approach that produces a highly dense matrix encapsulating PCMs at extremely low temperatures, based on the recently developed cold sintering process. The feasibility of the proposed approach was demonstrated by a case study of NaNO3/Ca(OH)2 composite HSMs. It was observed that the Ca(OH)2 matrix formed a dense microstructure with obvious sintered boundaries and successfully encapsulated NaNO3 as the PCM. The HSMs maintained a stable macroscopic shape after hundreds of thermal cycles, and exhibited an energy storage efficiency of 59.48%, little leakage of PCM, and good thermal stability. Mechanical tests indicated that the HSMs possessed excellent mechanical properties when the sintering pressure exceeded 220 MPa. The discharging time of stored heat was characterized through infrared thermography, and the heat storage capacity measured for the composite HSMs was over four times that of typical sensible-heat solid storage materials, which demonstrates their excellent heat storage performance. The HSMs can be used in the form of a packed bed or parallel channels with multi-layered heat storage, which is beneficial for efficiently utilizing solar heat and improving the performance of current energy storage systems. This study therefore provides a novel route for energy-saving and low-carbon fabrication of shape-stabilized composite HSMs.
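
    For reference, the heat stored per unit mass of such a composite over a temperature swing that crosses the PCM melting point follows the standard sensible-plus-latent relation below; the symbols are generic, and the relation is textbook background rather than a value taken from the paper's measurements.

```latex
% q: stored heat per unit mass, w: PCM mass fraction, T_1..T_2 spans the melting point T_m
\[
q = \bar{c}_{p}\,(T_2 - T_1) + w\,\Delta h_{\mathrm{fus}},
\qquad
\bar{c}_{p} = w\,c_{p,\mathrm{PCM}} + (1 - w)\,c_{p,\mathrm{matrix}}
\]
```

    The latent-heat term is what allows a composite HSM to store several times more heat than a sensible-only storage material over the same temperature swing.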

    Nesting Forward Automatic Differentiation for Memory-Efficient Deep Neural Network Training

    An activation function is an element-wise mathematical function that plays a crucial role in deep neural networks (DNNs). Many novel and sophisticated activation functions have been proposed to improve DNN accuracy, but they also consume massive memory during training with back-propagation. In this study, we propose nested forward automatic differentiation (Forward-AD), applied specifically to element-wise activation functions, for memory-efficient DNN training. We deploy nested Forward-AD in two widely-used deep learning frameworks, TensorFlow and PyTorch, which support static and dynamic computation graphs, respectively. Our evaluation shows that nested Forward-AD reduces the memory footprint by up to 1.97× compared to the baseline model and outperforms recomputation by 20% under the same memory reduction ratio. Comment: 8 pages, ICCD 202
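
    A minimal sketch of the forward-AD idea for an element-wise activation is shown below, assuming PyTorch's custom autograd-function API; it illustrates the technique and is not the paper's TensorFlow/PyTorch integration. The local derivative is computed during the forward pass and is the only tensor kept for backward, instead of letting autograd retain every intermediate of a compound activation such as SiLU.

```python
# Hedged sketch: element-wise activation with its derivative computed in the forward pass.
import torch

class ForwardADSiLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        s = torch.sigmoid(x)
        y = x * s                                  # SiLU(x) = x * sigmoid(x)
        ctx.save_for_backward(s + y * (1.0 - s))   # f'(x), saved instead of the intermediates
        return y

    @staticmethod
    def backward(ctx, grad_out):
        (dydx,) = ctx.saved_tensors
        return grad_out * dydx                     # chain rule with the pre-computed derivative

# toy usage
x = torch.randn(4, requires_grad=True)
ForwardADSiLU.apply(x).sum().backward()
print(x.grad)                                      # matches d/dx [x * sigmoid(x)]
```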