
    A Layer Decomposition-Recomposition Framework for Neuron Pruning towards Accurate Lightweight Networks

    Neuron pruning is an efficient way to compress a network into a slimmer one, reducing computational cost and storage overhead. Most state-of-the-art results are obtained in a layer-by-layer optimization mode: the unimportant input neurons are discarded, and the surviving ones are used to reconstruct output neurons that approximate the original ones, layer by layer. However, an overlooked problem arises: the information loss accumulates as the layer index increases, since the surviving neurons no longer encode the entire information as before. A better alternative is to propagate all of the useful information to reconstruct the pruned layer instead of directly discarding the less important neurons. To this end, we propose a novel Layer Decomposition-Recomposition Framework (LDRF) for neuron pruning, in which each layer's output information is recovered in an embedding space and then propagated to reconstruct the following pruned layers with the useful information preserved. We mainly conduct our experiments on the ILSVRC-12 benchmark with VGG-16 and ResNet-50. Notably, our results before end-to-end fine-tuning are significantly superior owing to the information-preserving property of the proposed framework. With end-to-end fine-tuning, we achieve state-of-the-art results of 5.13x and 3x speed-up with only 0.5% and 0.65% top-5 accuracy drops respectively, outperforming existing neuron pruning methods. Comment: accepted by AAAI19 as oral
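The layer-by-layer baseline that the abstract contrasts with can be sketched in a few lines: drop the least important input neurons, then re-fit the remaining weights by least squares so the layer output stays close to the original. This is a minimal illustration of that baseline, not LDRF itself; all names and the norm-based importance score are illustrative assumptions.

```python
import numpy as np

def prune_and_reconstruct(W, X, keep_ratio=0.5):
    """Layer-wise pruning baseline: discard the least important input
    neurons (scored by the L2 norm of their outgoing weights) and
    re-fit the surviving weights by least squares so the layer output
    approximates the original one.
    W: (out, in) weight matrix; X: (samples, in) layer inputs."""
    importance = np.linalg.norm(W, axis=0)       # score per input neuron
    k = max(1, int(keep_ratio * W.shape[1]))
    keep = np.sort(np.argsort(importance)[-k:])  # surviving neuron indices
    Y = X @ W.T                                  # original layer output
    # Re-fit: find W_new such that X[:, keep] @ W_new.T ~= Y
    W_new, *_ = np.linalg.lstsq(X[:, keep], Y, rcond=None)
    return keep, W_new.T

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
X = rng.standard_normal((100, 16))
keep, W_new = prune_and_reconstruct(W, X, keep_ratio=0.5)
err = np.linalg.norm(X[:, keep] @ W_new.T - X @ W.T) / np.linalg.norm(X @ W.T)
print(keep.size, err)
```

The paper's point is that the residual `err` of each layer compounds down the network; LDRF instead recovers each layer's full output information in an embedding space before reconstruction.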

    Gene Transfer of Calcitonin Gene-Related Peptide Inhibits Macrophages and Inflammatory Mediators in Vein Graft Disease

    Vein graft disease is a chronic inflammatory disease that limits the late results of coronary revascularization. Calcitonin gene-related peptide (CGRP) inhibits macrophage infiltration and inflammatory mediators, so we hypothesized that transfection of the CGRP gene inhibits macrophage infiltration and inflammatory mediators in vein graft disease. Autologous rabbit jugular vein grafts were incubated ex vivo in a solution of mosaic adeno-associated virus vectors containing the CGRP gene (AAV2/1.CGRP), the Escherichia coli lacZ gene (AAV2/1.LacZ), or saline, and then interposed in the carotid artery. The intima/media ratio was evaluated at 4 weeks postoperatively. Macrophages were labeled with CD68 antibody by immunocytochemistry, and inflammatory mediators were measured with real-time PCR. Neointimal thickening was significantly suppressed in the AAV2/1.CGRP group. Macrophage infiltration and the inflammatory mediators monocyte chemoattractant protein-1 (MCP-1), tumor necrosis factor-α (TNF-α), inducible nitric oxide synthase (iNOS), and matrix metalloproteinase-9 (MMP-9) were significantly suppressed in the AAV2/1.CGRP group. Gene transfer of AAV2/1.CGRP suppressed neointimal hyperplasia in vein graft disease by suppressing macrophage infiltration and inflammatory mediators.

    Spin gap and magnetic resonance in superconducting BaFe1.9Ni0.1As2

    We use neutron spectroscopy to determine the nature of the magnetic excitations in superconducting BaFe1.9Ni0.1As2 (Tc = 20 K). Above Tc the excitations are gapless and centered at the commensurate antiferromagnetic wave vector of the parent compound, while the intensity exhibits a sinusoidal modulation along the c-axis. As the superconducting state is entered, a spin gap gradually opens, whose magnitude tracks the temperature dependence of the superconducting gap observed by angle-resolved photoemission. Both the spin gap and magnetic resonance energies are temperature and wave-vector dependent, but their ratio is the same within uncertainties. These results suggest that the spin resonance is a singlet-triplet excitation related to electron pairing and superconductivity. Comment: 4 pages, 4 figures

    Neural Inheritance Relation Guided One-Shot Layer Assignment Search

    Layer assignment is seldom treated as an independent research topic in neural architecture search. In this paper, for the first time, we systematically investigate the impact of different layer assignments on network performance by building an architecture dataset of layer assignments on CIFAR-100. By analyzing this dataset, we discover a neural inheritance relation among networks with different layer assignments: the optimal layer assignments for deeper networks always inherit from those for shallower networks. Inspired by this relation, we propose an efficient one-shot layer assignment search approach via inherited sampling. Specifically, the optimal layer assignment found in a shallow network serves as a strong sampling prior for training and searching the deeper ones in the supernet, which greatly reduces the search space. Comprehensive experiments on CIFAR-100 illustrate the efficiency of the proposed method; our search results are strongly consistent with the optimal ones selected directly from the architecture dataset. To further confirm the generalization of the method, we also conduct experiments on Tiny-ImageNet and ImageNet, where the searched results are markedly superior to handcrafted ones under unchanged computational budgets. The neural inheritance relation discovered in this paper can provide insights for universal neural architecture search. Comment: AAAI 2020
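The inheritance idea, stripped of supernet training, amounts to growing the network one layer at a time and only considering assignments that extend the previous depth's optimum. The sketch below shows that shrinking of the search space; the `evaluate` proxy and the greedy loop are assumptions for illustration and not the paper's exact one-shot procedure.

```python
def search_layer_assignment(evaluate, n_stages, total_layers):
    """Layer-assignment search with inherited sampling (illustrative).
    The best assignment at depth d seeds depth d+1: only assignments
    obtained by adding one layer to some stage of the previous optimum
    are evaluated, so each depth costs n_stages evaluations instead of
    an exhaustive enumeration. `evaluate` maps an assignment tuple
    (layers per stage) to a proxy score."""
    best = tuple(1 for _ in range(n_stages))   # minimal net: 1 layer/stage
    for _depth in range(n_stages + 1, total_layers + 1):
        # Inherit: extend exactly one stage of the current optimum by 1.
        candidates = [
            tuple(b + (1 if i == stage else 0) for i, b in enumerate(best))
            for stage in range(n_stages)
        ]
        best = max(candidates, key=evaluate)
    return best

# Toy proxy score that rewards putting depth into later stages,
# purely so the search has something deterministic to optimize.
proxy = lambda a: sum(i * n for i, n in enumerate(a))
best = search_layer_assignment(proxy, n_stages=3, total_layers=12)
print(best)   # all surplus depth flows to the last stage under this proxy
```

With an exhaustive search, depth 12 over 3 stages would require scoring every composition of 12 into 3 parts; inherited sampling scores only 3 candidates per added layer.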

    Rate-Distortion Optimized Post-Training Quantization for Learned Image Compression

    Quantizing a floating-point neural network to its fixed-point representation is crucial for Learned Image Compression (LIC) because it ensures decoding consistency for interoperability and reduces space-time complexity for implementation. Existing solutions often have to retrain the network for model quantization, which is time-consuming and impractical. This work suggests using Post-Training Quantization (PTQ) to directly process pretrained, off-the-shelf LIC models. We theoretically prove that minimizing the mean squared error (MSE) in PTQ is sub-optimal for the compression task and thus develop a novel Rate-Distortion Optimized PTQ (RDO-PTQ) to best retain compression performance. RDO-PTQ only needs to compress a few images (e.g., 10) to optimize the transformation of the weights, biases, and activations of the underlying LIC model from its native 32-bit floating-point (FP32) format to 8-bit fixed-point (INT8) precision for subsequent fixed-point inference. Experiments reveal the outstanding efficiency of the proposed method on different LICs, showing the closest coding performance to their floating-point counterparts. Moreover, our method is a lightweight, plug-and-play approach that requires no model retraining, which is attractive to practitioners.
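For readers unfamiliar with PTQ, the basic FP32-to-INT8 step that RDO-PTQ builds on looks like the sketch below: pick a per-tensor scale, round to 8-bit integers, and dequantize at inference. The max-magnitude calibration shown here is the plain MSE-style choice the paper argues is sub-optimal for compression; RDO-PTQ instead tunes such scales against the rate-distortion loss on a few images (not reproduced here). Function names are illustrative.

```python
import numpy as np

def quantize_int8(x, scale):
    """Symmetric per-tensor quantization: x is approximated by scale * q,
    with q stored as int8 for fixed-point inference."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def maxabs_scale(x):
    """Naive distortion-only calibration from the max magnitude.
    This is the kind of MSE-oriented choice the paper improves on."""
    return np.max(np.abs(x)) / 127.0

rng = np.random.default_rng(1)
w = rng.standard_normal(1000).astype(np.float32)   # stand-in weight tensor
s = maxabs_scale(w)
q = quantize_int8(w, s)
w_hat = s * q.astype(np.float32)                   # dequantized weights
err = np.mean((w - w_hat) ** 2)
print(q.dtype, err)
```

Since no value is clipped at this scale, each element's rounding error is at most `s / 2`, so the quantization MSE is bounded by `s**2 / 4`.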

    Improving Speaker Diarization using Semantic Information: Joint Pairwise Constraints Propagation

    Speaker diarization has gained considerable attention within the speech processing research community. Mainstream speaker diarization relies primarily on speakers' voice characteristics extracted from acoustic signals and often overlooks the potential of semantic information. Given that speech signals can efficiently convey the content of a speech, it is of interest to fully exploit these semantic cues using language models. In this work, we propose a novel approach to effectively leverage semantic information in clustering-based speaker diarization systems. First, we introduce spoken language understanding modules to extract speaker-related semantic information and use it to construct pairwise constraints. Second, we present a novel framework to integrate these constraints into the speaker diarization pipeline, enhancing the performance of the entire system. Extensive experiments conducted on a public dataset demonstrate the consistent superiority of our proposed approach over acoustic-only speaker diarization systems. Comment: Submitted to ICASSP 2024
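The simplest way to see how pairwise constraints enter a clustering-based pipeline is to inject them directly into the acoustic similarity matrix before clustering. The sketch below does exactly that with a toy connected-components clusterer; it is an assumption-laden illustration, not the paper's joint constraint-propagation framework, and all names and the example matrix are hypothetical.

```python
import numpy as np

def apply_pairwise_constraints(sim, must_link, cannot_link):
    """Inject semantic pairwise constraints into an acoustic similarity
    matrix before clustering. must_link / cannot_link are lists of
    (i, j) segment-index pairs derived from semantic cues, e.g. wording
    that implies the same speaker is continuing or a speaker change."""
    s = sim.copy()
    for i, j in must_link:
        s[i, j] = s[j, i] = 1.0   # force same-speaker similarity
    for i, j in cannot_link:
        s[i, j] = s[j, i] = 0.0   # forbid merging this pair
    return s

def cluster(sim, threshold=0.5):
    """Toy clustering: connected components of the thresholded graph."""
    n = len(sim)
    labels, cur = [-1] * n, 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack, labels[start] = [start], cur
        while stack:
            u = stack.pop()
            for v in range(n):
                if labels[v] == -1 and sim[u, v] > threshold:
                    labels[v] = cur
                    stack.append(v)
        cur += 1
    return labels

# Four speech segments: acoustics alone would merge segments 1 and 2,
# but a semantic cannot-link (plus a must-link on 0 and 1) overrides it.
sim = np.array([[1.0, 0.4, 0.3, 0.2],
                [0.4, 1.0, 0.8, 0.3],
                [0.3, 0.8, 1.0, 0.9],
                [0.2, 0.3, 0.9, 1.0]])
s = apply_pairwise_constraints(sim, must_link=[(0, 1)], cannot_link=[(1, 2)])
print(cluster(s))   # → [0, 0, 1, 1]
```

Without the constraints, `cluster(sim)` would split the segments as `[0, 1, 1, 1]`, grouping segments 1 and 2 against the semantic evidence.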