1,291 research outputs found

    An Image Compression Method Based on Wavelet Transform and Neural Network

    Image compression exploits the correlation between neighbouring pixels to remove as much inter-pixel redundancy as possible, thereby reducing transmission bandwidth and storage space. This paper applies a combination of wavelet analysis and artificial neural networks to image compression, discusses its performance in image compression theoretically, analyzes the idea of multi-resolution analysis, constructs a wavelet neural network model for improved image compression, and gives the corresponding algorithm. Only the output-layer weights of the wavelet neural network need training; the input-layer weights can be determined from the relationship between the sampling interval and the compact support of the wavelets, and once determined they require no further training. This accelerates training of the wavelet neural network and avoids the difficulty of choosing the number of hidden-layer nodes in a traditional neural network. Computer simulation experiments show that the proposed algorithm achieves a better compression effect than the traditional neural network method.
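    To make the "train only the output layer" idea concrete, below is a minimal sketch, not the paper's actual algorithm: the hidden layer uses fixed wavelet units (a Mexican-hat mother wavelet is assumed here) whose translations follow the sampling grid and whose dilation is tied to the sample spacing, so the only learned parameters are the output weights, fitted in closed form by least squares. All function names and the toy signal are illustrative assumptions.

```python
# Minimal sketch of a wavelet neural network where only the output layer is trained.
# The hidden-layer wavelet parameters (translations/dilations) are fixed in advance,
# following the relationship between the sampling grid and the wavelet support.
import numpy as np

def mexican_hat(x):
    """Mexican-hat (Ricker) mother wavelet, assumed here as the hidden activation."""
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

def build_hidden_activations(samples, centers, scale):
    """Hidden layer: fixed translations and dilation, so no training is needed here."""
    # samples: (N,), centers: (H,) -> activations: (N, H)
    return mexican_hat((samples[:, None] - centers[None, :]) / scale)

def fit_output_weights(activations, targets):
    """Only the output-layer weights are learned (closed-form least squares)."""
    w, *_ = np.linalg.lstsq(activations, targets, rcond=None)
    return w

if __name__ == "__main__":
    # Toy 1-D "compression": approximate 256 pixel values of one image row with
    # 32 hidden wavelet units, i.e. an 8:1 reduction in stored coefficients.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 256)
    row = np.sin(8 * np.pi * x) + 0.1 * rng.standard_normal(256)  # stand-in pixel row

    centers = np.linspace(0.0, 1.0, 32)   # translations fixed by the sampling grid
    scale = centers[1] - centers[0]       # dilation tied to the sample spacing
    H = build_hidden_activations(x, centers, scale)
    w = fit_output_weights(H, row)        # the only trained parameters

    recon = H @ w
    print("reconstruction MSE:", float(np.mean((recon - row) ** 2)))
```

    Because the hidden layer is fixed, the fit reduces to a single linear solve, which is what makes training fast compared with back-propagating through every layer of a conventional network.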

    Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference

    While increasingly deep networks are in general still desired for achieving state-of-the-art performance, for many specific inputs a simpler network might already suffice. Existing works exploit this observation by learning to skip convolutional layers in an input-dependent manner. However, we argue their binary decision scheme, i.e., either fully executing or completely bypassing a layer for a specific input, can be enhanced by introducing finer-grained, "softer" decisions. We therefore propose a Dynamic Fractional Skipping (DFS) framework. The core idea of DFS is to hypothesize layer-wise quantization (to different bitwidths) as intermediate "soft" choices between fully utilizing and skipping a layer. For each input, DFS dynamically assigns a bitwidth to both the weights and the activations of each layer, where fully executing and skipping can be viewed as the two "extremes" (i.e., full bitwidth and zero bitwidth). In this way, DFS can "fractionally" exploit a layer's expressive power during input-adaptive inference, enabling finer-grained accuracy-computational cost trade-offs. It presents a unified view linking input-adaptive layer skipping and input-adaptive hybrid quantization. Extensive experimental results demonstrate the superior trade-off between computational cost and model expressive power (accuracy) achieved by DFS. Further visualizations indicate a smooth and consistent transition in DFS behaviors, especially in the learned choices between layer skipping and different quantizations as the total computational budget varies, validating our hypothesis that layer quantization can be viewed as an intermediate variant of layer skipping. Our source code and supplementary material are available at https://github.com/Torment123/DFS.
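    The sketch below illustrates the inference-time dispatch behind this idea under stated assumptions, and is not the authors' released code: a small per-input gate picks one bitwidth per layer from a candidate set, zero bits means the layer is skipped (identity path), and the full bitwidth means it runs at full precision; anything in between fake-quantizes the layer's weights and activations. The gate here makes a hard argmax choice for simplicity, whereas the actual framework has to train such decisions end to end; the layer names, bitwidth choices, and gate are illustrative assumptions.

```python
# Sketch of fractional skipping: per-input choice of layer bitwidth, where
# 0 bits = skip the layer and 32 bits = full-precision execution.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(x, bits):
    """Uniform fake quantization of a tensor to `bits` bits (illustrative)."""
    if bits >= 32:
        return x
    scale = x.abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
    return torch.round(x / scale) * scale

class FractionalSkipBlock(nn.Module):
    """One residual conv block whose execution precision is chosen per input."""
    def __init__(self, channels, choices=(0, 4, 8, 32)):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.choices = choices                          # 0 = skip, 32 = full precision
        self.gate = nn.Linear(channels, len(choices))   # hypothetical per-input gate

    def forward(self, x):
        # The gate looks at pooled features and picks a bitwidth for this input.
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)
        idx = self.gate(pooled).argmax(dim=1)[0].item()  # hard decision (sketch only)
        bits = self.choices[idx]
        if bits == 0:
            return x                                     # "zero bitwidth" = skip layer
        w = fake_quantize(self.conv.weight, bits)        # quantize weights
        xq = fake_quantize(x, bits)                      # quantize activations
        out = F.conv2d(xq, w, self.conv.bias, padding=1)
        return x + F.relu(out)

if __name__ == "__main__":
    block = FractionalSkipBlock(channels=16)
    y = block(torch.randn(1, 16, 32, 32))
    print(y.shape)  # torch.Size([1, 16, 32, 32])
```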