SECURE VIDEO CODED SYSTEM MODEL
In this paper, the overall system model shown in Figure (1), a video compression-encryption transmitter and decompression-decryption receiver, was designed and implemented. A modified video codec was used: in addition to compression/decompression, the video signal was encrypted/decrypted with a chaotic neural network (CNN) algorithm. Both the quantized vector data and the motion vector data were encrypted by the CNN. The compressed and encrypted video data stream was sent to the receiver using orthogonal frequency division multiplexing (OFDM) modulation. The system model was designed for a video signal sample size of 176 × 144 (QCIF standard format) at a rate of 30 frames per second. The overall system model integrates and operates successfully with acceptable performance results.
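The abstract describes XOR-style masking of quantized and motion-vector data with a chaotic keystream. A minimal sketch of that idea follows; since the abstract does not give the CNN's exact form, a logistic map stands in for the chaotic generator, and the seed/parameter values are illustrative assumptions:

```python
import numpy as np

def logistic_keystream(length, x0=0.7, r=3.99):
    """Generate a byte keystream from the logistic map x -> r*x*(1-x).

    The logistic map is a stand-in for the paper's chaotic neural
    network (CNN) generator, whose exact structure is not given here.
    """
    x = x0
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream

def encrypt(data: np.ndarray, x0=0.7) -> np.ndarray:
    """XOR-mask quantized/motion-vector bytes with the chaotic keystream.

    Decryption is identical because XOR is its own inverse.
    """
    ks = logistic_keystream(data.size, x0=x0)
    return data ^ ks

quantized = np.arange(16, dtype=np.uint8)   # toy "quantized vector data"
cipher = encrypt(quantized)
assert np.array_equal(encrypt(cipher), quantized)  # round-trip recovers data
```

Because both ends regenerate the same keystream from a shared initial condition, only the chaotic parameters need to be secret, not the stream itself.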
Video data compression using artificial neural network differential vector quantization
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom very-large-scale-integration application-specific integrated circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in superior robustness to channel bit errors compared with methods that use variable-length codes.
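Frequency-sensitive competitive learning can be sketched in a few lines: each codeword keeps a win count, and distances are scaled by that count so heavily used codewords become harder to win, equalizing codebook usage. The learning rate, epoch count, and toy data below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fscl_codebook(vectors, k=8, epochs=5, lr=0.1, seed=0):
    """Train a VQ codebook with frequency-sensitive competitive learning.

    Each codeword's distance to the input is scaled by its win count,
    so frequently winning codewords are penalized and usage equalizes.
    """
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    counts = np.ones(k)                     # one "virtual win" per codeword
    for _ in range(epochs):
        for v in vectors:
            d = np.sum((codebook - v) ** 2, axis=1) * counts  # scaled distance
            w = int(np.argmin(d))
            codebook[w] += lr * (v - codebook[w])  # move winner toward input
            counts[w] += 1
    return codebook, counts

rng = np.random.default_rng(1)
patches = rng.normal(size=(200, 4))         # toy 4-D image-block vectors
codebook, counts = fscl_codebook(patches, k=8)
```

The fixed-length codeword indices produced by such a codebook are what removes the need for Huffman coding: a bit error corrupts one block's index rather than desynchronizing a variable-length stream.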
Cloud Chaser: Real Time Deep Learning Computer Vision on Low Computing Power Devices
Internet of Things(IoT) devices, mobile phones, and robotic systems are often
denied the power of deep learning algorithms due to their limited computing
power. However, to provide time-critical services such as emergency response,
home assistance, surveillance, etc., these devices often need real-time analysis
of their camera data. This paper strives to offer a viable approach to
integrate high-performance deep learning-based computer vision algorithms with
low-resource and low-power devices by leveraging the computing power of the
cloud. By offloading the computation work to the cloud, no dedicated hardware
is needed to enable deep neural networks on existing low computing power
devices. A Raspberry Pi based robot, Cloud Chaser, is built to demonstrate the
power of using cloud computing to perform real-time vision tasks. Furthermore,
to reduce latency and improve real-time performance, compression algorithms are
proposed and evaluated for streaming real-time video frames to the cloud.
Comment: Accepted to The 11th International Conference on Machine Vision (ICMV 2018). Project site: https://zhengyiluo.github.io/projects/cloudchaser
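The bandwidth-versus-latency trade-off behind streaming compressed frames to the cloud can be illustrated with a toy sketch. Here zlib stands in for the image/video codecs the paper actually evaluates, and the synthetic frame is an assumption chosen to compress well:

```python
import time
import zlib

import numpy as np

def compress_frame(frame: np.ndarray, level: int) -> bytes:
    """Losslessly compress a raw frame before upload. zlib is a stand-in
    for the real codecs (e.g. JPEG/H.264) a deployed system would use."""
    return zlib.compress(frame.tobytes(), level)

# Synthetic 240x320 grayscale frame with repeated rows (compresses well).
frame = np.tile(np.arange(320) % 256, (240, 1)).astype(np.uint8)
sizes = {}
for level in (1, 6, 9):
    t0 = time.perf_counter()
    blob = compress_frame(frame, level)
    ms = (time.perf_counter() - t0) * 1e3
    sizes[level] = len(blob)
    print(f"level {level}: {frame.size} -> {sizes[level]} bytes in {ms:.2f} ms")
```

Higher compression levels shrink the upload but cost encoding time on the weak device, which is exactly the trade-off a real-time cloud-offloading system must tune.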
Instance-Adaptive Video Compression: Improving Neural Codecs by Training on the Test Set
We introduce a video compression algorithm based on instance-adaptive
learning. On each video sequence to be transmitted, we finetune a pretrained
compression model. The optimal parameters are transmitted to the receiver along
with the latent code. By entropy-coding the parameter updates under a suitable
mixture model prior, we ensure that the network parameters can be encoded
efficiently. This instance-adaptive compression algorithm is agnostic about the
choice of base model and has the potential to improve any neural video codec.
On UVG, HEVC, and Xiph datasets, our codec improves the performance of a
scale-space flow model by between 21% and 27% BD-rate savings, and that of a
state-of-the-art B-frame model by 17 to 20% BD-rate savings. We also
demonstrate that instance-adaptive finetuning improves the robustness to domain
shift. Finally, our approach reduces the capacity requirements of compression
models. We show that it enables a competitive performance even after reducing
the network size by 70%.
Comment: Matches version published in TMLR
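The key cost argument above is that per-video parameter updates are cheap to transmit once entropy-coded. A toy numpy sketch of that accounting follows; the sparsity level, bin width, and the zlib stand-in for the entropy coder are all assumptions (the paper uses a learned mixture-model prior with proper entropy coding):

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)
pretrained = rng.normal(size=10_000).astype(np.float32)

# Finetuning on one video typically changes few weights appreciably;
# simulate that with a sparse perturbation (assumed sparsity level).
delta = np.zeros_like(pretrained)
idx = rng.choice(delta.size, 200, replace=False)
delta[idx] = rng.normal(scale=0.05, size=200)

step = 0.01                                  # quantization bin width (assumed)
q = np.round(delta / step).astype(np.int8)   # quantized parameter update
update_bits = len(zlib.compress(q.tobytes())) * 8
full_bits = pretrained.size * 32
print(f"update: {update_bits} bits vs full model: {full_bits} bits")
```

Because the quantized update is mostly zeros, its coded size is a small fraction of retransmitting the full model, which is what makes per-instance finetuning viable as a compression strategy.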
Scalable Compression of Deep Neural Networks
Deep neural networks generally involve some layers with millions of
parameters, making them difficult to be deployed and updated on devices with
limited resources such as mobile phones and other smart embedded systems. In
this paper, we propose a scalable representation of the network parameters, so
that different applications can select the most suitable bit rate of the
network based on their own storage constraints. Moreover, when a device needs
to upgrade to a high-rate network, the existing low-rate network can be reused,
and only some incremental data are needed to be downloaded. We first
hierarchically quantize the weights of a pre-trained deep neural network to
enforce weight sharing. Next, we adaptively select the bits assigned to each
layer given the total bit budget. After that, we retrain the network to
fine-tune the quantized centroids. Experimental results show that our method
can achieve scalable compression with graceful degradation in performance.
Comment: 5 pages, 4 figures, ACM Multimedia 201
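The weight-sharing step the abstract describes, clustering a layer's weights into 2^b shared centroids, can be sketched with a simple 1-D k-means; the per-layer bit allocation and the retraining of centroids are omitted, and the bit widths and data below are illustrative assumptions:

```python
import numpy as np

def quantize_layer(w, bits, iters=10):
    """Cluster a layer's weights into 2**bits shared centroids (1-D
    k-means), i.e. the weight-sharing step sketched in the abstract.
    Bit allocation across layers and retraining are omitted here."""
    k = 2 ** bits
    centroids = np.quantile(w, np.linspace(0, 1, k))   # spread over the range
    for _ in range(iters):
        assign = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = w[assign == j].mean()   # update cluster means
    return centroids[assign], assign

rng = np.random.default_rng(0)
w = rng.normal(size=5000)                    # toy layer weights
mses = []
for bits in (2, 4, 6):
    wq, _ = quantize_layer(w, bits)
    mses.append(float(np.mean((w - wq) ** 2)))
    print(f"{bits} bits/weight -> quantization MSE {mses[-1]:.5f}")
```

A hierarchical variant would refine the low-rate centroids into the higher-rate ones, so a device already holding the coarse codebook only downloads the incremental refinement.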
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
The field of video compression has developed some of the most sophisticated
and efficient compression algorithms known in the literature, enabling very
high compressibility for little loss of information. Whilst some of these
techniques are domain specific, many of their underlying principles are
universal in that they can be adapted and applied for compressing different
types of data. In this work we present DeepCABAC, a compression algorithm for
deep neural networks that is based on one of the state-of-the-art video coding
techniques. Concretely, it applies the Context-based Adaptive Binary Arithmetic
Coder (CABAC), which was originally designed for the H.264/AVC video coding
standard and became the state of the art in lossless compression, to the
network's parameters. Moreover, DeepCABAC employs a novel quantization scheme
that minimizes the rate-distortion function while simultaneously taking the
impact of quantization onto the accuracy of the network into account.
Experimental results show that DeepCABAC consistently attains higher
compression rates than previously proposed coding techniques for neural network
compression. For instance, it is able to compress the VGG16 ImageNet model by
63.6× with no loss of accuracy, thus being able to represent the entire network
with merely 8.7 MB. The source code for encoding and decoding can be found at
https://github.com/fraunhoferhhi/DeepCABAC
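The rate-distortion-aware quantization idea, choosing each weight's grid point by minimizing rate plus weighted distortion rather than by nearest rounding, can be sketched as follows. The rate model (magnitude in quantization steps as a proxy for arithmetic-coded bits) and the lambda values are simplifying assumptions, not DeepCABAC's actual context models:

```python
import numpy as np

def rd_quantize(w, step, lam):
    """Rate-distortion-aware scalar quantization in the spirit of
    DeepCABAC: each weight picks the candidate grid point minimizing
    rate + lam * distortion. The rate proxy |q| (magnitude in steps)
    is a simplification of a real arithmetic coder's bit cost."""
    candidates = np.round(w / step)[:, None] + np.array([-1.0, 0.0, 1.0])
    rate = np.abs(candidates)                       # crude bit-cost proxy
    dist = (w[:, None] - candidates * step) ** 2    # squared error per choice
    best = (rate + lam * dist).argmin(axis=1)
    return np.take_along_axis(candidates, best[:, None], 1).ravel() * step

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000)     # toy layer weights
nonzeros = {}
for lam in (1.0, 1e4):
    q = rd_quantize(w, step=0.02, lam=lam)
    nonzeros[lam] = int(np.count_nonzero(q))
    print(f"lambda={lam:g}: {nonzeros[lam]} of {w.size} weights kept nonzero")
```

A small lambda favors rate and drives weights toward zero (cheap to code), while a large lambda favors fidelity, which is the knob that trades compression ratio against network accuracy.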