Institute of Electrical and Electronics Engineers (IEEE)
Abstract
Advanced video classification systems decode video frames to derive the texture and motion representations required for ingestion and analysis by spatio-temporal deep convolutional neural networks (CNNs). However, in visual Internet-of-Things applications, surveillance systems, and semantic crawlers of large video repositories, the compressed video content and the CNN-based semantic analysis components tend not to be co-located. This necessitates transporting compressed video over networks, incurring significant bandwidth and energy overhead and thereby undermining the deployment potential of such systems. In this paper, we investigate the trade-off between the encoding bitrate and the achievable accuracy of CNN-based video classification that ingests AVC/H.264-encoded videos. Instead of entire compressed video bitstreams, we retain only motion vectors and selected texture information at significantly reduced bitrates. Using two CNN architectures and two action recognition datasets, we achieve 38%-59% bitrate savings with marginal impact on classification accuracy. A simple rate-based selection between the two CNNs shows that even further bitrate savings are possible with graceful degradation in accuracy. This may enable rate/accuracy-optimized CNN-based video classification over networks.