
    Fast object detection in compressed JPEG Images

    Object detection in still images has drawn a lot of attention over the past few years, and with the advent of deep learning impressive performance has been achieved, with numerous industrial applications. Most of these deep learning models rely on RGB images to localize and identify objects in the image. However, in some application scenarios, images are compressed either for storage savings or fast transmission, so a time-consuming image decompression step is required before the aforementioned deep models can be applied. To alleviate this drawback, we propose a fast deep architecture for object detection in JPEG images, one of the most widespread compression formats. We train a neural network to detect objects based on the blockwise DCT (discrete cosine transform) coefficients produced by the JPEG compression algorithm. We modify the well-known Single Shot MultiBox Detector (SSD) by replacing its first layers with a single convolutional layer dedicated to processing the DCT inputs. Experimental evaluations on PASCAL VOC and an industrial dataset of road traffic surveillance images show that the model is about 2× faster than regular SSD with promising detection performance. To the best of our knowledge, this paper is the first to address detection in compressed JPEG images.
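    A minimal sketch of the idea this abstract describes: feeding blockwise DCT coefficients straight into a detector front end instead of RGB pixels. The channel counts, layer sizes, and feature-map resolution below are illustrative assumptions, not the exact configuration used in the paper.

```python
# Sketch (PyTorch): replace the first convolutional stages of an SSD-style
# detector with a single convolution over blockwise DCT coefficients.
# All sizes here are assumptions for illustration, not the paper's values.
import torch
import torch.nn as nn

class DCTFrontEnd(nn.Module):
    """Front end that consumes JPEG-style blockwise DCT coefficients.

    JPEG stores each 8x8 pixel block as 64 DCT coefficients, so an
    (H, W) luminance channel becomes a (64, H/8, W/8) tensor.
    """
    def __init__(self, dct_channels: int = 64, out_channels: int = 256):
        super().__init__()
        # One convolution dedicated to the DCT input; its output would then
        # be handed to the remaining SSD feature maps and detection heads.
        self.conv = nn.Conv2d(dct_channels, out_channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, dct_blocks: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(dct_blocks))

# Usage: a ~300x300 input image yields roughly a 38x38 grid of 8x8 blocks,
# i.e. close to the resolution of SSD300's first feature map.
frontend = DCTFrontEnd()
dct = torch.randn(1, 64, 38, 38)   # batch of blockwise DCT coefficients
features = frontend(dct)           # (1, 256, 38, 38), ready for SSD heads
```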

    How Far Can We Get with Neural Networks Straight from JPEG?

    Convolutional neural networks (CNNs) have achieved astonishing advances over the past decade, defining the state of the art in several computer vision tasks. CNNs are capable of learning robust representations of the data directly from RGB pixels. However, most image data are available in compressed form, of which JPEG is the most widely used format for transmission and storage, and it demands a preliminary decoding process with a high computational load and memory usage. For this reason, deep learning methods capable of learning directly from the compressed domain have been gaining attention in recent years. These methods adapt typical CNNs to work in the compressed domain, but the common architectural modifications lead to an increase in computational complexity and the number of parameters. In this paper, we investigate CNNs that are designed to work directly with the DCT coefficients available in JPEG compressed images, proposing handcrafted and data-driven techniques for reducing the computational complexity and the number of parameters of these models in order to keep their computational cost similar to their RGB baselines. We conduct initial ablation studies on a subset of ImageNet in order to analyse the impact of different frequency ranges, image resolution, JPEG quality and classification task difficulty on the performance of the models. Then, we evaluate the models on the complete ImageNet dataset. Our results indicate that DCT models are capable of obtaining good performance, and that it is possible to reduce their computational complexity and number of parameters while retaining similar classification accuracy through the use of our proposed techniques.
    Comment: arXiv admin note: substantial text overlap with arXiv:2012.1372
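    A minimal sketch of one of the cost-reduction ideas mentioned above: computing blockwise DCT coefficients and keeping only a low-frequency subset so the network sees fewer input channels. The 8x8 block size matches JPEG, but the coefficient ordering and the number of retained coefficients below are simplifying assumptions, not the paper's method.

```python
# Sketch (NumPy/SciPy): blockwise 8x8 DCT of a grayscale channel followed by
# a simple frequency-range selection. Row-major coefficient order is used
# instead of JPEG's zig-zag order, as a simplification for illustration.
import numpy as np
from scipy.fft import dctn

def blockwise_dct(channel: np.ndarray, block: int = 8) -> np.ndarray:
    """Return an (H//block, W//block, block*block) array of DCT coefficients."""
    h, w = channel.shape
    h, w = h - h % block, w - w % block            # crop to a block multiple
    out = np.empty((h // block, w // block, block * block), dtype=np.float32)
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(channel[i:i + block, j:j + block], norm="ortho")
            out[i // block, j // block] = coeffs.ravel()
    return out

def keep_low_frequencies(dct_map: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
    """Keep the first n_coeffs coefficients per block (illustrative choice)."""
    return dct_map[..., :n_coeffs]

# Usage: a 224x224 grayscale image becomes a 28x28x64 tensor, then 28x28x16
# after frequency selection, i.e. a 4x reduction of the input volume.
img = np.random.rand(224, 224).astype(np.float32)
full = blockwise_dct(img)              # (28, 28, 64)
reduced = keep_low_frequencies(full)   # (28, 28, 16)
```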