
    Low-latency compression of mocap data using learned spatial decorrelation transform

    Due to the growing need for human motion capture (mocap) in movies, video games, sports, and other applications, it is highly desirable to compress mocap data for efficient storage and transmission. This paper presents two efficient frameworks for compressing human mocap data with low latency. The first framework processes the data in a frame-by-frame manner, making it ideal for mocap data streaming and time-critical applications. The second is clip-based and provides a flexible tradeoff between latency and compression performance. Since mocap data exhibits some unique spatial characteristics, we propose a very effective transform, namely the learned orthogonal transform (LOT), for reducing spatial redundancy. The LOT problem is formulated as minimizing squared error regularized by orthogonality and sparsity, and is solved via alternating iteration. We also adopt predictive coding and a temporal DCT for temporal decorrelation in the frame- and clip-based frameworks, respectively. Experimental results show that the proposed frameworks can produce higher compression performance at lower computational cost and latency than state-of-the-art methods.
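    As a rough illustration of the kind of alternating scheme the abstract describes, the sketch below assumes the objective takes the standard form min ||X - TC||_F^2 + lam*||C||_1 subject to T^T T = I, alternating a soft-thresholding step for the coefficients with an orthogonal Procrustes (SVD) step for the transform. The matrix layout, threshold value and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(A, tau):
    """Element-wise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def learn_orthogonal_transform(X, lam=0.1, n_iter=50, seed=0):
    """Alternating minimisation of ||X - T C||_F^2 + lam*||C||_1  s.t.  T^T T = I.

    X   : (d, n) matrix of mocap frames stacked column-wise (assumed layout).
    lam : sparsity weight.
    Returns the learned orthogonal transform T and sparse coefficients C.
    """
    d = X.shape[0]
    rng = np.random.default_rng(seed)
    # Start from a random orthogonal matrix.
    T, _ = np.linalg.qr(rng.standard_normal((d, d)))
    for _ in range(n_iter):
        # Coefficient step: with T orthogonal the data term equals ||T^T X - C||_F^2,
        # so the minimiser is a soft-thresholded analysis transform of the data.
        C = soft_threshold(T.T @ X, lam / 2.0)
        # Transform step: orthogonal Procrustes problem, solved via an SVD.
        U, _, Vt = np.linalg.svd(X @ C.T)
        T = U @ Vt
    return T, C

# Toy usage on synthetic "mocap" data (d joint coordinates, n frames).
X = np.random.default_rng(1).standard_normal((30, 200))
T, C = learn_orthogonal_transform(X, lam=0.2)
print(np.allclose(T.T @ T, np.eye(T.shape[0]), atol=1e-8))  # orthogonality is preserved
```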

    Low complexity video compression using moving edge detection based on DCT coefficients

    In this paper, we propose a new low-complexity video compression method based on detecting blocks containing moving edges using only DCT coefficients. The detection, whilst being very efficient, also allows efficient motion estimation by constraining the search process to moving macro-blocks only. The encoder's PSNR is degraded by 2 dB compared to H.264/AVC inter coding for such scenarios, whilst requiring only 5% of the execution time. The computational complexity of our approach is comparable to that of the DISCOVER codec, which is the state-of-the-art low-complexity distributed video codec. The proposed method finds blocks containing moving edges and processes only those blocks. The approach is particularly suited to surveillance-type scenarios with a static camera.
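    The abstract does not give the exact detector, but a minimal sketch of the general idea, flagging blocks whose DCT AC energy indicates an edge and whose AC energy changes between frames, might look as follows; the block size, thresholds and frame layout are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct(frame, b=8):
    """2-D DCT of each b x b block of a grayscale frame (dimensions assumed multiples of b)."""
    h, w = frame.shape
    blocks = frame.reshape(h // b, b, w // b, b).swapaxes(1, 2)          # (H/b, W/b, b, b)
    return dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')

def moving_edge_blocks(curr, prev, b=8, edge_thr=50.0, motion_thr=20.0):
    """Flag blocks that both contain edges and change between frames,
    using only DCT coefficients (illustrative thresholds)."""
    D_curr, D_prev = block_dct(curr, b), block_dct(prev, b)
    # Edge energy: sum of absolute AC coefficients (everything except the DC term).
    ac_curr = np.abs(D_curr).sum(axis=(-1, -2)) - np.abs(D_curr[..., 0, 0])
    ac_prev = np.abs(D_prev).sum(axis=(-1, -2)) - np.abs(D_prev[..., 0, 0])
    has_edge = ac_curr > edge_thr
    is_moving = np.abs(ac_curr - ac_prev) > motion_thr
    return has_edge & is_moving          # boolean map; motion search runs only where True

# Usage: restrict motion estimation to the flagged macro-blocks.
prev = np.random.default_rng(0).integers(0, 256, (240, 320)).astype(float)
curr = prev.copy(); curr[100:140, 160:200] += 60.0   # a synthetic "moving" region
print(moving_edge_blocks(curr, prev).sum(), "candidate blocks")
```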

    A new adaptive interframe transform coding using directional classification


    Rate-Accuracy Trade-Off In Video Classification With Deep Convolutional Neural Networks

    Advanced video classification systems decode video frames to derive the necessary texture and motion representations for ingestion and analysis by spatio-temporal deep convolutional neural networks (CNNs). However, when considering visual Internet-of-Things applications, surveillance systems and semantic crawlers of large video repositories, the video capture and the CNN-based semantic analysis parts do not tend to be co-located. This necessitates the transport of compressed video over networks and incurs significant overhead in bandwidth and energy consumption, thereby significantly undermining the deployment potential of such systems. In this paper, we investigate the trade-off between the encoding bitrate and the achievable accuracy of CNN-based video classification models that directly ingest AVC/H.264 and HEVC encoded videos. Instead of retaining entire compressed video bitstreams and applying complex optical flow calculations prior to CNN processing, we retain only motion vectors and select texture information at significantly reduced bitrates and apply no additional processing prior to CNN ingestion. Based on three CNN architectures and two action recognition datasets, we achieve 11%-94% savings in bitrate with marginal effect on classification accuracy. A model-based selection between multiple CNNs increases these savings further, to the point where, if up to 7% loss of accuracy can be tolerated, video classification can take place with as little as 3 kbps for the transport of the required compressed video information to the system implementing the CNN models.
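    A minimal sketch of the model-based selection idea mentioned at the end, choosing the cheapest CNN/bitstream configuration whose accuracy stays within a tolerated loss of a reference model, is given below; the configuration names and operating points are placeholders, not figures from the paper.

```python
# Hypothetical (bitrate in kbps, expected accuracy) operating points for several
# CNN/bitstream configurations; the numbers are placeholders, not paper results.
operating_points = {
    "cnn_a_mv_only":     (3.0, 0.68),
    "cnn_b_mv_plus_tex": (25.0, 0.73),
    "cnn_c_full_decode": (400.0, 0.75),
}

def select_configuration(points, reference="cnn_c_full_decode", max_acc_loss=0.07):
    """Pick the lowest-bitrate configuration whose accuracy stays within
    max_acc_loss of the reference configuration's accuracy."""
    ref_acc = points[reference][1]
    feasible = {k: v for k, v in points.items() if ref_acc - v[1] <= max_acc_loss}
    return min(feasible, key=lambda k: feasible[k][0])   # lowest bitrate wins

print(select_configuration(operating_points))   # -> "cnn_a_mv_only" at 3 kbps
```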

    Localized temporal decorrelation for video compression

    Many current video compression algorithms perform analysis and coding operations in a block-wise manner. Most of them use a motion-compensated DCT algorithm as the basis. Many other codecs, mostly academic and in their infancy and known as second-generation techniques, utilize region-, contour- and model-based techniques. Unfortunately, these second-generation methods have not been successful in gaining widespread acceptance in either the standards or the consumer world. Many of them require specialized, computationally intensive software and/or hardware. Due to these shortcomings, current block-based methods have been fine-tuned to get better performance at even very low bit rates (sub-64 kbps). Block-based motion estimation is the principal mechanism used to compensate for motion between frames in an image sequence. Although current algorithms are fast and quite effective, they fail in compensating for uncovered background areas in a frame. Solutions such as hierarchical motion estimation schemes do not work very well since there is no reference in past, and in some cases future, frames for an uncovered background, resulting in the block being transmitted as an intra block (which requires the most bandwidth among all types of blocks). This thesis introduces an intermediate stage which compensates for these isolated uncovered areas. The intermediate stage uses a localized decorrelation technique to reduce frame-to-frame temporal redundancies. The algorithm can be easily incorporated into existing systems to achieve even better performance and can be easily extended as a scalable video coding architecture. Experimental results show that the algorithm, used in conjunction with motion estimation, is quite effective in reducing temporal redundancies.
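    The intermediate stage itself is not specified in the abstract, but a hedged sketch of the surrounding machinery, flagging blocks whose motion-compensated residual energy stays high (a rough proxy for uncovered background) and removing a local temporal mean before transform coding, could look as follows; the block size, threshold and use of a co-located patch history are illustrative assumptions.

```python
import numpy as np

def flag_uncovered_blocks(residual, b=16, sad_thr=2000.0):
    """Flag blocks whose motion-compensated residual energy stays high,
    a rough proxy for uncovered background (illustrative threshold)."""
    h, w = residual.shape
    blocks = np.abs(residual).reshape(h // b, b, w // b, b).swapaxes(1, 2)
    sad = blocks.sum(axis=(-1, -2))            # per-block sum of absolute differences
    return sad > sad_thr                       # boolean map of candidate uncovered blocks

def localized_decorrelation(curr_block, history):
    """Subtract the local temporal mean of co-located patches from recent frames,
    a stand-in for the thesis' localized decorrelation stage.

    history : (k, b, b) array of co-located patches from the k previous frames.
    """
    local_pred = history.mean(axis=0)
    return curr_block - local_pred             # decorrelated block passed on to transform coding
```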

    PEA265: Perceptual Assessment of Video Compression Artifacts

    The most widely used video encoders share a common hybrid coding framework that includes block-based motion estimation/compensation and block-based transform coding. Despite their high coding efficiency, the encoded videos often exhibit visually annoying artifacts, denoted as Perceivable Encoding Artifacts (PEAs), which significantly degrade the visual Quality-of-Experience (QoE) of end users. To monitor and improve visual QoE, it is crucial to develop subjective and objective measures that can identify and quantify various types of PEAs. In this work, we make the first attempt to build a large-scale subject-labelled database composed of H.265/HEVC compressed videos containing various PEAs. The database, namely the PEA265 database, includes four types of spatial PEAs (i.e. blurring, blocking, ringing and color bleeding) and two types of temporal PEAs (i.e. flickering and floating). Each type contains at least 60,000 image or video patches with positive and negative labels. To objectively identify these PEAs, we train Convolutional Neural Networks (CNNs) using the PEA265 database. It appears that the state-of-the-art ResNeXt is capable of identifying each type of PEA with high accuracy. Furthermore, we define PEA pattern and PEA intensity measures to quantify the PEA levels of compressed video sequences. We believe that the PEA265 database and our findings will benefit the future development of video quality assessment methods and perceptually motivated video encoders.
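    The abstract does not define the PEA intensity measure; one plausible, purely illustrative reading, the fraction of a frame's patches flagged by per-type detectors, is sketched below. The artifact type names follow the abstract, while the thresholding and score layout are assumptions.

```python
import numpy as np

PEA_TYPES = ["blurring", "blocking", "ringing", "color_bleeding", "flickering", "floating"]

def pea_intensity(patch_scores, threshold=0.5):
    """Illustrative per-frame intensity: fraction of patches flagged for each PEA type.

    patch_scores : (n_patches, 6) array of per-type detector probabilities, e.g.
                   produced by six binary CNN classifiers run over frame patches.
    Returns a dict mapping PEA type to the fraction of patches exceeding threshold.
    """
    flags = patch_scores > threshold
    return dict(zip(PEA_TYPES, flags.mean(axis=0)))

scores = np.random.default_rng(0).random((120, len(PEA_TYPES)))   # dummy detector outputs
print(pea_intensity(scores))
```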

    Objective Classes for Micro-Facial Expression Recognition

    Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP, HOOF and HOG 3D feature descriptors. The experiments are evaluated on two benchmark FACS-coded datasets: CASME II and SAMM. The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
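    A hedged sketch of the evaluation pipeline implied by the abstract, training a classifier on precomputed LBP-TOP / HOG 3D descriptors with AU-derived labels and evaluating subject-independently, is given below; the SVM choice, leave-one-subject-out protocol and all data shapes are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# X: precomputed LBP-TOP / HOG 3D descriptors per clip (assumed available),
# y: AU-derived class labels (the mapping of Action Units to the 5 proposed
#    classes is not specified here), groups: subject IDs for subject-independent CV.
rng = np.random.default_rng(0)
X = rng.standard_normal((150, 512))          # placeholder descriptor vectors
y = rng.integers(0, 5, 150)                  # placeholder labels for 5 classes
groups = rng.integers(0, 26, 150)            # placeholder subject IDs

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print("mean leave-one-subject-out accuracy:", scores.mean())
```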