Hierarchical video summarisation in reference frame subspace
In this paper, a hierarchical video structure summarisation approach using Laplacian Eigenmaps is proposed, in which a small set of reference frames is selected from the video sequence to form a reference subspace for measuring the dissimilarity between two arbitrary frames. In the proposed summarisation scheme, shot-level key frames are first detected from the continuity of inter-frame dissimilarity, and sub-shot-level and scene-level representative frames are then summarised using k-means clustering. Experiments are carried out on both test videos and movies, and the results show that, in comparison with a similar approach using latent semantic analysis, the proposed approach using Laplacian Eigenmaps achieves a better recall rate in keyframe detection and gives an efficient hierarchical summarisation at the sub-shot, shot and scene levels.
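The two-stage idea in this abstract -- embed each frame by its dissimilarity to a small reference set, then cluster to pick representatives -- can be sketched roughly as follows. This is an illustrative stand-in only: it uses plain Euclidean distances and a hand-rolled k-means with deterministic farthest-point seeding, not the paper's Laplacian Eigenmap construction, and all function names are invented for the sketch.

```python
import numpy as np

def reference_dissimilarity(frames, ref_idx):
    """Describe each frame by its distances to a small set of reference
    frames, so frames are compared in that low-dimensional subspace."""
    refs = frames[ref_idx]                      # (r, d) reference frames
    return np.linalg.norm(frames[:, None, :] - refs[None, :, :], axis=2)

def kmeans_keyframes(embed, k, iters=20):
    """Plain k-means over the embedded frames; the frame nearest each
    final centroid is returned as a representative keyframe index."""
    # farthest-point initialisation keeps the sketch deterministic
    centers = [embed[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(embed - c, axis=1) for c in centers], axis=0)
        centers.append(embed[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(embed[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            pts = embed[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return sorted({int(np.argmin(np.linalg.norm(embed - c, axis=1))) for c in centers})
```

On two well-separated groups of frames, the returned indices land one in each group, which is the behaviour a shot/sub-shot summary relies on.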
Human Motion Capture Data Tailored Transform Coding
Human motion capture (mocap) is a widely used technique for digitalizing
human movements. With growing usage, compressing mocap data has received
increasing attention, since compact data size enables efficient storage and
transmission. Our analysis shows that mocap data have some unique
characteristics that distinguish them from images and videos. Therefore,
directly borrowing image or video compression techniques, such as discrete
cosine transform, does not work well. In this paper, we propose a novel
mocap-tailored transform coding algorithm that takes advantage of these
features. Our algorithm segments the input mocap sequences into clips, which
are represented in 2D matrices. Then it computes a set of data-dependent
orthogonal bases to transform the matrices to frequency domain, in which the
transform coefficients have significantly less dependency. Finally, the
compression is obtained by entropy coding of the quantized coefficients and the
bases. Our method has low computational cost and can be easily extended to
compress mocap databases. It also requires neither training nor complicated
parameter setting. Experimental results demonstrate that the proposed scheme
significantly outperforms state-of-the-art algorithms in terms of compression
performance and speed.
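The pipeline the abstract describes -- clips as 2D matrices, data-dependent orthogonal bases, quantised transform coefficients -- can be approximated with an SVD per clip. A hedged sketch (the entropy-coding stage is omitted, and the `rank` and `step` parameters are invented for illustration, not taken from the paper):

```python
import numpy as np

def encode_clip(clip, rank, step=0.001):
    """Transform a mocap clip (frames x channels matrix) with a
    data-dependent orthogonal basis from its own SVD, then keep and
    uniformly quantise only the top-`rank` coefficients."""
    u, s, vt = np.linalg.svd(clip, full_matrices=False)
    basis = vt[:rank]                            # orthonormal rows
    coeff = clip @ basis.T                       # frames x rank
    q = np.round(coeff / step).astype(np.int32)  # uniform quantisation
    return q, basis, step

def decode_clip(q, basis, step):
    """Dequantise and apply the inverse (transpose) transform."""
    return (q * step) @ basis
```

Because the basis is computed from the clip itself, low-rank motion reconstructs almost exactly; in a real codec the quantised coefficients and the basis would then be entropy coded, as the abstract states.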
3D head motion, point-of-regard and encoded gaze fixations in real scenes: next-generation portable video-based monocular eye tracking
Portable eye trackers allow us to see where a subject is looking when performing a natural task with free head and body movements. These eye trackers include headgear containing a camera directed at one of the subject's eyes (the eye camera) and another camera (the scene camera) positioned above the same eye directed along the subject's line-of-sight. The output video includes the scene video with a crosshair depicting where the subject is looking -- the point-of-regard (POR) -- that is updated for each frame. This video may be the desired final result or it may be further analyzed to obtain more specific information about the subject's visual strategies. A list of the calculated POR positions in the scene video can also be analyzed. The goals of this project are to expand the information that we can obtain from a portable video-based monocular eye tracker and to minimize the amount of user interaction required to obtain and analyze this information. This work includes offline processing of both the eye and scene videos to obtain robust 2D PORs in scene video frames, identify gaze fixations from these PORs, obtain 3D head motion, and ray trace fixations through volumes-of-interest (VOIs) to determine what is being fixated, when and where (the 3D POR). To avoid the redundancy of ray tracing a 2D POR in every video frame and to group these POR data meaningfully, a fixation-identification algorithm is employed to simplify the long list of 2D POR data into gaze fixations. In order to ray trace these fixations, the 3D motion -- position and orientation over time -- of the scene camera is computed. This camera motion is determined via an iterative structure and motion recovery algorithm that requires a calibrated camera and knowledge of the 3D locations of at least four points in the scene (which can be selected from premeasured VOI vertices). The subject's 3D head motion is obtained directly from this camera motion. 
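The fixation-identification step that condenses the long list of 2D PORs into fixations is commonly done with a dispersion threshold. The abstract does not name the exact algorithm, so the I-DT-style sketch below, with its thresholds, is an assumption for illustration:

```python
def identify_fixations(pors, max_dispersion, min_length):
    """Dispersion-threshold identification (I-DT): grow a window of
    consecutive 2D POR samples while its bounding-box dispersion
    (x-extent + y-extent) stays under the threshold; windows with at
    least `min_length` samples become fixations (start, end, centroid)."""
    fixations, start, n = [], 0, len(pors)
    while start < n:
        end = start + 1
        while end < n:
            xs = [p[0] for p in pors[start:end + 1]]
            ys = [p[1] for p in pors[start:end + 1]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            end += 1
        if end - start >= min_length:
            cx = sum(p[0] for p in pors[start:end]) / (end - start)
            cy = sum(p[1] for p in pors[start:end]) / (end - start)
            fixations.append((start, end, (cx, cy)))
        start = end
    return fixations
```

Each fixation's centroid is then the single 2D POR that gets ray traced, which is exactly the redundancy reduction the text motivates.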
For the final stage of the algorithm, the 3D locations and dimensions of VOIs in the scene are required. This VOI information in world coordinates is converted to camera coordinates for ray tracing. A representative 2D POR position for each fixation is converted from image coordinates to the same camera coordinate system. Then, a ray is traced from the camera center through this position to determine which (if any) VOI is being fixated and where it is being fixated -- the 3D POR in the world. Results are presented for various real scenes. Novel visualizations of portable eye tracker data created using the results of our algorithm are also presented.
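The final ray-tracing step -- casting a ray from the camera centre through the fixation's position and testing it against VOIs -- can be sketched with a standard slab test, assuming (as a simplification) axis-aligned box VOIs already expressed in the ray's coordinate frame. The function names are illustrative, not from the thesis:

```python
def ray_voi_t(origin, direction, box_min, box_max):
    """Slab test: return the ray parameter t of the nearest entry into
    the axis-aligned box, or None if the ray misses it."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if not lo <= o <= hi:
                return None              # parallel to and outside this slab
        else:
            t1, t2 = sorted(((lo - o) / d, (hi - o) / d))
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:
                return None              # slab intervals do not overlap
    return t_near

def fixated_voi(origin, direction, vois):
    """vois: list of (name, box_min, box_max). Returns (name, 3D POR)
    for the nearest VOI the gaze ray enters, else (None, None)."""
    best = None
    for name, lo, hi in vois:
        t = ray_voi_t(origin, direction, lo, hi)
        if t is not None and (best is None or t < best[1]):
            best = (name, t)
    if best is None:
        return None, None
    name, t = best
    return name, tuple(o + t * d for o, d in zip(origin, direction))
```

The returned hit point is the "3D POR in the world" the abstract refers to: which VOI is fixated, and where on it.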
Deep Features and Clustering Based Keyframes Selection with Security
The digital world is developing more quickly than ever. Multimedia processing and distribution, however, have become vulnerable due to the enormous quantity and significance of vital information. Therefore, extensive technologies and algorithms are required for the safe transmission of messages, images and video files. This paper proposes a secure framework based on a tight integration of video summarization and image encryption. The proposed cryptosystem framework comprises three parts. The informative frames are first extracted using an efficient and lightweight technique that makes use of the color histogram-clustering (RGB-HSV) approach's processing capabilities. Each frame of a video is represented by deep features based on an enhanced pre-trained Inception-v3 network. After that, the summary is obtained using the K-means optimal clustering algorithm. The representative keyframes are then extracted using the clusters' highest-entropy nodes. Experimental validation on two well-known standard datasets demonstrates the proposed method's superiority over numerous state-of-the-art approaches. Finally, the proposed framework performs efficient image encryption and decryption by employing a general linear group function GLn(F). The analysis and testing outcomes prove the superiority of the proposed adaptive RSA.
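The "highest-entropy node per cluster" selection can be sketched as follows, using Shannon entropy over each frame's raw intensity values as the informativeness score. The real system scores deep Inception-v3 features rather than pixels, so this simplified, pixel-level version is illustrative only:

```python
import math
from collections import Counter

def frame_entropy(frame):
    """Shannon entropy (bits) of a frame's value histogram, as a proxy
    for how much visual information the frame carries."""
    counts = Counter(frame)
    n = len(frame)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def keyframes_by_entropy(frames, labels):
    """Within each cluster of frames (given by `labels`), keep the index
    of the highest-entropy frame as that cluster's keyframe."""
    best = {}
    for idx, (frame, lab) in enumerate(zip(frames, labels)):
        h = frame_entropy(frame)
        if lab not in best or h > best[lab][0]:
            best[lab] = (h, idx)
    return sorted(idx for _, idx in best.values())
```

A flat, uniform frame scores zero entropy and is skipped in favour of a varied one from the same cluster, which matches the intuition behind picking "informative" keyframes.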
TemporalStereo: Efficient Spatial-Temporal Stereo Matching Network
We present TemporalStereo, a coarse-to-fine based online stereo matching
network which is highly efficient, and able to effectively exploit the past
geometry and context information to boost the matching accuracy. Our network
leverages sparse cost volumes and proves effective when a single stereo
pair is given; moreover, its ability to use spatio-temporal information
across frames allows TemporalStereo to alleviate problems such as occlusions
and reflective regions while remaining highly efficient on stereo sequences.
Notably, our model, trained once on stereo videos, can run in both
single-pair and temporal modes seamlessly. Experiments show that our
network relying on camera motion is even robust to dynamic objects when running
on videos. We validate TemporalStereo through extensive experiments on
synthetic (SceneFlow, TartanAir) and real (KITTI 2012, KITTI 2015) datasets.
Detailed results show that our model achieves state-of-the-art performance on
all of these datasets. Code is available at
\url{https://github.com/youmi-zym/TemporalStereo.git}
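The cost volume at the heart of stereo matching -- a per-disparity matching cost between the rectified left and right views -- can be illustrated with a brute-force absolute-difference version. This is a generic sketch of the data structure only, not TemporalStereo's sparse, coarse-to-fine, learned volume:

```python
import numpy as np

def cost_volume(left, right, max_disp):
    """Dense matching-cost volume for a rectified grayscale pair:
    vol[d, y, x] = |left[y, x] - right[y, x - d]|, inf where x < d."""
    h, w = left.shape
    vol = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        if d == 0:
            vol[0] = np.abs(left - right)
        else:
            vol[d, :, d:] = np.abs(left[:, d:] - right[:, :-d])
    return vol

def disparity(left, right, max_disp):
    """Winner-takes-all disparity map: argmin over the cost axis."""
    return np.argmin(cost_volume(left, right, max_disp), axis=0)
```

A network like the one described replaces the absolute difference with learned feature correlation and regularises the volume, but the disparity-indexed layout is the same.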
Direct Optimisation of λ for HDR Content Adaptive Transcoding in AV1
Since the adoption of VP9 by Netflix in 2016, royalty-free coding standards
have continued to gain prominence through the activities of the AOMedia
consortium.
AV1, the latest open source standard, is now widely supported. In the early
years after standardisation, HDR video tended to be underserved in open
source encoders for a variety of reasons, including the relatively small
amount of true
HDR content being broadcast and the challenges in RD optimisation with that
material. AV1 codec optimisation has been ongoing since 2020 including
consideration of the computational load. In this paper, we explore the idea of
direct optimisation of the Lagrangian parameter used in the rate
control of the encoders to estimate the optimal Rate-Distortion trade-off
achievable for a High Dynamic Range signalled video clip. We show that by
adjusting the Lagrange multiplier in the RD optimisation process on a
frame-hierarchy basis, we are able to increase the Bjontegaard difference rate
gains by more than 3.98% on average without visually affecting the
quality. Comment: Accepted manuscript, SPIE 2022: Applications of Digital
Image Processing XLV.
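The Lagrangian rate-distortion decision the paper tunes can be illustrated in a few lines: among candidate encodes of a frame, the encoder picks the one minimising J = D + λR, and changing λ (per frame-hierarchy level, in the paper's scheme) moves the pick along the rate-distortion curve. The candidate structure and its "D"/"R" keys are invented for this sketch:

```python
def best_encode(candidates, lam):
    """Pick the candidate encode minimising the Lagrangian cost
    J = D + lam * R, with D the distortion and R the rate."""
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])

def sweep_lambda(candidates, lams):
    """Trace the operating points chosen as lambda varies: a small
    lambda favours low distortion (high rate); a large lambda favours
    low rate (higher distortion)."""
    return [best_encode(candidates, lam) for lam in lams]
```

Direct optimisation of λ, as the title suggests, amounts to searching this one parameter so the chosen operating points best match the desired trade-off for the HDR material.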
A Survey on Deep Learning Technique for Video Segmentation
Video segmentation -- partitioning video frames into multiple segments or
objects -- plays a critical role in a broad range of practical applications,
from enhancing visual effects in movies, to understanding scenes in autonomous
driving, to creating virtual backgrounds in video conferencing. Recently, with
the renaissance of connectionism in computer vision, there has been an influx
of deep learning based approaches for video segmentation that have delivered
compelling performance. In this survey, we comprehensively review two basic
lines of research -- generic object segmentation (of unknown categories) in
videos, and video semantic segmentation -- by introducing their respective task
settings, background concepts, perceived need, development history, and main
challenges. We also offer a detailed overview of representative literature on
both methods and datasets. We further benchmark the reviewed methods on several
well-known datasets. Finally, we point out open issues in this field, and
suggest opportunities for further research. We also provide a public website to
continuously track developments in this fast advancing field:
https://github.com/tfzhou/VS-Survey. Comment: Accepted by TPAMI.