Saliency-aware Stereoscopic Video Retargeting
Stereo video retargeting aims to resize a stereo video to a desired aspect ratio.
The quality of retargeted videos depends on the stereo video's spatial,
temporal, and disparity coherence, all of which can be degraded
by the retargeting process. Due to the lack of a publicly accessible annotated
dataset, there is little research on deep learning-based methods for stereo
video retargeting. This paper proposes an unsupervised deep learning-based
stereo video retargeting network. Our model first detects the salient objects
and then shifts and warps all objects so as to minimize distortion of the
salient parts of the stereo frames. We use 1D convolution to shift the
salient objects and design a stereo video Transformer to assist the retargeting
process. To train the network, we use the parallax attention mechanism to fuse
the left and right views and feed the retargeted frames to a reconstruction
module that maps the retargeted frames back to the input frames. Therefore, the
network is trained in an unsupervised manner. Extensive qualitative and
quantitative experiments and ablation studies on KITTI stereo 2012 and 2015
datasets demonstrate the effectiveness of the proposed method compared with existing
state-of-the-art methods. The code is available at
https://github.com/z65451/SVR/.
Comment: 8 pages excluding references. CVPRW conference.
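The core idea of retargeting while protecting salient regions can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's learned shifting/warping network: it shrinks the frame width by discarding the least-salient columns so that salient content survives the resize. The function name and the column-dropping strategy are assumptions for illustration only.

```python
def saliency_aware_retarget(frame, saliency, target_w):
    """Shrink a frame's width to target_w by dropping the least-salient
    columns (a crude stand-in for learned saliency-aware warping).

    frame:    list of rows, each a list of pixel values
    saliency: same shape as frame, higher values = more salient
    """
    w = len(saliency[0])
    # Per-column saliency energy: sum of saliency down each column.
    energy = [sum(row[c] for row in saliency) for c in range(w)]
    # Keep the target_w most salient columns, preserving left-to-right order.
    keep = sorted(sorted(range(w), key=lambda c: energy[c])[-target_w:])
    return [[row[c] for c in keep] for row in frame]
```

A real retargeter would warp smoothly instead of deleting columns outright, and would also enforce the temporal and disparity coherence the abstract emphasizes; this sketch only shows the saliency-weighting idea.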
Face Synthesis and Partial Face Recognition from Multiple Videos
Surveillance videos provide rich information for identifying people; however, they often contain partial facial images that make recognition of the person of interest difficult. Traditional partial face recognition methods use a database that contains only full-frontal faces, which reduces the performance of recognition models when partial face images are presented. In this study, we augmented a database of full-frontal face images by synthesizing two- and three-dimensional facial images, and we designed a method for partial face recognition from the augmented database. To synthesize the two-dimensional (2D) facial images, we divided the available video images into groups based on their similarity and chose a representative image from each group. We then fused each representative image with a full-frontal face image using scale-invariant feature transform (SIFT) flow and augmented the original database with the fused images. To design a partial face recognition algorithm, we evaluated the similarity between a set of video images from cameras and an image from the augmented database by counting the number of keypoints given by SIFT. Compared with competitive baselines, the proposed method achieves the highest face recognition rates in four out of six test cases on the widely used ChokePoint dataset, using most subjects (so-called subject group B) in the gallery, with recognition rates of approximately 22% to 72% across the test cases. The 2D face synthesis was found to outperform the three-dimensional (3D) face synthesis on a large subject group, possibly because 2D reconstruction retains important facial features. The proposed augmentation and partial face recognition methods are simple and improve the face recognition rate of traditional methods.
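The keypoint-counting decision rule described above can be sketched as follows. This is an illustrative assumption, not the paper's exact pipeline: it matches synthetic descriptor vectors with a nearest-neighbor search and Lowe's ratio test (a common SIFT matching heuristic), then picks the gallery identity with the most matches. The function names, the ratio threshold, and the use of plain Euclidean distance are all assumptions for the sketch.

```python
import math

def count_matches(probe_desc, gallery_desc, ratio=0.75):
    """Count probe descriptors whose nearest gallery descriptor passes
    Lowe's ratio test -- a stand-in for counting matched SIFT keypoints."""
    matches = 0
    for d in probe_desc:
        dists = sorted(math.dist(d, g) for g in gallery_desc)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            matches += 1
    return matches

def identify(probe_desc, gallery):
    """Pick the gallery identity with the most matched keypoints.
    gallery: dict mapping identity name -> list of descriptor vectors."""
    return max(gallery, key=lambda name: count_matches(probe_desc, gallery[name]))
```

In practice the descriptors would come from a SIFT detector run on the video frames and the augmented gallery images; here they are plain vectors so the counting-and-argmax logic stands alone.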