Estimating position & velocity in 3D space from monocular video sequences using a deep neural network
This work describes a regression model based on Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks for tracking objects from monocular video sequences. The target application being pursued is Vision-Based Sensor Substitution (VBSS). In particular, the tool-tip position and velocity in 3D space of a pair of surgical robotic instruments (SRI) are estimated for three surgical tasks, namely suturing, needle-passing and knot-tying. The CNN extracts features from individual video frames, and the LSTM network processes these features over time and continuously outputs a 12-dimensional vector with the estimated position and velocity values. A series of analyses and experiments is carried out on the regression model to reveal the benefits and drawbacks of different design choices. First, the impact of the loss function is investigated by adequately weighting the Root Mean Squared Error (RMSE) and the Gradient Difference Loss (GDL), using the VGG16 neural network for feature extraction. Second, this analysis is extended to a Residual Neural Network designed for feature extraction, which has fewer parameters than the VGG16 model, resulting in a reduction of ~96.44% in the neural network size. Third, the impact of the number of time steps used to model the temporal information processed by the LSTM network is investigated. Finally, the capability of the regression model to generalize to data from "unseen" surgical tasks (unavailable in the training set) is evaluated. The aforesaid analyses are experimentally validated on the public JIGSAWS dataset. These analyses provide some guidelines for the design of a regression model in the context of VBSS, specifically when the objective is to estimate a set of 1D time-series signals from video sequences.
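The loss-function analysis above weights an RMSE term against a Gradient Difference Loss term. As a minimal numpy sketch of that idea (the weights `alpha`/`beta` and helper names are illustrative, not the paper's values), the GDL compares the temporal gradients of the predicted and ground-truth signals:

```python
import numpy as np

def rmse(pred, target):
    # Root Mean Squared Error over all time steps and dimensions.
    return np.sqrt(np.mean((pred - target) ** 2))

def gdl(pred, target):
    # Gradient Difference Loss: penalizes mismatch between the
    # frame-to-frame (temporal) gradients of prediction and target.
    dp = np.diff(pred, axis=0)    # temporal gradient of prediction
    dt = np.diff(target, axis=0)  # temporal gradient of target
    return np.mean(np.abs(np.abs(dp) - np.abs(dt)))

def combined_loss(pred, target, alpha=1.0, beta=0.5):
    # Weighted combination of RMSE and GDL; alpha/beta are
    # illustrative weights, not the values studied in the paper.
    return alpha * rmse(pred, target) + beta * gdl(pred, target)

# pred/target: T time steps of 12-dimensional position/velocity vectors.
T = 30
target = np.cumsum(np.random.randn(T, 12) * 0.01, axis=0)
pred = target + np.random.randn(T, 12) * 0.02
loss = combined_loss(pred, target)
```

The GDL term discourages predictions that match the signal values but smear out their temporal dynamics, which matters when the 12-D output includes velocities.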
Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification
Despite the steady progress in video analysis led by the adoption of
convolutional neural networks (CNNs), the relative improvement has been less
drastic than that in 2D static image classification. Three main challenges exist,
including spatial (image) feature representation, temporal information
representation, and model/computation complexity. It was recently shown by
Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained
on ImageNet, could be a promising way for spatial and temporal representation
learning. However, as for model/computation complexity, 3D CNNs are much more
expensive than 2D CNNs and prone to overfit. We seek a balance between speed
and accuracy by building an effective and efficient video classification system
through systematic exploration of critical network design choices. In
particular, we show that it is possible to replace many of the 3D convolutions
by low-cost 2D convolutions. Rather surprisingly, the best result (in both speed
and accuracy) is achieved when replacing the 3D convolutions at the bottom of
the network, suggesting that temporal representation learning on high-level
semantic features is more useful. Our conclusion generalizes to datasets with
very different properties. When combined with several other cost-effective
designs including separable spatial/temporal convolution and feature gating,
our system results in an effective video classification system that
produces very competitive results on several action classification benchmarks
(Kinetics, Something-something, UCF101 and HMDB), as well as two action
detection (localization) benchmarks (JHMDB and UCF101-24).
Comment: ECCV 2018 camera ready
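One of the cost-effective designs mentioned above is separable spatial/temporal convolution: factorizing a t x k x k 3D kernel into a 1 x k x k spatial convolution followed by a t x 1 x 1 temporal one. A simple parameter-count comparison (the intermediate channel width is taken as `c_out` here for simplicity; real designs often tune it) shows where the savings come from:

```python
def conv3d_params(c_in, c_out, t, k):
    # Full 3D convolution: one t x k x k kernel per (in, out) channel pair.
    return c_in * c_out * t * k * k

def separable_params(c_in, c_out, t, k):
    # Separable factorization: a 1 x k x k spatial convolution followed
    # by a t x 1 x 1 temporal convolution. Using c_out as the
    # intermediate width is a simplifying assumption.
    spatial = c_in * c_out * k * k   # 1 x k x k
    temporal = c_out * c_out * t     # t x 1 x 1
    return spatial + temporal

full = conv3d_params(256, 256, 3, 3)    # 3x3x3 3D convolution
sep = separable_params(256, 256, 3, 3)  # 1x3x3 followed by 3x1x1
savings = 1 - sep / full                # fraction of parameters removed
```

For a 3x3x3 kernel with equal channel widths this removes 5/9 of the parameters, which is one reason replacing 3D convolutions with cheaper factorized ones helps the speed/accuracy trade-off.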
LiveCap: Real-time Human Performance Capture from Monocular Video
We present the first real-time human performance capture approach that
reconstructs dense, space-time coherent deforming geometry of entire humans in
general everyday clothing from just a single RGB video. We propose a novel
two-stage analysis-by-synthesis optimization whose formulation and
implementation are designed for high performance. In the first stage, a skinned
template model is jointly fitted to background subtracted input video, 2D and
3D skeleton joint positions found using a deep neural network, and a set of
sparse facial landmark detections. In the second stage, dense non-rigid 3D
deformations of skin and even loose apparel are captured based on a novel
real-time capable algorithm for non-rigid tracking using dense photometric and
silhouette constraints. Our novel energy formulation leverages automatically
identified material regions on the template to model the differing non-rigid
deformation behavior of skin and apparel. The two resulting per-frame
non-linear optimization problems are solved with specially tailored
data-parallel Gauss-Newton solvers. In order to achieve real-time performance
of over 25Hz, we design a pipelined parallel architecture using the CPU and two
commodity GPUs. Our method is the first real-time monocular approach for
full-body performance capture. Our method yields comparable accuracy with
off-line performance capture techniques, while being orders of magnitude
faster.
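Both optimization stages above are non-linear least-squares problems solved with Gauss-Newton. As a much-simplified illustration of the Gauss-Newton idea (a generic numpy version on a toy curve-fitting problem, not the paper's data-parallel GPU solver), each iteration linearizes the residual and solves the normal equations:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=30):
    # Plain Gauss-Newton: at each step solve the normal equations
    # (J^T J) dx = -J^T r for the parameter update dx.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# Toy problem: fit y = a * exp(b * t) to noiseless data.
t = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)

def residual(x):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x):
    a, b = x
    e = np.exp(b * t)
    return np.stack([e, a * t * e], axis=1)  # dr/da, dr/db

x_hat = gauss_newton(residual, jacobian, x0=[1.0, 0.0])
```

In the paper's setting the residuals come from photometric and silhouette constraints, and the linear system is solved in a data-parallel fashion on the GPU to reach over 25 Hz.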
UV-GAN: Adversarial Facial UV Map Completion for Pose-invariant Face Recognition
Recently proposed robust 3D face alignment methods establish either dense or
sparse correspondence between a 3D face model and a 2D facial image. The use of
these methods presents new challenges as well as opportunities for facial
texture analysis. In particular, by sampling the image using the fitted model,
a facial UV map can be created. Unfortunately, due to self-occlusion, such a UV map
is always incomplete. In this paper, we propose a framework for training Deep
Convolutional Neural Network (DCNN) to complete the facial UV map extracted
from in-the-wild images. To this end, we first gather complete UV maps by
fitting a 3D Morphable Model (3DMM) to various multiview image and video
datasets, as well as leveraging on a new 3D dataset with over 3,000 identities.
Second, we devise a meticulously designed architecture that combines local and
global adversarial DCNNs to learn an identity-preserving facial UV completion
model. We demonstrate that by attaching the completed UV to the fitted mesh and
generating instances of arbitrary poses, we can increase pose variations for
training deep face recognition/verification models, and minimise pose
discrepancy during testing, which lead to better performance. Experiments on
both controlled and in-the-wild UV datasets prove the effectiveness of our
adversarial UV completion model. We achieve state-of-the-art verification
accuracy under the CFP frontal-profile protocol only by combining
pose augmentation during training with pose discrepancy reduction during
testing. We will release the first in-the-wild UV dataset (which we refer to
as WildUV), comprising complete facial UV maps from 1,892 identities, for
research purposes.
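The architecture above trains the completion network against both a local and a global adversarial DCNN while preserving identity. A hedged sketch of how such a composite generator objective might be assembled (the `lam_id` weight, the 0/1 labelling, and the exact terms are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def bce(scores, label):
    # Binary cross-entropy of discriminator scores against a 0/1 label.
    scores = np.clip(scores, 1e-7, 1 - 1e-7)
    return -np.mean(label * np.log(scores) + (1 - label) * np.log(1 - scores))

def generator_loss(d_global_fake, d_local_fake, id_dist, lam_id=10.0):
    # The completion network faces two adversaries: a global
    # discriminator scoring the whole completed UV map and a local one
    # scoring the filled-in region, plus an identity-preservation term
    # (e.g., distance between face embeddings of input and completion).
    # lam_id is an illustrative weight.
    adv = bce(d_global_fake, 1.0) + bce(d_local_fake, 1.0)
    return adv + lam_id * id_dist

good = generator_loss(np.array([0.9]), np.array([0.9]), 0.0)
bad = generator_loss(np.array([0.1]), np.array([0.1]), 0.5)
```

The local discriminator keeps the inpainted region sharp, the global one keeps the whole UV map coherent, and the identity term is what makes the completion usable for recognition training.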
Application of the self-organising map to trajectory classification
This paper presents an approach to the problem of automatically classifying events detected by video surveillance systems; specifically, of detecting unusual or suspicious movements. Approaches to this problem typically involve building complex 3D-models in real-world coordinates
to provide trajectory information for the classifier. In this paper we show that analysis of trajectories may be carried out in a model-free fashion, using self-organising
feature map neural networks to learn the characteristics of normal trajectories, and to detect novel ones. Trajectories are represented using positional and first- and second-order motion information, with moving-average smoothing. This allows novelty detection to be applied on a point-by-point basis in real time, and permits both instantaneous motion and whole-trajectory motion to be subjected to novelty detection.
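The model-free scheme above can be sketched as follows: train a self-organising map on feature vectors from normal trajectories, then flag a point as novel when its quantisation error (distance to the nearest learned prototype) exceeds a threshold. This is a minimal 1-D SOM in numpy under assumed hyperparameters, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=8, iters=2000, lr0=0.5, sigma0=3.0):
    # 1-D self-organising map: `grid` prototype vectors arranged on a
    # line; learning rate and neighbourhood width decay over training.
    w = data[rng.integers(0, len(data), grid)].astype(float)
    idx = np.arange(grid)
    for i in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
        lr = lr0 * (1 - i / iters)
        sigma = max(sigma0 * (1 - i / iters), 0.5)
        h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    return w

def novelty(w, x):
    # Quantisation error: distance to the nearest prototype. Points far
    # from all "normal" prototypes are flagged as novel.
    return np.min(np.linalg.norm(w - x, axis=1))

# "Normal" trajectory points (e.g., position + motion features, 4-D here).
normal = rng.normal(0.0, 0.3, size=(500, 4))
w = train_som(normal)
thresh = np.percentile([novelty(w, x) for x in normal], 99)
```

Because the check is a single nearest-prototype lookup per point, it supports the real-time, point-by-point operation the abstract describes.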
Using self-supervised algorithms for video analysis and scene detection
With the increasing amount of available audiovisual content, well-ordered and effective management of video is desired; therefore, automatic and accurate solutions for video indexing and retrieval are needed. Self-supervised learning algorithms with 3D convolutional neural networks are a promising solution for these tasks, thanks to their independence from human annotations and their suitability for identifying spatio-temporal features. This work presents a self-supervised algorithm for the analysis of video shots, accomplished by a two-stage implementation: 1) an algorithm that generates pseudo-labels for 20-frame samples with different automatically generated shot transitions (hard cuts/crop cuts, dissolves, fades in/out, wipes), and 2) a fully convolutional 3D network trained to an overall accuracy greater than 97% on the testing set. The implemented model is based on [5], improving the detection of long smooth transitions through a larger temporal context. The detected transitions occur centered on the 10th and 11th frames of a 20-frame input window.
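The first stage above generates labelled samples without human annotation by synthesizing transitions. A sketch of one such generator, a dissolve (linear cross-fade) centred on the 10th/11th frames of a 20-frame window (function name, blending schedule, and `span` are illustrative assumptions):

```python
import numpy as np

def make_dissolve_sample(clip_a, clip_b, total=20, center=(9, 10), span=8):
    # Builds a 20-frame training sample containing a dissolve between
    # two shots, centred on the 10th and 11th frames (indices 9 and 10).
    # `span` frames blend linearly from clip_a into clip_b.
    start = center[0] - span // 2 + 1   # first blended frame index
    frames = []
    for i in range(total):
        if i < start:
            frames.append(clip_a[i])
        elif i >= start + span:
            frames.append(clip_b[i])
        else:
            alpha = (i - start + 1) / (span + 1)  # ramps 0 -> 1 across the span
            frames.append((1 - alpha) * clip_a[i] + alpha * clip_b[i])
    label = "dissolve"  # pseudo-label assigned automatically
    return np.stack(frames), label

clip_a = np.zeros((20, 4, 4))  # toy "shot A" frames
clip_b = np.ones((20, 4, 4))   # toy "shot B" frames
sample, label = make_dissolve_sample(clip_a, clip_b)
```

Hard cuts, fades, and wipes can be produced the same way with different per-frame blending masks, giving the 3D network an unlimited supply of self-labelled transitions.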
Understanding User Behavior in Volumetric Video Watching: Dataset, Analysis and Prediction
Volumetric video emerges as a new attractive video paradigm in recent years
since it provides an immersive and interactive 3D viewing experience with six
degrees of freedom (DoF). Unlike traditional 2D or panoramic videos, volumetric
videos require dense point clouds, voxels, meshes, or huge neural models to
depict volumetric scenes, which results in a prohibitively high bandwidth
burden for video delivery. Users' behavior analysis, especially the viewport
and gaze analysis, then plays a significant role in prioritizing the content
streaming within users' viewport and degrading the remaining content to
maximize user QoE with limited bandwidth. Although understanding user behavior
is crucial, to the best of our knowledge, there are no available 3D
volumetric video viewing datasets containing fine-grained user interactivity
features, not to mention further analysis and behavior prediction. In this
paper, we release, for the first time, a volumetric video viewing behavior
dataset, with a large scale, multiple dimensions, and diverse conditions. We
conduct an in-depth analysis to understand user behaviors when viewing
volumetric videos. Interesting findings on user viewport, gaze, and motion
preference related to different videos and users are revealed. We finally
design a transformer-based viewport prediction model that fuses the features of
both gaze and motion, which is able to achieve high accuracy at various
conditions. Our prediction model is expected to further benefit volumetric
video streaming optimization. Our dataset, along with the corresponding
visualization tools, is accessible at
https://cuhksz-inml.github.io/user-behavior-in-vv-watching/
Comment: Accepted by ACM MM'2
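The prediction model above fuses gaze and motion features with a transformer. As a hedged sketch of one way such fusion can work (a single cross-attention step in numpy; the function names and the concatenation scheme are illustrative, not the paper's architecture), gaze features attend over motion features and the attended summary is appended to each gaze step:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_attention(gaze_feats, motion_feats):
    # Cross-attention fusion: each gaze time step queries the motion
    # sequence; the attended motion context is concatenated onto the
    # gaze features, producing a fused sequence a viewport predictor
    # could consume.
    d = gaze_feats.shape[-1]
    scores = gaze_feats @ motion_feats.T / np.sqrt(d)  # (T_gaze, T_motion)
    attn = softmax(scores, axis=-1)                    # each row sums to 1
    context = attn @ motion_feats                      # motion summary per gaze step
    return np.concatenate([gaze_feats, context], axis=-1)

T, d = 16, 32
gaze = np.random.randn(T, d)
motion = np.random.randn(T, d)
fused = fuse_attention(gaze, motion)  # shape (16, 64)
```

Fusing the two modalities lets the predictor exploit the finding that viewport motion and gaze are correlated but not identical.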