Future Frame Prediction for Anomaly Detection -- A New Baseline
Anomaly detection in videos refers to the identification of events that do
not conform to expected behavior. However, almost all existing methods tackle
the problem by minimizing the reconstruction errors of training data, which
cannot guarantee a larger reconstruction error for an abnormal event. In this
paper, we propose to tackle the anomaly detection problem within a video
prediction framework. To the best of our knowledge, this is the first work that
leverages the difference between a predicted future frame and its ground truth
to detect an abnormal event. To predict a future frame with higher quality for
normal events, in addition to the commonly used appearance (spatial) constraints
on intensity and gradient, we introduce a motion (temporal) constraint into
video prediction by enforcing the optical flow between predicted frames and
ground-truth frames to be consistent; this is the first work to introduce a
temporal constraint into the video prediction task. These spatial and motion
constraints facilitate future frame prediction for normal events and,
consequently, help identify abnormal events that do not conform to the
expectation. Extensive experiments on both a toy dataset and
some publicly available datasets validate the effectiveness of our method in
terms of robustness to the uncertainty in normal events and the sensitivity to
abnormal events.
Comment: IEEE Conference on Computer Vision and Pattern Recognition 2018
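As a rough illustration of the objective sketched in this abstract, the following PyTorch-style snippet combines an intensity (L2) term, an image-gradient term, and an optical-flow consistency term, plus a PSNR-based score of the kind commonly used at test time in this line of work. The loss weights are illustrative, and the flow fields are assumed to come from an external, pretrained optical-flow estimator; this is a minimal sketch, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def intensity_loss(pred, gt):
    # Appearance constraint: L2 distance between predicted and true frames.
    return F.mse_loss(pred, gt)

def gradient_loss(pred, gt):
    # Appearance constraint: L1 distance between spatial image gradients,
    # which discourages blurry predictions.
    def grads(x):
        dx = x[..., :, 1:] - x[..., :, :-1]
        dy = x[..., 1:, :] - x[..., :-1, :]
        return dx, dy
    pdx, pdy = grads(pred)
    gdx, gdy = grads(gt)
    return (pdx - gdx).abs().mean() + (pdy - gdy).abs().mean()

def flow_loss(flow_pred, flow_gt):
    # Motion constraint: the optical flow between (last input, predicted frame)
    # should match the flow between (last input, ground-truth frame). Both flow
    # fields are assumed to come from a pretrained flow estimator.
    return (flow_pred - flow_gt).abs().mean()

def prediction_loss(pred, gt, flow_pred, flow_gt,
                    w_int=1.0, w_grad=1.0, w_flow=2.0):
    # Illustrative weights; the actual trade-off would need tuning.
    return (w_int * intensity_loss(pred, gt)
            + w_grad * gradient_loss(pred, gt)
            + w_flow * flow_loss(flow_pred, flow_gt))

def psnr_score(pred, gt, eps=1e-8):
    # At test time, a low PSNR between the predicted and observed frame signals
    # a poorly predictable, hence likely abnormal, event (frames in [0, 1]).
    mse = F.mse_loss(pred, gt)
    return 10.0 * torch.log10(1.0 / (mse + eps))
```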
Minimax Iterative Dynamic Game: Application to Nonlinear Robot Control Tasks
Multistage decision policies provide useful control strategies in
high-dimensional state spaces, particularly in complex control tasks. However,
they exhibit weak performance guarantees in the presence of disturbance, model
mismatch, or model uncertainties. This brittleness limits their use in
high-risk scenarios. We show how to quantify the sensitivity of such
policies in order to characterize their robustness. We also propose a
minimax iterative dynamic game framework for designing robust policies in the
presence of disturbance/uncertainties. We test the quantification hypothesis on
a carefully designed deep neural network policy; we then pose a minimax
iterative dynamic game (iDG) framework for improving policy robustness in the
presence of adversarial disturbances. We evaluate our iDG framework on a
mecanum-wheeled robot, whose goal is to find a locally robust optimal multistage
policy that achieves a given goal-reaching task. The algorithm is simple and
adaptable for designing meta-learning/deep policies that are robust against
disturbances, model mismatch, or model uncertainties, up to a disturbance
bound. Videos of the results are on the author's website,
http://ecs.utdallas.edu/~opo140030/iros18/iros2018.html, while the code for
reproducing our experiments is on GitHub,
https://github.com/lakehanne/youbot/tree/rilqg. A self-contained environment
for reproducing our results is on Docker Hub,
https://hub.docker.com/r/lakehanne/youbotbuntu14/
Comment: 2018 International Conference on Intelligent Robots and Systems
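To make the minimax structure concrete, here is a minimal, self-contained sketch of an alternating min-max loop on a toy two-dimensional point-mass system with linear state-feedback policies. The dynamics, stage cost, gamma weight, and gradient-based best-response updates are illustrative assumptions only; the paper's iDG targets a mecanum-wheeled robot and an iterative game-theoretic solver rather than this toy setup.

```python
import torch

def dynamics(x, u, v, dt=0.1):
    # Hypothetical point-mass dynamics with additive disturbance v.
    return x + dt * (u + v)

def stage_cost(x, u, v, gamma=5.0):
    # The controller pays for state error and effort; the disturbance is
    # rewarded for degrading performance but penalized by a gamma-weighted
    # magnitude term, giving the usual minimax trade-off.
    return (x**2).sum() + 0.1 * (u**2).sum() - gamma * (v**2).sum()

def rollout_cost(K, L, x0, T=30):
    # Total cost of a T-step rollout under linear policies u = -Kx, v = Lx.
    x, J = x0, 0.0
    for _ in range(T):
        u = -K @ x          # minimizing player (controller)
        v = L @ x           # maximizing player (disturbance)
        J = J + stage_cost(x, u, v)
        x = dynamics(x, u, v)
    return J

# Alternating updates: the controller descends the cost, the disturbance
# ascends it, approximating the minimax solution of the dynamic game.
x0 = torch.tensor([1.0, -1.0])
K = torch.zeros(2, 2, requires_grad=True)
L = torch.zeros(2, 2, requires_grad=True)
opt_min = torch.optim.Adam([K], lr=0.05)
opt_max = torch.optim.Adam([L], lr=0.05)

for it in range(200):
    opt_min.zero_grad()
    rollout_cost(K, L, x0).backward()
    opt_min.step()

    opt_max.zero_grad()
    (-rollout_cost(K, L, x0)).backward()
    opt_max.step()
```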
MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
In this work we propose a novel model-based deep convolutional autoencoder
that addresses the highly challenging problem of reconstructing a 3D human face
from a single in-the-wild color image. To this end, we combine a convolutional
encoder network with an expert-designed generative model that serves as
decoder. The core innovation is our new differentiable parametric decoder that
encapsulates image formation analytically based on a generative model. Our
decoder takes as input a code vector with exactly defined semantic meaning that
encodes detailed face pose, shape, expression, skin reflectance and scene
illumination. Due to this new way of combining CNN-based with model-based face
reconstruction, the CNN-based encoder learns to extract semantically meaningful
parameters from a single monocular input image. For the first time, a CNN
encoder and an expert-designed generative model can be trained end-to-end in an
unsupervised manner, which renders training on very large (unlabeled) real
world data feasible. The obtained reconstructions compare favorably to current
state-of-the-art approaches in terms of quality and richness of representation.
Comment: International Conference on Computer Vision (ICCV) 2017 (Oral), 13 pages
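The following PyTorch sketch shows the overall arrangement described above: a CNN encoder regresses a flat code vector, the vector is split into semantically meaningful parts (pose, shape, expression, reflectance, illumination), and a differentiable model-based decoder would render an image from those parts so that a dense photometric loss can drive unsupervised end-to-end training. The code dimensions, encoder layers, and the decoder stub are hypothetical placeholders; the actual decoder analytically implements a parametric face model with rendering and illumination.

```python
import torch
import torch.nn as nn

# Hypothetical code-vector layout; the paper's actual dimensionalities differ.
CODE_DIMS = {"pose": 6, "shape": 80, "expression": 64,
             "reflectance": 80, "illumination": 27}
CODE_LEN = sum(CODE_DIMS.values())

class Encoder(nn.Module):
    """CNN that regresses the semantic code vector from a single RGB image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, CODE_LEN)

    def forward(self, img):
        return self.fc(self.features(img).flatten(1))

def split_code(code):
    # Slice the flat code vector into its semantic parts.
    parts, start = {}, 0
    for name, dim in CODE_DIMS.items():
        parts[name] = code[:, start:start + dim]
        start += dim
    return parts

def model_based_decoder(parts):
    # Placeholder for the differentiable parametric decoder: the real decoder
    # forms an image analytically from pose, shape, expression, reflectance,
    # and illumination, which is what allows gradients to flow back into the
    # encoder. Left as a stub in this sketch.
    raise NotImplementedError

def photometric_loss(rendered, img, mask):
    # Dense per-pixel loss on the face region; with a differentiable decoder
    # this is the only supervision signal needed for unsupervised training.
    return ((rendered - img) * mask).pow(2).mean()
```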