1,294 research outputs found
Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light
One solution for depth imaging of a moving scene is to project a static
pattern onto the object and use just a single image for reconstruction. However,
if the motion of the object is too fast with respect to the exposure time of
the image sensor, patterns on the captured image are blurred and reconstruction
fails. In this paper, we embed multiple projection patterns into each single
captured image to realize temporal super-resolution of the depth image
sequences. With our method, multiple patterns are projected onto the object
at a higher fps than the camera can capture. In this case, the observed pattern
varies depending on the depth and motion of the object, so we can extract
temporal information of the scene from each single image. The decoding process
is realized using a learning-based approach where no geometric calibration is
needed. Experiments confirm the effectiveness of our method where sequential
shapes are reconstructed from a single image. Both quantitative evaluations and
comparisons with recent techniques were also conducted.
Comment: 9 pages, published at the International Conference on Computer Vision (ICCV 2017).
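The decoding network is not specified in this abstract, so the following is only a minimal PyTorch sketch of the general idea: a small CNN maps one motion-blurred capture, in which K high-fps patterns have accumulated during a single exposure, to K depth maps, i.e. a temporally super-resolved depth sequence. The class name, layer sizes, and number of patterns are illustrative assumptions, not the paper's model.

    import torch
    import torch.nn as nn

    class PatternDecoder(nn.Module):
        """Toy decoder: one blurred capture -> K depth maps (one per projected pattern)."""
        def __init__(self, num_patterns: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, num_patterns, 3, padding=1),
            )

        def forward(self, blurred_capture: torch.Tensor) -> torch.Tensor:
            # blurred_capture: (B, 1, H, W) single camera frame observing K high-fps patterns
            # returns:         (B, K, H, W) depth estimates at the K pattern-projection times
            return self.net(blurred_capture)

    decoder = PatternDecoder(num_patterns=4)
    depth_sequence = decoder(torch.rand(1, 1, 256, 256))  # -> shape (1, 4, 256, 256)

Because the mapping from image to depth is learned directly, no geometric calibration between projector and camera is needed, which matches the learning-based decoding described above.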
Action Recognition by Hierarchical Mid-level Action Elements
Realistic videos of human actions exhibit rich spatiotemporal structures at
multiple levels of granularity: an action can always be decomposed into
multiple finer-grained elements in both space and time. To capture this
intuition, we propose to represent videos by a hierarchy of mid-level action
elements (MAEs), where each MAE corresponds to an action-related spatiotemporal
segment in the video. We introduce an unsupervised method to generate this
representation from videos. Our method is capable of distinguishing
action-related segments from background segments and representing actions at
multiple spatiotemporal resolutions. Given a set of spatiotemporal segments
generated from the training data, we introduce a discriminative clustering
algorithm that automatically discovers MAEs at multiple levels of granularity.
We develop structured models that capture a rich set of spatial, temporal and
hierarchical relations among the segments, where the action label and multiple
levels of MAE labels are jointly inferred. The proposed model achieves
state-of-the-art performance in multiple action recognition benchmarks.
Moreover, we demonstrate the effectiveness of our model in real-world
applications such as action recognition in large-scale untrimmed videos and
action parsing.
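As a rough illustration of the discriminative clustering step described above, the sketch below alternates between assigning spatiotemporal segment features (as NumPy arrays) to clusters and refitting one linear classifier per cluster against background segments; clusters that stay discriminative play the role of mid-level action elements. Function and variable names are assumptions for illustration, not the authors' implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def discover_elements(segment_feats, background_feats, k=10, iters=3):
        """Toy discriminative clustering of segment features against background segments."""
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(segment_feats)  # initial grouping
        classifiers = {}
        for _ in range(iters):
            classifiers = {}
            scores = np.full((len(segment_feats), k), -np.inf)
            for c in range(k):
                pos = segment_feats[labels == c]
                if len(pos) < 2:
                    continue                                   # drop degenerate clusters
                X = np.vstack([pos, background_feats])
                y = np.r_[np.ones(len(pos)), np.zeros(len(background_feats))]
                clf = LinearSVC(C=0.1).fit(X, y)               # cluster vs. background classifier
                classifiers[c] = clf
                scores[:, c] = clf.decision_function(segment_feats)
            labels = scores.argmax(axis=1)                     # reassign to best-scoring cluster
        return labels, classifiers

Running such a procedure at several segment granularities and keeping only the clusters whose classifiers remain discriminative would yield a multi-level vocabulary in the spirit of the MAEs above.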
Event-based Vision for Early Prediction of Manipulation Actions
Neuromorphic visual sensors are artificial retinas that output sequences of
asynchronous events when brightness changes occur in the scene. These sensors
offer many advantages including very high temporal resolution, no motion blur
and smart data compression ideal for real-time processing. In this study, we
introduce an event-based dataset on fine-grained manipulation actions and
perform an experimental study on the use of transformers for action prediction
with events. There is enormous interest in the fields of cognitive robotics and
human-robot interaction in understanding and predicting human actions as early
as possible. Early prediction allows anticipating complex stages for planning,
enabling effective and real-time interaction. Our Transformer network uses
events to predict manipulation actions as they occur, using online inference.
The model succeeds at predicting actions early on, building up confidence over
time and achieving state-of-the-art classification. Moreover, the
attention-based transformer architecture allows us to study the role of the
spatio-temporal patterns selected by the model. Our experiments show that the
Transformer network captures action dynamics, outperforming video-based
approaches and succeeding in scenarios where the differences between actions
lie in very subtle cues. Finally, we release the new event dataset, which is
the first in the literature for manipulation action recognition. Code will be
available at https://github.com/DaniDeniz/EventVisionTransformer.
Comment: 15 pages, 9 figures.
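The abstract does not detail the architecture, so the following PyTorch sketch only illustrates the general recipe: events from each short time window are binned and flattened into tokens, a transformer encoder aggregates them, and class logits can be read out online as more of the action is observed. Layer sizes, the tokenisation, and the confidence readout are assumptions; the authors' actual model is in the linked repository.

    import torch
    import torch.nn as nn

    class EventActionPredictor(nn.Module):
        def __init__(self, num_classes: int, patch_dim: int = 256, d_model: int = 128):
            super().__init__()
            self.embed = nn.Linear(patch_dim, d_model)   # flattened event-count patches -> tokens
            layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.head = nn.Linear(d_model, num_classes)

        def forward(self, patches: torch.Tensor) -> torch.Tensor:
            # patches: (B, T, patch_dim) tokens built from binned events seen so far
            tokens = self.encoder(self.embed(patches))
            return self.head(tokens.mean(dim=1))         # logits for the partially observed action

    model = EventActionPredictor(num_classes=8)
    logits = model(torch.rand(1, 16, 256))               # predict from the first 16 event windows
    confidence = logits.softmax(dim=-1).max().item()

Re-reading the softmax after each new window is one simple way to obtain the confidence that builds up over time, as described above.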
Visual analysis and synthesis with physically grounded constraints
The past decade has witnessed remarkable progress in image-based, data-driven vision and graphics. However, existing approaches often treat images as pure 2D signals rather than as 2D projections of the physical 3D world. As a result, many training examples are required to cover sufficiently diverse appearances, and the resulting models inevitably suffer from limited generalization capability. In this thesis, I propose "inference-by-composition" approaches to overcome these limitations by modeling and interpreting visual signals in terms of physical surfaces, objects, and scenes. I show how we can incorporate physically grounded constraints such as scene-specific geometry in a non-parametric optimization framework for (1) revealing the missing parts of an image due to removal of a foreground or background element, and (2) recovering high spatial frequency details that are not resolvable in low-resolution observations. I then extend the framework from 2D images to spatio-temporal visual data (videos). I demonstrate that we can convincingly fill spatio-temporal holes in a temporally coherent fashion by jointly reconstructing appearance and motion. Compared to existing approaches, our technique can synthesize physically plausible content even in challenging videos. For visual analysis, I apply stereo camera constraints to discover multiple approximately linear structures in extremely noisy videos, with an ecological application to bird migration monitoring at night. The resulting algorithms are simple and intuitive while achieving state-of-the-art performance without the need for training on an exhaustive set of visual examples.
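As a toy illustration of a physically grounded constraint inside a non-parametric synthesis step, the NumPy sketch below fills a missing patch of a grayscale image by searching only source patches that a given surface-label map assigns to the same surface as the hole, then copying the best match under a sum-of-squared-differences cost on the known pixels. It captures the flavour of geometry-constrained completion only; the thesis's actual optimization framework is not reproduced here, and all names are illustrative.

    import numpy as np

    def fill_patch(image, hole_mask, surface_labels, patch=9):
        """Fill one hole-centred patch from the best geometry-consistent source patch."""
        h, w = image.shape
        ys, xs = np.where(hole_mask)
        cy, cx = int(ys.mean()), int(xs.mean())           # centre of the missing region
        target_label = surface_labels[cy, cx]             # surface the hole belongs to
        r = patch // 2
        target = image[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
        known = ~hole_mask[cy - r:cy + r + 1, cx - r:cx + r + 1]

        best, best_cost = None, np.inf
        for y in range(r, h - r):
            for x in range(r, w - r):
                if hole_mask[y - r:y + r + 1, x - r:x + r + 1].any():
                    continue                               # source patch must not overlap the hole
                if surface_labels[y, x] != target_label:
                    continue                               # physically grounded constraint: same surface only
                cand = image[y - r:y + r + 1, x - r:x + r + 1].astype(float)
                cost = np.sum((cand[known] - target[known]) ** 2)
                if cost < best_cost:
                    best, best_cost = cand, cost

        out = image.copy()
        if best is not None:
            out[cy - r:cy + r + 1, cx - r:cx + r + 1] = best
        return out

Restricting the candidate set by scene geometry is what keeps the synthesized content physically plausible, in the spirit of the constraint-driven framework summarized above.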