Real-time 3D reconstruction of non-rigid shapes with a single moving camera
This paper describes a real-time sequential method to simultaneously recover the camera motion and the 3D shape of deformable objects from a calibrated monocular video. For this purpose, we consider the Navier-Cauchy equations used in 3D linear elasticity, solved by finite elements, to model the time-varying shape per frame. These equations are embedded in an extended Kalman filter, resulting in a sequential Bayesian estimation approach. We represent the shape, with unknown material properties, as a combination of elastic elements whose nodal points correspond to salient points in the image. The global rigidity of the shape is encoded by a stiffness matrix, computed by assembling these elements. With this piecewise model, we can linearly relate the 3D displacements to the 3D acting forces that cause the object deformation, assumed to be normally distributed. While standard finite-element-method techniques require imposing boundary conditions to solve the resulting linear system, in this work we eliminate this requirement by modeling the compliance matrix with a generalized pseudoinverse that enforces a pre-fixed rank. Our framework also ensures surface continuity without the need for a post-processing step to stitch the piecewise reconstructions into a global smooth shape. We present experimental results using both synthetic and real videos for scenarios ranging from isometric to elastic deformations. We also show the consistency of the estimation with respect to 3D ground-truth data, include several experiments assessing robustness against artifacts, and finally provide an experimental validation of real-time performance at frame rate for small maps.
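The key numerical device in the abstract above is replacing the usual boundary-conditioned solve of the stiffness system with a rank-constrained generalized pseudoinverse of the stiffness matrix, which then linearly maps forces to displacements. A minimal sketch of that idea, assuming a truncated-SVD pseudoinverse and a small hypothetical assembled stiffness matrix (the paper's actual matrices come from finite-element assembly over image salient points):

```python
import numpy as np

def compliance_pseudoinverse(K, rank):
    """Generalized pseudoinverse of a stiffness matrix K with a
    pre-fixed rank, avoiding the need for boundary conditions.
    Illustrative sketch: rank enforcement via SVD truncation is
    an assumption, not necessarily the paper's exact construction."""
    U, s, Vt = np.linalg.svd(K)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]      # invert only the kept modes
    return Vt.T @ np.diag(s_inv) @ U.T

# Hypothetical 3-node assembled stiffness matrix (symmetric positive definite)
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
C = compliance_pseudoinverse(K, rank=2)

# Displacements are then linearly related to the acting forces: u = C f
f = np.array([1.0, 0.0, -1.0])
u = C @ f
```

The truncation fixes the rank of the compliance matrix, so the force-to-displacement map is well defined without pinning any nodes.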
Histogram of Oriented Principal Components for Cross-View Action Recognition
Existing techniques for 3D action recognition are sensitive to viewpoint
variations because they extract features from depth images which are viewpoint
dependent. In contrast, we directly process pointclouds for cross-view action
recognition from unknown and unseen views. We propose the Histogram of Oriented
Principal Components (HOPC) descriptor that is robust to noise, viewpoint,
scale and action speed variations. At a 3D point, HOPC is computed by
projecting the three scaled eigenvectors of the pointcloud within its local
spatio-temporal support volume onto the vertices of a regular dodecahedron.
HOPC is also used for the detection of Spatio-Temporal Keypoints (STK) in 3D
pointcloud sequences so that view-invariant STK descriptors (or Local HOPC
descriptors) at these key locations only are used for action recognition. We
also propose a global descriptor computed from the normalized spatio-temporal
distribution of STKs in 4-D, which we refer to as STK-D. We have evaluated the
performance of our proposed descriptors against nine existing techniques on two
cross-view and three single-view human action recognition datasets. Experimental results show that our techniques provide a significant improvement over state-of-the-art methods.
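The core of the HOPC descriptor described above is a local PCA followed by projection of the eigenvalue-scaled eigenvectors onto the 20 vertices of a regular dodecahedron. A minimal sketch, assuming the neighborhood is already gathered and keeping only positive projections (the published descriptor quantizes these projections; that detail is simplified here):

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def dodecahedron_vertices():
    """The 20 vertices of a regular dodecahedron, unit-normalized."""
    v = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
    for a in (-1 / PHI, 1 / PHI):
        for b in (-PHI, PHI):
            v.extend([(0, a, b), (a, b, 0), (b, 0, a)])
    V = np.array(v, dtype=float)
    return V / np.linalg.norm(V, axis=1, keepdims=True)

def hopc(points):
    """HOPC-style descriptor for one local spatio-temporal support
    volume of 3D points (illustrative sketch)."""
    X = points - points.mean(axis=0)
    evals, evecs = np.linalg.eigh(X.T @ X / len(X))
    order = np.argsort(evals)[::-1]          # principal components first
    evals, evecs = evals[order], evecs[:, order]
    V = dodecahedron_vertices()              # (20, 3) unit directions
    parts = []
    for lam, e in zip(evals, evecs.T):
        proj = V @ (lam * e)                 # project scaled eigenvector
        parts.append(np.maximum(proj, 0.0))  # keep positive lobes only
    return np.concatenate(parts)             # 60-D descriptor

# Hypothetical local neighborhood of 100 3D points
rng = np.random.default_rng(0)
desc = hopc(rng.normal(size=(100, 3)))
```

Because the dodecahedron vertices sample directions almost uniformly, the resulting histogram changes smoothly under rotation, which is what makes the descriptor usable across viewpoints.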
DroTrack: High-speed Drone-based Object Tracking Under Uncertainty
We present DroTrack, a high-speed visual single-object tracking framework for
drone-captured video sequences. Most of the existing object tracking methods
are designed to tackle well-known challenges, such as occlusion and cluttered
backgrounds. The complex motion of drones, i.e., multiple degrees of freedom in
three-dimensional space, causes high uncertainty. The uncertainty problem leads
to inaccurate location predictions and fuzziness in scale estimations. DroTrack
solves such issues by discovering the dependency between object representation
and motion geometry. We implement an effective object segmentation based on
Fuzzy C Means (FCM). We incorporate the spatial information into the membership
function to cluster the most discriminative segments. We then enhance the
object segmentation by using a pre-trained Convolution Neural Network (CNN)
model. DroTrack also leverages the geometrical angular motion to estimate a
reliable object scale. We discuss the experimental results and performance
evaluation using two datasets totalling 51,462 drone-captured frames. The combination of the FCM segmentation and the angular scaling increased DroTrack's precision and decreased its average centre location error in pixels. DroTrack outperforms all the high-speed trackers and achieves comparable results to deep learning trackers. DroTrack offers high frame rates, up to 1000 frames per second (fps), with better location precision than a set of state-of-the-art real-time trackers.
Comment: 10 pages, 12 figures, FUZZ-IEEE 202
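The segmentation component named above is Fuzzy C-Means. A minimal sketch of plain FCM on feature vectors (DroTrack additionally folds spatial information into the membership function and refines the result with a pre-trained CNN; both are omitted here, and the two-cluster data is hypothetical):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy C-means: soft memberships U (rows sum to 1) and
    cluster centers, alternated to convergence. Illustrative sketch."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # normalize memberships
    for _ in range(n_iter):
        W = U ** m                                    # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                   # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated hypothetical clusters of pixel features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(5.0, 0.1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)   # hard assignment of the most discriminative segment
```

The soft memberships are what make a spatial extension natural: a pixel's membership can be reweighted by its neighbors' memberships before the hard assignment.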
Occlusion resistant learning of intuitive physics from videos
To reach human performance on complex tasks, a key ability for artificial
systems is to understand physical interactions between objects, and predict
future outcomes of a situation. This ability, often referred to as intuitive
physics, has recently received attention and several methods were proposed to
learn these physical rules from video sequences. Yet, most of these methods are
restricted to the case where no, or only limited, occlusions occur. In this
work we propose a probabilistic formulation of learning intuitive physics in 3D
scenes with significant inter-object occlusions. In our formulation, object
positions are modeled as latent variables enabling the reconstruction of the
scene. We then propose a series of approximations that make this problem
tractable. Object proposals are linked across frames using a combination of a
recurrent interaction network, modeling the physics in object space, and a
compositional renderer, modeling the way in which objects project onto pixel
space. We demonstrate significant improvements over the state of the art on the IntPhys intuitive physics benchmark. We apply our method to a second dataset with increasing levels of occlusion, showing that it realistically predicts segmentation masks up to 30 frames into the future. Finally, we also show results on predicting the motion of objects in real videos.
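The abstract's recurrent interaction network models physics in object space: each object's state is updated from pairwise effects of the other objects. The paper's network is learned from video; as a hand-coded stand-in illustrating the same object-space message-passing structure, here is one step of a pairwise repulsion dynamics (the force law and all constants are assumptions for illustration only):

```python
import numpy as np

def interaction_step(pos, vel, dt=0.1, k=1.0):
    """One step of pairwise-interaction dynamics over N objects in 2D.
    Stand-in for a learned interaction network: each object's update
    aggregates a spring-like repulsion from every other object."""
    diff = pos[:, None, :] - pos[None, :, :]            # (N, N, 2) offsets
    dist = np.linalg.norm(diff, axis=2) + np.eye(len(pos))  # avoid self-div
    force = k * (diff / dist[..., None] ** 3).sum(axis=1)   # sum pairwise effects
    vel = vel + dt * force
    return pos + dt * vel, vel

# Two hypothetical objects, initially at rest, pushed apart by the interaction
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.zeros((2, 2))
pos, vel = interaction_step(pos, vel)
```

In the paper this update is a learned recurrent module, and a compositional renderer maps the resulting object states onto pixel space; only the object-space aggregation pattern is shown here.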
Signals and Images in Sea Technologies
Life below water is the 14th Sustainable Development Goal (SDG) envisaged by the United Nations and is aimed at conserving and sustainably using the oceans, seas, and marine resources for sustainable development. It is not difficult to argue that signal and image technologies may play an essential role in achieving the targets linked to SDG 14. Besides increasing the general knowledge of ocean health by means of data analysis, methodologies based on signal and image processing can be helpful in environmental monitoring, in protecting and restoring ecosystems, in finding new sensor technologies for green routing and eco-friendly ships, in providing tools for implementing best practices for sustainable fishing, as well as in defining frameworks and intelligent systems for enforcing sea law and making the sea a safer and more secure place. Imaging is also a key element for the exploration of the underwater world for various purposes, ranging from the predictive maintenance of sub-sea pipelines and other infrastructure, to the discovery, documentation, and protection of sunken cultural heritage. The scope of this Special Issue encompasses investigations into techniques and ICT approaches and, in particular, the study and application of signal- and image-based methods, exploring the advantages of their application in the previously mentioned areas.