Non-Causal Tracking by Deblatting
Tracking by Deblatting refers to solving a joint inverse problem of deblurring
and image matting in order to track motion-blurred objects. We propose non-causal
Tracking by Deblatting which estimates continuous, complete and accurate object
trajectories. Energy minimization by dynamic programming is used to detect
abrupt changes of motion, called bounces. High-order polynomials are fitted to
segments, which are parts of the trajectory separated by bounces. The output is
a continuous trajectory function which assigns a location to every real-valued
time stamp from zero to the number of frames. Additionally, we show that from
the trajectory function precise physical calculations are possible, such as
radius, gravity, or sub-frame object velocity. Velocity estimates are compared
against high-speed camera and radar measurements. Results show high performance
of the proposed method in terms of Trajectory-IoU, recall, and velocity
estimation.
Comment: Published at GCPR 2019, oral presentation, Best Paper Honorable Mention Award.
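The continuous trajectory function and sub-frame velocity described above can be illustrated with a minimal sketch: fit a polynomial to a trajectory segment between bounces, then differentiate it to read off velocity at any real-valued time stamp. The data, polynomial degree, and function names below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

# Minimal sketch (synthetic data): fit a polynomial to one trajectory
# segment between bounces, then query position and velocity at an
# arbitrary real-valued time stamp. The paper fits such polynomials
# inside an energy-minimization pipeline; this only shows the idea.

t = np.linspace(0.0, 5.0, 30)   # real-valued time stamps within the segment
x = 2.0 * t + 0.1 * t**2        # synthetic x-coordinates of the segment
y = 3.0 * t                     # synthetic y-coordinates

# Fit a polynomial per coordinate (degree 3 here, an assumption).
px = np.polyfit(t, x, 3)
py = np.polyfit(t, y, 3)

def position(ts):
    """Continuous trajectory function: location for any real-valued time."""
    return np.array([np.polyval(px, ts), np.polyval(py, ts)])

def speed(ts):
    """Sub-frame object speed from the fitted polynomials' derivatives."""
    return float(np.hypot(np.polyval(np.polyder(px), ts),
                          np.polyval(np.polyder(py), ts)))
```

Because the synthetic motion is itself polynomial, the fit recovers it exactly; on real detections the residual of the fit indicates how well a single segment explains the motion.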
Sub-frame Appearance and 6D Pose Estimation of Fast Moving Objects
We propose a novel method that tracks fast moving objects, mainly non-uniform
spherical ones, in full 6 degrees of freedom, simultaneously estimating their 3D
motion trajectory, 3D pose, and object appearance changes with a time step that
is a fraction of the video frame exposure time. The sub-frame object
localization and appearance estimation allows realistic temporal
super-resolution and precise shape estimation. The method, called TbD-3D
(Tracking by Deblatting in 3D), relies on a novel reconstruction algorithm which
solves a piece-wise deblurring and matting problem. The 3D rotation is
estimated by minimizing the reprojection error. As a second contribution, we
present a new challenging dataset with fast moving objects that change their
appearance and distance to the camera. High-speed camera recordings with zero
lag between frame exposures were used to generate videos with different frame
rates, annotated with ground-truth trajectory and pose.
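The rotation estimation described above minimizes a reprojection error. A toy single-axis version can sketch the objective: recover a rotation about the z-axis from observed 2D projections of known 3D surface points. TbD-3D estimates full 3D rotation inside its reconstruction pipeline; the points, projection model, and grid search below are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of rotation estimation by reprojection-error minimization:
# given 3D points and their observed 2D projections, search for the
# rotation angle about z that best reproduces the observations.

pts = np.array([[1.0, 0.0, 0.2],
                [0.0, 1.0, -0.3],
                [0.7, 0.7, 0.5],
                [-0.5, 0.8, 0.1]])

def rot_z(theta):
    """Rotation matrix about the z-axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def project(p):
    """Orthographic projection: drop the z-coordinate."""
    return p[:, :2]

theta_true = 0.5                        # ground-truth angle (synthetic)
observed = project(pts @ rot_z(theta_true).T)

def reprojection_error(theta):
    """Sum of squared differences between predicted and observed 2D points."""
    return np.sum((project(pts @ rot_z(theta).T) - observed) ** 2)

# Dense grid search over the angle minimizing the reprojection error.
grid = np.linspace(-np.pi, np.pi, 20001)
theta_est = grid[np.argmin([reprojection_error(th) for th in grid])]
```

In practice one would use a gradient-based or rotation-manifold optimizer over all three rotational degrees of freedom, but the objective has the same shape.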
Motion-From-Blur: 3D Shape and Motion Estimation of Motion-Blurred Objects in Videos
We propose a method for jointly estimating the 3D motion, 3D shape, and
appearance of highly motion-blurred objects from a video. To this end, we model
the blurred appearance of a fast moving object in a generative fashion by
parametrizing its 3D position, rotation, velocity, acceleration, bounces,
shape, and texture over the duration of a predefined time window spanning
multiple frames. Using differentiable rendering, we are able to estimate all
parameters by minimizing the pixel-wise reprojection error to the input video
via backpropagating through a rendering pipeline that accounts for motion blur
by averaging the graphics output over short time intervals. For that purpose,
we also estimate the camera exposure gap time within the same optimization. To
account for abrupt motion changes like bounces, we model the motion trajectory
as a piece-wise polynomial, and we are able to estimate the specific time of
the bounce at sub-frame accuracy. Experiments on established benchmark datasets
demonstrate that our method outperforms previous methods for fast moving object
deblurring and 3D reconstruction.
Comment: CVPR 2022 camera-ready.
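The blur model above, in which a frame is the average of the graphics output over short time intervals, can be sketched in one dimension. The "renderer" here is a Gaussian blob at the object's position; the paper uses a full differentiable 3D renderer, and all names and values below are illustrative assumptions.

```python
import numpy as np

# Sketch of the generative blur model: a blurred frame is the average of
# sharp renderings at many sub-frame times within the exposure interval.

xs = np.linspace(0.0, 10.0, 200)          # 1D "image" coordinates

def render_sharp(pos, sigma=0.3):
    """Render the object as a Gaussian blob centered at `pos`."""
    img = np.exp(-0.5 * ((xs - pos) / sigma) ** 2)
    return img / img.sum()                 # unit total intensity

def position(t):
    """Piece-wise linear motion with a bounce at t = 0.5 (synthetic)."""
    return 2.0 + 8.0 * t if t < 0.5 else 6.0 - 4.0 * (t - 0.5)

def render_blurred(t0, t1, n=64):
    """Average sharp renderings over the exposure interval [t0, t1]."""
    times = np.linspace(t0, t1, n)
    return np.mean([render_sharp(position(t)) for t in times], axis=0)

sharp = render_blurred(0.0, 0.0001)        # near-instant exposure: a streak-free blob
blurred = render_blurred(0.0, 1.0)         # full exposure: a semi-transparent streak
```

Because every sub-frame rendering is differentiable in the motion parameters, so is their average, which is what allows backpropagating the pixel-wise reprojection error through this blur model.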
Tracking by Deblatting
Objects moving at high speed along complex trajectories often appear in videos, especially videos of sports. Such objects travel a considerable distance during the exposure time of a single frame, and therefore their position in the frame is not well defined. They appear as semi-transparent streaks due to motion blur and cannot be reliably tracked by general trackers. We propose a novel approach called Tracking by Deblatting, based on the observation that motion blur is directly related to the intra-frame trajectory of an object. Blur is estimated by solving two intertwined inverse problems, blind deblurring and image matting, which we call deblatting. By postprocessing, non-causal Tracking by Deblatting estimates continuous, complete, and accurate object trajectories for the whole sequence. Tracked objects are precisely localized with higher temporal resolution than by conventional trackers. Energy minimization by dynamic programming is used to detect abrupt changes of motion, called bounces. High-order polynomials are then fitted to smooth trajectory segments between bounces. The output is a continuous trajectory function that assigns a location to every real-valued time stamp from zero to the number of frames. The proposed algorithm was evaluated on a newly created dataset of videos from a high-speed camera using a novel Trajectory-IoU metric that generalizes the traditional Intersection over Union and measures the accuracy of the intra-frame trajectory. The proposed method outperforms the baselines both in recall and trajectory accuracy. Additionally, we show that from the trajectory function precise physical calculations are possible, such as radius, gravity, and sub-frame object velocity. Velocity estimates are compared against high-speed camera and radar measurements. Results show high performance of the proposed method in terms of Trajectory-IoU, recall, and velocity estimation.
ISSN: 0920-5691; ISSN: 1573-140
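A Trajectory-IoU-style metric, which generalizes Intersection over Union along the intra-frame trajectory, can be sketched as the average IoU of two equal-radius disks placed at the estimated and ground-truth positions over sampled time stamps. The paper's exact definition may differ; the disk model, sampling scheme, and function names below are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of a Trajectory-IoU-style metric: average, over sampled
# sub-frame time stamps, the IoU of equal-radius disks at the estimated
# and ground-truth object positions.

def disk_iou(c1, c2, r):
    """Analytic IoU of two disks of radius r with centers c1, c2."""
    d = float(np.linalg.norm(np.asarray(c1) - np.asarray(c2)))
    if d >= 2.0 * r:
        return 0.0
    # Area of intersection of two equal circles at center distance d.
    inter = (2.0 * r**2 * np.arccos(d / (2.0 * r))
             - 0.5 * d * np.sqrt(4.0 * r**2 - d**2))
    union = 2.0 * np.pi * r**2 - inter
    return float(inter / union)

def trajectory_iou(traj_est, traj_gt, r, n=100):
    """Mean disk IoU over n time stamps sampled along the trajectories."""
    ts = np.linspace(0.0, 1.0, n)
    return float(np.mean([disk_iou(traj_est(t), traj_gt(t), r)
                          for t in ts]))
```

Under this model, identical trajectories score 1.0 and trajectories that stay more than a diameter apart score 0.0, matching the intuition that the metric rewards accurate intra-frame localization rather than per-frame bounding-box overlap alone.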