Structure and motion estimation from rolling shutter video
The majority of consumer quality cameras sold today have CMOS sensors with rolling shutters. In a rolling shutter camera, images are read out row by row, and thus each row is exposed during a different time interval. A rolling-shutter exposure causes geometric image distortions when either the camera or the scene is moving, and this causes state-of-the-art structure and motion algorithms to fail. We demonstrate a novel method for solving the structure and motion problem for rolling-shutter video. The method relies on exploiting the continuity of the camera motion, both between frames and across a frame. We demonstrate the effectiveness of our method through controlled experiments on real video sequences. We show, both visually and quantitatively, that our method outperforms standard structure and motion, and is more accurate and efficient than a two-step approach that first rectifies the images and then estimates structure and motion.
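The row-by-row readout described above can be illustrated with a minimal simulation: each row is assigned its own readout time, so a constant horizontal camera velocity shears the image. The function and parameter names below are illustrative, not from the paper.

```python
import numpy as np

def simulate_rolling_shutter(frame, vx_px_per_s, row_readout_s):
    """Warp a global-shutter frame into a rolling-shutter frame.

    Row r is read out at time t_r = r * row_readout_s, so under a
    constant horizontal image velocity vx_px_per_s the content of that
    row appears shifted by vx_px_per_s * t_r pixels (the familiar
    shear distortion of vertical structures).
    """
    h = frame.shape[0]
    out = np.empty_like(frame)
    for r in range(h):
        shift = int(round(vx_px_per_s * r * row_readout_s))
        out[r] = np.roll(frame[r], shift)  # wrap-around keeps it simple
    return out
```

For example, a vertical line imaged at 100 px/s with a 10 ms per-row readout shifts by one extra pixel per row, turning the line into a diagonal.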
Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events
Scene Dynamic Recovery (SDR) by inverting distorted Rolling Shutter (RS)
images to an undistorted high frame-rate Global Shutter (GS) video is a
severely ill-posed problem, particularly when prior knowledge about
camera/object motions is unavailable. Commonly used artificial assumptions on
motion linearity and data-specific characteristics, regarding the temporal
dynamics information embedded in the RS scanlines, are prone to producing
sub-optimal solutions in real-world scenarios. To address this challenge, we
propose an event-based RS2GS framework within a self-supervised learning
paradigm that leverages the extremely high temporal resolution of event cameras
to provide accurate inter/intra-frame information. Within this framework,
real-world events and RS images can be exploited to alleviate the performance
degradation caused by the domain gap between synthesized and real data.
Specifically, an Event-based Inter/intra-frame Compensator (E-IC) is proposed
to predict the per-pixel dynamic between arbitrary time intervals, including
the temporal transition and spatial translation. Exploring connections in terms
of RS-RS, RS-GS, and GS-RS, we explicitly formulate mutual constraints with the
proposed E-IC, resulting in supervisions without ground-truth GS images.
Extensive evaluations over synthetic and real datasets demonstrate that the
proposed method achieves state-of-the-art performance for
event-based RS2GS inversion in real-world scenarios. The dataset and code are
available at https://w3un.github.io/selfunroll/
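The mutual RS-RS, RS-GS, and GS-RS constraints above can be sketched as a cycle-consistency objective: two RS inputs should map to the same latent GS frame, re-distorting that frame should reproduce the inputs, and one RS frame should warp directly onto the other. The `compensator` callable below stands in for the paper's E-IC; its signature, the loss weighting, and all names are assumptions, not the authors' implementation.

```python
import numpy as np

def cycle_losses(rs0, rs1, compensator):
    """Sketch of self-supervision without ground-truth GS images.

    `compensator(img, src, dst)` is a placeholder for an E-IC-like
    module that maps an image between (per-row) timestamps using
    event data.
    """
    # RS-GS: two estimates of the same latent GS frame should agree.
    gs_from_rs0 = compensator(rs0, src="rs0", dst="gs")
    gs_from_rs1 = compensator(rs1, src="rs1", dst="gs")
    loss_gs = np.mean(np.abs(gs_from_rs0 - gs_from_rs1))
    # GS-RS: re-distorting the latent GS frame should recover the input.
    rs0_rec = compensator(gs_from_rs0, src="gs", dst="rs0")
    loss_rec = np.mean(np.abs(rs0_rec - rs0))
    # RS-RS: one RS frame warped directly onto the other.
    rs1_from_rs0 = compensator(rs0, src="rs0", dst="rs1")
    loss_rs = np.mean(np.abs(rs1_from_rs0 - rs1))
    return loss_gs + loss_rec + loss_rs
```

With a perfect compensator and consistent inputs, all three terms vanish, which is what makes the objective usable without GS ground truth.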
A unified rolling shutter and motion blur model for 3D visual registration
Motion blur and rolling shutter deformations both inhibit visual motion registration, whether the deformation is due to a moving sensor or a moving target. While both deformations can exist simultaneously, no models have previously been proposed to handle them together. Furthermore, neither deformation has previously been considered in the context of monocular full-image 6-degrees-of-freedom registration or RGB-D structure and motion. As will be shown, rolling shutter deformation is observed when a camera moves faster than a single pixel in parallax between subsequent scan-lines, while blur is a function of the pixel exposure time and the motion vector. In this paper, a complete dense 3D registration model is derived that accounts for both motion blur and rolling shutter deformations simultaneously. Various approaches are compared against ground truth, and live real-time performance is demonstrated for complex scenarios in which both blur and shutter deformations are dominant.
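The two regimes named in the abstract can be checked with back-of-envelope arithmetic: rolling-shutter shear becomes visible once the image motion between successive scan-lines exceeds one pixel, while the blur streak length is the motion integrated over the pixel exposure time. The function and parameter names are illustrative assumptions.

```python
def rs_and_blur_extent(speed_px_per_s, line_readout_s, exposure_s):
    """Estimate whether RS deformation is observable and the blur length.

    Returns (rs_visible, blur_px): rs_visible is True when inter-scan-line
    motion exceeds one pixel; blur_px is the streak length in pixels
    accumulated during one pixel's exposure.
    """
    inter_line_px = speed_px_per_s * line_readout_s  # motion per scan-line
    blur_px = speed_px_per_s * exposure_s            # motion during exposure
    return inter_line_px > 1.0, blur_px
```

For instance, at 50,000 px/s image motion with a 30 µs line readout, consecutive scan-lines shift by 1.5 px (shear is visible), and a 1 ms exposure produces a 50 px blur streak, so both deformations are dominant at once.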