The real time rolling shutter
From an early age children are often told either "you are creative, so you should do art, but stay away from science and maths" or "you are mathematical, so you should do science, but you're not that creative". Compounding this, there also exist traditional barriers of artistic rhetoric that say, "don't touch, don't think and don't be creative, we've already done that for you, you can just look...". The Real Time Rolling Shutter is part of a collaborative Art/Science partnership whose core tenets stand in complete contrast to this. The Art/Science exhibitions we have created invite the public to become part of the exhibition by utilising augmented digital mirrors, Kinects, feedback camera and projector systems, and augmented reality perception helmets. The fundamental principles we adhere to are to foster curiosity, intrigue, wonderment and amazement, and we endeavour to draw the audience into the interactive nature of our exhibits and proclaim to everyone that you can be whatever you choose to be: everyone can be creative, everyone can be an artist, everyone can be a scientist... all it takes is an inquisitive mind, so come and explore the real-time rolling shutter and be creative.
Minimal Solvers for Monocular Rolling Shutter Compensation under Ackermann Motion
Modern automotive vehicles are often equipped with budget commercial rolling shutter cameras. These devices often produce distorted images due to the inter-row delay of the camera while capturing the image. Recent methods for monocular rolling shutter motion compensation utilize blur kernels and the straightness property of line segments. However, these methods are limited to handling rotational motion and are also not fast enough to operate in real time. In this paper, we propose a minimal solver for rolling shutter motion compensation that assumes a known vertical direction of the camera. Thanks to the Ackermann motion model of vehicles, which consists of only two motion parameters, together with two parameters for a simplified depth assumption, this leads to a 4-line algorithm. The proposed minimal solver estimates the rolling shutter camera motion efficiently and accurately. Extensive experiments on real and simulated datasets demonstrate the benefits of our approach in terms of qualitative and quantitative results.
Comment: Submitted to WACV 201
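The Ackermann motion model mentioned in the abstract constrains a vehicle to move along a circular arc described by just two parameters. As a rough, hypothetical sketch (this is not the paper's solver; the speed `v`, steering angle `phi`, and wheelbase `L` are illustrative assumptions), the corresponding bicycle-model kinematics can be integrated like this:

```python
import math

# Hypothetical sketch of the two-parameter Ackermann (bicycle) motion model:
# the vehicle pose evolves under only a forward speed v and a steering
# angle phi (the wheelbase L is assumed known and fixed).

def ackermann_step(x, y, theta, v, phi, L, dt):
    """Integrate one Euler step of the Ackermann/bicycle kinematics."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(phi) * dt  # curvature = tan(phi) / L
    return x, y, theta

# With phi = 0 the motion degenerates to a straight line; nonzero phi
# drives the vehicle along a constant-curvature arc.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta = ackermann_step(x, y, theta, v=1.0, phi=0.0, L=2.5, dt=0.01)
print(round(x, 2), round(y, 2))  # → 1.0 0.0
```

Because the camera is rigidly attached to the vehicle, its motion during the inter-row readout inherits this two-parameter structure, which is what keeps the minimal solver small.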
Rolling Shutter Stereo
A large fraction of cameras in use today are based on CMOS sensors with a rolling shutter that exposes the image line by line. For dynamic scenes or cameras this introduces undesired effects such as stretch, shear and wobble. It has been shown earlier that rotational-shake-induced rolling shutter effects in hand-held cell phone capture can be compensated based on an estimate of the camera rotation. In contrast, we analyse the case of significant camera motion, e.g. where a passing street-level capture vehicle uses a rolling shutter camera in a 3D reconstruction framework. The introduced error is depth dependent and cannot be compensated based on camera motion/rotation alone, which also invalidates rectification for stereo camera systems. In addition, significant lens distortion, as often present in wide-angle cameras, intertwines with rolling shutter effects because it changes the time at which a certain 3D point is seen. We show that naive 3D reconstructions (assuming a global shutter) will deliver biased geometry already under very mild assumptions on vehicle speed and resolution. We then develop rolling shutter dense multiview stereo algorithms that solve for time of exposure and depth at the same time, even in the presence of lens distortion, and perform an evaluation on ground-truth laser scan models as well as on real street-level data.
Direct Sparse Odometry with Rolling Shutter
Neglecting the effects of rolling-shutter cameras for visual odometry (VO) severely degrades accuracy and robustness. In this paper, we propose a novel direct monocular VO method that incorporates a rolling-shutter model. Our approach extends direct sparse odometry, which performs direct bundle adjustment of a set of recent keyframe poses and the depths of a sparse set of image points. We estimate the velocity at each keyframe and impose a constant-velocity prior for the optimization. In this way, we obtain a near real-time, accurate direct VO method. Our approach achieves improved results on challenging rolling-shutter sequences over state-of-the-art global-shutter VO methods.
A unified rolling shutter and motion blur model for 3D visual registration
Motion blur and rolling shutter deformations both inhibit visual motion registration, whether due to a moving sensor or a moving target. Whilst both deformations exist simultaneously, no models have been proposed to handle them together. Furthermore, neither deformation has previously been considered in the context of monocular full-image 6-degrees-of-freedom registration or RGB-D structure and motion. As will be shown, rolling shutter deformation is observed when a camera moves faster than a single pixel in parallax between subsequent scan-lines. Blur is a function of the pixel exposure time and the motion vector. In this paper a complete dense 3D registration model will be derived to account for both motion blur and rolling shutter deformations simultaneously. Various approaches will be compared with respect to ground truth, and live real-time performance will be demonstrated for complex scenarios where both blur and shutter deformations are dominant.
Structure and motion estimation from rolling shutter video
The majority of consumer-quality cameras sold today have CMOS sensors with rolling shutters. In a rolling shutter camera, images are read out row by row, and thus each row is exposed during a different time interval. A rolling-shutter exposure causes geometric image distortions when either the camera or the scene is moving, and this causes state-of-the-art structure and motion algorithms to fail. We demonstrate a novel method for solving the structure and motion problem for rolling-shutter video. The method relies on exploiting the continuity of the camera motion, both between frames and across a frame. We demonstrate the effectiveness of our method through controlled experiments on real video sequences. We show, both visually and quantitatively, that our method outperforms standard structure and motion, and is more accurate and efficient than a two-step approach that first rectifies the images and then runs structure and motion.
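The row-by-row readout described in these abstracts can be made concrete with a small, hypothetical simulation (all parameters are illustrative, not taken from any of the papers): each image row `r` samples the scene at its own exposure time `r * line_delay`, so a vertical edge moving horizontally in the scene is recorded as a slanted edge.

```python
# Minimal sketch of rolling-shutter distortion with assumed parameters:
# row r of the image is exposed at time t = r * line_delay, so a moving
# vertical edge lands in a different column on every row.

H, W = 8, 16          # image height and width in pixels (illustrative)
line_delay = 1.0      # readout delay between consecutive rows (arbitrary units)
velocity = 1.0        # horizontal speed of the edge, pixels per time unit
x0 = 2.0              # edge position at the start of the frame (t = 0)

def edge_column(t):
    """Column of the moving vertical edge at time t in the (global) scene."""
    return x0 + velocity * t

# Rolling-shutter capture: each row samples the scene at its own time.
image = []
for r in range(H):
    t_row = r * line_delay
    col = int(round(edge_column(t_row)))
    image.append(["#" if c == col else "." for c in range(W)])

for row in image:
    print("".join(row))  # the instantaneously vertical edge prints as a diagonal
```

A global-shutter camera would sample every row at the same time and record a straight vertical edge; the diagonal produced here is exactly the kind of shear that the structure-and-motion methods above must model or compensate.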