20 research outputs found

    Algorithm for the reconstruction of dynamic objects in CT-scanning using optical flow

    Computed tomography (CT) is a powerful imaging technique that allows non-destructive visualization of the interior of physical objects in many scientific areas. Traditional reconstruction techniques mostly assume that the object of interest is static, which produces artefacts if the object moves during data acquisition. In this paper we present a method that, given only the results of multiple successive scans, estimates the motion using optical flow and corrects the CT images for it, assuming that the motion field is smooth over the complete domain. The proposed method is validated on simulated scan data. The main contribution is showing that the optical flow technique from imaging can be used to correct CT-scan images for motion.
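The estimate-then-correct idea can be illustrated with a much simpler stand-in: if the inter-scan motion were a pure global translation, it could be estimated by phase correlation between two reconstructions and undone by shifting the second one back. This numpy sketch is only an illustration of motion estimation and correction, not the paper's smooth optical-flow model:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate a global integer translation between two reconstructions by
    phase correlation. Returns the (dy, dx) roll that maps `moved` back onto
    `ref`. A crude stand-in for a dense, smooth optical-flow field."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap circular peak positions into signed shifts
    return tuple(int(p - n) if p > n // 2 else int(p)
                 for p, n in zip(peak, corr.shape))

def correct_motion(ref, moved):
    """Undo the estimated motion so the two scans are aligned."""
    return np.roll(moved, estimate_shift(ref, moved), axis=(0, 1))
```

A real implementation would replace the single global shift with a per-pixel flow field and a smoothness prior.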

    Street View Motion-from-Structure-from-Motion

    We describe a structure-from-motion framework that handles “generalized” cameras, such as moving rolling-shutter cameras, and works at an unprecedented scale (billions of images covering millions of linear kilometers of roads) by exploiting a good relative pose prior along vehicle paths. We exhibit a planet-scale, appearance-augmented point cloud constructed with our framework and demonstrate its practical use in correcting the pose of a street-level image collection.

    Rolling Shutter Stereo

    A huge fraction of the cameras used nowadays are based on CMOS sensors with a rolling shutter that exposes the image line by line. For dynamic scenes or cameras this introduces undesired effects like stretch, shear and wobble. It has been shown earlier that rotational-shake-induced rolling shutter effects in hand-held cell phone capture can be compensated based on an estimate of the camera rotation. In contrast, we analyse the case of significant camera motion, e.g. where a passing street-level capture vehicle uses a rolling shutter camera in a 3D reconstruction framework. The introduced error is depth dependent and cannot be compensated based on camera motion/rotation alone, which also invalidates rectification for stereo camera systems. In addition, significant lens distortion, as often present in wide-angle cameras, intertwines with rolling shutter effects because it changes the time at which a certain 3D point is seen. We show that naive 3D reconstructions (assuming a global shutter) deliver biased geometry already under very mild assumptions on vehicle speed and resolution. We then develop rolling shutter dense multiview stereo algorithms that solve for time of exposure and depth at the same time, even in the presence of lens distortion, and perform an evaluation on ground truth laser scan models as well as on real street-level data.
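The depth dependence described above is easy to quantify with a back-of-the-envelope model (illustrative numbers, not taken from the paper): for a sideways-translating camera whose rows are read out a fixed time apart, the projection of a point is displaced by an amount that grows with the row index and shrinks with depth.

```python
def rs_pixel_error(row, depth_m, speed_mps, line_time_s, focal_px):
    """Horizontal displacement (in pixels) a rolling shutter adds to a point
    imaged at `row`, for a camera translating sideways at `speed_mps`.
    Rows are exposed sequentially, `line_time_s` seconds apart, so the error
    grows linearly with the row index and is inversely proportional to depth."""
    t = row * line_time_s                       # exposure time of this row vs. row 0
    lateral_motion = speed_mps * t              # camera displacement accumulated so far
    return focal_px * lateral_motion / depth_m  # pinhole projection of that shift
```

For example, at 10 m/s, a 30 µs line time, a 1000-pixel focal length and a point 5 m away, row 1000 is already displaced by 60 pixels, while the same point at 10 m moves only 30 pixels, so no single global correction can fix both.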

    Structure and motion estimation from rolling shutter video

    The majority of consumer-quality cameras sold today have CMOS sensors with rolling shutters. In a rolling shutter camera, images are read out row by row, and thus each row is exposed during a different time interval. A rolling-shutter exposure causes geometric image distortions when either the camera or the scene is moving, and this causes state-of-the-art structure and motion algorithms to fail. We demonstrate a novel method for solving the structure and motion problem for rolling-shutter video. The method relies on exploiting the continuity of the camera motion, both between frames and across a frame. We demonstrate the effectiveness of our method by controlled experiments on real video sequences. We show, both visually and quantitatively, that our method outperforms standard structure and motion, and is more accurate and efficient than a two-step approach that performs image rectification followed by structure and motion estimation.
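The continuity assumption can be sketched as follows: instead of one pose per frame, each image row gets its own pose, interpolated between the poses of consecutive frames. This is a minimal linear-interpolation sketch, not the paper's actual parametrization of the continuous motion:

```python
import numpy as np

def row_pose(pose0, pose1, row, n_rows):
    """Camera pose at the instant `row` is read out, interpolated linearly
    between the pose at the start of this frame (pose0) and the start of the
    next (pose1). Poses are plain translation vectors here; a real system
    would also interpolate rotation (e.g. with SLERP on quaternions)."""
    alpha = row / n_rows  # fraction of the frame readout completed
    p0 = np.asarray(pose0, float)
    p1 = np.asarray(pose1, float)
    return (1.0 - alpha) * p0 + alpha * p1
```

Projecting each 3D point with the pose of the row in which it appears, rather than a single per-frame pose, is what lets structure and motion estimation absorb the rolling-shutter distortion.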

    A High–Performance Parallel Implementation of the Chambolle Algorithm

    The determination of the optical flow is a central problem in image processing, as it makes it possible to describe how an image changes over time by means of a numerical vector field. The estimation of the optical flow is, however, a very complex problem, which has been tackled with many different mathematical approaches. A large body of work has recently been published on variational methods, following the technique for total variation minimization proposed by Chambolle. Still, their hardware implementations do not offer good performance in terms of frames processed per unit time, mainly because of the complex dependency scheme among the data. In this work, we propose a highly parallel and accelerated FPGA implementation of the Chambolle algorithm, which splits the original image into a set of overlapping sub-frames and efficiently exploits the reuse of intermediate results. We validate our hardware on large frames (up to 1024 × 768), and the proposed approach largely outperforms state-of-the-art implementations, reaching up to 76× speedups as well as real-time frame rates even at high resolutions.
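The total-variation minimization underlying this work can be sketched in a few lines. Below is a plain numpy version of Chambolle's dual projection iteration, shown here for TV denoising (the simplest setting of the scheme; the FPGA design parallelizes this same per-pixel update). The values of `lam`, `tau`, and `n_iter` are illustrative; `tau <= 1/8` is the standard step-size condition for convergence.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary (last row/col = 0)."""
    gy = np.zeros_like(u); gx = np.zeros_like(u)
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    return gy, gx

def div(py, px):
    """Discrete divergence, the negative adjoint of grad."""
    fy = np.zeros_like(py); fx = np.zeros_like(px)
    fy[0, :] = py[0, :]; fy[1:-1, :] = py[1:-1, :] - py[:-2, :]; fy[-1, :] = -py[-2, :]
    fx[:, 0] = px[:, 0]; fx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; fx[:, -1] = -px[:, -2]
    return fy + fx

def chambolle_tv(f, lam=0.3, tau=0.125, n_iter=100):
    """Chambolle's fixed-point iteration on the dual variable p for
    min_u TV(u) + 1/(2*lam) * ||u - f||^2, with solution u = f - lam*div(p)."""
    py = np.zeros_like(f); px = np.zeros_like(f)
    for _ in range(n_iter):
        gy, gx = grad(div(py, px) - f / lam)
        norm = np.sqrt(gy ** 2 + gx ** 2)
        py = (py + tau * gy) / (1.0 + tau * norm)
        px = (px + tau * gx) / (1.0 + tau * norm)
    return f - lam * div(py, px)
```

Every pixel's dual update depends on its neighbours' values from the previous iteration, which is exactly the dependency scheme that makes a naive hardware pipeline slow and motivates the overlapping sub-frame decomposition.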

    Stabilizing cell phone video using inertial measurement sensors. In:

    We present a system that rectifies and stabilizes video sequences on mobile devices with rolling-shutter cameras. The system corrects for rolling-shutter distortions using measurements from accelerometer and gyroscope sensors, and a 3D rotational distortion model. In order to obtain a stabilized video, and at the same time keep most content in view, we propose an adaptive low-pass filter algorithm to obtain the output camera trajectory. The accuracy of the orientation estimates has been evaluated experimentally using ground truth data from a motion capture system. We have conducted a user study, where the output from our system, implemented in iOS, has been compared to that of three other applications, as well as to the uncorrected video. The study shows that users prefer our sensor-based system.
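The trajectory-smoothing idea can be illustrated in one dimension: integrate the gyroscope rate into an orientation trajectory, low-pass it, and rewarp each frame by the difference. This fixed-coefficient 1-D sketch is only an illustration, not the paper's 3-D rotational model or its adaptive filter (which additionally tunes the cutoff to keep content in view):

```python
import numpy as np

def stabilize_trajectory(gyro_rate, dt, alpha=0.9):
    """Integrate a 1-D gyroscope rate signal (rad/s) into an orientation
    trajectory, smooth it with a first-order recursive low-pass filter, and
    return the per-frame corrective rotation (smoothed minus measured) that
    a stabilizer would apply when rewarping each frame."""
    theta = np.cumsum(np.asarray(gyro_rate, float)) * dt  # raw orientation (rad)
    smooth = np.empty_like(theta)
    smooth[0] = theta[0]
    for i in range(1, len(theta)):
        smooth[i] = alpha * smooth[i - 1] + (1.0 - alpha) * theta[i]
    return smooth - theta
```

Larger `alpha` gives a smoother output trajectory but larger corrections, i.e. more of the frame cropped away, which is the trade-off the adaptive filter in the paper manages.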

    Vision-Aided Inertial Navigation

    This document discloses, among other things, a system and method for implementing an algorithm to determine pose, velocity, acceleration, or other navigation information using feature tracking data. The algorithm has computational complexity that is linear in the number of features tracked.

    Quality Assessment of Mobile Phone Video Stabilization

    Smartphone cameras are used more than ever for photography and videography. This has driven mobile phone manufacturers to develop and enhance the cameras in their phones. While mobile phone cameras have evolved considerably, many aspects still have room for improvement. One of these is video stabilization, which aims to remove unpleasant motion and artifacts from video. Many video stabilization methods for mobile phones exist; however, there is no standard video stabilization quality assessment (VSQA) framework for comparing their performance. Huawei wanted to improve the video stabilization quality of their mobile phones by investigating video stabilization quality assessment. As part of that endeavor, this work studies existing VSQA frameworks found in the literature and incorporates some of their ideas into a VSQA framework established in this work. The new VSQA framework consists of a repeatable laboratory environment and objective sharpness and motion metrics. To test the framework, videos with simulated hand shake were captured on multiple mobile phones in the laboratory environment. These videos were first evaluated subjectively to find issues that are noticeable by humans, and then evaluated objectively with the sharpness and motion metrics. The results show that the proposed VSQA framework can be used for comparing and ranking mobile devices, and that it successfully identifies the strengths and weaknesses of each tested device's video stabilization quality.
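As an example of the kind of objective sharpness metric such a framework can use (the thesis's concrete metrics are not reproduced here), the variance of a discrete Laplacian response is a common no-reference measure: its value drops when stabilization-induced motion blur softens a frame, so it can rank devices by residual blur.

```python
import numpy as np

def laplacian_sharpness(frame):
    """No-reference sharpness score: variance of a 4-neighbour discrete
    Laplacian over the frame interior. Higher means sharper; blur attenuates
    high-frequency content and therefore the Laplacian response."""
    lap = (-4.0 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return lap.var()
```

Averaging this score over all frames of a stabilized clip gives one number per device, which can then be combined with motion-smoothness metrics for an overall ranking.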