
    Depth from the visual motion of a planar target induced by zooming

    Robot egomotion can be estimated from an acquired video stream only up to the scale of the scene. To remove this ambiguity (and obtain true egomotion), a distance within the scene needs to be known. If no a priori knowledge of the scene is assumed, the usual solution is to derive, in some way, the initial distance from the camera to a target object. This paper proposes a new, very simple way to obtain such a distance when a zooming camera is available and there is a planar target in the scene. As in “two-grid calibration” algorithms, no estimation of the camera parameters is required, and no assumption about the stability of the optical axis across the different focal lengths is needed. Quite the reverse: the instability of the optical axis across focal lengths is the key ingredient that enables our depth estimate to be derived, by applying a result from projective geometry. Experiments carried out on a mobile robot platform show the promise of the approach. Peer Reviewed
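
    The abstract leaves the projective-geometry derivation to the paper itself, but the reason zooming can reveal depth at all, namely that the optical center shifts along the optic axis between focal settings, can be illustrated with a deliberately simplified pinhole model. The sketch below is not the paper's calibration-free method: it assumes the two focal lengths and the optical-center shift are known, and all names are purely illustrative.

```python
# Simplified pinhole illustration of depth from zooming (NOT the paper's
# calibration-free method): zooming shifts the optical center by `delta`
# along the optic axis, which acts as a tiny baseline.
#
# A target of unknown size S at depth Z projects with image size
# s1 = f1 * S / Z at the first zoom setting and s2 = f2 * S / (Z - delta)
# at the second; eliminating S yields Z.

def depth_from_zoom(s1: float, s2: float, f1: float, f2: float,
                    delta: float) -> float:
    """Depth of a frontoparallel planar target from two zoom settings.

    s1, s2 : measured image sizes of the target (pixels)
    f1, f2 : focal lengths at the two zoom settings (pixels)
    delta  : shift of the optical center between settings (scene units)
    """
    # s1/s2 = (f1/f2) * (Z - delta)/Z  =>  r = 1 - delta/Z
    r = (s1 * f2) / (s2 * f1)
    if abs(1.0 - r) < 1e-9:
        raise ValueError("no parallax: the optical center did not move")
    return delta / (1.0 - r)

# Example (invented numbers): f1=800 px, f2=1600 px, the target appears
# 100 px then 210 px wide, optical center moved 0.05 m while zooming in.
print(depth_from_zoom(100.0, 210.0, 800.0, 1600.0, 0.05))  # ~1.05 m
```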

    Zoom control to compensate camera translation within a robot egomotion estimation approach

    We previously proposed a method to estimate robot egomotion from the deformation of a contour in the images acquired by a robot-mounted camera [2, 1]. The fact that the contour must always be viewed under weak-perspective conditions limits the applicability of the method. In this paper, we overcome this limitation by controlling the zoom so as to compensate for robot translation along the optic axis. Our control entails minimizing an error signal derived directly from image measurements, without requiring any 3D information. Moreover, contrary to other 2D control approaches, no point correspondences are needed, since a parametric measure of contour deformation suffices. As a further advantage, the error signal is obtained as a byproduct of egomotion estimation and therefore adds no computational burden. Experimental results validate this zooming extension to the method. Moreover, robot translations are correctly computed, including those along the optic axis. Peer Reviewed
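
    The abstract does not state the control law. As a minimal sketch of the idea, the controller below proportionally drives the measured contour scale back to a reference scale, relying on projected size being proportional to f/Z under weak perspective; the hook functions, gain, and reference value are invented for illustration, not taken from the paper.

```python
# Minimal proportional zoom controller that holds the projected scale of a
# tracked contour constant, compensating translation along the optic axis.
# Keeping the projection size fixed preserves the weak-perspective regime.
# `measure_contour_scale` and `set_focal_length` are hypothetical hooks.

def zoom_control_step(f: float, s_measured: float, s_ref: float,
                      gain: float = 0.5) -> float:
    """Return an updated focal length from one scale measurement."""
    error = s_ref / s_measured - 1.0   # > 0: target shrank, so zoom in
    return f * (1.0 + gain * error)

# Example control loop (pseudo-devices):
# f = 800.0
# while tracking:
#     s = measure_contour_scale()     # e.g. sqrt of contour area, pixels
#     f = zoom_control_step(f, s, s_ref=120.0)
#     set_focal_length(f)
```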

    Zoom control to compensate camera translation within a robot egomotion estimation approach

    The final publication is available at link.springer.com. Zoom control has not received the attention one would expect in view of how it enriches the competences of a vision system. The possibility of changing the size of object projections not only permits analysing objects at a higher resolution, but may also improve tracking and, therefore, subsequent 3D motion estimation and reconstruction results. Of further interest to us, zoom control enables much larger camera motions, while fixating on the same target, than would be possible with fixed focal length cameras. This work is partially funded by the EU PACO-PLUS project FP6-2004-IST-4-27657. The authors thank Gabriel Pi for their contribution in preparing the experiments. Peer Reviewed. Postprint (author's final draft)

    Multiple View Geometry For Video Analysis And Post-production

    Multiple view geometry is the foundation of an important class of computer vision techniques for the simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area, including video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, the topic addressed in this dissertation, computer analysis of the camera's motion can replace the manual methods currently used to correctly align an artificially inserted object in a scene. However, existing single-view methods typically require multiple vanishing points, and therefore fail when only one vanishing point is available. In addition, current multiple-view techniques, which make use of either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there is no general solution to the problem of synchronizing N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, which is a necessary step for video post-production across different input videos.

    This dissertation proposes several advancements that overcome these limitations and uses them to develop an efficient framework for video analysis and post-production with multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available. This result extends the state of the art in single-view geometry to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited, for applications such as calibrating a network of cameras in video surveillance systems and Euclidean reconstruction from turn-table image sequences in the presence of zoom and focus. We then propose a new framework for the estimation and alignment of camera motions, covering both simple (panning, tracking and zooming) and complex (e.g. hand-held) camera motions. The accuracy of these results is demonstrated by applying our approach to video post-production tasks such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of camera geometry, the position and orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.
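
    One of the problems named above, synchronizing sequences from cameras undergoing similar ego-motions, can be illustrated in its simplest form as aligning two per-frame motion-magnitude signals by normalized cross-correlation. The sketch below assumes such motion magnitudes have already been extracted per frame; it is a toy baseline, not the dissertation's algorithm.

```python
import numpy as np

# Toy temporal synchronization of two videos with similar ego-motions:
# represent each video by a 1D signal of frame-to-frame motion magnitudes,
# then find the offset maximizing their normalized cross-correlation.

def sync_offset(m1: np.ndarray, m2: np.ndarray, max_lag: int) -> int:
    """Return lag such that m2[t + lag] best matches m1[t]."""
    a = (m1 - m1.mean()) / (m1.std() + 1e-12)
    b = (m2 - m2.mean()) / (m2.std() + 1e-12)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        lo = max(0, -lag)
        hi = min(len(a), len(b) - lag)
        if hi - lo < 10:                  # require some overlap
            continue
        score = float(np.dot(a[lo:hi], b[lo + lag:hi + lag])) / (hi - lo)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Example: m2 is m1 delayed by 7 frames, plus noise.
rng = np.random.default_rng(0)
m1 = rng.random(200)
m2 = np.concatenate([rng.random(7), m1])[:200] + 0.05 * rng.random(200)
print(sync_offset(m1, m2, max_lag=30))    # ~7
```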

    Monocular object pose computation with the foveal-peripheral camera of the humanoid robot Armar-III

    Active contour modelling is useful for fitting non-textured objects, and algorithms have been developed to recover the motion of an object along with its uncertainty. Here we show that these algorithms can also be used with point features matched in textured objects, and that active contours and point matches complement each other in a natural way. In the same manner, we also show that depth-from-zoom algorithms, developed for zooming cameras, can be exploited in the foveal-peripheral eye configuration present in the Armar-III humanoid robot. Peer Reviewed
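
    The complementarity of contour-based and point-based estimates suggests fusing the two. As one plausible illustration, and not necessarily the paper's scheme, two pose estimates with covariances can be combined by inverse-covariance weighting:

```python
import numpy as np

# Hedged illustration of fusing a contour-based and a point-based pose
# estimate: combine two Gaussian estimates of the same pose vector by
# inverse-covariance (information-form) weighting.

def fuse_estimates(x1, P1, x2, P2):
    """Fuse estimates x1 ~ N(., P1) and x2 ~ N(., P2) of the same state."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)        # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)       # fused mean
    return x, P

# Example with a 3-DOF pose (x, y, yaw): the contour estimate is confident
# in position, the point-based estimate in orientation (invented numbers).
x_contour = np.array([0.10, 0.02, 0.30])
P_contour = np.diag([1e-4, 1e-4, 1e-2])
x_points  = np.array([0.12, 0.01, 0.25])
P_points  = np.diag([1e-2, 1e-2, 1e-4])
x, P = fuse_estimates(x_contour, P_contour, x_points, P_points)
print(x)  # position near the contour estimate, yaw near the point estimate
```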

    Image stitching with perspective-preserving warping

    Image stitching algorithms often adopt a global transformation, such as a homography, and work well for planar scenes or parallax-free camera motions. However, these conditions are easily violated in practice. With casual camera motions, varying viewpoints, large depth changes, or complex structures, stitching such images is a challenging task. The global transformation model often produces poor stitching results, such as misalignments or projective distortions, especially perspective distortion. To this end, we propose a perspective-preserving warping for image stitching, which spatially combines local projective transformations with a similarity transformation. Through a weighted combination scheme, our approach gradually extrapolates the local projective transformations of the overlapping regions into the non-overlapping regions, so that the final warping changes smoothly from projective to similarity. The proposed method provides satisfactory alignment accuracy while reducing projective distortions and maintaining the multi-perspective view. Experiments on a variety of challenging images confirm the effectiveness of the approach. Comment: ISPRS 2016 - XXIII ISPRS Congress, Prague, Czech Republic, 2016
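
    The core of the warping described above is a per-point blend between a local projective transform and a global similarity. A minimal sketch of that blending step is shown below; the sigmoid-on-distance weight is invented for illustration and is not the paper's exact weighting scheme.

```python
import numpy as np

# Sketch of the key step of a perspective-preserving warp: blend a local
# homography H with a global similarity S so the warp stays projective in
# the overlap region and decays smoothly to similarity far from it.

def apply(T: np.ndarray, p) -> np.ndarray:
    """Apply a 3x3 transform to a 2D point and dehomogenize."""
    q = T @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def blend_warp(p, H: np.ndarray, S: np.ndarray,
               dist_to_overlap: float, tau: float = 100.0) -> np.ndarray:
    """Warp point p with a distance-weighted mix of H and S.

    dist_to_overlap : distance (pixels) from p to the overlap region
    tau             : falloff scale of the (invented) weight function
    """
    w = 1.0 / (1.0 + dist_to_overlap / tau)   # 1 in overlap, -> 0 far away
    return w * apply(H, p) + (1.0 - w) * apply(S, p)
```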

    Looking at instructional animations through the frame of virtual camera

    This thesis investigates the virtual camera and the function of camera movements in expository motion graphics for the purpose of instruction. Motion graphic design is a popular video production technique often employed to create instructional animations that present educational content through the persuasive presentation styles of the entertainment media industry. Adopting animation as a learning tool raises distinct concerns and challenges compared to its use in entertainment, and combining cognitive learning and emotive design aspects requires additional design considerations for each design element. The thesis addresses how camera movement supports the narrative and aesthetics of instructional animations. It does this by investigating the virtual camera at the technical, semiotic and psychological levels, culminating in a systematic categorization of functional camera movements on the basis of a conceptual framework that describes the hybrid integration of physical, cognitive and affective design aspects, and in a creative work, a comprehensive instructional animation presented as a case study, that demonstrates the categorized camera movements. Because the conceptual framework underlying this creative work draws on the techniques of both effective instructional video production and conventional entertainment filmmaking, the thesis also touches on the relationship between live action and animation in terms of directing and staging, concluding that the virtual camera as a design factor can be useful for supporting a narrative, evoking emotion and directing the audience’s focus while revealing, tracking and emphasizing information.

    DeepSurveyCam — A Deep Ocean Optical Mapping System

    Underwater photogrammetry, and in particular systematic visual surveys of the deep sea, is far less developed than similar techniques on land or in space. The main challenges are the rough conditions with extremely high pressure, the accessibility of target areas (container and ship deployment of robust sensors, then diving for hours to the ocean floor), and the limitations of localization technologies (no GPS). The absence of natural light complicates energy budget considerations for deep-diving, flash-equipped drones. Refraction effects influence geometric image formation with respect to field of view and focus, while attenuation and scattering degrade the radiometric image quality and limit the effective visibility. To address these issues, we present an AUV-based optical system intended for autonomous visual mapping of large areas of the seafloor (square kilometers) at up to 6000 m water depth. We compare it to existing systems, discuss tradeoffs such as resolution versus mapped area, and show results from a recent deployment covering 90,000 square meters of deep ocean floor.
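
    The resolution-versus-mapped-area tradeoff mentioned above reduces to simple camera-footprint arithmetic. The sketch below uses invented example numbers for altitude, field of view, sensor resolution, speed, and endurance; they are not DeepSurveyCam's specifications.

```python
import math

# Back-of-the-envelope footprint arithmetic for the resolution vs. mapped
# area tradeoff in visual seafloor surveys. All values are invented
# examples, not DeepSurveyCam's actual specifications.

altitude_m  = 4.0     # camera height above the seafloor
fov_deg     = 60.0    # across-track field of view
sensor_px   = 2000    # pixels across track
speed_mps   = 1.0     # survey speed
endurance_h = 8.0     # dive endurance spent surveying

swath_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))
gsd_mm  = swath_m / sensor_px * 1000.0           # ground sample distance
area_m2 = swath_m * speed_mps * endurance_h * 3600.0

print(f"swath {swath_m:.2f} m, GSD {gsd_mm:.2f} mm/px, "
      f"area {area_m2 / 1e6:.3f} km^2 per dive")
# Flying higher widens the swath (more area) but coarsens the GSD and
# demands more flash energy, since light falls off steeply underwater.
```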

    Widening the view angle of auto-multiscopic display, denoising low brightness light field data and 3D reconstruction with delicate details

    This doctoral thesis presents the results of my work on widening the viewing angle of automultiscopic displays, on denoising light field data captured in low-light circumstances, and on reconstructing object surfaces with delicate details from microscopy image sets.

    Automultiscopic displays carefully control the distribution of emitted light over space, direction (angle) and time, so that even a static displayed image can encode parallax across viewing directions (a light field). This allows simultaneous observation by multiple viewers, each perceiving 3D from their own (correct) perspective. Currently, the illusion can only be maintained effectively over a narrow range of viewing angles. We propose and analyze a simple solution to widen the range of viewing angles of automultiscopic displays that use parallax barriers: we insert a refractive medium, with a high refractive index, between the display and the parallax barriers. The inserted medium warps the exitant light field in a way that increases the potential viewing angle. We analyze the consequences of this warp and build a prototype with a 93% increase in the effective viewing angle. Additionally, we developed an integral-image synthesis method that handles the refraction introduced by the inserted medium efficiently, without resorting to ray tracing.

    Capturing a light field image with a short exposure time is preferable for eliminating motion blur, but in a low-light environment it also leads to low brightness and hence a low signal-to-noise ratio. Most light field denoising methods apply regular 2D image denoising directly to the sub-aperture images of a 4D light field, but this is not suitable for focused light field data, whose sub-aperture image resolution is too low for regular denoising methods. We therefore propose a deep learning denoising method based on the microlens images of a focused light field, which denoises the depth map and the original microlens image set simultaneously, and achieves high-quality totally focused images from low-light focused light field data.

    In areas such as digital museums and remote research, 3D reconstruction capturing the delicate details of subjects is desired, and 3D reconstruction based on macro photography has been used successfully for various purposes. We push this further by using a microscope rather than a macro lens, so as to capture microscopy-level details of the subject. We design and implement a robotic-arm-based scanning method that captures microscopy image sets from a curved surface, together with a 3D reconstruction method suited to such microscopy image sets.
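
    The widening effect of the inserted medium follows from Snell's law: for a fixed internal ray geometry between a display pixel and a barrier slit, a higher refractive index bends the exiting ray further from the normal. A small sketch of that calculation follows; the gap and offset values are invented for illustration, and the 93% figure above comes from the actual prototype, not from these numbers.

```python
import math

# Snell's-law illustration of why a high-index medium between the display
# and the parallax barrier widens the viewing angle. Dimensions are
# invented for illustration; they are not the prototype's values.

def exit_angle_deg(pixel_offset_mm: float, gap_mm: float, n: float) -> float:
    """Exit angle (degrees) into air of a ray from a display pixel that
    passes through a barrier slit laterally offset by pixel_offset_mm,
    across a gap of gap_mm filled with a medium of refractive index n."""
    theta_in = math.atan2(pixel_offset_mm, gap_mm)  # angle inside medium
    s = n * math.sin(theta_in)                      # Snell: n sin = 1 sin
    if s >= 1.0:
        return 90.0                                 # grazing-exit limit
    return math.degrees(math.asin(s))

offset_mm, gap_mm = 1.0, 3.0
print(exit_angle_deg(offset_mm, gap_mm, 1.0))  # air gap:     ~18.4 deg
print(exit_angle_deg(offset_mm, gap_mm, 1.7))  # high index:  ~32.5 deg
```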