7 research outputs found

    Camera self-calibration from unknown planar structures enforcing the multiview constraints between collineations


    An Efficient Solution to the Homography-Based Relative Pose Problem With a Common Reference Direction

    In this paper, we propose a novel approach to two-view minimal-case relative pose problems based on homography with a common reference direction. We explore the rank-1 constraint on the difference between the Euclidean homography matrix and the corresponding rotation, and propose an efficient two-step solution for solving both the calibrated and partially calibrated (unknown focal length) problems. We derive new 3.5-point, 3.5-point, and 4-point solvers for two cameras such that the two focal lengths are unknown but equal, one of them is unknown, and both are unknown and possibly different, respectively. We present detailed analyses and comparisons with existing 6- and 7-point solvers, including results with smartphone images.
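The rank-1 property this abstract builds on is standard for plane-induced homographies and can be checked numerically. The sketch below uses made-up synthetic values for the rotation, translation, plane normal and distance; the helper name is illustrative, not from the paper:

```python
import numpy as np

# For a Euclidean homography H = R + (1/d) * t @ n.T induced by a plane
# (normal n, distance d) between two views, the difference H - R is the
# outer product (1/d) * t @ n.T and therefore has rank 1.

def rotation_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Synthetic camera motion and scene plane (all values are made up).
R = rotation_z(0.3)                      # relative rotation
t = np.array([[0.5], [0.1], [0.2]])      # relative translation
n = np.array([[0.0], [0.0], [1.0]])      # plane normal in the first view
d = 2.0                                  # plane distance to the first camera

H = R + (t @ n.T) / d                    # Euclidean homography

# H - R should be numerically rank 1: its second and third singular
# values vanish up to floating-point noise.
sv = np.linalg.svd(H - R, compute_uv=False)
print(np.round(sv, 10))
```

The solvers in the paper exploit exactly this drop in rank as an algebraic constraint rather than checking it after the fact.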

    Flexible and User-Centric Camera Calibration using Planar Fiducial Markers

    The benefit of accurate camera calibration for recovering 3D structure from images is a well-studied topic. Recently, 3D vision tools for end-user applications have become popular among large audiences, mostly unskilled in computer vision. This motivates the need for a flexible and user-centric camera calibration method which substantially relaxes the critical requirements on the calibration target and ensures that low-quality or faulty images provided by end users do not degrade the overall calibration and, in effect, the resulting 3D model. In this paper we present and advocate an approach to camera calibration using fiducial markers, aiming at the accuracy of target-based calibration techniques without the requirement for a precise calibration pattern, to ease the calibration effort for the end user. An extensive set of experiments with real images is presented which demonstrates improvements in the estimation of the parameters of the camera model as well as accuracy in the multi-view stereo reconstruction of large-scale scenes. Pixel re-projection errors and ground-truth errors obtained by our method are significantly lower compared to popular calibration routines, even though paper-printable and easy-to-use targets are employed.
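The pixel re-projection error this abstract uses to score a calibration can be sketched as follows. This is a minimal pinhole-model illustration, not the paper's implementation; the intrinsics, pose and target points below are synthetic placeholders:

```python
import numpy as np

# Project known 3D target points with estimated intrinsics K and pose
# (R, t), and compare against the detected 2D corners.

def reprojection_rmse(K, R, t, points_3d, points_2d):
    """Root-mean-square re-projection error in pixels."""
    cam = (R @ points_3d.T + t).T              # points in camera frame, (N, 3)
    proj = (K @ cam.T).T                       # homogeneous pixel coords, (N, 3)
    proj = proj[:, :2] / proj[:, 2:3]          # perspective division
    residuals = proj - points_2d
    return float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))

# Synthetic example: a planar target 5 units in front of the camera,
# observed without noise, so the error should be essentially zero.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])
pts3d = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], dtype=float)
cam = (R @ pts3d.T + t).T
pts2d = (K @ cam.T).T
pts2d = pts2d[:, :2] / pts2d[:, 2:3]

print(reprojection_rmse(K, R, t, pts3d, pts2d))  # ≈ 0 for noise-free data
```

With real detections the residuals are nonzero, and comparing this RMSE across calibration methods is the standard way to rank them.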

    Multiple View Geometry For Video Analysis And Post-production

    Multiple view geometry is the foundation of an important class of computer vision techniques for simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area. Examples include video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, which is the topic being addressed in this dissertation, computer analysis of the motion of the camera can replace the currently used manual methods for correctly aligning an artificially inserted object in a scene. However, existing single-view methods typically require multiple vanishing points, and therefore would fail when only one vanishing point is available. In addition, current multiple-view techniques, making use of either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there does not exist a general solution to the problem of synchronization of N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, which is the necessary step for video post-production among different input videos. This dissertation proposes several advancements that overcome these limitations. These advancements are used to develop an efficient framework for video analysis and post-production in multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available. This result extends the current state of the art in single-view geometry techniques to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited in this dissertation for applications such as calibration of a network of cameras in video surveillance systems, and Euclidean reconstruction from turn-table image sequences in the presence of zoom and focus.
We then propose a new framework for the estimation and alignment of camera motions, including both simple (panning, tracking and zooming) and complex (e.g. hand-held) camera motions. Accuracy of these results is demonstrated by applying our approach to video post-production applications such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of camera geometry, the position and orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.
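The abstract's remark that single-view methods typically need multiple vanishing points can be made concrete with the classical focal-length constraint: under zero skew, unit aspect ratio and a known principal point, two vanishing points of orthogonal scene directions determine the focal length, whereas one alone does not. The camera values below are synthetic and the function name is mine:

```python
import numpy as np

# With principal point c = (cx, cy), vanishing points v1, v2 of two
# orthogonal 3D directions satisfy f^2 = -((v1 - c) . (v2 - c)).

def focal_from_orthogonal_vps(v1, v2, c):
    """Focal length from two vanishing points of orthogonal directions."""
    f2 = -np.dot(v1 - c, v2 - c)
    if f2 <= 0:
        raise ValueError("vanishing points inconsistent with the assumptions")
    return np.sqrt(f2)

# Synthetic data: image two orthogonal directions with a known camera.
f_true, c = 900.0, np.array([320.0, 240.0])
K = np.array([[f_true, 0.0, c[0]], [0.0, f_true, c[1]], [0.0, 0.0, 1.0]])
d1 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)   # two orthogonal 3D directions
d2 = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2)
v1h, v2h = K @ d1, K @ d2                     # homogeneous vanishing points
v1, v2 = v1h[:2] / v1h[2], v2h[:2] / v2h[2]

print(round(focal_from_orthogonal_vps(v1, v2, c), 3))  # recovers 900.0
```

The dissertation's contribution is precisely to supply extra inter-image constraints for the degenerate case where only one such vanishing point is available.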

    Reconstruction 3D personnalisée de la colonne vertébrale à partir d'images radiographiques non-calibrées

    3D stereo-radiographic reconstruction systems -- The vertebral column -- Adolescent idiopathic scoliosis -- Evolution of 3D reconstruction systems -- Image enhancement filters -- Segmentation techniques -- Calibration methods -- 3D reconstruction methods -- Problem statement, hypotheses, objectives and general method -- Three-dimensional reconstruction of the scoliotic spine and pelvis from uncalibrated biplanar X-ray images -- A versatile 3D reconstruction system of the spine and pelvis for clinical assessment of spinal deformities -- Simulation experiments -- Clinical validation -- A three-dimensional retrospective analysis of the evolution of spinal instrumentation for the correction of adolescent idiopathic scoliosis -- Self-calibration of an X-ray system from high-level primitives -- Segmentation of the vertebral column -- A hierarchical approach to self-calibration of an X-ray acquisition system -- Personalized 3D reconstruction of the scoliotic spine from hybrid statistical and X-ray image-based models -- Validation protocol

    Estimació del moviment de robots mitjançant contorns actius

    This thesis deals with the motion estimation of a mobile robot from changes in the images acquired by a camera mounted on the robot itself. The motion is deduced with an algorithm previously proposed in the framework of qualitative navigation. In order to employ this algorithm in real situations, a study of its accuracy has been performed. Moreover, relationships with the active vision paradigm have been analyzed, leading to an increase in its applicability. When perspective effects are not significant, two views of a scene are related by an affine transformation (or affinity), which is usually computed from point correspondences. In this thesis we explore an alternative and at the same time complementary approach, using the contour of an object modeled by means of an active contour. The framework is the following: when the robot moves, the projection of the object in the image changes and the active contour adapts conveniently to it; from the deformations of this contour, expressed in shape space, the robot egomotion can be extracted up to a scale factor. Active contours are characterized by the speed of their extraction and their robustness to partial occlusions. Moreover, a contour is easy to find even in poorly textured scenes, where it is often difficult to find point features and their correspondences. The goal of the first part of this work is to characterize the accuracy and the uncertainty of the motion estimation. Some practical experiences are carried out to evaluate the accuracy, showing the potential of the algorithm in real environments and with different robots.
We have also studied the epipolar geometry relating two views of a planar object. We prove that the affine epipolar direction between two images can be recovered from a shape vector when the camera motion is free of cyclorotation. With a battery of simulated as well as real experiments, the epipolar direction allows us to analyze the global accuracy of the affinity in a variety of situations: different contour shapes, extreme visualization conditions and presence of noise. Regarding uncertainty, since the implementation is based on a Kalman filter, for each motion estimate we also have its covariance matrix expressed in shape space. In order to propagate the uncertainty from shape space to 3D motion space, two different approaches have been followed: an analytical and a statistical one. This study has allowed us to determine which degrees of freedom are recovered with more accuracy, and what correlations exist between the different motion components. Finally, an algorithm to propagate the motion uncertainty at video rate has been proposed. One of the most important limitations of this methodology is that the object must project onto the image under weak-perspective visualization conditions all along the sequence. In the second part of this work, active contour tracking is studied within the framework of active vision to overcome this limitation. Both relate naturally, as active contour tracking can be seen as a focus-of-attention strategy. First, the properties of zooming cameras are studied and a new algorithm is proposed to estimate the depth of the camera with respect to an object. The algorithm includes a simple geometric calibration that does not require any knowledge of the camera internal parameters. Finally, in order to orient the camera so as to suitably compensate for robot motion when possible, a new algorithm has been proposed for the control of the zoom, pan and tilt mechanisms, and the motion estimation algorithm has been updated to incorporate the active camera state information.
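The point-correspondence baseline this abstract contrasts with (computing a 2D affinity between weak-perspective views) can be sketched by linear least squares. The ground-truth affinity and point sets below are synthetic placeholders, and the helper name is mine:

```python
import numpy as np

# Under weak perspective, two views are related by x' = A x + b, a 2D
# affinity with 6 parameters, recoverable from >= 3 non-collinear
# point correspondences by least squares.

def fit_affinity(src, dst):
    """Least-squares 2D affinity (A, b) mapping src points onto dst points."""
    n = src.shape[0]
    M = np.hstack([src, np.ones((n, 1))])             # design matrix, (n, 3)
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)  # stacked [A.T; b], (3, 2)
    A = params[:2].T                                  # (2, 2) linear part
    b = params[2]                                     # (2,) translation
    return A, b

# Ground-truth affinity and noise-free synthetic correspondences.
A_true = np.array([[1.1, -0.2], [0.15, 0.9]])
b_true = np.array([3.0, -1.5])
src = np.random.default_rng(0).uniform(-1, 1, size=(8, 2))
dst = src @ A_true.T + b_true

A_est, b_est = fit_affinity(src, dst)
print(np.allclose(A_est, A_true), np.allclose(b_est, b_true))  # True True
```

The thesis replaces these explicit point correspondences with active-contour deformations expressed in shape space, which is useful precisely where such point features are hard to find.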