31 research outputs found

    Selecting surface features for accurate multi-camera surface reconstruction

    This paper proposes a novel feature detector for selecting local textures that are suitable for accurate multi-camera surface reconstruction, in particular planar patch fitting techniques. This approach contrasts with conventional feature detectors, which focus on repeatability under scale and affine transformations rather than suitability for multi-camera reconstruction techniques. The proposed detector selects local textures that are sensitive to affine transformations, a fundamental requirement for accurate patch fitting. The proposed detector is evaluated against the SIFT detector on a synthetic dataset, and the fitted patches are compared against ground truth. The experiments show that patches originating from the proposed detector are fitted more accurately to the visible surfaces than those originating from SIFT keypoints. In addition, the detector is evaluated on a performance capture studio dataset to demonstrate its real-world application.
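The detector's core idea, preferring textures whose appearance responds strongly to affine warps, can be illustrated with a structure-tensor score. This is a hypothetical proxy for the paper's criterion, not its actual detector; the function name and the eigenvalue-based score are illustrative assumptions:

```python
import numpy as np

def affine_sensitivity_score(patch):
    """Score a local texture's sensitivity to affine distortion.

    Hypothetical proxy (not the paper's exact criterion): textures whose
    gradients span two strong, independent directions -- both eigenvalues
    of the 2x2 structure tensor large -- change appreciably under warps.
    """
    gy, gx = np.gradient(patch.astype(float))
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return np.linalg.eigvalsh(J)[0]  # smaller eigenvalue of the tensor

# A checkerboard-like patch should outscore a textureless one.
xx, yy = np.meshgrid(np.arange(16), np.arange(16))
checker = ((xx // 4 + yy // 4) % 2).astype(float)
flat = np.ones((16, 16))
```

A flat patch scores zero (no gradients), while a two-directional texture scores strictly positive, which is the kind of selectivity the abstract describes.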


    Extrinsic calibration of a set of RGB-D cameras on a mobile robot

    The emergence of RGB-D cameras as low-cost robotic sensors has led to the routine inclusion of several of these devices in a growing number of vehicles and robots. In such cases, precise calibration of the spatial transformations between the cameras on the same robot is of paramount importance for obtaining reliable measurements of the environment. This article evaluates the closed-form calibration method described in [7] and extends it with an alternative proposal based on an iterative method, plus a robust extension of the latter, in two scenarios: i) a simulated environment with varying observation noise levels, numbers of observations, outlier proportions, and relative camera positions, and ii) a particular configuration of 3 RGB-D cameras on a real robot. The evaluation results show higher accuracy for our robust iterative proposal in all the scenarios analysed. The source code of the C++ implementation of these methods is publicly available. Project PROMOVE: DPI2014-55826-R (MINECO). Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
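Closed-form extrinsic calibration of this kind typically rests on the classic least-squares rigid alignment of corresponding 3D points. A minimal sketch of that building block (Kabsch/Horn style, not necessarily the exact formulation of [7]):

```python
import numpy as np

def rigid_transform_closed_form(P, Q):
    """Closed-form least-squares rigid transform (Kabsch/Horn style)
    mapping Nx3 points P onto corresponding points Q.

    A generic building block of the kind closed-form calibration uses;
    the formulation in the cited paper may differ.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps det(R) = +1 (a proper rotation).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known rotation about z plus a translation from noiseless data.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform_closed_form(P, Q)
```

With noiseless correspondences the estimate is exact; the abstract's point is that an iterative, robust extension degrades more gracefully under noise and outliers than this closed-form step alone.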

    Calibration of multiple cameras for large-scale experiments using a freely moving calibration target

    Obtaining accurate experimental data from Lagrangian tracking and tomographic velocimetry requires an accurate camera calibration that is consistent over multiple views. Established calibration procedures are often challenging to implement when the length scale of the measurement volume exceeds that of a typical laboratory experiment. Here, we combine tools developed in computer vision with the non-linear camera mappings used in experimental fluid mechanics to successfully calibrate a four-camera setup imaging inside a large tank of dimensions ∼10×25×6 m³. The calibration procedure uses a planar checkerboard that is arbitrarily positioned at unknown locations and orientations, and the method can be applied to any number of cameras. The calibration parameters yield direct estimates of the positions and orientations of the four cameras as well as the focal lengths of the lenses; these parameters are used to assess the quality of the calibration. The calibration allows us to perform accurate and consistent linear ray-tracing, which we use to triangulate and track fish inside the large tank. An open-source implementation of the calibration in Matlab is available.
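The linear ray-tracing and triangulation step the abstract mentions can be sketched with standard DLT triangulation. Identity intrinsics and the helper `project` are simplifying assumptions for illustration, not the paper's calibration model:

```python
import numpy as np

def triangulate_linear(projections, pixels):
    """Linear (DLT) triangulation of one 3D point from n >= 2 views.

    projections: list of 3x4 projection matrices; pixels: list of (u, v).
    Each view contributes the rows u*P[2]-P[0] and v*P[2]-P[1] of a
    homogeneous system, solved via SVD.
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    X = Vt[-1]                # right singular vector of smallest sigma
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection with matrix P (identity intrinsics assumed)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two unit-intrinsics cameras separated by a 1 m baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])
X_est = triangulate_linear([P1, P2],
                           [project(P1, X_true), project(P2, X_true)])
```

The same routine extends to four (or more) cameras simply by stacking two rows per view, which is why a consistent multi-view calibration matters: errors in any one projection matrix bias the joint solution.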

    Computer Vision/Computer Graphics Collaboration Techniques


    A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object; we show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions, including rigid transformation, polynomial transformation, and manifold regression, to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment of the merged point clouds.
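Using a spherical calibration object implies locating the sphere's center in each view, which reduces to fitting a sphere to 3D points. A minimal sketch of the standard linear least-squares sphere fit, assuming clean depth points on the sphere surface (the paper's actual pipeline is more involved):

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit to Nx3 points.

    Expanding |p - c|^2 = r^2 gives a system linear in c and
    k = r^2 - |c|^2:   2 p.c + k = |p|^2.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Points sampled on a sphere of known center and radius.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 2.0 * dirs
center_est, radius_est = fit_sphere(pts)
```

The recovered center gives one 3D correspondence per camera per sphere placement; collecting these across placements supplies the shared features that sparse views otherwise lack.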

    Multi-insect visual tracking and system identification techniques for inflight feedback interaction analysis

    Individual insects flying in crowded assemblies perform complex aerial maneuvers through small changes in their wing motions. To understand the individual feedback rules that permit these fast, adaptive behaviors in group flight, a high-speed tracking system is needed that can simultaneously track both body motions and these more subtle wing motion changes for multiple insects, extending tracking beyond the previous focus on individuals. Our system tracks multiple insects using high-speed cameras (9000 fps). To improve the biological validity of laboratory experiments, we tested this measurement system with Apis mellifera foragers habituated to transit flights through a test chamber. Processing steps consist of data association, hull reconstruction, and segmentation. An analysis based on multiple flight trajectories is presented, including the differences between flight in open and confined areas containing multiple insects and the differences due to ethanol treatment. A system identification framework for extracting the interaction rules in multi-agent insect trajectories is developed.
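The data-association step named in the processing pipeline can be illustrated with a greedy nearest-neighbour matcher. This is a generic sketch under an assumed Euclidean distance gate, not the paper's specific method:

```python
import numpy as np

def associate_detections(tracks, detections, max_dist):
    """Greedy nearest-neighbour data association (a generic sketch;
    the paper's exact association step is not specified here).

    Pairs each predicted track position with its closest unclaimed
    detection, closest pairs first, ignoring pairs beyond max_dist.
    """
    pairs = sorted(
        (float(np.linalg.norm(t - d)), i, j)
        for i, t in enumerate(tracks)
        for j, d in enumerate(detections)
    )
    matches, used = {}, set()
    for dist, i, j in pairs:
        if dist > max_dist:
            break                 # remaining pairs are farther still
        if i in matches or j in used:
            continue
        matches[i] = j
        used.add(j)
    return matches

# Two tracks, two detections; each claims its nearest neighbour.
tracks = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
detections = np.array([[9.5, 0.0, 0.0], [0.2, 0.0, 0.0]])
matches = associate_detections(tracks, detections, max_dist=2.0)
```

With many insects in close proximity, a global assignment (e.g. Hungarian algorithm) is usually preferred over this greedy scheme, since greedy matching can cascade errors when targets cross.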