34,903 research outputs found

    NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing Diverse Intrinsic and Extrinsic Camera Parameters

    Full text link
    Novel view synthesis using neural radiance fields (NeRF) is the state-of-the-art technique for generating high-quality images from novel viewpoints. Existing methods require a priori knowledge of the extrinsic and intrinsic camera parameters, which limits them to synthetic scenes or to real-world scenarios that require a preprocessing step. Current research on the joint optimization of camera parameters and NeRF focuses on refining noisy extrinsic parameters and often relies on preprocessed intrinsic parameters; other approaches cover only a single set of camera intrinsics. To address these limitations, we propose a novel end-to-end trainable approach called NeRFtrinsic Four. We utilize Gaussian Fourier features to estimate extrinsic camera parameters and dynamically predict varying intrinsic camera parameters under the supervision of the projection error. Our approach outperforms existing joint optimization methods on LLFF and BLEFF. In addition to these existing datasets, we introduce a new dataset called iFF with varying intrinsic camera parameters. NeRFtrinsic Four is a step forward in jointly optimized NeRF-based view synthesis and enables more realistic and flexible rendering in real-world scenarios with varying camera parameters.
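
    The Gaussian Fourier feature mapping mentioned above is not spelled out in the abstract; the minimal sketch below assumes the standard random Fourier feature formulation (a fixed Gaussian projection matrix B with gamma(x) = [sin(2*pi*Bx), cos(2*pi*Bx)]). All names and parameter values are illustrative, not those of NeRFtrinsic Four.

```python
import numpy as np

def gaussian_fourier_features(x, num_features=256, sigma=10.0, seed=0):
    """Map inputs x of shape (N, d) to features of shape (N, 2 * num_features).

    Uses a fixed random Gaussian projection B ~ N(0, sigma^2) and returns
    gamma(x) = [sin(2*pi*B x), cos(2*pi*B x)].
    """
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, sigma, size=(num_features, x.shape[-1]))  # fixed, not trained
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)
```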

    Minimum Uncertainty Triangle Paths for Multi Camera Calibration

    Get PDF
    Multi-camera systems are becoming increasingly important in computer vision. For many applications, however, the system has to be calibrated, i.e. the intrinsic and extrinsic parameters of the cameras have to be determined. We present a method for calibrating the extrinsic parameters without any scene knowledge or user interaction. In particular, we assume known intrinsic parameters and one image from each camera as input.
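
    The abstract does not describe the algorithm itself. As a loose illustration of the setting (known intrinsics, one image per camera), this hypothetical sketch recovers a pairwise relative pose from point correspondences with OpenCV; the paper's triangle-path method is different.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Estimate the pose of camera 2 relative to camera 1 from matched pixel
    coordinates pts1, pts2 (float arrays of shape (N, 2)) and shared 3x3
    intrinsics K. Translation is recovered only up to scale."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # rotation matrix and unit-norm translation direction
```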

    The Influence of Autofocus Lenses in the Camera Calibration Process

    Full text link
    Camera calibration is a crucial step in robotics and computer vision, since accurate camera parameters are necessary for robust applications. Nowadays, the camera calibration process consists of fitting a set of data to a pin-hole model, under the assumption that a reprojection error close to zero means the camera parameters are correct. Since all camera parameters are unknown, the computed results are taken to be true. However, the pin-hole model does not represent the camera behavior accurately when autofocus is considered: real cameras with autofocus lenses change the focal length slightly to keep objects in the image sharp, and this skews the calibration result if a single pin-hole model with a constant focal length is computed. In this article, a deep analysis of the camera calibration process is carried out to detect and strengthen its weaknesses when autofocus lenses are used. To demonstrate that significant errors exist in the computed extrinsic parameters, the camera is mounted on a robot arm so that the true extrinsic camera parameters are known with an accuracy under 1 mm. It is also demonstrated that errors in the extrinsic camera parameters are compensated by bias in the intrinsic camera parameters. Since significant errors exist with autofocus lenses, a modification of the widely accepted camera calibration method using images of a planar template is presented: a pin-hole model with a distance-dependent focal length is proposed to improve the calibration process substantially.
    Ricolfe Viala, C.; Esparza Peidro, A. (2021). The Influence of Autofocus Lenses in the Camera Calibration Process. IEEE Transactions on Instrumentation and Measurement, 70:1-15. https://doi.org/10.1109/TIM.2021.3055793
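
    As a rough illustration of the proposed idea, the sketch below projects points through a pin-hole model whose focal length varies with depth. The linear dependence f(Z) = f0 + a*Z and all names are assumptions for illustration; the article's exact model is not given in the abstract.

```python
import numpy as np

def project_autofocus(points_cam, f0=1000.0, a=0.01, cx=640.0, cy=360.0):
    """Pin-hole projection with a depth-dependent focal length.

    points_cam: (N, 3) points in the camera frame with Z > 0.
    f(Z) = f0 + a * Z is a hypothetical linear dependence.
    """
    X, Y, Z = points_cam.T
    f = f0 + a * Z                       # focal length varies with distance
    u = f * X / Z + cx                   # standard pin-hole projection
    v = f * Y / Z + cy
    return np.stack([u, v], axis=-1)
```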

    MC-NeRF: Multi-Camera Neural Radiance Fields for Multi-Camera Image Acquisition Systems

    Full text link
    Neural Radiance Fields (NeRF) employ multi-view images for 3D scene representation and have shown remarkable performance. As one of the primary sources of multi-view images, multi-camera systems encounter challenges such as varying intrinsic parameters and frequent pose changes. Most previous NeRF-based methods assume a single global camera and seldom consider scenarios with multiple cameras. Besides, some pose-robust methods remain susceptible to suboptimal solutions when poses are poorly initialized. In this paper, we propose MC-NeRF, a method that can jointly optimize both intrinsic and extrinsic parameters for bundle-adjusting Neural Radiance Fields. Firstly, we conduct a theoretical analysis to tackle the degenerate case and the coupling issue that arise from the joint optimization of intrinsic and extrinsic parameters. Secondly, based on the proposed solutions, we introduce an efficient calibration image acquisition scheme for multi-camera systems, including the design of a calibration object. Lastly, we present a global end-to-end network with a training sequence that enables the regression of intrinsic and extrinsic parameters, along with the rendering network. Moreover, since most existing datasets are designed for a single camera, we create a new dataset that includes four different styles of multi-camera acquisition systems, allowing readers to generate custom datasets. Experiments confirm the effectiveness of our method when each image corresponds to different camera parameters. Specifically, we adopt up to 110 images with 110 different intrinsic and extrinsic parameters to achieve 3D scene representation without providing initial poses. The code and supplementary materials are available at https://in2-viaun.github.io/MC-NeRF.
    Comment: This manuscript is currently under review.
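
    As a loose sketch of what jointly optimizing camera parameters with a rendering network can look like, the snippet below makes per-image focal lengths and poses learnable PyTorch parameters alongside a stand-in radiance field. MC-NeRF's actual parameterization, calibration scheme, and training sequence differ; see the project page.

```python
import torch

num_images = 110  # the paper uses up to 110 images, each with its own parameters

# Hypothetical per-image camera parameters, optimized jointly with the scene.
focals = torch.nn.Parameter(torch.full((num_images,), 500.0))  # per-image focal
poses = torch.nn.Parameter(torch.zeros(num_images, 6))         # se(3) pose vectors
nerf = torch.nn.Sequential(                                    # stand-in radiance field
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))

optimizer = torch.optim.Adam([focals, poses, *nerf.parameters()], lr=1e-3)
# In a training loop, rays would be generated from (focals[i], poses[i]),
# rendered through `nerf`, and a photometric loss against image i would push
# gradients into the camera parameters and the scene simultaneously.
```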

    Extrinsic Calibration of a Camera-Arm System Through Rotation Identification

    Get PDF
    Determining extrinsic calibration parameters is a necessity in any robotic system composed of actuators and cameras. Once a system is outside the lab environment, parameters must be determined without relying on outside artifacts such as calibration targets. We propose a method that relies on structured motion of an observed arm to recover extrinsic calibration parameters. Our method combines known arm kinematics with observations of conics in the image plane to calculate maximum-likelihood estimates for the calibration extrinsics. This method is validated in simulation and tested against a real-world model, yielding results consistent with ruler-based estimates. Our method shows promise for estimating the pose of a camera relative to an articulated arm's end effector without requiring tedious measurements or external artifacts.
    Index Terms: robotics, hand-eye problem, self-calibration, structure from motion
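
    For context, the classic target-based hand-eye formulation AX = XB that this targetless method sidesteps can be solved directly with OpenCV. This is a standard alternative shown for comparison, not the paper's conic-based estimator.

```python
import cv2

def hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve AX = XB for the camera-to-end-effector transform from lists of
    per-pose rotations and translations (requires a calibration target)."""
    R, t = cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    return R, t
```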

    Ray3D: ray-based 3D human pose estimation for monocular absolute 3D localization

    Full text link
    In this paper, we propose Ray3D, a novel monocular ray-based method for absolute 3D human pose estimation with a calibrated camera. Accurate and generalizable absolute 3D human pose estimation from monocular 2D pose input is an ill-posed problem. To address this challenge, we convert the input from pixel space to 3D normalized rays. This conversion makes our approach robust to changes in the camera intrinsic parameters. To deal with in-the-wild variations of the camera extrinsic parameters, Ray3D explicitly takes the camera extrinsic parameters as an input and jointly models the distribution between the 3D pose rays and the camera extrinsic parameters. This novel network design is the key to the outstanding generalizability of the Ray3D approach. To gain a comprehensive understanding of how variations in the camera intrinsic and extrinsic parameters affect the accuracy of absolute 3D keypoint localization, we conduct in-depth systematic experiments on three single-person 3D benchmarks as well as one synthetic benchmark. These experiments demonstrate that our method significantly outperforms existing state-of-the-art models. Our code and the synthetic dataset are available at https://github.com/YxZhxn/Ray3D .
    Comment: Accepted by CVPR 2022
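
    The pixel-to-ray conversion described above has a standard form: back-project each 2D keypoint through the inverse intrinsic matrix and normalize. A minimal sketch with illustrative names:

```python
import numpy as np

def pixels_to_rays(uv, K):
    """Convert 2D keypoints uv of shape (N, 2) into unit-norm 3D rays.

    Back-projecting through K^{-1} removes the focal length and principal
    point from the representation, which is what makes the ray input robust
    to intrinsic parameter changes.
    """
    ones = np.ones((uv.shape[0], 1))
    rays = np.linalg.inv(K) @ np.concatenate([uv, ones], axis=1).T  # (3, N)
    rays /= np.linalg.norm(rays, axis=0, keepdims=True)             # unit norm
    return rays.T
```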

    TEScalib: Targetless Extrinsic Self-Calibration of LiDAR and Stereo Camera for Automated Driving Vehicles with Uncertainty Analysis

    Get PDF
    In this paper, we present TEScalib, a novel extrinsic self-calibration approach for LiDAR and stereo cameras that uses the geometric and photometric information of the surrounding environment, without any calibration targets, for automated driving vehicles. Since LiDAR and stereo cameras are widely used for sensor data fusion on automated driving vehicles, their extrinsic calibration is highly important. However, most LiDAR and stereo camera calibration approaches are target-based and therefore time-consuming. Even the targetless approaches developed in recent years are either inaccurate or unsuitable for driving platforms. To address those problems, we introduce TEScalib. By applying a 3D mesh reconstruction-based point cloud registration, the geometric information is used to estimate the LiDAR-to-stereo-camera extrinsic parameters accurately and robustly. To calibrate the stereo camera, a photometric error function is built in which LiDAR depth is used to transform key points from one camera to the other. During driving, these two parts are processed iteratively. In addition, we propose an uncertainty analysis that reflects the reliability of the estimated extrinsic parameters. Our TEScalib approach, evaluated on the KITTI dataset, achieves very promising results.
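
    The photometric term described above can be sketched as follows: key points are back-projected with LiDAR depth, transformed into the other camera, and intensities are compared. All names and the shared-intrinsics assumption are illustrative, and bounds checking is omitted; the paper's actual error function may differ.

```python
import numpy as np

def photometric_error(img_left, img_right, pts_left, depth, K, R, t):
    """Mean squared intensity difference after warping keypoints from the
    left to the right camera using LiDAR depth and the extrinsics (R, t).

    pts_left: (N, 2) integer pixel coordinates; depth: (N,) LiDAR depths.
    Assumes both cameras share intrinsics K; out-of-bounds checks omitted.
    """
    ones = np.ones((pts_left.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts_left, ones]).T).T  # back-project
    pts_3d = rays * depth[:, None]                               # scale by LiDAR depth
    proj = (K @ (R @ pts_3d.T + t[:, None])).T                   # into right camera
    uv = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
    i_left = img_left[pts_left[:, 1], pts_left[:, 0]].astype(float)
    i_right = img_right[uv[:, 1], uv[:, 0]].astype(float)
    return np.mean((i_left - i_right) ** 2)
```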