Stochastic Bundle Adjustment for Efficient and Scalable 3D Reconstruction
Current bundle adjustment solvers such as the Levenberg-Marquardt (LM)
algorithm are limited by the bottleneck of solving the Reduced Camera System
(RCS), whose dimension is proportional to the number of cameras. When the problem is
scaled up, this step is neither efficient in computation nor manageable for a
single compute node. In this work, we propose a stochastic bundle adjustment
algorithm that seeks to decompose the RCS approximately inside the LM
iterations to improve efficiency and scalability. It first reformulates the
quadratic programming problem of an LM iteration based on a clustering of the
visibility graph, introducing equality constraints across clusters. Then,
we relax it into a chance-constrained problem and solve it through a
sampled convex program. The relaxation is intended to eliminate the
interdependence between clusters embodied by the constraints, so that a large
RCS can be decomposed into independent linear sub-problems. Numerical
experiments on unordered Internet image sets and sequential SLAM image sets, as
well as distributed experiments on large-scale datasets, have demonstrated the
high efficiency and scalability of the proposed approach. Code is released at
https://github.com/zlthinker/STBA. Comment: Accepted by ECCV 2020.
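A minimal sketch of the core idea in Python (illustrative names, not the released STBA code): once the inter-cluster equality constraints are relaxed, each LM iteration can solve the diagonal block of the RCS belonging to each camera cluster independently, and hence in parallel or on separate nodes.

```python
import numpy as np

def solve_rcs_by_cluster(S, g, clusters):
    """Solve S dx = g approximately by ignoring inter-cluster coupling.

    S: (6m x 6m) reduced camera system for m cameras (6 params each).
    g: (6m,) right-hand side of the LM normal equations.
    clusters: lists of camera indices from the visibility-graph clustering.
    """
    dx = np.zeros_like(g)
    for cluster in clusters:
        idx = np.concatenate([np.arange(6 * c, 6 * (c + 1)) for c in cluster])
        # Each sub-problem touches only one cluster's cameras, so these
        # solves are independent and can be distributed across compute nodes.
        dx[idx] = np.linalg.solve(S[np.ix_(idx, idx)], g[idx])
    return dx
```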
CamP: Camera Preconditioning for Neural Radiance Fields
Neural Radiance Fields (NeRF) can be optimized to obtain high-fidelity 3D
scene reconstructions of objects and large-scale scenes. However, NeRFs require
accurate camera parameters as input -- inaccurate camera parameters result in
blurry renderings. Extrinsic and intrinsic camera parameters are usually
estimated using Structure-from-Motion (SfM) methods as a pre-processing step to
NeRF, but these techniques rarely yield perfect estimates. Thus, prior works
have proposed jointly optimizing camera parameters alongside a NeRF, but these
methods are prone to local minima in challenging settings. In this work, we
analyze how different camera parameterizations affect this joint optimization
problem, and observe that standard parameterizations exhibit large differences
in magnitude with respect to small perturbations, which can lead to an
ill-conditioned optimization problem. We propose using a proxy problem to
compute a whitening transform that eliminates the correlation between camera
parameters and normalizes their effects, and we propose to use this transform
as a preconditioner for the camera parameters during joint optimization. Our
preconditioned camera optimization significantly improves reconstruction
quality on scenes from the Mip-NeRF 360 dataset: we reduce error rates (RMSE)
by 67% compared to state-of-the-art NeRF approaches that do not optimize for
cameras like Zip-NeRF, and by 29% relative to state-of-the-art joint
optimization approaches using the camera parameterization of SCNeRF. Our
approach is easy to implement, does not significantly increase runtime, can be
applied to a wide variety of camera parameterizations, and can
straightforwardly be incorporated into other NeRF-like models. Comment: SIGGRAPH Asia 2023. Project page: https://camp-nerf.github.io
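A hedged NumPy sketch of the preconditioning idea (the names and the eigendecomposition route are my assumptions, not the paper's implementation): a proxy Jacobian J, obtained by projecting a few scene points through the camera, defines a whitening matrix P under which all parameter directions perturb the projections by comparable magnitudes.

```python
import numpy as np

def camera_whitening_preconditioner(J, eps=1e-8):
    """J: (num_proxy_residuals x num_camera_params) proxy-problem Jacobian."""
    cov = J.T @ J + eps * np.eye(J.shape[1])  # parameter correlations and scales
    w, V = np.linalg.eigh(cov)
    # Inverse matrix square root, cov^(-1/2): decorrelates the parameters
    # and normalizes the magnitude of their effect on the projections.
    return V @ np.diag(w ** -0.5) @ V.T

# Usage (illustrative): optimize phi instead of the raw parameters theta,
# with theta = theta0 + P @ phi, so the joint NeRF/camera optimization
# sees a better-conditioned camera block.
```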
Parallel Bundle Adjustment of High Resolution Satellite Imagery
Bundle adjustment is the process of minimizing errors in camera and three-dimensional structure parameters. It is applicable to many areas of geospatial awareness, computer vision, robotics, and imaging, both terrestrial and remote sensing. In the case of remote sensing and planetary imaging, current methods do not adequately address geographic areas consisting of a large number of images and image observations. Other application domains focus on a single portion of the bundle adjustment process, the solution of a linear system, but ignore the computation of the coefficient matrix.

In this thesis we propose a fully parallel approach to the bundle adjustment problem. This approach includes parallel computation of the required partial derivatives, which also addresses the load imbalance inherent in the problem, a parallel solution of the required linear system, and novel parallel preconditioning techniques for this system. Additionally, we investigate the use of a relational database to enable fast recomputation when images are added or removed.

As other research has shown, preconditioning the linear system that arises in the bundle adjustment problem is critical. We present two novel parallel preconditioners, based on the geographic information of the input data. These preconditioners are specific to the planetary imaging application domain and address the particular matrix structure that arises in this area.

We show that the parallel derivative methods achieve a high level of parallel efficiency and work well with a parallel, distributed-memory linear solver. The demonstrated preconditioners yield a tangible reduction in the number of required solver iterations. Lastly, because these problems are solved many times for various applications, we present a database-backed method that stores derivative information, allowing projects to be re-run quickly or modified slightly without a large recomputation cost. All of these elements result in a completely parallel bundle adjustment system capable of processing large geographic areas with millions of image observations.
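A minimal sketch of the kind of parallel derivative accumulation described above (my assumption of the setup; the thesis' actual system is distributed-memory): observations are partitioned into chunks, each worker accumulates its contribution to the normal equations, and the partial sums are reduced.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def accumulate_chunk(jacobian_chunk):
    """Sum J^T J over one chunk of per-observation Jacobians (2 x 6 each)."""
    JtJ = np.zeros((6, 6))
    for J in jacobian_chunk:
        JtJ += J.T @ J
    return JtJ

def parallel_normal_matrix(chunks, workers=8):
    # Partitioning by chunks of observations, rather than by image,
    # mitigates the load imbalance of unevenly observed cameras.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(accumulate_chunk, chunks))
```

On spawn-based platforms the call would need to sit under an `if __name__ == "__main__":` guard; a real system would reduce the partial sums across nodes rather than in a single process pool.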
Wideband Super-resolution Imaging in Radio Interferometry via Low Rankness and Joint Average Sparsity Models (HyperSARA)
We propose a new approach within the versatile framework of convex
optimization to solve the radio-interferometric wideband imaging problem. Our
approach, dubbed HyperSARA, solves a sequence of weighted nuclear norm and
l2,1-norm minimization problems promoting low rankness and joint average sparsity of the
wideband model cube. On the one hand, enforcing low rankness enhances the
overall resolution of the reconstructed model cube by exploiting the
correlation between the different channels. On the other hand, promoting joint
average sparsity improves the overall sensitivity by rejecting artefacts
present on the different channels. An adaptive Preconditioned Primal-Dual
algorithm is adopted to solve the minimization problem. The algorithmic
structure is highly scalable to large data sets and allows for imaging in the
presence of unknown noise levels and calibration errors. We showcase the
superior performance of the proposed approach, reflected in high-resolution
images on simulations and real VLA observations with respect to single channel
imaging and the CLEAN-based wideband imaging algorithm in the WSCLEAN software.
Our MATLAB code is available online on GitHub.
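In notation of my own choosing (a hedged reconstruction; the paper's exact weighting of the two terms may differ), the type of problem described above can be written as a data-constrained minimization over the wideband cube X, with Phi_l the measurement operator and y_l the visibilities of channel l:

```latex
\min_{\mathbf{X}} \;
  \mu\,\|\mathbf{X}\|_{*}
  + \|\boldsymbol{\Psi}^{\dagger}\mathbf{X}\|_{2,1}
\quad \text{subject to} \quad
\|\mathbf{y}_{\ell} - \boldsymbol{\Phi}_{\ell}\,\mathbf{x}_{\ell}\|_{2}
  \le \epsilon_{\ell}
\quad \text{for each channel } \ell .
```

The nuclear norm couples the channels and promotes low rankness; the l2,1 term promotes joint average sparsity in the dictionary Psi; the "sequence of weighted problems" refers to reweighting both norms across iterations.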
Reconstruction of 3D Points From Uncalibrated Underwater Video
This thesis presents a 3D reconstruction software pipeline that is capable of generating
point cloud data from uncalibrated underwater video. This research project was undertaken
as a partnership with 2G Robotics, and the pipeline described in this thesis will become
the 3D reconstruction engine for a software product that can generate photo-realistic 3D
models from underwater video. The pipeline proceeds in three stages: video tracking,
projective reconstruction, and autocalibration.
Video tracking serves two functions: tracking recognizable feature points and selecting well-spaced
keyframes with a wide enough baseline to be used in the reconstruction. Tracking is accomplished
using Lucas-Kanade optical flow as implemented in the OpenCV toolkit. This simple and
widely used method is well-suited to underwater video, which is taken by carefully piloted
and slow-moving underwater vehicles.
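A minimal sketch of this tracking step using OpenCV's pyramidal Lucas-Kanade implementation (the parameter values and the keyframe test are illustrative assumptions, not the thesis' settings):

```python
import cv2
import numpy as np

def track(prev_gray, next_gray, prev_pts):
    """prev_pts: (N, 1, 2) float32 feature locations in the previous frame."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], next_pts[good]

def wide_enough_baseline(p0, p1, min_median_disp=30.0):
    # Promote a frame to keyframe once the tracked features have moved
    # far enough to give a stable baseline for reconstruction.
    return np.median(np.linalg.norm(p1 - p0, axis=-1)) > min_median_disp
```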
Projective reconstruction is the process of simultaneously calculating the motion of the
cameras and the 3D location of observed points in the scene. This is accomplished using
a geometric three-view technique. Results are presented
showing that the projective reconstruction algorithm detailed here compares favourably to
state-of-the-art methods.
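For illustration, a hedged two-view sketch of projective reconstruction (the thesis uses a three-view technique; this is the textbook two-view analogue): recover a fundamental matrix from tracked correspondences, build a canonical projective camera pair, and triangulate.

```python
import cv2
import numpy as np

def projective_two_view(p0, p1):
    """p0, p1: (N, 1, 2) float32 matched points in two keyframes."""
    F, _inliers = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.999)
    e1 = np.linalg.svd(F.T)[2][-1]        # epipole in view 2: F^T e' = 0
    ex = np.array([[0, -e1[2], e1[1]],    # skew-symmetric [e']_x
                   [e1[2], 0, -e1[0]],
                   [-e1[1], e1[0], 0]])
    # Canonical projective pair: P0 = [I | 0], P1 = [[e']_x F | e'].
    P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = np.hstack([ex @ F, e1.reshape(3, 1)])
    X = cv2.triangulatePoints(P0, P1,
                              p0.reshape(-1, 2).T, p1.reshape(-1, 2).T)
    return X / X[3]                        # homogeneous 3D points, up to a
                                           # projective transformation
```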
Autocalibration is the process of transforming a projective reconstruction, which is not
suitable for visualization or measurement, into a metric space where it can be used. This
is the most challenging part of the 3D reconstruction pipeline, and this thesis presents a
novel autocalibration algorithm. Results are shown for two existing cost function-based
methods in the literature which failed when applied to underwater video, as well as the
proposed hybrid method. The hybrid method combines the best parts of its two parent
methods, and produces good results on underwater video.
Final results are shown for the 3D reconstruction pipeline operating on short underwater
video sequences to produce visually accurate 3D point clouds of the scene, suitable
for photorealistic rendering. Although further work remains to extend and improve the
pipeline for operation on longer sequences, this thesis presents a proof-of-concept method
for 3D reconstruction from uncalibrated underwater video.
Enhancement to Camera Calibration: Representation, Robust Statistics, and 3D Calibration Tool
This thesis demonstrates the enhancement to camera calibration in three aspects: representation of pose, robust statistics, and a 3D calibration tool. Camera calibration is the reconstruction of digital camera information from digital images of an object in 3D space, since digital images are 2D projections of a 3D object onto the camera sensor. Specifically, it is the estimation of the interior orientation (IO) parameters and exterior orientation (EO) parameters of a digital camera. Camera calibration is an essential part of image metrology: if its quality cannot be guaranteed, neither can the reliability of the subsequent analysis and applications based on digital images.
The first enhancement of camera calibration is in the representation of pose. A formal definition of the singularity of a representation is given mathematically, and an example is offered to show how singularity can lead to difficulty or failure in optimization. The spherical coordinate system is introduced as a representation method in place of other widely used representations; it represents camera poses with respect to images of the camera calibration tool. With the introduction of the v frame in digital images, the singularities of the spherical coordinate system are demonstrated mathematically.
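As a standard illustration of the kind of singularity at issue (my notation, not the thesis'), a viewing direction parameterized in spherical coordinates loses a degree of freedom at the poles:

```latex
d(\theta, \varphi) =
  (\sin\theta\cos\varphi,\; \sin\theta\sin\varphi,\; \cos\theta),
\qquad
\left.\frac{\partial d}{\partial \varphi}\right|_{\theta \in \{0,\pi\}} = 0 ,
```

so at the poles the azimuth has no effect on the pose, the Jacobian of the parameterization drops rank, and gradient-based optimization can stall or fail.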
The application of robust statistics in optimization is the second enhancement of camera calibration. In photogrammetry, it is typical to collect thousands of observed data points for bundle adjustment. Unexpected outliers in the observed data are unavoidable, and thus the accuracy of the algorithm may fall short of the goal. The least squares estimator is a widely used estimation method in camera calibration, but its sensitivity to outliers makes the algorithm unreliable, and it can even fail to fit the observations. By closely analyzing and comparing the characteristics of the least squares estimator, robust estimators with alternative assumptions are shown to detect and de-weight outliers that are not well handled under the classical assumptions, and to provide a reliable fit to the observations. Among the possible robust estimators, two from the M-estimator family are applied to the optimization in an existing camera calibration algorithm. The robustified method considerably improves the accuracy of camera calibration estimation.
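A minimal sketch of how an M-estimator de-weights outliers inside least squares (assuming a Huber-type estimator; the abstract does not name the two estimators used):

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Unit weight inside the threshold, down-weighted k/|r| outside."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > k
    w[mask] = k / a[mask]
    return w

def irls(A, b, iters=10):
    """Iteratively re-weighted least squares for a robustified fit of Ax = b."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS as the start
    for _ in range(iters):
        sw = np.sqrt(huber_weights(b - A @ x))  # residual-driven weights
        x = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)[0]
    return x
```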
A new metric \bar{D} is introduced: the distance between two camera calibrations considering all of the estimated camera IO parameters. \bar{D} can be used to compare the performance of various estimators. After applying the robust estimator, the system improves the accuracy and performance of camera calibration by up to 25%. The influence of a modification to the robustified estimator is also considered; it is established that the modification has an impact on the estimation accuracy.
The third enhancement is the design and application of a 3D calibration tool for data collection. An all-new 3D calibration tool is designed to improve camera calibration accuracy over the 2D calibration tool. The comparison of the 3D and 2D calibration tools is conducted experimentally and theoretically. The experimental analysis is based on camera calibration results and the corresponding \bar{D} matrix, which shows that the 3D calibration tool improves accuracy. The mathematical analysis is based on the calculated covariance matrix of camera calibration without other impact factors. Together, the experimental and theoretical analyses show that the 3D calibration tool obtains more accurate calibration results than the 2D calibration tool, establishing that a carefully designed 3D calibration tool will yield better estimates than a 2D calibration tool.
Proton magnetic resonance spectroscopy in skeletal muscle: Experts' consensus recommendations
H-1-MR spectroscopy of skeletal muscle provides insight into metabolism that is not available noninvasively by other methods. The recommendations given in this article are intended to guide those who have basic experience in general MRS to the special application of H-1-MRS in skeletal muscle. The highly organized structure of skeletal muscle leads to effects that change spectral features far beyond simple peak heights, depending on the type and orientation of the muscle. Specific recommendations are given for the acquisition of three particular metabolites (intramyocellular lipids, carnosine and acetylcarnitine), for the preconditioning of experiments, and for instructions to study volunteers.