40 research outputs found

    General and Nested Wiberg Minimization

    Wiberg matrix factorization breaks a matrix Y into low-rank factors U and V by solving for V in closed form given U, linearizing V(U) about U, and iteratively minimizing ||Y − UV(U)||_2 with respect to U only. This approach factors the matrix while effectively removing V from the minimization. Recently, Eriksson and van den Hengel extended this approach to L1, minimizing ||Y − UV(U)||_1. We generalize their approach beyond factorization to minimize an arbitrary function that is nonlinear in each of two sets of variables. We demonstrate the idea with a practical Wiberg algorithm for L1 bundle adjustment. We also show that one Wiberg minimization can be nested inside another, effectively removing two of three sets of variables from a minimization. We demonstrate this idea with a nested Wiberg algorithm for L1 projective bundle adjustment, solving for camera matrices, points, and projective depths. We also revisit L1 factorization, giving a greatly simplified presentation of Wiberg L1 factorization, and presenting a successive linear programming factorization algorithm. Successive linear programming outperforms L1 Wiberg for most large inputs, establishing a new state of the art for those cases.
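    The elimination step above is easy to illustrate in the classical L2 setting: solve for V in closed form given U, then hand the reduced problem in U alone to a least-squares solver. The sketch below is a minimal variable-projection version of that idea, assuming dense data with no missing entries; it is not the paper's L1 algorithm, which relies on linear programming instead.

        # Variable-projection sketch of L2 Wiberg-style factorization.
        # Y (m x n) is approximated by U (m x r) @ V (r x n); V is eliminated
        # in closed form, so the outer solver sees a problem in U only.
        import numpy as np
        from scipy.optimize import least_squares

        def factorize_l2(Y, r, seed=0):
            m, _ = Y.shape
            U0 = np.random.default_rng(seed).standard_normal((m, r))

            def v_of_u(U):
                # Closed-form V given U: ordinary least squares.
                return np.linalg.lstsq(U, Y, rcond=None)[0]

            def residuals(u_flat):
                U = u_flat.reshape(m, r)
                return (Y - U @ v_of_u(U)).ravel()

            U = least_squares(residuals, U0.ravel()).x.reshape(m, r)
            return U, v_of_u(U)

        Y = np.random.default_rng(1).standard_normal((20, 5))
        U, V = factorize_l2(Y, r=2)
        print(np.linalg.norm(Y - U @ V))  # residual of the rank-2 fit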

    Motion Estimation from Image and Inertial Measurements

    Cameras and inertial sensors are each good candidates for autonomous vehicle navigation, modeling from video, and other applications that require six-degrees-of-freedom motion estimation. However, these sensors are also good candidates to be deployed together, since each can be used to resolve the ambiguities in estimated motion that result from using the other modality alone. In this paper, we consider the specific problem of estimating sensor motion and other unknowns from image, gyro, and accelerometer measurements, in environments without known fiducials. This paper targets applications where external position references such as global positioning are not available, and focuses on the use of small and inexpensive inertial sensors, for applications where weight and cost requirements preclude the use of precision inertial navigation systems. We present two algorithms for estimating sensor motion from image and inertial measurements.

    Motion estimation from image and inertial measurements

    Restoration and Reconstruction of AVHRR Images

    This paper describes the design of small convolution kernels for the restoration and reconstruction of Advanced Very High Resolution Radiometer (AVHRR) images. The kernels are small enough to be implemented efficiently by convolution, yet effectively correct degradations and increase apparent resolution. The kernel derivation is based on a comprehensive, end-to-end system model that accounts for scene statistics, image acquisition blur, sampling effects, sensor noise, and postfilter reconstruction. The design maximizes image fidelity subject to explicit constraints on the spatial support and resolution of the kernel. The kernels can be designed with finer resolution than the image to perform partial reconstruction for geometric correction and other remapping operations. Experiments demonstrate that small kernels yield fidelity comparable to optimal unconstrained filters with less computation.
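    The constrained design can be approximated numerically as a least-squares fit: given a reference image and a degraded image produced by the assumed system model, solve for the kernel taps over a fixed small support that minimize the squared restoration error. The sketch below is a stand-in for the paper's analytic derivation from scene statistics; the function name, the training-pair setup, and the 5 x 5 support are all illustrative assumptions.

        # Least-squares design of a small restoration kernel k (s x s):
        # choose k so that correlating k with the degraded image g best
        # reproduces the reference image f. The support constraint is
        # enforced simply by parameterizing only s*s taps.
        import numpy as np

        def design_kernel(f, g, s=5):
            pad = s // 2
            gp = np.pad(g, pad, mode="reflect")
            # Each column of A holds one shifted copy of g, so A @ k
            # evaluates the s x s correlation at every pixel.
            cols = []
            for dy in range(s):
                for dx in range(s):
                    cols.append(gp[dy:dy + f.shape[0], dx:dx + f.shape[1]].ravel())
            A = np.stack(cols, axis=1)                 # (pixels, s*s)
            k, *_ = np.linalg.lstsq(A, f.ravel(), rcond=None)
            return k.reshape(s, s)

    Once designed, the kernel is applied by ordinary convolution, which is the efficiency point the abstract makes: the runtime cost is that of a small fixed-support filter, independent of how the taps were derived.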

    Optimal Motion Estimation From Visual and Inertial Measurements

    Cameras and inertial sensors are good candidates to be deployed together for autonomous vehicle motion estimation, since each can be used to resolve the ambiguities in the estimated motion that result from using the other modality alone. We present an algorithm that computes optimal vehicle motion estimates by considering all of the measurements from a camera, rate gyro, and accelerometer simultaneously. Such optimal estimates are useful in their own right, and as a gold standard for the comparison of online algorithms.
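    A toy version of "all measurements at once" makes the structure concrete: every unknown and every residual enters a single joint least-squares problem. The 1D sketch below, with synthetic data and crude hand-picked weights, stands in for the real six-degree-of-freedom formulation; everything in it is an assumption for illustration.

        # Batch estimation toy: the unknowns are positions x_0..x_{T-1}.
        # Accelerometer residuals tie second differences of position to
        # measured accelerations; sparse camera-derived position fixes
        # anchor the absolute scale. All residuals are minimized jointly.
        import numpy as np
        from scipy.optimize import least_squares

        T, dt = 50, 0.1
        rng = np.random.default_rng(0)
        true_x = 0.5 * 9.0 * (np.arange(T) * dt) ** 2      # constant acceleration
        accel = 9.0 + 0.1 * rng.standard_normal(T - 2)     # noisy accelerometer
        fixes = true_x[::10] + 0.05 * rng.standard_normal(5)

        def residuals(x):
            r_imu = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2 - accel
            r_cam = x[::10] - fixes
            return np.concatenate([0.01 * r_imu, r_cam])   # crude weighting

        x_hat = least_squares(residuals, np.zeros(T)).x
        print(np.abs(x_hat - true_x).max())                # batch estimate error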

    Motion Estimation from Image and Inertial Measurements


    Long-term motion estimation from images

    Cameras are promising sensors for estimating the motion of autonomous vehicles without GPS and for automatic scene modeling. Furthermore, a wide variety of shape-from-motion algorithms exist for simultaneously estimating the camera's six-degree-of-freedom motion and the three-dimensional structure of the scene, without prior assumptions about the camera's motion or an existing map of the scene. However, existing shape-from-motion algorithms do not address the problem of accumulated long-term drift in the estimated motion and scene structure, which is critical in autonomous vehicle applications. The paper introduces a proof-of-concept system that exploits a new tracker, the variable state dimension filter (VSDF), and SIFT keypoints to recognize previously visited locations and limit drift in long-term camera motion estimates. The performance of this system on an extended image sequence is described.
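    The revisit-detection step can be sketched as descriptor matching against stored keyframes with Lowe's ratio test, declaring a loop closure when enough matches pass. SIFT extraction itself is assumed to happen elsewhere (the inputs here are precomputed descriptor arrays), and the thresholds are placeholders.

        # Recognize a previously visited location by matching the current
        # frame's descriptors against each stored keyframe's descriptors.
        import numpy as np

        def count_matches(desc_query, desc_key, ratio=0.8):
            # Pairwise L2 distances between the two descriptor sets.
            d = np.linalg.norm(desc_query[:, None, :] - desc_key[None, :, :], axis=2)
            good = 0
            for row in d:
                nearest, second = np.partition(row, 1)[:2]
                if nearest < ratio * second:               # Lowe's ratio test
                    good += 1
            return good

        def recognize(desc_query, keyframes, min_matches=20):
            # Return the index of the best-matching keyframe, or None.
            scores = [count_matches(desc_query, dk) for dk in keyframes]
            best = int(np.argmax(scores))
            return best if scores[best] >= min_matches else None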

    Motion Estimation from Image and Inertial Measurements (revised)

    Cameras and inertial sensors are each good candidates for autonomous vehicle navigation, modeling from video, and other applications that require six-degree-of-freedom motion estimation. However, these sensors are also good candidates to be deployed together, since each can be used to resolve the ambiguities in estimated motion that result from using the other modality alone. In this paper, we consider the specific problem of estimating sensor motion and other unknowns from image, gyro, and accelerometer measurements, in environments without known fiducials. This work targets applications where external position references such as global positioning are not available, and focuses on the use of small and inexpensive inertial sensors, for applications where weight and cost requirements preclude the use of precision inertial navigation systems. We present two algorithms for estimating sensor motion from image and inertial measurements. The first algorithm is a batch method, which produces estimates of the sensor motion, scene structure, and other unknowns using measurements from the entire observation sequence simultaneously. The second algorithm recovers sensor motion, scene structure, and other parameters recursively, and is suitable for use with long or “infinite” sequences, in which no feature is always visible. We evaluate the accuracy of the algorithms and their sensitivity to their estimation parameters using a sequence of four experiments. These experiments focus on cases where estimates from image or inertial measurements alone are poor, on the relative advantage of using inertial measurements and omnidirectional images, and on long sequences in which the percentage of the image sequence in which individual features are visible is low.
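    The recursive algorithm's shape is the classic predict/update loop: propagate the state with each inertial measurement, then correct it with each image-derived measurement, so no single feature ever needs to stay visible. The Kalman-filter sketch below substitutes linear 1D toy dynamics for the real six-degree-of-freedom model; all matrices, noise levels, and the simulated data are illustrative assumptions.

        # Recursive loop: inertial prediction, then image update.
        import numpy as np

        dt = 0.1
        F = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity propagation
        B = np.array([0.5 * dt**2, dt])         # acceleration input
        H = np.array([[1.0, 0.0]])              # camera observes position
        Q = 1e-4 * np.eye(2)                    # process noise
        R = np.array([[1e-2]])                  # measurement noise

        rng = np.random.default_rng(0)
        truth = np.zeros(2)
        x, P = np.zeros(2), np.eye(2)
        for _ in range(100):
            truth = F @ truth + B * 1.0                   # true constant acceleration
            accel = 1.0 + 0.05 * rng.standard_normal()    # accelerometer reading
            z = truth[0] + 0.1 * rng.standard_normal()    # image-derived position
            x = F @ x + B * accel                         # predict with inertial data
            P = F @ P @ F.T + Q
            y = z - H @ x                                 # innovation from the image
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y                                 # correct the state
            P = (np.eye(2) - K @ H) @ P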