
    Motion estimation from spheres

    Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006, v. 1, p. 1238-1243.
    This paper addresses the problem of recovering epipolar geometry from spheres. Previous works have exploited the epipolar tangencies induced by frontier points on the spheres for motion recovery. It will be shown in this paper that, besides the epipolar tangencies, N² point features can be extracted from the apparent contours of the N spheres when N > 2. An algorithm for recovering the fundamental matrices from such point features and the epipolar tangencies of 3 or more spheres is developed, with the point features providing a homography over the view pairs and the epipolar tangencies determining the epipoles. In general, there are two solutions for the locations of the epipoles: one corresponds to the true camera configuration, while the other corresponds to a mirrored configuration. Several methods are proposed to select the right solution. Experiments using 3 and 4 spheres demonstrate that the algorithm is easy to carry out and achieves high precision. © 2006 IEEE.
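    The recovery step described above pairs a homography between the views with epipoles found from the epipolar tangencies. A standard way to compose a fundamental matrix from exactly these two ingredients is F = [e']ₓ H. The sketch below assumes that relation and uses placeholder values for H and e'; it is not the authors' specific algorithm.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fundamental_from_homography_and_epipole(H, e2):
    """Compose a fundamental matrix as F = [e']_x H from a homography H
    between the two views and the epipole e' in the second image."""
    F = skew(e2) @ H
    return F / np.linalg.norm(F)  # remove the arbitrary overall scale

# Placeholder inputs; in the paper, H would come from the point features on the
# sphere contours and e' from the epipolar tangencies.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, -5.0],
              [0.0, 0.0, 1.0]])
e2 = np.array([320.0, 240.0, 1.0])
print(fundamental_from_homography_and_epipole(H, e2))
```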

    Structureless Camera Motion Estimation of Unordered Omnidirectional Images

    This work aims at providing a novel camera motion estimation pipeline for large collections of unordered omnidirectional images. In order to keep the pipeline as general and flexible as possible, cameras are modelled as unit spheres, which allows any central camera type to be incorporated. For each camera, an unprojection lookup called a P2S-map (Pixel-to-Sphere map) is generated from the intrinsics; it maps pixels to their corresponding positions on the unit sphere. Consequently, the camera geometry becomes independent of the underlying projection model. The pipeline also generates P2S-maps for world map projections known from cartography, which exhibit fewer distortion effects. Using P2S-maps from camera calibration and world map projection, omnidirectional camera images can be converted to an appropriate world map projection so that standard feature extraction and matching algorithms can be applied for data association. The proposed estimation pipeline combines the flexibility of SfM (Structure from Motion), which handles unordered image collections, with the efficiency of PGO (Pose Graph Optimization), which is used as the back-end in graph-based Visual SLAM (Simultaneous Localization and Mapping) approaches to optimize camera poses over large image sequences. SfM uses BA (Bundle Adjustment) to jointly optimize camera poses (motion) and 3D feature locations (structure), which becomes computationally expensive for large-scale scenarios. PGO, in contrast, solves for camera poses (motion) from measured transformations between cameras, which keeps the optimization manageable. The proposed estimation algorithm combines both worlds. It obtains up-to-scale transformations between image pairs using two-view constraints, which are jointly scaled using trifocal constraints. A pose graph is generated from the scaled two-view transformations and solved by PGO to obtain camera motion efficiently, even for large image collections. The obtained results can be used as initial pose estimates for further 3D reconstruction purposes, e.g. to build a sparse structure from feature correspondences in an SfM or SLAM framework with further refinement via BA. The pipeline also incorporates fixed extrinsic constraints from multi-camera setups as well as depth information provided by RGBD sensors.
    The entire camera motion estimation pipeline does not need to generate a sparse 3D structure of the captured environment and is therefore called SCME (Structureless Camera Motion Estimation).

    Table of contents:
    1 Introduction: 1.1 Motivation; 1.1.1 Increasing Interest of Image-Based 3D Reconstruction; 1.1.2 Underground Environments as Challenging Scenario; 1.1.3 Improved Mobile Camera Systems for Full Omnidirectional Imaging; 1.2 Issues; 1.2.1 Directional versus Omnidirectional Image Acquisition; 1.2.2 Structure from Motion versus Visual Simultaneous Localization and Mapping; 1.3 Contribution; 1.4 Structure of this Work
    2 Related Work: 2.1 Visual Simultaneous Localization and Mapping; 2.1.1 Visual Odometry; 2.1.2 Pose Graph Optimization; 2.2 Structure from Motion; 2.2.1 Bundle Adjustment; 2.2.2 Structureless Bundle Adjustment; 2.3 Corresponding Issues; 2.4 Proposed Reconstruction Pipeline
    3 Cameras and Pixel-to-Sphere Mappings with P2S-Maps: 3.1 Types; 3.2 Models; 3.2.1 Unified Camera Model; 3.2.2 Polynomial Camera Model; 3.2.3 Spherical Camera Model; 3.3 P2S-Maps - Mapping onto Unit Sphere via Lookup Table; 3.3.1 Lookup Table as Color Image; 3.3.2 Lookup Interpolation; 3.3.3 Depth Data Conversion
    4 Calibration: 4.1 Overview of Proposed Calibration Pipeline; 4.2 Target Detection; 4.3 Intrinsic Calibration; 4.3.1 Selected Examples; 4.4 Extrinsic Calibration; 4.4.1 3D-2D Pose Estimation; 4.4.2 2D-2D Pose Estimation; 4.4.3 Pose Optimization; 4.4.4 Uncertainty Estimation; 4.4.5 Pose Graph Representation; 4.4.6 Bundle Adjustment; 4.4.7 Selected Examples
    5 Full Omnidirectional Image Projections: 5.1 Panoramic Image Stitching; 5.2 World Map Projections; 5.3 World Map Projection Generator for P2S-Maps; 5.4 Conversion between Projections based on P2S-Maps; 5.4.1 Proposed Workflow; 5.4.2 Data Storage Format; 5.4.3 Real World Example
    6 Relations between Two Camera Spheres: 6.1 Forward and Backward Projection; 6.2 Triangulation; 6.2.1 Linear Least Squares Method; 6.2.2 Alternative Midpoint Method; 6.3 Epipolar Geometry; 6.4 Transformation Recovery from Essential Matrix; 6.4.1 Cheirality; 6.4.2 Standard Procedure; 6.4.3 Simplified Procedure; 6.4.4 Improved Procedure; 6.5 Two-View Estimation; 6.5.1 Evaluation Strategy; 6.5.2 Error Metric; 6.5.3 Evaluation of Estimation Algorithms; 6.5.4 Concluding Remarks; 6.6 Two-View Optimization; 6.6.1 Epipolar-Based Error Distances; 6.6.2 Projection-Based Error Distances; 6.6.3 Comparison between Error Distances; 6.7 Two-View Translation Scaling; 6.7.1 Linear Least Squares Estimation; 6.7.2 Non-Linear Least Squares Optimization; 6.7.3 Comparison between Initial and Optimized Scaling Factor; 6.8 Homography to Identify Degeneracies; 6.8.1 Homography for Spherical Cameras; 6.8.2 Homography Estimation; 6.8.3 Homography Optimization; 6.8.4 Homography and Pure Rotation; 6.8.5 Homography in Epipolar Geometry
    7 Relations between Three Camera Spheres: 7.1 Three View Geometry; 7.2 Crossing Epipolar Planes Geometry; 7.3 Trifocal Geometry; 7.4 Relation between Trifocal, Three-View and Crossing Epipolar Planes; 7.5 Translation Ratio between Up-To-Scale Two-View Transformations; 7.5.1 Structureless Determination Approaches; 7.5.2 Structure-Based Determination Approaches; 7.5.3 Comparison between Proposed Approaches
    8 Pose Graphs: 8.1 Optimization Principle; 8.2 Solvers; 8.2.1 Additional Graph Solvers; 8.2.2 False Loop Closure Detection; 8.3 Pose Graph Generation; 8.3.1 Generation of Synthetic Pose Graph Data; 8.3.2 Optimization of Synthetic Pose Graph Data
    9 Structureless Camera Motion Estimation: 9.1 SCME Pipeline; 9.2 Determination of Two-View Translation Scale Factors; 9.3 Integration of Depth Data; 9.4 Integration of Extrinsic Camera Constraints
    10 Camera Motion Estimation Results: 10.1 Directional Camera Images; 10.2 Omnidirectional Camera Images
    11 Conclusion: 11.1 Summary; 11.2 Outlook and Future Work
    Appendices: A.1 Additional Extrinsic Calibration Results; A.2 Linear Least Squares Scaling; A.3 Proof Rank Deficiency; A.4 Alternative Derivation Midpoint Method; A.5 Simplification of Depth Calculation; A.6 Relation between Epipolar and Circumferential Constraint; A.7 Covariance Estimation; A.8 Uncertainty Estimation from Epipolar Geometry; A.9 Two-View Scaling Factor Estimation: Uncertainty Estimation; A.10 Two-View Scaling Factor Optimization: Uncertainty Estimation; A.11 Depth from Adjoining Two-View Geometries; A.12 Alternative Three-View Derivation; A.12.1 Second Derivation Approach; A.12.2 Third Derivation Approach; A.13 Relation between Trifocal Geometry and Alternative Midpoint Method; A.14 Additional Pose Graph Generation Examples; A.15 Pose Graph Solver Settings; A.16 Additional Pose Graph Optimization Examples
    Bibliography
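    The P2S-map idea described in the abstract above (a per-pixel lookup onto the unit sphere) can be illustrated with a small sketch. The equirectangular projection and the axis conventions below are assumptions made only for the example; the thesis itself stores the lookup as a color image and covers arbitrary central camera models.

```python
import numpy as np

def equirect_p2s_map(width, height):
    """Build a simple P2S-style lookup for an equirectangular image:
    every pixel (u, v) is mapped to a unit vector on the camera sphere.
    Returns an array of shape (height, width, 3)."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi    # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi   # latitude in [-pi/2, pi/2]
    x = np.cos(lat) * np.sin(lon)
    y = -np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

# Example: direction on the unit sphere for one pixel of a 2048x1024 panorama.
p2s = equirect_p2s_map(2048, 1024)
print(p2s[512, 1024])
```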

    Correspondenceless Structure from Motion

    We present a novel approach for the estimation of 3D motion directly from two images using the Radon transform. The feasibility of any camera motion is computed by integrating over all feature pairs that satisfy the epipolar constraint. This integration is equivalent to taking the inner product of a similarity function on feature pairs with a Dirac function embedding the epipolar constraint. The maxima in this five-dimensional motion space will correspond to compatible rigid motions. The main novelty is in the realization that the Radon transform is a filtering operator: if we assume that the similarity and Dirac functions are defined on spheres and the epipolar constraint is a group action of rotations on spheres, then the Radon transform is a correlation integral. We propose a new algorithm to compute this integral from the spherical Fourier transform of the similarity and Dirac functions. Generating the similarity function now becomes a preprocessing step which reduces the complexity of the Radon computation by a factor equal to the number of feature pairs processed. The strength of the algorithm is in avoiding a commitment to correspondences, thus being robust to erroneous feature detection, outliers, and multiple motions.
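    As a rough illustration of the integral described above (not the paper's spectral algorithm), the sketch below scores one candidate motion by summing the similarity of all feature pairs, with a narrow Gaussian kernel standing in for the Dirac function on the epipolar constraint. Function names and the kernel width are illustrative assumptions.

```python
import numpy as np

def motion_score(R, t, pts1, pts2, similarity, sigma=0.01):
    """Naive evaluation of the Radon-style integral: accumulate the similarity
    of every feature pair, weighted by a smooth kernel standing in for the
    Dirac function on the epipolar constraint p2 . (t x R p1) = 0.

    pts1: (N1, 3) unit bearing vectors in the first image
    pts2: (N2, 3) unit bearing vectors in the second image
    similarity: (N1, N2) pairwise feature similarity
    """
    t = t / np.linalg.norm(t)
    rotated = (R @ pts1.T).T                  # R p1 for every feature, (N1, 3)
    residual = np.cross(t, rotated) @ pts2.T  # residual[i, j] = p2_j . (t x R p1_i)
    weight = np.exp(-(residual / sigma) ** 2)
    return float(np.sum(similarity * weight))

# Scanning this score over a discretised 5-D space of rotations and translation
# directions and keeping the maxima is the brute-force analogue of the
# correlation the paper computes in the spectral domain.
```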

    Freely falling 2-surfaces and the quasi-local energy

    We derive an expression for the effective gravitational mass of any closed spacelike 2-surface. This effective gravitational energy is defined directly through a geometrical quantity of the freely falling 2-surface and is thus well adapted to the intuitive expectation that the gravitational mass should be determined by the motion of a test body moving freely in the gravitational field. We find that this effective gravitational mass has a reasonable positive value for a small sphere in non-vacuum space-times and can be negative in the vacuum case. Furthermore, this effective gravitational energy is compared with the quasi-local energy based on the (2+2) formalism of General Relativity. Although some gauge freedoms exist, the analytic expressions of the quasi-local energy for vacuum cases are the same as the effective gravitational mass. In particular, we see that the contribution from the cosmological constant is the same in general cases.
    Comment: 11 pages, no figures, REVTeX. Estimation of the effective mass of small spheres in non-vacuum spacetime and Schwarzschild spacetime is added. The negativity of the latter is discussed.

    Configurational entropy of hard spheres

    We numerically calculate the configurational entropy S_conf of a binary mixture of hard spheres, using a perturbed-Hamiltonian method that traps the system inside a given state and requires fewer assumptions than previous methods [R.J. Speedy, Mol. Phys. 95, 169 (1998)]. We find that S_conf is a decreasing function of the packing fraction f and extrapolates to zero at the Kauzmann packing fraction f_K = 0.62, suggesting the possibility of an ideal glass transition for the hard sphere system. Finally, the Adam-Gibbs relation is found to hold.
    Comment: 10 pages, 6 figures
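    The extrapolation step mentioned above can be sketched in a few lines. The S_conf values below are invented placeholders used only so the snippet runs; they are not data from the paper.

```python
import numpy as np

# Purely illustrative placeholder values for S_conf at a few packing fractions;
# the real S_conf(f) data come from the perturbed-Hamiltonian simulations.
f = np.array([0.58, 0.59, 0.60, 0.61])
s_conf = np.array([0.40, 0.30, 0.20, 0.10])

# Fit S_conf(f) with a straight line and extrapolate to S_conf = 0 to obtain
# an estimate of the Kauzmann packing fraction f_K.
slope, intercept = np.polyfit(f, s_conf, 1)
f_K = -intercept / slope
print(f"estimated f_K = {f_K:.3f}")
```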

    Stability of Bottom Armoring Under the Attack of Solitary Waves

    An empirical relationship is presented for the incipient motion of bottom material under solitary waves. Two special cases of bottom material are considered: particles of arbitrary shape, and an isolated sphere resting on top of a bed of tightly packed spheres. The amount of motion in the bed of particles of arbitrary shape is shown to depend on a dimensionless shear stress, similar to the Shields parameter. The mean resistance coefficient used in estimating this parameter is derived from considerations of energy dissipation and is obtained from measurements of the attenuation of waves along a channel. A theoretical expression for the mean resistance coefficient is developed for the case of laminar flow from the linearized boundary layer equations and is verified by experiments. For the case of a single sphere resting on top of a bed of spheres, the analysis is based on the hypothesis that at incipient motion the hydrodynamic moments which tend to remove the sphere are equal to the restoring moment due to gravity which tends to keep it in its place. It is shown that estimating the hydrodynamic forces with an approach similar to the so-called "Morison's formula", in which the drag, lift, and inertia coefficients are independent of each other, is inaccurate. Instead, a single coefficient incorporating drag, inertia, and lift effects is employed. Approximate values of this coefficient are described by an empirical relationship obtained from the experimental results.
    A review of existing theories of the solitary wave is presented, and an experimental study is conducted in order to determine which theory should be used in the theoretical analysis of the incipient motion of bottom material. Experiments were conducted in the laboratory in order to determine the mean resistance coefficient of the bottom under solitary waves and to obtain a relationship defining the incipient motion of bottom material. All the experiments were conducted in a wave tank 40 m long and 110 cm wide, with water depths varying from 7 cm to 42 cm. The mean resistance coefficient was obtained from measurements of the attenuation of waves along an 18 m section of the wave tank. Experiments were conducted with a smooth bottom and with the bottom roughened with a layer of rock. The incipient motion of particles of arbitrary shape was studied by measuring the amount of motion in a 91 cm x 50 cm section covered with a 15.9 mm thick layer of material. The materials used had different densities and mean diameters. The incipient motion of spheres was observed for spheres of different diameters and densities placed on a bed of tightly packed spheres. The experiments were conducted with various water depths, and with wave height-to-water depth ratios varying from small values up to that for breaking of the wave.
    It was found that: (a) The theories of Boussinesq (1872) and McCowan (1891) describe the solitary wave fairly accurately. However, the differences between these theories are large when used to predict the forces exerted on objects on the bottom, and it was not established which theory describes these forces better. (b) The mean resistance coefficient for a rough turbulent flow under solitary waves can be described as a function of Ds, h, and H, where Ds is the mean diameter of the roughness particles, h is the water depth, and H is the wave height. (c) Small errors in the determination of the dimensionless shear stress for incipient motion of rocks result in large errors in the evaluation of the diameter of the rock required for incipient motion. However, it was found that the empirical relationship for the incipient motion of spheres can be used to determine the size of rock of arbitrary shape for incipient motion under a given wave, provided the angle of friction of the rock can be determined accurately.
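    The dimensionless shear stress referred to above is analogous to the classical Shields parameter, tau* = tau / ((rho_s - rho) g D). The sketch below shows only that normalisation; the report's specific way of estimating the bed shear stress via the mean resistance coefficient is not reproduced, and the numbers are hypothetical.

```python
def shields_parameter(tau, rho_s, rho, d, g=9.81):
    """Dimensionless shear stress of the Shields-parameter form:
    bed shear stress normalised by the submerged weight of a grain,
    tau* = tau / ((rho_s - rho) * g * d)."""
    return tau / ((rho_s - rho) * g * d)

# Hypothetical usage: a 10 mm stone under a bed shear stress of 20 Pa.
print(shields_parameter(tau=20.0, rho_s=2650.0, rho=1000.0, d=0.01))
```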

    Radon-based Structure from Motion Without Correspondences

    We present a novel approach for the estimation of 3D motion directly from two images using the Radon transform. We assume a similarity function defined on the cross-product of two images which assigns a weight to all feature pairs. This similarity function is integrated over all feature pairs that satisfy the epipolar constraint. This integration is equivalent to filtering the similarity function with a Dirac function embedding the epipolar constraint. The result of this convolution is a function of the five unknown motion parameters with maxima at the positions of compatible rigid motions. The breakthrough is in the realization that the Radon transform is a filtering operator: if we assume that images are defined on spheres and the epipolar constraint is a group action of two rotations on two spheres, then the Radon transform is a convolution/correlation integral. We propose a new algorithm to compute this integral from the spherical harmonics of the similarity and Dirac functions. The resulting resolution in the motion space depends on the bandwidth we keep from the spherical transform. The strength of the algorithm is in avoiding a commitment to correspondences, thus being robust to erroneous feature detection, outliers, and multiple motions. The algorithm has been tested on sequences of real omnidirectional images and it outperforms correspondence-based structure from motion.
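    The spectral step described above relies on expanding functions on the sphere into spherical harmonics up to some bandwidth. Below is a crude quadrature-based sketch of such an expansion using SciPy's sph_harm; the grid sizes and the test function are illustrative assumptions, and the paper's actual correlation over the motion space is not reproduced.

```python
import numpy as np
from scipy.special import sph_harm

def sph_coefficients(f, bandwidth, n_theta=64, n_phi=128):
    """Spherical-harmonic analysis of f(theta, phi) up to the given bandwidth
    by direct quadrature. theta: polar angle in [0, pi], phi: azimuth in
    [0, 2*pi). Returns a dict {(l, m): coefficient}."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = np.arange(n_phi) * 2.0 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    vals = f(T, P)
    dA = np.sin(T) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)  # area element
    coeffs = {}
    for l in range(bandwidth + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, P, T)  # scipy order: (m, l, azimuth, polar)
            coeffs[(l, m)] = np.sum(vals * np.conj(Y) * dA)
    return coeffs

# Example: expand a simple axially symmetric "similarity" function; keeping
# only low l corresponds to the bandwidth limitation mentioned in the abstract.
c = sph_coefficients(lambda t, p: np.cos(t) ** 2, bandwidth=4)
print(c[(0, 0)], c[(2, 0)])
```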

    Disordered jammed packings of frictionless spheres

    At low volume fraction, disordered arrangements of frictionless spheres are found in unjammed states unable to support applied stresses, while at high volume fraction they are found in jammed states with mechanical strength. Here we show, focusing on the hard sphere zero-pressure limit, that the transition between unjammed and jammed states does not occur at a single value of the volume fraction, but over a whole range of volume fractions. This result is obtained via the direct numerical construction of disordered jammed states with a volume fraction varying between two limits, 0.636 and 0.646. We identify these limits with the random loose packing and the random close packing volume fractions of frictionless spheres, respectively.
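    The volume fraction that the window [0.636, 0.646] refers to is simply the fraction of the box volume occupied by the spheres. A minimal sketch, with placeholder radii and box size:

```python
import numpy as np

def packing_fraction(radii, box_volume):
    """Volume fraction occupied by spheres of the given radii inside a box of
    the given volume (overlaps assumed absent, as in a valid hard-sphere
    configuration)."""
    return float(np.sum(4.0 / 3.0 * np.pi * np.asarray(radii) ** 3) / box_volume)

# Hypothetical usage: 1000 unit spheres in a box sized to give a volume
# fraction inside the reported jamming window.
radii = np.ones(1000)
box_volume = np.sum(4.0 / 3.0 * np.pi * radii ** 3) / 0.64
print(packing_fraction(radii, box_volume))  # 0.64
```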

    Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model

    Real-time markerless hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real-time. The main contributions include a new generative tracking method which employs an implicit hand shape representation based on a Sum of Anisotropic Gaussians (SAG), and a pose-fitting energy that is smooth and analytically differentiable, making fast gradient-based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from the literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets.
    Comment: 8 pages, accepted version of paper published at 3DV 2014
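    The Sum of Anisotropic Gaussians representation mentioned above describes the hand as a smooth implicit density. Below is a minimal sketch of evaluating such a sum at 3D points; the means, covariances and weights are placeholder values, and the paper's actual pose-fitting energy is not reproduced.

```python
import numpy as np

def sag_density(points, means, covariances, weights):
    """Evaluate a Sum of Anisotropic Gaussians model at 3D points:
    density(x) = sum_i w_i * exp(-0.5 * (x - mu_i)^T Sigma_i^{-1} (x - mu_i)).
    points: (N, 3), means: (K, 3), covariances: (K, 3, 3), weights: (K,)."""
    density = np.zeros(len(points))
    for mu, cov, w in zip(means, covariances, weights):
        diff = points - mu                    # (N, 3)
        sol = np.linalg.solve(cov, diff.T).T  # Sigma^{-1} (x - mu) for each point
        density += w * np.exp(-0.5 * np.sum(diff * sol, axis=1))
    return density

# Hypothetical usage: one elongated Gaussian standing in for a finger segment.
means = np.array([[0.0, 0.0, 0.0]])
covariances = np.array([np.diag([0.0004, 0.0004, 0.01])])  # metres^2
weights = np.array([1.0])
print(sag_density(np.array([[0.0, 0.0, 0.05]]), means, covariances, weights))
```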