
    Speeding up structure from motion on large scenes using parallelizable partitions

    Structure-from-motion-based 3D reconstruction is time-consuming for large scenes consisting of thousands of input images. We propose a method that speeds up the reconstruction of a large scene by partitioning it into smaller subscenes and then recombining them. The main benefit is that each subscene can be optimized in parallel. We present a widely applicable subdivision method, and show that the difference between the result after partitioning and recombination and a state-of-the-art structure-from-motion reconstruction of the entire scene is negligible.
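    The partition-and-recombine idea can be sketched structurally; the helper names below (reconstruct_subscene, merge_reconstructions) are hypothetical stand-ins for a real SfM pipeline, not the paper's code:

```python
from concurrent.futures import ProcessPoolExecutor

def partition(images, chunk=500, overlap=50):
    """Split an ordered image list into overlapping chunks; the shared
    images let neighbouring subscenes be aligned when recombining."""
    step = chunk - overlap
    return [images[i:i + chunk] for i in range(0, len(images), step)]

def reconstruct_subscene(images):
    """Placeholder for one independent SfM run (matching, triangulation,
    bundle adjustment) on a subscene; returns cameras and sparse points."""
    raise NotImplementedError

def merge_reconstructions(subscenes):
    """Placeholder: align subscenes via their shared cameras (one
    similarity transform each), then optionally refine globally."""
    raise NotImplementedError

def reconstruct_parallel(images):
    # Each subscene is optimized independently, so the expensive step runs
    # on all cores (or machines) at once -- the speed-up the abstract claims.
    with ProcessPoolExecutor() as pool:
        partial = list(pool.map(reconstruct_subscene, partition(images)))
    return merge_reconstructions(partial)
```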

    Predicting Visual Overlap of Images Through Interpretable Non-Metric Box Embeddings

    To what extent do two images picture the same 3D surfaces? Even for a known scene, answering this typically requires an expensive search across scale space, with matching and geometric verification of large sets of local features. This expense is multiplied further when a query image is evaluated against a gallery, e.g. in visual relocalization. While we do not obviate the need for geometric verification, we propose an interpretable image embedding that cuts the search in scale space down to essentially a lookup. Our approach measures the asymmetric relation between two images. The model learns a scene-specific measure of similarity from training examples with known 3D visible-surface overlaps. As a result, we can quickly identify, for example, which test image is a close-up version of another, and by what scale factor; local features then need only be detected at that scale. We validate our scene-specific model by showing that this embedding yields competitive image-matching results while being simpler, faster, and interpretable by humans. Comment: ECCV 2020
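    The asymmetric relation can be illustrated with the box-embedding idea in the title: each image maps to an axis-aligned box, and the overlap from image a to image b is the fraction of a's box covered by b's. The normalization and the toy coordinates below are our illustration, not the paper's learned model:

```python
import numpy as np

# overlap(a, b) = area(a intersect b) / area(a). Unlike a metric distance,
# overlap(a, b) != overlap(b, a): a close-up can be fully contained in a
# wide shot while the wide shot barely overlaps the close-up.

def box_area(box):
    lo, hi = box
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def overlap(a, b):
    """Fraction of box a covered by box b."""
    lo = np.maximum(a[0], b[0])
    hi = np.minimum(a[1], b[1])
    inter = float(np.prod(np.maximum(hi - lo, 0.0)))
    return inter / box_area(a)

wide    = (np.array([0.0, 0.0]), np.array([4.0, 4.0]))  # wide-angle view
closeup = (np.array([1.0, 1.0]), np.array([2.0, 2.0]))  # close-up detail

print(overlap(closeup, wide))  # 1.0    -> close-up fully inside wide view
print(overlap(wide, closeup))  # 0.0625 -> wide view barely overlaps close-up
```

    Reading both directions off the embedding also exposes the relative scale factor between the two views, which is what lets feature detection be restricted to a single scale.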

    Unnecessary Image Pair Detection for a Large Scale Reconstruction


    Progressive Structure from Motion

    Structure from Motion, the sparse 3D reconstruction of a scene from individual photos, is a long-studied topic in computer vision. Yet none of the existing reconstruction pipelines fully addresses a progressive scenario, in which images only become available during the reconstruction process and intermediate results are delivered to the user. Incremental pipelines are capable of growing a 3D model but often get stuck in local minima due to wrong (binding) decisions taken on the basis of incomplete information. Global pipelines, on the other hand, need access to the complete view graph and cannot deliver intermediate results. In this paper we propose a new reconstruction pipeline that works in a progressive manner rather than in a batch-processing scheme. The pipeline is able to recover from failed reconstructions in early stages, avoids taking binding decisions, delivers progressive output, and yet maintains the capabilities of existing pipelines. We demonstrate and evaluate our method on diverse challenging public and dedicated datasets, including those with highly symmetric structures, and compare to the state of the art. Comment: Accepted to ECCV 2018
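    A structural sketch of such a progressive loop, as we read the abstract (class and function names are hypothetical, not the paper's API): new images extend a pool of independent local models, models are fused only once the connecting evidence is strong enough, and links stay revisable so no decision is binding.

```python
class LocalModel:
    """Placeholder for one locally consistent reconstruction."""
    def __init__(self, images):
        self.images = list(images)

    def register(self, image):
        # Real pipeline: estimate pose, triangulate, refine locally.
        self.images.append(image)

def match_score(model, image):
    """Placeholder: strength of feature matches between image and model."""
    return 0.0

def process(stream, threshold=0.5):
    models = []
    for image in stream:
        scores = [(match_score(m, image), m) for m in models]
        best_score, best = max(scores, key=lambda t: t[0], default=(0.0, None))
        if best is None or best_score < threshold:
            models.append(LocalModel([image]))    # start a new local model
        else:
            best.register(image)                  # grow an existing one
        # A merge step would fuse two models once they share enough strong
        # matches, keeping the underlying links so a fusion can be undone
        # later -- decisions stay revisable instead of binding.
        yield models                              # intermediate result
```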

    Preserving the impossible: conservation of soft-sediment hominin footprint sites and strategies for three-dimensional digital data capture.

    Human footprints provide some of the most publicly emotive and tangible evidence of our ancestors. To the scientific community they provide evidence of stature, presence, and behaviour, and, in the case of early hominins, potential evidence with respect to the evolution of gait. While rare in the geological record, the number of footprint sites has increased in recent years, along with the analytical tools available for their study. Many of these sites are at risk from rapid erosion, including the Ileret footprints in northern Kenya, which are second only in age to those at Laetoli (Tanzania). Unlithified, soft-sediment footprint sites such as these pose a significant geoconservation challenge. In the first part of this paper, conservation and preservation options are explored, leading to the conclusion that to 'record and digitally rescue' provides the only viable approach. Key to such strategies is the increasing availability of three-dimensional data capture via optical laser scanning and/or digital photogrammetry. Within the discipline there is a developing schism between those who favour one approach over the other, and geoconservationists and the scientific community need some form of objective appraisal of these alternatives. Consequently, in the second part of this paper, we evaluate the two approaches and the role they can play in a 'record and digitally rescue' conservation strategy. Using modern footprint data, digital models created via optical laser scanning are compared to those generated by state-of-the-art photogrammetry. Both methods give comparable, although subtly different, results. These data are evaluated alongside a review of field-deployment issues to provide guidance to the community with respect to the factors that need to be considered in the digital conservation of human/hominin footprints.
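    One common way to make such a model comparison objective, shown below as an illustration rather than the paper's protocol, is to register both digital models in a common frame and report nearest-neighbour (cloud-to-cloud) deviations between their sampled surfaces:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(reference, test):
    """Distance from each test point to its nearest reference point."""
    tree = cKDTree(reference)
    dist, _ = tree.query(test)
    return dist

# Synthetic stand-ins for two registered scans (N x 3 arrays, in metres):
# a laser-scanned surface and a photogrammetric one with ~0.5 mm noise.
rng = np.random.default_rng(0)
laser = rng.uniform(0.0, 0.3, size=(5000, 3))
photo = laser + rng.normal(scale=0.0005, size=laser.shape)

d = cloud_to_cloud(laser, photo)
print(f"mean deviation {d.mean() * 1000:.2f} mm, "
      f"95th percentile {np.quantile(d, 0.95) * 1000:.2f} mm")
```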

    Autocalibration with the Minimum Number of Cameras with Known Pixel Shape

    In 3D reconstruction, recovering the calibration parameters of the cameras is paramount, since it provides metric information about the observed scene, e.g., measures of angles and ratios of distances. Autocalibration enables the estimation of the camera parameters without a calibration device, by enforcing simple constraints on the camera parameters. In the absence of information about internal camera parameters such as the focal length and the principal point, knowledge of the camera pixel shape is usually the only available constraint. Given a projective reconstruction of a rigid scene, we address the problem of autocalibration for a minimal set of cameras with known pixel shape and otherwise arbitrarily varying intrinsic and extrinsic parameters. We propose an algorithm that requires only 5 cameras (the theoretical minimum), thus halving the number of cameras required by previous algorithms based on the same constraint. To this end, we introduce as our basic geometric tool the six-line conic variety (SLCV), consisting of the set of planes that intersect six given lines of 3D space in points of a conic. We show that the set of solutions of the Euclidean upgrading problem for three cameras with known pixel shape can be parameterized in a computationally efficient way. This parameterization is then used to solve autocalibration from five or more cameras, reducing the three-dimensional search space to a two-dimensional one. We provide experiments with real images showing the good performance of the technique. Comment: 19 pages, 14 figures, 7 tables, J. Math. Imaging Vis.
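    The known-pixel-shape constraint can be made concrete. For square pixels the calibration matrix K has zero skew and unit aspect ratio, so the image of the absolute conic, omega = (K K^T)^{-1}, satisfies omega_12 = 0 and omega_11 = omega_22, i.e. two linear constraints per camera. This is standard multi-view geometry rather than the paper's SLCV machinery; the snippet below verifies it numerically:

```python
import numpy as np

# Square-pixel calibration matrix: K = [[f, 0, u], [0, f, v], [0, 0, 1]]
# (zero skew, unit aspect ratio). Its image of the absolute conic,
# omega = (K K^T)^{-1}, then has omega[0,1] = 0 and omega[0,0] = omega[1,1].
f, u, v = 800.0, 320.0, 240.0                  # arbitrary test values
K = np.array([[f, 0.0, u],
              [0.0, f, v],
              [0.0, 0.0, 1.0]])
omega = np.linalg.inv(K @ K.T)

print(np.isclose(omega[0, 1], 0.0))            # True: zero skew
print(np.isclose(omega[0, 0], omega[1, 1]))    # True: unit aspect ratio
```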

    Visual change detection on tunnel linings

    We describe an automated system for detecting, localising, clustering, and ranking visual changes on tunnel surfaces. The system is designed to assist expert human inspectors carrying out structural health monitoring and maintenance on ageing tunnel networks. A three-dimensional tunnel surface model is first recovered from a set of reference images using Structure from Motion techniques. New images are localised accurately within the model, and changes are detected against the reference images and model geometry. We formulate the change-detection problem probabilistically and evaluate the use of different feature maps and a novel geometric prior to achieve invariance to noise and to nuisance sources such as parallax and lighting changes. A clustering and ranking method is proposed which presents detected changes efficiently and further improves inspection efficiency. System performance is assessed on a real dataset collected using a low-cost prototype capture device and labelled with ground truth. Results demonstrate that our system is a step towards higher-frequency visual inspection at reduced cost. The authors gratefully acknowledge the support of Toshiba Research Europe. This is the accepted manuscript; the final publication is available at Springer via http://dx.doi.org/10.1007/s00138-014-0648-8
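    The probabilistic formulation can be illustrated with a per-pixel Bayes rule combining an appearance likelihood with a geometric prior; the Gaussian likelihoods and all numbers below are our assumptions for the sketch, not the paper's exact model:

```python
import numpy as np

def change_posterior(feat_new, feat_ref, geom_prior,
                     sigma_same=0.05, sigma_diff=0.3):
    """P(change | features) per pixel via Bayes' rule: the feature
    difference is modelled as narrow Gaussian noise under 'no change' and
    a broad Gaussian under 'change'; geom_prior encodes P(change) and can
    down-weight regions prone to parallax or registration error."""
    d = np.abs(feat_new - feat_ref)
    lik_same = np.exp(-0.5 * (d / sigma_same) ** 2) / sigma_same
    lik_diff = np.exp(-0.5 * (d / sigma_diff) ** 2) / sigma_diff
    num = lik_diff * geom_prior
    return num / (num + lik_same * (1.0 - geom_prior))

rng = np.random.default_rng(1)
ref = rng.uniform(size=(64, 64))               # reference feature map
new = ref.copy()
new[20:30, 20:30] += 0.5                       # inject a genuine change
prior = np.full_like(ref, 0.1)                 # uniform 10% change prior
mask = change_posterior(new, ref, prior) > 0.5
print(mask.sum(), "pixels flagged as changed") # ~100, the injected block
```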

    Zettawatt-Exawatt Lasers and Their Applications in Ultrastrong-Field Physics: High Energy Front

    Since its birth, the laser has been extraordinarily effective in the study and applications of laser-matter interaction at the atomic and molecular level and in the nonlinear optics of the bound electron. In its early life, the laser was associated with the physics of electron volts and of the chemical bond. Over the past fifteen years, however, we have seen a surge in our ability to produce high intensities, five to six orders of magnitude higher than was possible before. At these intensities, particles such as electrons and protons acquire kinetic energy in the mega-electron-volt range through interaction with intense laser fields. This opens a new age for the laser, the age of nonlinear relativistic optics, coupling even with nuclear physics. We suggest a path to reach an extremely high intensity level of 10^{26-28} W/cm^2 in the coming decade, much beyond the current and near-future intensity regime of 10^{23} W/cm^2, taking advantage of the megajoule laser facilities. Such a laser at extreme intensity could accelerate particles to the frontiers of high energy, tera-electron-volt and peta-electron-volt, and would become a tool of fundamental physics encompassing particle physics, gravitational physics, nonlinear field theory, ultrahigh-pressure physics, astrophysics, and cosmology. We focus our attention on high-energy applications in particular and on the possibility of mutual reinforcement between high-energy physics and ultraintense lasers. Comment: 25 pages, 1 figure
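    The quoted intensity regime can be sanity-checked with simple arithmetic, intensity = pulse energy / (pulse duration x focal-spot area); the pulse duration and spot size below are illustrative assumptions of ours, not values from the paper:

```python
# Back-of-the-envelope check of how a megajoule-class facility could reach
# the quoted 10^{26-28} W/cm^2 regime.
energy_J   = 1.0e6     # 1 MJ pulse energy (megajoule-class facility)
duration_s = 10e-15    # assumed compression to a 10 fs pulse
spot_cm2   = 1e-6      # assumed focus to a ~10 um x 10 um spot (1e-6 cm^2)

power_W   = energy_J / duration_s    # 1e20 W
intensity = power_W / spot_cm2       # W/cm^2
print(f"{intensity:.1e} W/cm^2")     # 1.0e+26 W/cm^2, the low end of the range
```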