
    Overview of Available Open-Source Photogrammetric Software, its Use and Analysis

    The current technological era provides a wide range of geodetic procedures and methods to document the actual state of objects on the Earth's surface and, at the same time, the course and shape of the surface itself. Digital photogrammetry is one of these technologies; it allows the use of methods such as single-image photogrammetry, stereo photogrammetry (optical scanning), convergent imaging and the SfM (structure-from-motion) method, with final data in the form of point clouds, digital spatial models, orthophotos and other derived documents. Similar outputs can also be obtained with other technologies, mainly terrestrial laser scanning, while each of the two technologies offers certain advantages and disadvantages. Purchasing and operating costs in particular are one of the major drawbacks of laser scanning (while being an advantage of photogrammetry). In recent years, there has been a significant increase in the development of new, freely accessible (open-source) photogrammetric software, reducing these financial demands even further. The aim of this paper is to provide a basic overview of some of the most suitable open-source photogrammetric software packages and to point out their strengths and weaknesses.
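    As a concrete illustration of the kind of workflow this class of software automates, the sketch below drives a sparse SfM reconstruction from Python, assuming the open-source package COLMAP is installed and on the PATH; the choice of tool, the folder names and the matcher are illustrative assumptions rather than recommendations from the paper.

        # Minimal sparse SfM pipeline sketch using the COLMAP command-line tools
        # (assumed to be installed); paths are placeholders.
        import os
        import subprocess

        IMAGES = "images"          # folder with the input photographs (placeholder)
        WORKSPACE = "workspace"    # output folder for the database and sparse model

        def colmap(*args):
            # Run one COLMAP stage and abort if it reports an error.
            subprocess.run(["colmap", *args], check=True)

        os.makedirs(os.path.join(WORKSPACE, "sparse"), exist_ok=True)
        db = os.path.join(WORKSPACE, "database.db")

        # 1. Detect keypoints and descriptors in every image.
        colmap("feature_extractor", "--database_path", db, "--image_path", IMAGES)
        # 2. Match features between all image pairs.
        colmap("exhaustive_matcher", "--database_path", db)
        # 3. Incremental SfM: camera poses, self-calibration and a sparse point cloud.
        colmap("mapper", "--database_path", db, "--image_path", IMAGES,
               "--output_path", os.path.join(WORKSPACE, "sparse"))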

    Data-Driven Radiometric Photo-Linearization

    In computer vision and computer graphics, a photograph is often considered a photometric representation of a scene. However, for most camera models, the relation between recorded pixel value and the amount of light received on the sensor is not linear. This non-linear relationship is modeled by the camera response function which maps the scene radiance to the image brightness. This non-linear transformation is unknown, and it can only be recovered via a rigorous radiometric calibration process. Classic radiometric calibration methods typically estimate a camera response function from an exposure stack (i.e., an image sequence captured with different exposures from the same viewpoint and time). However, for photographs in large image collections for which we do not have control over the capture process, traditional radiometric calibration methods cannot be applied. This thesis details two novel data-driven radiometric photo-linearization methods suitable for photographs captured with unknown camera settings and under uncontrolled conditions. First, a novel example-based radiometric linearization method is proposed, that takes as input a radiometrically linear photograph of a scene (i.e., exemplar), and a standard (radiometrically uncalibrated) image of the same scene potentially from a different viewpoint and/or under different lighting, and which produces a radiometrically linear version of the latter. Key to this method is the observation that for many patches, their change in appearance (from different viewpoints and lighting) forms a 1D linear subspace. This observation allows the problem to be reformulated in a form similar to classic radiometric calibration from an exposure stack. In addition, practical solutions are proposed to automatically select and align the best matching patches/correspondences between the two photographs, and to robustly reject outliers/unreliable matches. Second, CRF-net (or Camera Response Function net), a robust single image radiometric calibration method based on convolutional neural networks (CNNs) is presented. The proposed network takes as input a single photograph, and outputs an estimate of the camera response function in the form of the 11 PCA coefficients for the EMoR camera response model. CRF-net is able to accurately recover the camera response function from a single photograph under a wide range of conditions.
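    To make the parameterization concrete: in the EMoR model a response curve is the mean curve plus a weighted sum of PCA basis curves, so the 11 coefficients estimated by CRF-net fully determine the curve. The sketch below, which assumes the mean and basis curves have already been loaded as arrays (e.g. from the published EMoR data files), reconstructs the response and inverts it to linearize an image; the function and variable names are illustrative assumptions.

        import numpy as np

        def linearize(image, f0, H, coeffs):
            """Linearize an 8-bit image given an EMoR-style response model.

            image  : uint8 array of pixel brightness values
            f0     : (1024,) mean response curve sampled on irradiance in [0, 1]
            H      : (K, 1024) PCA basis curves, K >= len(coeffs)
            coeffs : (11,) PCA coefficients, e.g. as predicted by CRF-net
            """
            # Reconstruct the response f(E) = f0(E) + sum_k c_k * h_k(E).
            f = f0 + coeffs @ H[: len(coeffs)]
            E = np.linspace(0.0, 1.0, f.size)
            # f maps irradiance to brightness; invert it by swapping the roles of
            # x and y in a 1D interpolation (the curve is assumed monotonic).
            B = image.astype(np.float64) / 255.0
            return np.interp(B, f, E)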

    The application of open-source and free photogrammetric software for the purposes of cultural heritage documentation

    The documentation of cultural heritage is an essential part of the appropriate care of historical monuments, which represent a part of our history. At present it is a topical issue on which considerable funds are being spent, among others for the documentation of immovable historical monuments in the form of castle ruins. Non-contact surveying technologies, namely terrestrial laser scanning and digital photogrammetry, are among the most commonly used technologies by which suitable documentation can be obtained; however, their use may be very costly. In recent years, various types of software products and web services based on the SfM (or MVS) method and developed as open-source software, or offered as a freely available service, relying on the basic principles of photogrammetry and computer vision, have started to get into the spotlight. Using these services and software, acquired digital images of a given object can be processed into a point cloud, serving directly as a final output or as a basis for further processing. The aim of this paper, based on images of various objects of the Slanec castle ruins obtained with the DSLR Pentax K5, is to assess the suitability of different types of open-source and free software and free web services, and their reliability in terms of surface reconstruction and photo-texture quality, for the purposes of castle ruins documentation.
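    The reliability of the reconstructed surfaces is typically judged by comparing each resulting point cloud against a reference; as a small illustration of such a comparison (not the exact procedure used in the paper), the sketch below computes nearest-neighbour cloud-to-cloud distances with SciPy; the file names are placeholders.

        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_to_cloud_distances(test_xyz, reference_xyz):
            """Distance from every test point to its nearest reference point."""
            tree = cKDTree(reference_xyz)           # spatial index over the reference cloud
            distances, _ = tree.query(test_xyz)     # nearest-neighbour query per test point
            return distances

        # Hypothetical usage with clouds stored as N x 3 coordinate arrays.
        test = np.loadtxt("cloud_opensource.xyz")
        reference = np.loadtxt("cloud_reference.xyz")
        d = cloud_to_cloud_distances(test, reference)
        print("mean, RMS, 95th percentile:",
              d.mean(), np.sqrt((d ** 2).mean()), np.percentile(d, 95))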

    Radiometric Calibration Using Photo Collections (Calibrage Radiométrique en Utilisant des Collections de Photos)

    Access to the scene irradiance is a desirable feature in many computer vision algorithms. Applications like BRDF estimation, relighting or augmented reality need measurements of the object's photometric properties, and the simplest way to obtain them is with a camera. However, the first step necessary to achieve this goal is the computation of the function that relates scene irradiance to image intensities. In this paper we propose to exploit the large variety of an object's appearances in photo collections to recover this non-linear function for each of the cameras that acquired the available images. This process, also known as radiometric calibration, uses an unstructured set of images to recover the cameras' geometric calibration and a 3D scene model using available methods. From this input, the camera response function is estimated for each image. This highly ill-posed problem is made tractable by using appropriate priors. The proposed approach is based on the empirical prior on camera response functions introduced by Grossberg and Nayar. Linear methods are proposed that allow approximate solutions to be computed, which are then refined by non-linear least squares optimization.
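    As a rough illustration of the two-stage fit described above, linear initialization followed by non-linear least-squares refinement, the sketch below fits coefficients of a Grossberg–Nayar-style inverse-response basis so that linearized intensities match the relative irradiance predicted for corresponding scene points; the data layout, basis curves and residual definition are assumptions, not the paper's exact formulation.

        import numpy as np
        from scipy.optimize import least_squares

        def fit_inverse_response(B, E, g0, H):
            """Fit inverse-response coefficients from brightness/irradiance pairs.

            B  : (N,) observed pixel brightness values in [0, 1]
            E  : (N,) relative irradiance at the same scene points (known up to scale)
            g0 : (1024,) mean inverse-response curve sampled on [0, 1]
            H  : (K, 1024) basis curves of the inverse-response model
            """
            x = np.linspace(0.0, 1.0, g0.size)
            # Sample the mean and basis curves at the observed brightness values.
            G0 = np.interp(B, x, g0)
            Hs = np.vstack([np.interp(B, x, h) for h in H])      # (K, N)

            # Residual: g0(B) + sum_k c_k h_k(B) - s * E, parameters p = (c_1..c_K, s).
            def residual(p):
                return G0 + p[:-1] @ Hs - p[-1] * E

            # Linear initialization: the residual is linear in (c, s).
            A = np.column_stack([Hs.T, -E])
            p0, *_ = np.linalg.lstsq(A, -G0, rcond=None)
            # Non-linear refinement with a robust loss to down-weight bad correspondences.
            fit = least_squares(residual, p0, loss="soft_l1")
            return fit.x[:-1], fit.x[-1]        # coefficients and the irradiance scale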

    AUTOMATED AND ACCURATE ORIENTATION OF COMPLEX IMAGE SEQUENCES

    The paper illustrates an automated methodology capable of finding tie points in different categories of images for a successive orientation and camera pose estimation procedure. The algorithmic implementation is encapsulated in a software package called ATiPE. The entire procedure combines several algorithms from both Computer Vision (CV) and Photogrammetry in order to obtain accurate results in an automated way. Although numerous efficient solutions exist for images taken with the traditional aerial block geometry, the complexity and diversity of image network geometry in close-range applications make the automatic identification of tie points a very complicated task. The reported examples were made available for the 3D-ARCH 2011 conference and include images featuring different characteristics in terms of resolution, network geometry, calibration information and external constraints (ground control points, known distances). In addition, some further examples are shown that demonstrate the capability of the orientation procedure to cope with a large variety of block configurations.
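    For a single image pair, a generic tie-point extraction step of the kind such pipelines combine (illustrative only, not the ATiPE implementation) can be sketched with OpenCV: detect SIFT features, keep distinctive matches via the ratio test, and verify them geometrically with a RANSAC estimate of the fundamental matrix; the ratio and RANSAC thresholds below are conventional defaults.

        import cv2
        import numpy as np

        def tie_points(path_a, path_b, ratio=0.8):
            """Detect, match and geometrically verify tie points between two images."""
            img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
            img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
            sift = cv2.SIFT_create()
            kp_a, des_a = sift.detectAndCompute(img_a, None)
            kp_b, des_b = sift.detectAndCompute(img_b, None)

            # Lowe's ratio test keeps only distinctive correspondences.
            matches = [m for m, n in cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
                       if m.distance < ratio * n.distance]

            # Geometric verification: reject matches inconsistent with the epipolar geometry.
            pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
            pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
            _, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 3.0, 0.99)
            keep = mask.ravel() == 1
            return pts_a[keep], pts_b[keep]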

    Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and the other is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.
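    The recommendations on sensor properties and data collection revolve around ground sample distance (GSD) and image overlap; a small worked example with generic values (not figures from this study) shows how they relate:

        # Ground sample distance and footprint of a nadir image; illustrative values only.
        pixel_size_m = 5.5e-6     # sensor pixel pitch [m]
        focal_length_m = 0.016    # lens focal length [m]
        flying_height_m = 50.0    # height above ground [m]
        image_px = (4000, 3000)   # image width and height [pixels]

        gsd_m = pixel_size_m * flying_height_m / focal_length_m        # metres per pixel
        footprint_m = (image_px[0] * gsd_m, image_px[1] * gsd_m)       # ground coverage

        # Distance between exposures for 80 % forward overlap, assuming the flight
        # direction runs along the image height.
        overlap = 0.80
        base_m = footprint_m[1] * (1.0 - overlap)

        print(f"GSD {gsd_m * 100:.1f} cm, footprint {footprint_m[0]:.0f} x "
              f"{footprint_m[1]:.0f} m, exposure base {base_m:.1f} m")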

    Photometric Reconstruction from Images: New Scenarios and Approaches for Uncontrolled Input Data

    The changes in surface shading caused by varying illumination constitute an important cue to discern fine details and recognize the shape of textureless objects. Humans perform this task subconsciously, but it is challenging for a computer because several variables are unknown and intermix in the light distribution that actually reaches the eye or camera. In this work, we study algorithms and techniques to automatically recover the surface orientation and reflectance properties from multiple images of a scene. Photometric reconstruction techniques have been investigated for decades but are still restricted to industrial applications and research laboratories. Making these techniques work on more general, uncontrolled input without specialized capture setups has to be the next step but is not yet solved. We explore the current limits of photometric shape recovery in terms of input data and propose ways to overcome some of its restrictions. Many approaches, especially for non-Lambertian surfaces, rely on the illumination and the radiometric response function of the camera being known. The accuracy such algorithms are able to achieve depends strongly on the quality of an a priori calibration of these parameters. We propose two techniques to estimate the position of a point light source, experimentally compare their performance with the commonly employed method, and draw conclusions about which one to use in practice. We also discuss how well an absolute radiometric calibration can be performed on uncontrolled consumer images and show the application of a simple radiometric model to re-create night-time impressions from color images.

    A focus of this thesis is on Internet images, which are an increasingly important source of data for computer vision and graphics applications. Concerning reconstructions in this setting, we present novel approaches that are able to recover surface orientation from Internet webcam images. We explore two different strategies to overcome the challenges posed by this kind of input data. One technique exploits orientation consistency and matches appearance profiles on the target with a partial reconstruction of the scene. This avoids an explicit light calibration and works for any reflectance that is observed on the partial reference geometry. The other technique employs an outdoor lighting model and reflectance properties represented as parametric basis materials. It yields a richer scene representation consisting of shape and reflectance. This is very useful for the simulation of new impressions or editing operations, e.g. relighting. The proposed approach is the first that achieves such a reconstruction on webcam data. Both approaches are accompanied by evaluations on synthetic and real-world data showing qualitative and quantitative results.

    We also present a reconstruction approach for more controlled data in terms of the target scene. It relies on a reference object to relax a constraint common to many photometric stereo approaches: the fixed camera assumption. The proposed technique allows the camera and light source to vary freely in each image. It again avoids a light calibration step and can be applied to non-Lambertian surfaces. In summary, this thesis contributes to the calibration and to the reconstruction aspects of photometric techniques. We overcome challenges in both controlled and uncontrolled settings, with a focus on the latter. All proposed approaches are shown to operate on non-Lambertian objects as well.
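    For context on the baseline these methods extend: classic calibrated Lambertian photometric stereo recovers per-pixel normals and albedo by linear least squares from images taken with a fixed camera under known point lights, exactly the assumptions the thesis works to relax. A minimal sketch of that generic baseline (not one of the proposed approaches):

        import numpy as np

        def photometric_stereo(images, light_dirs):
            """Classic calibrated Lambertian photometric stereo.

            images     : (M, H, W) grayscale images from a fixed viewpoint
            light_dirs : (M, 3) unit light directions, one per image
            Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
            """
            M, H, W = images.shape
            I = images.reshape(M, -1)                             # (M, H*W) intensities
            # Lambertian model: I = L @ G with G = albedo * normal, solved per pixel.
            G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, H*W)
            albedo = np.linalg.norm(G, axis=0)
            normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
            return normals, albedo.reshape(H, W)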

    THE ULTRACAM STORY
