
    Positioning in time and space: cost-effective exterior orientation for airborne archaeological photographs

    Since manned airborne reconnaissance for archaeological purposes is often characterised by more-or-less random photographing of archaeological features on the Earth's surface, the exact position and orientation of the camera during image acquisition become very important for an effective inventorying and interpretation workflow for these aerial photographs. Although positioning is generally achieved by simultaneously logging the flight path or directly recording the camera's position with a GNSS receiver, this approach does not allow recording of the camera's roll, pitch and yaw angles. The latter are essential elements of the camera's complete exterior orientation, which, together with the camera's inner orientation, accurately defines the portion of the Earth recorded in the photograph. This paper proposes a cost-effective, accurate and precise GNSS/IMU solution (image position: 2.5 m and orientation: 2°, both at 1σ) to record all essential exterior orientation parameters for the direct georeferencing of the images. After introducing the hardware used, the paper presents the software developed to record and estimate these parameters; this direct georeferencing information can also be embedded into the images' metadata. Subsequently, the first results of the estimation of the mounting calibration (i.e. the misalignment between the camera and GNSS/IMU coordinate frames) are provided, and a comparison with a dedicated commercial photographic GNSS/IMU solution demonstrates the advantages of the introduced solution. Finally, an outlook on future tests and improvements concludes the article.
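The roll, pitch and yaw angles described in this abstract define the rotational part of the exterior orientation. As a minimal sketch (the axis convention below is an assumption for illustration; it is not the paper's actual software, and real GNSS/IMU workflows must match the sensor's convention), the three angles can be combined into a camera rotation matrix as follows:

```python
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Build a 3x3 rotation matrix from roll, pitch, yaw (radians).

    Convention assumed here: R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# A level camera with all angles zero yields the identity rotation.
R = rotation_from_rpy(0.0, 0.0, 0.0)
```

Together with the GNSS position, such a matrix fixes the camera pose used for direct georeferencing.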

    Camera re-calibration after zooming based on sets of conics

    We describe a method to compute the internal parameters (focal length and principal point) of a camera with known position and orientation, based on the observation of two or more conics on a known plane. The conics may even be degenerate (e.g. pairs of lines). The proposed method can be used to re-estimate the internal parameters of a fully calibrated camera after zooming to a new, unknown focal length. It also allows estimating the internal parameters when a second, fully calibrated camera observes the same conics. The parameters estimated with the proposed method are consistent with the output of more traditional procedures that require a larger number of calibration images. An in-depth analysis of the geometrical configurations that influence the proposed method is also reported.
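Conic-based calibration of this kind rests on how a conic transforms under a point homography: if points map as x' = Hx, the conic matrix maps as C' = H^(-T) C H^(-1). A small sketch of that core relation (this is the standard projective-geometry rule, not the paper's full estimation procedure):

```python
import numpy as np

def transform_conic(C, H):
    """Map a symmetric conic matrix C under a point homography H.

    If points map as x' = H x, the transformed conic is
    C' = inv(H).T @ C @ inv(H), defined up to scale.
    """
    Hinv = np.linalg.inv(H)
    return Hinv.T @ C @ Hinv

# Unit circle x^2 + y^2 - 1 = 0 written as a symmetric 3x3 matrix.
C = np.diag([1.0, 1.0, -1.0])
# A pure zoom-by-2 homography about the origin.
H = np.diag([2.0, 2.0, 1.0])
C2 = transform_conic(C, H)
```

Points on the original circle, once mapped through H, satisfy the transformed conic C2, which is the consistency the calibration exploits.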

    Application for photogrammetry of organisms

    Single-camera photogrammetry is a well-established procedure for retrieving quantitative information from objects using photography. In the biological sciences, photogrammetry is often applied to aid morphometry studies, which focus on the comparative study of shapes and organisms. Two types of photogrammetry are used in morphometric studies: 2D photogrammetry, where distance and angle measurements quantitatively describe attributes of an object, and 3D photogrammetry, where landmark coordinates are used to reconstruct an object's true shape. Although excellent software tools for 3D photogrammetry are available, software specifically designed to aid the somewhat simpler 2D photogrammetry is lacking. Most studies applying 2D photogrammetry therefore still rely on manual acquisition of measurements from pictures, which must then be scaled to an appropriate measuring system. This is often a laborious multistep process, in most cases using diverse software to complete different tasks. Besides being time-consuming, it is also error-prone, since measurements are often recorded manually. The present work tackled these issues by implementing a new cross-platform software tool that integrates and streamlines the workflow usually applied in 2D photogrammetry studies. Results from a preliminary study show a 45% decrease in processing time when using the software developed in the scope of this work compared with a competing methodology. Existing limitations and future work towards improved versions of the software are discussed.
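The scaling step this abstract describes, converting pixel measurements to real-world units via a reference object of known size, can be sketched in a few lines (a hedged illustration of the general 2D-photogrammetry workflow, not the software developed in this work):

```python
def scale_measurements(pixel_lengths, ref_pixels, ref_mm):
    """Convert pixel measurements to millimetres using a reference
    object of known size photographed in the same plane as the subject.

    ref_pixels: measured pixel length of the reference object.
    ref_mm: its known real-world length in millimetres.
    """
    mm_per_pixel = ref_mm / ref_pixels
    return [p * mm_per_pixel for p in pixel_lengths]

# A 10 mm scale bar spanning 200 px gives 0.05 mm/px,
# so a 50 px feature measures 2.5 mm.
lengths_mm = scale_measurements([50, 120], ref_pixels=200, ref_mm=10.0)
```

The assumption that subject and reference lie in the same plane (and at the same distance from the lens) is what keeps this single-scale conversion valid.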

    On the popularization of digital close-range photogrammetry: a handbook for new users.

    National Technical University of Athens -- Master's thesis. Interdisciplinary-Interdepartmental Postgraduate Programme (D.P.M.S.) "Geoinformatics"

    Image Based Modeling from Spherical Photogrammetry and Structure for Motion. The Case of the Treasury, Nabatean Architecture in Petra

    This research deals with an efficient and low-cost methodology to obtain a metric and photorealistic survey of a complex architecture. Photomodeling is an established interactive approach for producing a detailed and quick 3D model reconstruction. It proceeds by creating a rough surface over which oriented images can be back-projected in real time; the model can then be refined by checking the coincidence between the surface and the projected texture. The challenge of this research is to combine the advantages of two technologies already set up and used in many projects: spherical photogrammetry (Fangi, 2007, 2008, 2009, 2010) and structure for motion (the Photosynth web service and Bundler + CMVS2 + PMVS2). The input images are taken from the same points of view to form the set of panoramic photos, taking care to use well-suited projections: equirectangular for spherical photogrammetry and rectilinear for the Photosynth web service. The performance of spherical photogrammetry is already known in terms of metric accuracy and acquisition speed, but the restitution step is time-consuming because homologous points must be recognised manually across panoramas. In Photosynth, by contrast, restitution is quick and automated: the provided point clouds are useful benchmarks from which to start the model reconstruction, even though they lack detail and scale. The proposed workflow needs ad-hoc tools to capture high-resolution rectilinear panoramic images and to visualise Photosynth point clouds and camera orientation parameters; all of them were developed in the VVVV programming environment. 3D Studio Max was then chosen for its performance in interactive modeling, handling of UV mapping parameters and real-time visualisation of the texture projected onto the model surface.
Experimental results show how it is possible to obtain a 3D photorealistic model using the scale of the spherical photogrammetry restitution to orient the web-provided point clouds. Moreover, the proposed research highlights how it is possible to speed up the model reconstruction without losing metric and photometric accuracy. At the same time, using the same panorama dataset, it offers a useful opportunity to compare the orientations obtained from the two mentioned technologies (spherical photogrammetry and structure for motion).
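The equirectangular projection used for the spherical panoramas above maps each pixel to a viewing direction on the sphere. A minimal sketch of that mapping (the pixel-to-angle convention is an assumption for illustration, not code from the described workflow):

```python
import math

def equirect_to_ray(u, v, width, height):
    """Convert equirectangular pixel coordinates to a unit direction.

    Assumed convention: u in [0, width) spans longitude [-pi, pi),
    v in [0, height) spans latitude [pi/2, -pi/2] top to bottom.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The image centre looks straight ahead along +z.
ray = equirect_to_ray(1024, 512, 2048, 1024)
```

Inverting this mapping for a chosen viewing direction is how rectilinear views can be extracted from the same panorama dataset.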

    Reconstruction of the pose of uncalibrated cameras via user-generated videos

    Extraction of 3D geometry from hand-held, unsteady, uncalibrated cameras faces multiple difficulties: finding usable frames, feature matching, and unknown variable focal length, to name three. We have built a prototype system that allows a user to spatially navigate playback viewpoints of an event of interest, using geometry automatically recovered from casually captured videos. The system, whose workings we present in this paper, necessarily estimates not only scene geometry but also relative viewpoint position, overcoming the mentioned difficulties in the process. The only inputs required are video sequences of a common scene from various viewpoints, as are readily available online from sporting and music events. Our methods make no assumption about the synchronization of the input and do not require file metadata, instead exploiting the video itself to self-calibrate. The footage need only contain some camera rotation with little translation, a likely occurrence for hand-held event footage. This is the author accepted manuscript; the final version is available from IEEE via http://dx.doi.org/10.1145/2659021.265902
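The reason rotation-with-little-translation footage supports self-calibration is that two frames from a purely rotating camera are related by a homography that depends on the intrinsics. A sketch of that standard relation (illustrative only; the paper's actual pipeline is not reproduced here):

```python
import numpy as np

def rotation_homography(K, R):
    """Homography relating two frames of a purely rotating camera.

    For intrinsics K and inter-frame rotation R, image points map as
    x' ~ H x with H = K @ R @ inv(K); estimating H across frames
    constrains K, which is the basis of rotation-based self-calibration.
    """
    return K @ R @ np.linalg.inv(K)

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = rotation_homography(K, np.eye(3))  # no rotation -> identity homography
```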

    EXIF as Language: Learning Cross-Modal Associations Between Images and Camera Metadata

    We learn a visual representation that captures information about the camera that recorded a given photo. To do this, we train a multimodal embedding between image patches and the EXIF metadata that cameras automatically insert into image files. Our model represents this metadata by simply converting it to text and then processing it with a transformer. The features that we learn significantly outperform other self-supervised and supervised features on downstream image forensics and calibration tasks. In particular, we successfully localize spliced image regions "zero shot" by clustering the visual embeddings for all of the patches within an image. Project link: http://hellomuffin.github.io/exif-as-languag
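The "metadata as text" step this abstract mentions amounts to flattening EXIF tag-value pairs into a string a text transformer can consume. A hedged sketch of one such serialization (the exact format and tag names here are assumptions, not the paper's implementation):

```python
def exif_to_text(exif: dict) -> str:
    """Flatten an EXIF tag dictionary into a single caption-like string.

    Tags are sorted so the serialization is deterministic; a real
    pipeline might instead preserve a fixed tag order.
    """
    return " ".join(f"{k}: {v}" for k, v in sorted(exif.items()))

# Hypothetical EXIF tags for illustration.
tags = {"Model": "iPhone 6", "FNumber": 2.2, "ISOSpeedRatings": 32}
text = exif_to_text(tags)
```

The resulting string would then be tokenized and embedded alongside the image patches.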

    Cost-effective non-metric photogrammetry from consumer-grade sUAS: implications for direct georeferencing of structure from motion photogrammetry

    The declining costs of small Unmanned Aerial Systems (sUAS), in combination with Structure-from-Motion (SfM) photogrammetry, have triggered renewed interest in image-based topography reconstruction. However, the potential uptake of sUAS-based topography is limited by the need for ground control acquired with expensive survey equipment. Direct georeferencing (DG) is a workflow that obviates ground control and uses only the camera positions to georeference the SfM results. However, the absence of ground control poses significant challenges for the data quality of the final geospatial outputs. Notably, it is generally accepted that ground control is required to georeference the model, refine the camera calibration parameters, and remove any artefacts of optical distortion from the topographic model. Here, we present an examination of DG carried out with low-cost, consumer-grade sUAS. We begin with a study of surface deformations resulting from systematic perturbations of the radial lens distortion parameters. We then test a number of flight patterns and develop a novel error quantification method to assess the outcomes. Our perturbation analysis shows that there exist families of predictable equifinal solutions of K1-K2 that minimize doming in the output model. The equifinal solutions can be expressed as K2 = f(K1), and they have been observed for both the DJI Inspire 1 and Phantom 3 sUAS platforms. This equifinality relationship can be used as an external reliability check on the self-calibration, allowing a DG workflow to produce topography free of non-affine deformations, with random errors of 0.1% of the flying height, linear offsets below 10 m and off-vertical tilts below 1°.
While not yet of survey-grade quality, these results demonstrate that low-cost sUAS can produce reliable topography products without recourse to expensive survey equipment, and we argue that direct georeferencing and low-cost sUAS could transform survey practices in both academic and commercial disciplines.
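The K1 and K2 parameters perturbed in this study are the first two coefficients of the standard polynomial radial distortion model. A minimal sketch of that model (the common Brown-style form, shown for context rather than as the paper's exact parameterisation):

```python
def apply_radial_distortion(x, y, k1, k2):
    """Apply two-term polynomial radial distortion.

    x, y are normalised image coordinates with the principal point at
    the origin; distorted = undistorted * (1 + k1*r^2 + k2*r^4).
    """
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# With k1 = k2 = 0 the point is unchanged.
xd, yd = apply_radial_distortion(0.5, 0.0, 0.0, 0.0)
```

Because the r^2 and r^4 terms partly trade off against each other over a bounded image radius, different (k1, k2) pairs can produce nearly identical distortion curves, which is consistent with the equifinal K2 = f(K1) families the abstract reports.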