
    Design and application of an automated system for camera photogrammetric calibration

    This work presents the development of a novel Automatic Photogrammetric Camera Calibration System (APCCS) that is capable of calibrating cameras regardless of their Field of View (FOV), resolution and sensitivity spectrum. Such calibrated cameras can, despite lens distortion, accurately determine vectors in a desired reference frame for any image coordinate, and map points in the reference frame to their corresponding image coordinates. The proposed system is based on a robotic arm which presents an interchangeable light source to the camera in a sequence of known discrete poses. A computer captures the camera's image for each robot pose and locates the light source centre in the image for each point in the sequence. Careful selection of the robot poses allows cost functions dependent on the captured poses and light source centres to be formulated for each of the desired calibration parameters. These parameters are the Brown model parameters to convert from the distorted to the undistorted image (and vice versa), the focal length, and the camera's pose. The pose is split into the camera pose relative to its mount and the mount's pose relative to the reference frame to aid subsequent camera replacement. The parameters that minimise each cost function are determined via a combination of coarse global and fine local optimisation techniques: genetic algorithms and the Leapfrog algorithm, respectively. The real-world applicability of the APCCS is assessed by photogrammetrically stitching cameras of differing resolutions, FOVs and spectra into a single multispectral panorama. The quality of these panoramas is deemed acceptable after both subjective and quantitative analyses. The quantitative analysis compares the stitched positions of matched image feature pairs found with the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) algorithms and shows the stitching to be accurate to within 0.3°. The noise sensitivity of the APCCS is assessed via the generation of synthetic light source centres and robot poses. The data is realistically created for a hypothetical camera pair via the corruption of ideal data using seven noise sources emulating the robot movement, camera mounting and image processing errors. The calibration and resulting stitching accuracies are shown to be largely independent of the noise magnitudes in the operational ranges tested. The APCCS is thus found to be robust to noise. The APCCS is shown to meet all its requirements by determining a novel combination of calibration parameters for cameras, regardless of their properties, in a noise-resilient manner.
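
    The Brown model mentioned above maps undistorted image coordinates to distorted ones through radial and tangential terms, and its inverse has no closed form. A minimal sketch of both directions follows; the coefficient set (k1, k2, k3, p1, p2), the normalised-coordinate convention and the fixed-point inversion are generic assumptions rather than details taken from the thesis.

        import numpy as np

        def brown_distort(x, y, k1, k2, k3, p1, p2):
            """Apply Brown radial/tangential distortion to normalised
            image coordinates (pixel coordinates divided by focal length)."""
            r2 = x * x + y * y
            radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
            dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
            dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
            return x * radial + dx, y * radial + dy

        def brown_undistort(xd, yd, k1, k2, k3, p1, p2, iterations=10):
            """Invert the distortion by fixed-point iteration, since the
            Brown model has no closed-form inverse."""
            x, y = xd, yd
            for _ in range(iterations):
                r2 = x * x + y * y
                radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
                dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
                dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
                x, y = (xd - dx) / radial, (yd - dy) / radial
            return x, y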

    Quality assessment of spherical panoramic images

    In recent years, the production of panoramic images has been boosted by the increasing use of digital photographic cameras and mobile phones. However, for highly demanding applications such as long-range deformation monitoring, the accuracy and quality control of panoramic images and of the processes used to obtain accurate 3D models should be properly assessed. Therefore, prior to being applied in real projects, the quality of the spherical panoramic images generated by three widely used computer programs (Agisoft Metashape, GigaPan Stitch and PTGui) is evaluated using the same images of a photogrammetric laboratory full of control points and of an outdoor environment, shooting from several stations. In addition to the assessment of the geometric accuracy, the study also includes aspects important for practical efficiency, such as workflow, speed of processing, user-friendliness, and the exportable products and formats available. The results of the comparisons show that Agisoft Metashape meets the required geometric specifications with higher quality and has clear advantages in performance compared to the other two tested programs. Javadi, P.; Lerma, J.; García-Asenjo, L.; Garrigues, P. (2021). Quality assessment of spherical panoramic images. In Proceedings 3rd Congress in Geomatics Engineering. Editorial Universitat Politècnica de València. 7-14. https://doi.org/10.4995/CiGeo2021.2021.12728
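
    As a rough illustration of how the geometric accuracy of such spherical panoramas can be quantified against surveyed control points, the sketch below converts control-point measurements in an equirectangular panorama to viewing directions and compares them with directions computed from the known station and control-point coordinates. The equirectangular mapping and the angular RMSE metric are assumptions for illustration; the paper does not publish its exact evaluation procedure.

        import numpy as np

        def equirect_to_direction(u, v, width, height):
            """Pixel (u, v) in a full 360 x 180 degree equirectangular panorama
            -> unit direction vector (x, y, z)."""
            az = (u / width) * 2.0 * np.pi - np.pi
            el = np.pi / 2.0 - (v / height) * np.pi
            return np.array([np.cos(el) * np.cos(az),
                             np.cos(el) * np.sin(az),
                             np.sin(el)])

        def angular_rmse(measured_px, control_xyz, station_xyz, width, height):
            """RMSE (degrees) between directions measured in the panorama and
            directions derived from surveyed control-point coordinates.
            Assumes the panorama is oriented in the survey coordinate frame."""
            errors = []
            for (u, v), point in zip(measured_px, control_xyz):
                measured = equirect_to_direction(u, v, width, height)
                d = np.asarray(point, dtype=float) - np.asarray(station_xyz, dtype=float)
                computed = d / np.linalg.norm(d)
                angle = np.degrees(np.arccos(np.clip(measured @ computed, -1.0, 1.0)))
                errors.append(angle)
            return float(np.sqrt(np.mean(np.square(errors))))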

    A Stronger Stitching Algorithm for Fisheye Images based on Deblurring and Registration

    Fisheye lenses, which are well suited to panoramic imaging, have the prominent advantages of a large field of view and low cost. However, fisheye images suffer from severe geometric distortion, which can interfere with the image registration and stitching stages. To address this drawback, we devise a stronger stitching algorithm for fisheye images by combining traditional image processing methods with deep learning. In the fisheye image correction stage, we propose the Attention-based Nonlinear Activation Free Network (ANAFNet) to deblur fisheye images corrected by Zhang's calibration method. Specifically, ANAFNet adopts the classical single-stage U-shaped architecture based on convolutional neural networks with a soft-attention technique, and it can effectively restore a sharp image from a blurred one. For image registration, we propose ORB-FREAK-GMS (OFG), a comprehensive image matching algorithm, to improve the accuracy of image registration. Experimental results demonstrate that panoramic images of superior quality can be obtained by stitching fisheye images with our method.
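
    A minimal sketch of an ORB + FREAK + GMS matching stage of the kind described, written with OpenCV (the contrib modules provide FREAK and matchGMS); the detector settings and thresholds here are placeholders, not the parameters used in the paper.

        import cv2

        def ofg_match(img1, img2):
            """ORB keypoints, FREAK descriptors, Hamming brute-force matching,
            then GMS filtering of the putative matches."""
            orb = cv2.ORB_create(nfeatures=5000)
            freak = cv2.xfeatures2d.FREAK_create()  # requires opencv-contrib-python
            kp1 = orb.detect(img1, None)
            kp2 = orb.detect(img2, None)
            kp1, des1 = freak.compute(img1, kp1)
            kp2, des2 = freak.compute(img2, kp2)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des1, des2)
            # GMS expects the image sizes as (width, height)
            size1 = (img1.shape[1], img1.shape[0])
            size2 = (img2.shape[1], img2.shape[0])
            return cv2.xfeatures2d.matchGMS(size1, size2, kp1, kp2, matches,
                                            withRotation=True)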

    Single camera photogrammetry for reverse engineering and fabrication of ancient and modern artifacts

    Photogrammetry has been used for recording objects for well over one hundred and fifty years. Modern digital photogrammetry, using a single medium-range digital single lens reflex (DSLR) camera, can transform two-dimensional images into three-dimensional CAD spatial representations, and together with additive manufacturing (3D printing) technology, geometric representations of original cultural, historic and geological artifacts can be fabricated in a process known as Reverse Engineering. Being able to replicate such objects is of great benefit in education; if the original object cannot be handled because it is too old or delicate, then replicas give the handler a chance to experience the size, texture and weight of rare objects. Photogrammetry equipment is discussed, the objective being simplicity of execution for eventual realisation of physical products such as the artifacts described here. As the processing power of computers has increased and become more widely available, computer software can now digitally combine multi-view photographs, taken from 360° around the object, into 3D CAD representational virtual images. The resulting data are then reprocessed with a secondary computer program to produce the STL file that additive manufacturing machines can read, so as to produce replicated models of the originals. Three case studies are documented: the reproduction of a small modern clay sculpture; a 3000-year-old Egyptian artifact; and an Ammonite fossil, all successfully recreated using additive manufacturing technology.
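
    The mesh-to-STL step of the workflow just described can be sketched in a few lines; the trimesh library and the file names here are illustrative choices, not the software used in the case studies.

        import trimesh

        # Load the mesh exported by the photogrammetry package, tidy it up and
        # write the binary STL that the additive-manufacturing toolchain reads.
        mesh = trimesh.load("reconstructed_artifact.obj", force="mesh")
        mesh.remove_unreferenced_vertices()
        mesh.fill_holes()               # printers need a watertight surface
        mesh.apply_scale(0.5)           # optional: print a half-size replica
        print("watertight:", mesh.is_watertight, "faces:", len(mesh.faces))
        mesh.export("replica_for_printing.stl")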

    Real-time Panorama Stitching using a Single PTZ-Camera without using Image Feature Matching

    In surveillance applications, one thing to consider is how much of a scene a camera can cover. One way to increase coverage is to take overlapping images and blend them, creating a new image with a bigger field of view. In this thesis work we have looked at how one can create panorama images with a pan-tilt camera and how fast it can be done. We chose a circular panorama representation for this. Our approach was that, by gathering enough metadata from the camera, one can rectify the gathered images and blend them without matching feature points or other computationally heavy operations. We show that this can be done. The gathered images were corrected for lens distortions and for rolling shutter effects arising from rotating the camera. Attempts were made to find an optimal path for the camera to follow while capturing images. An algorithm for intensity correction of the images was also implemented. We find that one can rotate the camera at high speeds and still produce a good quality panorama image. The limiting factors are the precision of the metadata gathered, such as motion data from the on-board gyro, and the lighting conditions, since a short shutter time is required to minimize motion blur. The quality varies depending on the time taken to capture the images needed to create the spherical projection. The fastest run was done in 1.6 seconds with some distortions, while a run of around 4 seconds generally produces a good quality panorama image.
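
    The core idea of placing each frame into the panorama from metadata alone can be illustrated as follows: given the focal length and the pan/tilt angles reported by the camera, every pixel's viewing ray is rotated into the world frame and written into an equirectangular panorama. The rotation convention and the simple forward mapping are assumptions for illustration; the thesis additionally corrects lens distortion, rolling shutter and intensity, which this sketch omits.

        import numpy as np

        def place_in_panorama(img, f, pan, tilt, pano_w, pano_h):
            """Forward-map one pinhole image (focal length f in pixels, pan/tilt
            in radians) into an equirectangular panorama without any feature
            matching. A real implementation would map backwards from panorama
            pixels and interpolate to avoid holes."""
            h, w = img.shape[:2]
            pano = np.zeros((pano_h, pano_w) + img.shape[2:], dtype=img.dtype)
            # viewing rays through every source pixel, in the camera frame
            xs, ys = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
            rays = np.stack([xs, ys, np.full_like(xs, float(f))], axis=-1)
            rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
            # rotate the rays into the world frame using the pan/tilt metadata
            ry = np.array([[np.cos(pan), 0.0, np.sin(pan)],
                           [0.0, 1.0, 0.0],
                           [-np.sin(pan), 0.0, np.cos(pan)]])
            rx = np.array([[1.0, 0.0, 0.0],
                           [0.0, np.cos(tilt), -np.sin(tilt)],
                           [0.0, np.sin(tilt), np.cos(tilt)]])
            world = rays @ (ry @ rx).T
            # spherical angles -> equirectangular pixel coordinates
            az = np.arctan2(world[..., 0], world[..., 2])
            el = np.arcsin(np.clip(world[..., 1], -1.0, 1.0))
            u = ((az + np.pi) / (2.0 * np.pi) * pano_w).astype(int) % pano_w
            v = np.clip(((el + np.pi / 2.0) / np.pi * pano_h).astype(int), 0, pano_h - 1)
            pano[v, u] = img
            return pano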