
    Book review: Structure from motion in the geosciences

    Book review of Structure from Motion in the Geosciences by Jonathan Carrivick, Mark Smith and Duncan Quincey. Chichester, UK: Wiley-Blackwell, 2016. ISBN 9781466566477.

    VGC 2016: second virtual geoscience conference


    A convergent image configuration for DEM extraction that minimises the systematic effects caused by an inaccurate lens model

    The internal geometry of consumer-grade digital cameras is generally considered unstable. Research conducted recently at Loughborough University indicated the potential of these sensors to maintain their internal geometry. It also identified residual systematic error surfaces or “domes”, discernible in digital elevation models (DEMs) (Wackrow et al., 2007), caused by slightly inaccurate estimated lens distortion parameters. This paper investigates these systematic error surfaces and establishes a methodology to minimise them. Initially, simulated data were used to ascertain the effect of changing the interior orientation parameters, specifically the lens model, on extracted DEMs. The results presented demonstrate the relationship between “domes” and inaccurately specified lens distortion parameters. The stereopair remains important for data extraction in photogrammetry, often using automated DEM extraction software. The photogrammetric normal case is widely used, in which the camera base is parallel to the object plane and the optical axes of the cameras intersect the object plane orthogonally. During simulation, the error surfaces derived from DEMs extracted using the normal case were compared with error surfaces created using a mildly convergent geometry, in which, in contrast to the normal case, the optical camera axes intersect the object plane at the same point. Results of the simulation process clearly demonstrate that a mildly convergent camera configuration eradicates the systematic error surfaces. This result was confirmed through practical tests and demonstrates that mildly convergent imagery effectively improves the accuracy of DEMs derived with this class of sensor.
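The link between a slightly wrong radial lens model and the dome-shaped error surfaces can be illustrated numerically. The sketch below uses assumed, illustrative coefficient values (not those of the study) and evaluates only the first term of the Brown radial distortion model for a "true" and a slightly inaccurate k1; the displacement left uncorrected is zero at the principal point and grows radially, the pattern that propagates into DEMs as a systematic dome:

```python
import numpy as np

# Assumed, illustrative coefficients -- not values from the study
k1_true = 1.0e-7   # "true" first radial distortion coefficient
k1_est = 0.9e-7    # slightly inaccurate estimate recovered by calibration

# Image coordinates relative to the principal point (pixels)
xs = np.linspace(-1000, 1000, 21)
xv, yv = np.meshgrid(xs, xs)
r = np.hypot(xv, yv)

# First term of the Brown radial distortion model: dr = k1 * r^3.
# The displacement left uncorrected by the wrong estimate is:
residual = (k1_true - k1_est) * r ** 3

# Zero at the centre and growing with radius: the radially symmetric
# pattern that appears in extracted DEMs as a systematic "dome"
print(round(float(residual.max()), 2))  # 28.28 px at the image corners
```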

    Structure from motion (SfM) photogrammetry vs terrestrial laser scanning

    Structure from Motion (SfM) has its roots in the well-established spatial measurement method of photogrammetry, but is becoming increasingly recognised as a means to capture dense 3D data representing real-world objects, both natural and man-made. This capability has conventionally been the domain of the terrestrial laser scanner (TLS), a mature and easy-to-understand method used to generate millions of 3D point coordinates in a form known as a “point cloud”. Each technique is described and its strengths and weaknesses noted.

    Automatic isolation of blurred images from UAV image sequences

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated filtering process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process based upon the quantification of blur in an image. A “shaking table” was used to create images with known blur during a series of laboratory tests. This platform can be moved in one direction according to a mathematical function with a defined frequency and amplitude. The shaking table was used to displace a Nikon D80 digital SLR camera with a user-defined frequency and amplitude. The actual camera displacement was measured accurately and exposures were synchronised, which provided the opportunity to acquire images with a known blur effect. The acquired images were processed digitally to determine a quantifiable measure of the image blur created by the shaking-table motion. Once this measure is determined for a sequence of images, a user-defined threshold can be used to differentiate between “blurred” and “acceptable” images. A subsequent step is to establish the effect that blurred images have upon the accuracy of subsequent measurements. Both of these aspects are discussed in this paper and future work is identified.
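The paper's own blur measure is not reproduced here, but the idea of quantifying blur and thresholding it can be sketched with a widely used stand-in, the variance of a discrete Laplacian. The scene and blur length below are synthetic, for illustration only:

```python
import numpy as np

def laplacian_variance(img):
    # Variance of a discrete Laplacian: a common sharpness proxy
    # (illustrative only; not the metric developed in the paper)
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def motion_blur(img, length):
    # Simulate horizontal motion blur by averaging shifted copies
    return sum(np.roll(img, s, 1) for s in range(length)) / length

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))     # synthetic scene with high-frequency detail
blurred = motion_blur(sharp, 9)

sharp_score = laplacian_variance(sharp)
blur_score = laplacian_variance(blurred)

# A user-defined threshold on such a score separates "acceptable" from
# "blurred" frames; blur suppresses high frequencies, lowering the score.
print(sharp_score > blur_score)  # True
```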

    A multi consumer-grade fixed camera set-up with poorly determined camera geometry for precise change detection [abstract]


    Monitoring dynamic structural tests using image deblurring techniques

    Photogrammetric techniques have demonstrated their suitability for monitoring static structural tests. Advantages include scalability, reduced cost, and three-dimensional monitoring of very large numbers of points without direct contact with the test element. Commercial measuring instruments now exist which use this approach. Dynamic testing is becoming a convenient approach for long-term structural health monitoring. If image-based methods could be applied to the dynamic case, the above advantages could prove beneficial. Past work has been successful where the vibration has either large amplitude or low frequency, as even specialist imaging sensors are limited by an inherent compromise between image resolution and imaging frequency. Judgement in sensor selection is therefore critical. Monitoring of structures in real time is possible only at reduced resolution, and although imaging and computer processing hardware continuously improve, the accuracy demands of researchers and engineers increase accordingly. A new approach to measuring the vibration envelope is introduced here, whereby a long-exposure photograph is used to capture a blurred image of the vibrating structure. The high-resolution blurred image, which shows the whole vibration interval, is measured with no need for high-speed imaging. Results are presented for a series of small-scale laboratory models, as well as a larger-scale test, which demonstrate the flexibility of the proposed technique. Different image processing strategies are presented and compared, as well as the effects of exposure, aperture and sensitivity selection. Image processing time is also much shorter, increasing suitability for real-time monitoring.
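The principle of reading a vibration envelope off a long-exposure image can be sketched as follows. All values are assumed for illustration: a point target oscillating sinusoidally is integrated over many cycles of the exposure, and the extent of the resulting blur streak equals the peak-to-peak motion, with no high-speed imaging required:

```python
import numpy as np

# Assumed values for illustration -- not measurements from the paper
amplitude_px = 12.0    # vibration amplitude in pixels
centre_px = 50         # rest position of the point target
width = 101            # length of the 1D "image" row

# Sample the sinusoidal motion densely over many cycles of the exposure
t = np.linspace(0.0, 20.0 * np.pi, 10000)
positions = centre_px + amplitude_px * np.sin(t)

# Accumulate a long-exposure row: every visited pixel gathers intensity
row = np.zeros(width)
for p in positions:
    row[int(round(p))] += 1.0
row /= row.max()

# The blur streak spans the full peak-to-peak motion, so its extent
# gives the vibration envelope directly
lit = np.nonzero(row > 0)[0]
measured_peak_to_peak = int(lit[-1] - lit[0])
print(measured_peak_to_peak)  # 24 px = 2 * amplitude_px
```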

    Monitoring 3D vibrations in structures using high resolution blurred imagery

    Photogrammetry has been used in the past to monitor laboratory testing of civil engineering structures using multiple image-based sensors. This has been successful, but detecting vibrations during dynamic structural tests has proved more challenging, as it usually depends on high-speed cameras, sensors which often offer lower image resolutions and hence reduced accuracy. To overcome this limitation, the novel approach described in this paper takes measurements from blurred images in long-exposure photographs. The motion of the structure is captured in an individual motion-blurred image, without dependence on imaging speed. A bespoke algorithm then determines each measurement point’s motion. Using photogrammetric techniques, a model structure’s motion with respect to different excitation frequencies is captured and its vibration envelope recreated in 3D. The approach is tested and used to identify changes in the model’s vibration response.

    UAV image blur – its influence and ways to correct it

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on image sequences acquired by UAVs, which have a high ground resolution and good spectral resolution due to low flight altitudes combined with a high-resolution camera. One of the main problems preventing full automation of data processing of UAV imagery is the unknown degradation effect of blur caused by camera movement during image acquisition. The purpose of this paper is to analyse the influence of blur on photogrammetric image processing, the correction of blur and, finally, the use of corrected images for coordinate measurements. It was found that blur influences image processing significantly and can even prevent automatic photogrammetric analysis, hence the desire to exclude blurred images from the sequence using a novel filtering technique. If necessary, essential blurred images can be restored using information from overlapping images of the sequence, or using a blur kernel with the developed edge-shifting technique. The corrected images can then be used for target identification, measurements and automated photogrammetric processing.
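The edge-shifting restoration developed in the paper is not reproduced here; the sketch below only illustrates the general idea of inverting a known linear motion blur, using Wiener-style inverse filtering on a synthetic 1D signal (all values assumed):

```python
import numpy as np

n = 256
signal = np.zeros(n)
signal[60] = 1.0             # a strong point feature ("edge")
signal[150] = 0.5            # a weaker feature

L = 7                        # known blur length in pixels (assumed)
kernel = np.zeros(n)
kernel[:L] = 1.0 / L         # linear motion-blur kernel

# Blur by circular convolution in the frequency domain
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

# Wiener-style inverse filter: H* / (|H|^2 + eps) avoids dividing by
# near-zero frequency responses of the blur kernel
H = np.fft.fft(kernel)
wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-3)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

print(int(np.argmax(restored)))  # the strongest feature is recovered near pixel 60
```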

    Automatic detection of blurred images in UAV image sets

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably, to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on how humans detect blur: humans judge sharpness best by comparing an image with other images in order to establish whether it is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating a comparison image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard deviation), does not on its own provide an absolute number with which to judge whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared with other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets. Two datasets are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values agree with visual inspection, making the algorithm applicable to UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends the field of application of the algorithm.
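The exact SIEDS computation is not given in the abstract, so the following is only a sketch of the stated idea under assumed details: create a comparison image internally by re-blurring, difference the edge maps, and take the standard deviation; scores are then compared across frames of the same dataset rather than against an absolute threshold:

```python
import numpy as np

def edge_strength(img):
    # Gradient-magnitude edge map via simple finite differences
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return np.hypot(gx, gy)

def box_blur(img, k):
    # Naive k x k box blur using wrapped shifts (keeps the sketch dependency-free)
    pad = k // 2
    return sum(np.roll(np.roll(img, dy, 0), dx, 1)
               for dy in range(-pad, pad + 1)
               for dx in range(-pad, pad + 1)) / (k * k)

def sieds_like(img, k=5):
    # Assumed reading of the SIEDS idea: re-blur the image internally and
    # take the standard deviation of the edge-map difference. Sharp images
    # lose more edge energy when re-blurred, so they score higher.
    return np.std(edge_strength(img) - edge_strength(box_blur(img, k)))

rng = np.random.default_rng(2)
sharp = rng.random((64, 64))                              # high-frequency synthetic scene
motion_blurred = sum(np.roll(sharp, s, 1) for s in range(9)) / 9

# As the abstract notes, the score is only meaningful relative to other
# frames of the same dataset: rank the scores, then threshold.
print(sieds_like(sharp) > sieds_like(motion_blurred))  # True
```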