Spatial measurement with consumer grade digital cameras
A doctoral thesis submitted for the degree of Doctor of Philosophy. EThOS - Electronic Theses Online Service, United Kingdom.
A convergent image configuration for DEM extraction that minimises the systematic effects caused by an inaccurate lens model
The internal geometry of consumer-grade digital cameras is
generally considered unstable. Research conducted recently at
Loughborough University indicated the potential of these sensors to
maintain their internal geometry. It also identified residual systematic
error surfaces or “domes”, discernible in digital elevation models
(DEMs) (Wackrow et al., 2007), caused by slightly inaccurate estimated
lens distortion parameters. This paper investigates these systematic
error surfaces and establishes a methodology to minimise them.
Initially, simulated data were used to ascertain the effect of changing
the interior orientation parameters, specifically the lens model, on
extracted DEMs. Presented results demonstrate the relationship between
“domes” and inaccurately specified lens distortion parameters. The
stereopair remains important for data extraction in photogrammetry,
often using automated DEM extraction software. The photogrammetric
normal case is widely used, in which the camera base is parallel to the
object plane and the optical axes of the cameras intersect the object
plane orthogonally. During simulation, the error surfaces derived from
DEMs extracted using the normal case were compared with error
surfaces created using a mildly convergent geometry in which, in
contrast to the normal case, the optical axes of the cameras intersect
the object plane at the same point. Results of the simulation process clearly demonstrate that a
mildly convergent camera configuration eradicates the systematic error
surfaces. This result was confirmed through practical tests and
demonstrates that mildly convergent imagery effectively improves the
accuracies of DEMs derived with this class of sensor.
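The mechanism behind these "domes" can be sketched numerically: a small error in an estimated radial distortion coefficient leaves a smooth residual that grows with radial distance, rather than random noise. The following sketch is purely illustrative and is not the paper's simulation; the frame dimensions and both K1 values are invented, and only the cubic Brown radial term is modelled:

```python
import numpy as np

def radial_shift(x, y, k1):
    """Cubic Brown radial distortion shift dr = k1 * r**3, resolved
    into x/y components (illustrative model, single coefficient only)."""
    r = np.hypot(x, y)
    dr = k1 * r**3
    with np.errstate(invalid="ignore", divide="ignore"):
        scale = np.where(r > 0, dr / r, 0.0)
    return x * scale, y * scale

# Hypothetical 24 mm x 16 mm frame, sampled on a coarse grid (mm units)
x, y = np.meshgrid(np.linspace(-12, 12, 5), np.linspace(-8, 8, 5))
k1_true, k1_est = 5.0e-5, 5.5e-5   # assumed true vs slightly mis-estimated K1

dx_t, dy_t = radial_shift(x, y, k1_true)
dx_e, dy_e = radial_shift(x, y, k1_est)

# Residual left after correcting with the inaccurate lens model:
# zero at the principal point, growing smoothly with r**3 - a
# systematic surface, not random noise
res = np.hypot(dx_t - dx_e, dy_t - dy_e)
print(f"residual at centre: {res[2, 2]:.4f} mm, at corner: {res[0, 0]:.4f} mm")
```

Because the residual vanishes at the principal point and is largest in the image corners, normal-case stereo matching propagates it into a systematic dome in the DEM; the convergent configuration changes the imaging geometry so that these residuals largely cancel, which is the effect the paper exploits.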
Minimising systematic error surfaces in digital elevation models using oblique convergent imagery
There are increasing opportunities to use consumer-grade digital cameras, particularly
if accurate spatial data can be captured. Research recently conducted at
Loughborough University identified residual systematic error surfaces or domes discernible
in digital elevation models (DEMs). These systematic effects are often associated
with such cameras and are caused by slightly inaccurate estimated lens
distortion parameters. A methodology that minimises the systematic error surfaces
was therefore developed, using a mildly convergent image configuration in a vertical
perspective. This methodology was tested through simulation and a series of practical
tests. This paper investigates the potential of the convergent configuration to minimise
the error surfaces, even if the geometrically more complex oblique perspective is used.
Initially, simulated data were used to demonstrate that an oblique convergent image
configuration can minimise remaining systematic error surfaces using various imaging
angles. Additionally, practical tests using a laboratory testfield were conducted to
verify results of the simulation. The need to develop a system to measure the topographic
surface of a flooding river provided the opportunity to verify the findings of
the simulation and laboratory test using real data. Results of the simulation process,
the laboratory test and the practical test are reported in this paper and demonstrate
that an oblique convergent image configuration eradicates the systematic error surfaces
which result from inaccurate lens distortion parameters. This approach is significant
because by removing the need for an accurate lens model it effectively improves
the accuracies of digital surface representations derived using consumer-grade digital
cameras. Carefully selected image configurations could therefore provide new opportunities
for improving the quality of photogrammetrically acquired data.
Cultural Heritage Recording Utilising Low-Cost Close-Range Photogrammetry
This paper was presented at the CIPA 23rd International Symposium, 12–16 September 2011, Prague, Czech Republic: http://www.conferencepartners.cz/cipa/
Cultural heritage is under a constant threat of damage or even destruction, and comprehensive
and accurate recording is necessary to attenuate the risk of losing heritage or to serve as a basis for
reconstruction. Cost effective and easy to use methods are required to record cultural heritage, particularly
during a world recession, and close-range photogrammetry has proven potential in this area. Off-the-shelf
digital cameras can be used to rapidly acquire data at low cost, allowing non-experts to become involved.
Exterior orientation of the camera during exposure ideally needs to be established for every image,
traditionally requiring known coordinated target points. Establishing these points is time-consuming and
costly, and using targets can often be undesirable on sensitive sites. MEMS-based sensors can assist in
overcoming this problem by providing small-size and low-cost means to directly determine exterior
orientation for close-range photogrammetry. This paper describes the development of an image-based
recording system, comprising an off-the-shelf digital SLR camera, a MEMS-based 3D orientation sensor and
a GPS antenna. All system components were assembled in a compact and rigid frame that allows calibration
of rotational and positional offsets between the components. The project involves collaboration between
English Heritage and Loughborough University and the intention is to assess the system’s achievable
accuracy and practicability in a heritage recording environment. Tests were conducted at Loughborough
University, and a case study was carried out at St. Catherine’s Oratory on the Isle of Wight, UK. These demonstrate that the
data recorded by the system can indeed meet the accuracy requirements for heritage recording at medium
accuracy (1–4 cm), with a single control point or even none. As the recording system has been
configured with a focus on low-cost and easy-to-use components, it is believed to be suitable for heritage
recording by non-specialists. This offers the opportunity for lay people to become more involved in their
local heritage, an important aspiration identified by English Heritage. Recently, mobile phones
(smartphones) with integrated camera and MEMS-based orientation and positioning sensors have become
available. When orientation and position during camera exposure are extracted, these phones constitute off-the-shelf systems that can facilitate image-based recording with direct exterior orientation determination.
Due to their small size and low cost, they have the potential to further enhance the involvement of lay people in heritage recording. The accuracy currently achievable is also presented.
Influence of blur on feature matching and a geometric approach for photogrammetric deblurring
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost-efficient and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The aim of this research is to develop a blur correction method to deblur UAV images. Deblurring of images is a widely researched topic, often based on the Wiener or Richardson-Lucy deconvolution, which require precise knowledge of both the blur path and extent. Even with knowledge of the blur kernel, the correction causes errors such as ringing, and the deblurred image appears "muddy" and not completely sharp. In the study reported in this paper, overlapping images are used to support the deblurring process, which is advantageous. An algorithm based on the Fourier transformation is presented. This works well in flat areas, but the need for geometrically correct sharp images may limit the application. Deblurring of images needs to focus on geometrically correct restoration to ensure geometrically correct measurements.
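The Wiener deconvolution the abstract refers to can be illustrated with a minimal frequency-domain sketch. This is a generic textbook formulation, not the algorithm developed in the paper; the synthetic image, the horizontal nine-pixel motion kernel and the regularisation constant k are all assumed values, and the kernel is taken as exactly known, which is precisely the knowledge that is rarely available in practice:

```python
import numpy as np

def motion_psf(length, size):
    """Horizontal linear-motion blur kernel of the given pixel length,
    embedded in a size x size array (assumed, idealised blur path)."""
    psf = np.zeros((size, size))
    psf[size // 2, (size - length) // 2:(size + length) // 2] = 1.0
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Classical Wiener filter: F = conj(H) * G / (|H|^2 + k), where k
    acts as a constant noise-to-signal regulariser."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# Synthetic sharp image, blurred with the exactly known 9-pixel kernel
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
psf = motion_psf(9, 64)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)

print("mean error blurred: ", np.abs(blurred - sharp).mean())
print("mean error restored:", np.abs(restored - sharp).mean())
```

With an inexact kernel or real noise, the ringing and "muddy" appearance described above reappear, which is what motivates the paper's use of overlapping imagery to support the restoration.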
Automatic isolation of blurred images from UAV image sequences
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution, due to the low flight altitudes combined with a high resolution camera. UAV image flights are also cost-effective and have become attractive for many applications including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy in automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time-consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated filtering process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. A “shaking table” was used to create images with known blur during a series of laboratory tests. This platform can be moved in one direction by a mathematical function controlled by a defined frequency and amplitude. The shaking table was used to displace a Nikon D80 digital SLR camera with a user-defined frequency and amplitude. The actual camera displacement was measured accurately and exposures were synchronised, which provided the opportunity to acquire images with a known blur effect. Acquired images were processed digitally to determine a quantifiable measure of image blur, which had been created by the actual shaking table function.
Once determined for a sequence of images, a user-defined threshold can be used to differentiate between “blurred” and “acceptable” images. A subsequent step is to establish the effect that blurred images have upon the accuracy of subsequent measurements. Both of these aspects will be discussed in this paper and future work identified.
UAV image blur – its influence and ways to correct it
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on image sequences acquired by UAVs which have a high ground resolution and good spectral resolution due to low flight altitudes combined with a high-resolution camera. One of the main problems preventing full automation of data processing of UAV imagery is the unknown degradation effect of blur caused by camera movement during image acquisition.
The purpose of this paper is to analyse the influence of blur on photogrammetric image processing, the correction of blur and, finally, the use of corrected images for coordinate measurements. It was found that blur influences image processing significantly and even prevents automatic photogrammetric analysis, hence the desire to exclude blurred images from the sequence using a novel filtering technique. If necessary, essential blurred images can be restored using information from overlapping images of the sequence or a blur kernel with the developed edge-shifting technique. The corrected images can then be used for target identification, measurements and automated photogrammetric processing.
Automatic detection of blurred images in UAV image sets
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution, due to the low flight altitudes combined with a high resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas.
One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs.
This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy in automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time-consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the
operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is
blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating a comparison image internally makes the method independent of additional images. However, the calculated blur value named SIEDS (saturation image edge difference
standard-deviation) on its own does not provide an absolute number to judge if an image is blurred or not. To achieve a reliable judgement of image sharpness the SIEDS value has to be compared to other
SIEDS values from the same dataset.
The speed and reliability of the method were tested using a range of different UAV datasets. Two datasets will be presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values agree with visual assessment, making the algorithm applicable for
UAV datasets. Additionally, a close range dataset was processed to determine whether the method is also useful for close range applications. The results show that the method is also reliable for close range
images, which significantly extends the field of application for the algorithm.
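The SIEDS computation itself is not reproduced here, since the abstract does not specify it. As an illustration of the same two-stage idea, the sketch below uses the variance of the Laplacian, a common generic sharpness score, and then applies a relative threshold within the dataset, mirroring the point that a single score is only meaningful compared against the rest of the set; the threshold factor is an arbitrary assumption:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 3x3 Laplacian response: a widely used generic
    sharpness score (NOT the paper's SIEDS measure, which is not
    specified in the abstract). Lower values indicate more blur."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def flag_blurred(scores, factor=0.5):
    """Relative thresholding within one dataset, mirroring the idea
    that a score only makes sense against the rest of the set; the
    factor of 0.5 is an arbitrary assumed threshold."""
    median = np.median(scores)
    return [s < factor * median for s in scores]

# Synthetic demonstration: sharp random texture vs a 5x5 box-blurred copy
rng = np.random.default_rng(1)
sharp = rng.random((128, 128))
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in range(-2, 3) for j in range(-2, 3)) / 25.0

scores = [laplacian_variance(sharp), laplacian_variance(blurred)]
print("scores:", scores, "-> flagged:", flag_blurred(scores))
```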
Geometric consistency and stability of consumer-grade digital cameras for accurate spatial measurement
It is known that the uncertain internal geometry of consumer-grade
digital cameras limits the accuracy of data that can be extracted. These
cameras can be calibrated, but the validity of calibration data over a
period of time should be carefully assessed before subsequent
photogrammetric measurement. This paper examines the geometric
stability and manufacturing consistency of a typical low-cost digital
camera (Nikon Coolpix 5400) by estimating the degree of similarity
between interior orientation parameters (IOP) established over a one-year
period. Digital elevation models (DEMs) are extracted with
differing IOP sets and accuracies are
compared using data obtained from seven identical cameras. An
independent self-calibrating bundle adjustment (GAP) and the Leica
Photogrammetry Suite (LPS) software were used to provide these datasets.
Results are presented that indicate the potential of these cameras to
maintain their internal geometry in terms of temporal stability and
manufacturing consistency. This study also identifies residual systematic
error surfaces or “domes”, discernible in “DEMs of difference”. These
are caused by slightly inaccurately estimated lens distortion parameters,
which effectively constrain the accuracies achievable with this class of
sensor.
Parameterising internal camera geometry with focusing distance
A study on the variation of internal camera geometry (principal distance, principal point position and lens distortion parameters) with different focus distances has been conducted. Results demonstrate that variations of the parameters are continuous and predictable, allowing a new way to describe internal camera geometry. The classical constant parameters c, x_p, y_p, K_1, K_2, P_1 and P_2 are replaced by continuous functions c(γ), x_p(γ), y_p(γ), K_1(γ), K_2(γ), P_1(γ) and P_2(γ), where γ is a variable describing the focus position. Incorporation of γ as a metadata tag (for example, in the Exif header) of a photograph, jointly with a parameterised definition of camera geometry, would allow full use of the autofocus camera function, enabling maximum effective depth of field, better matching of the plane of focus with the object’s position and higher reliability. Additionally, the conducted tests suggest the parameterised definition of internal geometry could help to locate and correct linear dependences between adjusted parameters, potentially improving the precision and accuracy of calibration.
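The replacement of constant parameters by continuous functions of γ can be sketched as a simple curve fit. The numbers below are invented for illustration (the abstract reports no data), and a low-order polynomial is only one possible choice for c(γ):

```python
import numpy as np

# Hypothetical calibration results at five focus positions gamma (e.g. a
# normalised lens-encoder reading); c is the principal distance in mm.
# All numbers are invented - the abstract gives no numeric data.
gamma = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
c_obs = np.array([24.10, 24.32, 24.55, 24.79, 25.04])

# Replace the constant c by a continuous function c(gamma): a low-order
# polynomial fitted by least squares (one possible parameterisation)
c_of_gamma = np.poly1d(np.polyfit(gamma, c_obs, deg=2))

# Evaluate at a focus position that was never calibrated directly
print(f"c(0.6) = {c_of_gamma(0.6):.3f} mm")
```

The same fit would be repeated for x_p(γ), y_p(γ) and the distortion coefficients, with γ read from the photograph's Exif metadata at evaluation time.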