1,607 research outputs found

    Empirical Analysis of Aerial Camera Filters for Shoreline Mapping

    Accurate, up-to-date national shoreline is critical in defining the territorial limits of the United States, updating nautical charts, and managing coastal resources. The National Oceanic and Atmospheric Administration (NOAA) delineates the interpreted shoreline photogrammetrically using tide-coordinated stereo photography acquired with black-and-white infrared emulsion. In this paper, we present the results of a two-phase study aimed at quantifying the effect of camera filter selection on the interpreted shoreline when using this method of shoreline mapping.

    Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Line scanning cameras, which capture only a single line of pixels, have been increasingly used on ground-based mobile and robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way. Comment: Published in MDPI Sensors, 30 October 201
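
    As a rough illustration of the reprojection-based estimation described above, the sketch below minimises the squared reprojection error of labelled calibration points over a 6D camera-to-body offset (equivalent to maximising a Gaussian likelihood). The pinhole projection, focal length, and optimiser choice are assumptions for illustration and are not the paper's line-scan camera model.

```python
# Hypothetical sketch: estimate a rigid camera-to-body offset by minimising
# the reprojection error of labelled calibration points.
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.optimize import minimize

def project(points_cam, f=800.0, cx=640.0, cy=0.0):
    """Simple pinhole projection of 3D points given in the camera frame (illustrative)."""
    z = points_cam[:, 2:3]
    return f * points_cam[:, :2] / z + np.array([cx, cy])

def reprojection_cost(offset, body_poses, world_pts, observations):
    """offset = [tx, ty, tz, roll, pitch, yaw] of the camera w.r.t. the vehicle body."""
    t_bc = offset[:3]
    R_bc = R.from_euler("xyz", offset[3:]).as_matrix()       # camera -> body
    cost = 0.0
    for (R_wb, t_wb), uv in zip(body_poses, observations):
        # world -> body: (p - t_wb) @ R_wb == R_wb.T @ (p - t_wb) per point
        pts_body = (world_pts - t_wb) @ R_wb
        # body -> camera
        pts_cam = (pts_body - t_bc) @ R_bc
        cost += np.sum((project(pts_cam) - uv) ** 2)          # negative log-likelihood up to scale
    return cost

# body_poses:   list of (3x3 rotation, 3-vector) pairs from the navigation system
# world_pts:    Nx3 triangulated calibration-pattern points
# observations: per-pose Nx2 manually labelled image coordinates
# result = minimize(reprojection_cost, x0=np.zeros(6),
#                   args=(body_poses, world_pts, observations), method="Nelder-Mead")
```

    Sampling around the optimum of the same cost (e.g. with an MCMC sampler) would give an uncertainty estimate of the kind reported in the abstract.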

    Non-parametric Methods for Automatic Exposure Control, Radiometric Calibration and Dynamic Range Compression

    Imaging systems are essential to a wide range of modern-day applications. With the continuous advancement of imaging systems, there is an ongoing need to adapt and improve the imaging pipeline running inside them. In this thesis, methods are presented to improve the imaging pipeline of digital cameras. We present three methods that improve important phases of the imaging process: (i) automatic exposure adjustment, (ii) radiometric calibration, and (iii) high dynamic range compression. These contributions touch the initial, intermediate, and final stages of the imaging pipeline of digital cameras. For exposure control, we propose two methods. The first makes use of CCD-based equations to formulate the exposure control problem. To estimate the exposure time, an initial image is acquired for each wavelength channel and contrast adjustment techniques are applied to it; this recovers a reference cumulative distribution function of image brightness for each channel. The second method for automatic exposure control is an iterative method applicable to a broad range of imaging systems. It uses spectral sensitivity functions, such as the photopic response function, to generate a spectral power image of the captured scene. A target image is then generated from the spectral power image by applying histogram equalization, and the exposure time is calculated iteratively by minimizing the squared difference between the target and the current spectral power image. We further analyze the method by performing a stability and controllability analysis using a state-space representation from control theory. The applicability of the proposed method for exposure time calculation is shown on real-world scenes using cameras with varying architectures. Radiometric calibration is the estimation of the non-linear mapping from the input radiance map to the output brightness values; this mapping is represented by the camera response function, with which the radiance map of the scene is estimated. Our radiometric calibration method employs an L1 cost function and takes advantage of the Weiszfeld optimization scheme. The proposed calibration works with multiple input images of the scene with varying exposure, and can also perform calibration from a single input image given a few constraints. The proposed method outperforms, quantitatively and qualitatively, various alternative methods found in the radiometric calibration literature. Finally, to realistically represent the estimated radiance maps on low dynamic range (LDR) display devices, we propose a method for dynamic range compression. Radiance maps generally have a higher dynamic range (HDR) than widely used display devices, so dynamic range compression is required before HDR images can be displayed. Our method generates a few LDR images from the HDR radiance map by clipping its values at different exposures. Using the contrast information of each generated LDR image, the method uses an energy minimization approach to estimate a probability map for each LDR image. These probability maps are then used as a label set to form the final compressed dynamic range image for the display device. The results of our method were compared qualitatively and quantitatively with those produced by widely cited and professionally used methods.
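
    A minimal sketch of the kind of iterative exposure update described for the second method, assuming an approximately linear sensor response: the histogram-equalised spectral power image serves as the target, and the exposure time is nudged to reduce the difference. The update rule, step size, and capture interface are illustrative, not the thesis's exact formulation.

```python
# Illustrative iterative exposure control: compare the current spectral power
# image against its histogram-equalised target and adjust the exposure time.
import numpy as np

def histogram_equalize(img, levels=256):
    """Histogram-equalised version of a grayscale image with values in [0, 1]."""
    hist, bin_edges = np.histogram(img, bins=levels, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(img, bin_edges[:-1], cdf)

def update_exposure(t_current, spectral_power_img, step=0.5):
    """One iteration: move the exposure time toward the equalised target image."""
    target = histogram_equalize(spectral_power_img)
    diff = np.mean(target - spectral_power_img)      # > 0 suggests under-exposure
    # assuming an approximately linear sensor response, rescale the exposure time
    return t_current * (1.0 + step * diff)

# Usage with a hypothetical capture function:
# t = 10e-3
# for _ in range(10):
#     img = capture_spectral_power_image(t)          # placeholder camera interface
#     t = update_exposure(t, img)
```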

    A new method to determine multi-angular reflectance factor from lightweight multispectral cameras with sky sensor in a target-less workflow applicable to UAV

    A new physically based method is presented to estimate the hemispherical-directional reflectance factor (HDRF) from lightweight multispectral cameras that have a downwelling irradiance sensor. It combines radiometry with photogrammetric computer vision to derive geometrically and radiometrically accurate data purely from the images, without requiring reflectance targets or any other additional information apart from the imagery. The sky sensor orientation is initially computed using photogrammetric computer vision and then refined with a non-linear regression combining radiometric and photogrammetry-derived information. The method works under both clear-sky and overcast conditions. In a ground-based test, a Spectralon target was observed from different viewing directions and with different sun positions using a typical multispectral sensor configuration under clear and overcast skies; both the overall value and the directionality of the reflectance factor reported in the literature were well retrieved, with an RMSE of 3% for clear sky and up to 5% for overcast sky.
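
    A minimal sketch of the basic relation this kind of method builds on: given calibrated target radiance L from the camera and downwelling irradiance E from the sky sensor, the reflectance factor is HDRF = π·L/E (the ratio to an ideal Lambertian reference). The band handling and numbers below are placeholders, not values from the paper.

```python
# Per-band HDRF from calibrated radiance and downwelling irradiance.
import numpy as np

def hdrf(radiance_band, irradiance_band):
    """Per-pixel HDRF for one spectral band.

    radiance_band   : W m^-2 sr^-1 nm^-1, from the radiometrically calibrated camera image
    irradiance_band : W m^-2 nm^-1, from the downwelling irradiance (sky) sensor
    """
    return np.pi * radiance_band / irradiance_band

# Example with synthetic values for a single band:
# L = np.full((100, 100), 0.03)   # placeholder radiance image
# E = 0.35                        # placeholder downwelling irradiance
# print(hdrf(L, E).mean())        # ~0.27, i.e. a reflectance factor of about 27 %
```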

    MEASURING PHOTOGRAMMETRIC CONTROL TARGETS IN LOW CONTRAST IMAGES

    This paper presents an experimental assessment of photogrammetric targets and sub-pixel location techniques for use with low contrast images, such as those acquired by hyperspectral frame cameras. Eight target patterns of varying shape, background, and size were tested. The aim was to identify an optimal, distinctive pattern to serve as a control point in aerial surveys of small areas with hyperspectral cameras, when natural points are difficult to find in suitable locations. Three automatic techniques for identifying the target point of interest were compared: weighted centroid, template matching, and line intersection. For the assessment, hyperspectral images of the set of targets were collected in an outdoor 3D terrestrial calibration field, and RGB images were also acquired for reference and comparison. Experiments were conducted to assess accuracy at the sub-pixel level: bundle adjustment with several images was used, and vertical and horizontal distances were measured directly in the field for verification. An aerial flight experiment was also performed to validate the chosen target. The analysis of residuals and discrepancies indicated that a circular target is best suited as a ground control point in aerial surveys, even when the target covers only a few pixels in the image.
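
    For concreteness, here is a minimal sketch of one of the compared techniques, intensity-weighted centroiding, which locates a bright target inside a small image window at sub-pixel precision. The background estimate and window handling are assumptions for illustration.

```python
# Intensity-weighted centroid of a target inside a small grayscale window.
import numpy as np

def weighted_centroid(window, background=None):
    """Sub-pixel (row, col) centre of a bright target within a small window."""
    w = window.astype(np.float64)
    if background is None:
        background = np.median(w)                 # crude local background estimate
    weights = np.clip(w - background, 0.0, None)  # keep only signal above background
    total = weights.sum()
    if total == 0:
        raise ValueError("no signal above background in window")
    rows, cols = np.indices(w.shape)
    return (np.sum(rows * weights) / total,
            np.sum(cols * weights) / total)

# Example: a synthetic 9x9 window with a 2x2 bright blob
# win = np.zeros((9, 9)); win[4:6, 4:6] = 1.0
# print(weighted_centroid(win))   # ~ (4.5, 4.5)
```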

    Target recognitions in multiple camera CCTV using colour constancy

    People tracking using colour features in crowded scenes across a CCTV network has been a popular and, at the same time, very difficult topic in computer vision. This is mainly because of the difficulty of acquiring intrinsic signatures of targets from a single view of the scene: many factors, such as variable illumination conditions and viewing angles, induce illusory modifications of the targets' intrinsic signatures. The objective of this paper is to verify whether a colour constancy (CC) approach really helps people tracking in a CCTV network system. We have tested a number of CC algorithms together with various colour descriptors to assess the efficiency of people recognition on the real multi-camera i-LIDS data set via Receiver Operating Characteristic (ROC) analysis. It is found that when CC is applied together with some form of colour restoration mechanism, such as colour transfer, the recognition performance can be improved by at least a factor of two. An elementary luminance-based CC algorithm coupled with a pixel-based colour transfer algorithm, together with experimental results, is reported in the present paper.
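
    A minimal sketch of this kind of pipeline, using grey-world colour constancy and a per-channel, pixel-based colour transfer as illustrative stand-ins; these are not claimed to be the exact algorithms used in the paper.

```python
# Illustrative colour constancy + colour transfer before descriptor matching.
import numpy as np

def grey_world(img):
    """Scale each channel so the image mean is grey (img is an HxWx3 float array in [0, 1])."""
    means = img.reshape(-1, 3).mean(axis=0)
    return img * (means.mean() / means)

def colour_transfer(src, ref):
    """Match the per-channel mean and standard deviation of src to those of ref."""
    out = np.empty_like(src)
    for c in range(3):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return np.clip(out, 0.0, 1.0)

# person_cam_a, person_cam_b: HxWx3 float crops of the same person from two views
# corrected = colour_transfer(grey_world(person_cam_b), grey_world(person_cam_a))
# A colour descriptor (e.g. an RGB histogram) computed on `corrected` can then be
# matched against the reference signature and the scores evaluated with ROC analysis.
```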

    Real-time and post-processed georeferencing for hyperspectral drone remote sensing

    The use of drones and photogrammetric technologies is increasing rapidly across different applications. Currently, the drone processing workflow is in most cases based on sequential image acquisition and post-processing, but there is great interest in real-time solutions. Fast and reliable real-time drone data processing can benefit, for instance, environmental monitoring tasks in precision agriculture and forestry. Recent developments in miniaturized, low-cost inertial measurement units, GNSS sensors, and real-time kinematic (RTK) positioning are offering new perspectives for comprehensive remote sensing applications. Combining these sensors with lightweight, low-cost multi- or hyperspectral frame sensors on drones provides the opportunity to create near real-time or real-time remote sensing data of the target object. We have developed a system for direct georeferencing onboard the drone, to be used in combination with hyperspectral frame cameras in real-time remote sensing applications. The objective of this study is to evaluate the real-time georeferencing against post-processed solutions. Experimental data sets were captured at agricultural and forested test sites using the system. The accuracy of the onboard georeferencing data was better than 0.5 m. The results show that real-time remote sensing is promising and feasible at both test sites. © Authors 2018. CC BY 4.0 License.
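
    A minimal sketch of the direct georeferencing step, assuming the navigation solution provides the body position and attitude and the camera mounting is described by a boresight rotation and a lever-arm offset. The mounting parameters, frames, and example values below are placeholders, not the actual system calibration.

```python
# Illustrative direct georeferencing with an onboard GNSS/IMU navigation solution.
import numpy as np
from scipy.spatial.transform import Rotation as R

def georeference(point_cam, nav_position, nav_rpy_deg,
                 boresight_rpy_deg=(0.0, 0.0, 0.0),
                 lever_arm=(0.0, 0.0, 0.0)):
    """Map a 3D point from camera coordinates to world (mapping-frame) coordinates.

    point_cam    : 3-vector in the camera frame (e.g. from multi-view depth)
    nav_position : GNSS/RTK position of the platform in the world frame
    nav_rpy_deg  : roll, pitch, yaw of the vehicle body from the IMU, in degrees
    """
    R_wb = R.from_euler("xyz", nav_rpy_deg, degrees=True).as_matrix()        # body -> world
    R_bc = R.from_euler("xyz", boresight_rpy_deg, degrees=True).as_matrix()  # camera -> body
    p_body = R_bc @ np.asarray(point_cam) + np.asarray(lever_arm)
    return np.asarray(nav_position) + R_wb @ p_body

# Example with placeholder values:
# print(georeference([0.0, 0.0, 120.0],
#                    nav_position=[393000.0, 6901000.0, 150.0],
#                    nav_rpy_deg=[1.2, -0.8, 45.0]))
```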