    An Analysis of the Radiometric Quality of Small Unmanned Aircraft System Imagery

    In recent years, significant advancements have been made in both sensor technology and small Unmanned Aircraft Systems (sUAS). Improved sensor technology has provided users with cheaper, lighter, and higher-resolution imaging tools, while new sUAS platforms have become cheaper, more stable, and easier to navigate both manually and programmatically. These enhancements have enabled remote sensing solutions for commercial and research applications that were previously unachievable. However, they have also given non-scientific practitioners access to technology and techniques previously available only to remote sensing professionals, sometimes leading to improper diagnoses and results. The work in this dissertation demonstrates the impact of proper calibration and reflectance correction on the radiometric quality of sUAS imagery. The first part of this research conducts an in-depth investigation of a proposed technique for radiance-to-reflectance conversion. Previous techniques used in-scene reflectance conversion panels, which, while providing accurate results, required extensive time in the field to position and measure the panels. We positioned sensors on board the sUAS to record the downwelling irradiance, which can then be used to produce reflectance imagery without these reflectance conversion panels. The second part of this research characterizes and calibrates a MicaSense RedEdge-3, a multispectral imaging sensor. This sensor ships with pre-loaded metadata values, which are never recalibrated, for dark-level bias, vignette and row-gradient correction, and radiometric calibration. These characterization and calibration studies were conducted to demonstrate the importance of recalibrating any sensor over time. In addition, an error propagation analysis was performed to identify the largest contributors of error in the production of radiance and reflectance imagery.
Finally, a study of the inherent reflectance variability of vegetation was performed. In other words, this study attempts to determine how accurate the digital-count-to-radiance calibration and the radiance-to-reflectance conversion must be. Can we lower our accuracy standards for radiance and reflectance imagery because the target itself is too variable to measure? For this study, six Coneflower plants were analyzed, as a surrogate for other cash crops, under different illumination conditions, at different times of the day, and at different ground sample distances (GSDs).
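The panel-free conversion described above rests on a standard relation: for an approximately Lambertian surface, reflectance is the at-sensor radiance scaled by pi and divided by the measured downwelling irradiance. A minimal sketch of that conversion, with made-up numbers for illustration (the dissertation's actual band definitions and units are not reproduced here):

```python
import numpy as np

def radiance_to_reflectance(radiance, downwelling_irradiance):
    """Convert at-sensor radiance to reflectance, assuming a Lambertian
    surface: R = pi * L / E, with L and E in matching spectral units."""
    return np.pi * radiance / downwelling_irradiance

# Toy example: per-pixel radiance for one band under a single
# irradiance reading from the onboard downwelling sensor.
L = np.array([[0.05, 0.10],
              [0.02, 0.08]])   # radiance image (illustrative values)
E = 0.60                        # downwelling irradiance (same band)
R = radiance_to_reflectance(L, E)
```

With an irradiance sensor logging `E` per capture, this replaces the in-scene panel measurement that earlier workflows required.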

    OSPC: Online Sequential Photometric Calibration

    Photometric calibration is essential to many computer vision applications. One of its key benefits is enhancing the performance of Visual SLAM, especially when it depends on a direct method for tracking, such as the standard KLT algorithm. Another advantage is retrieving sensor irradiance values from measured intensities as a pre-processing step for some vision algorithms, such as shape-from-shading. Current photometric calibration systems solve a joint optimization problem and encounter an ambiguity in the estimates, which can only be resolved using ground-truth information. We propose a novel method that solves for the photometric parameters using a sequential estimation approach. Our proposed method achieves high accuracy in estimating all parameters; furthermore, the formulations are linear and convex, which makes the solution fast and suitable for online applications. Experiments on a Visual Odometry system validate the proposed method and demonstrate its advantages.
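To make the sequential idea concrete, here is a deliberately simplified sketch, not the paper's actual formulation: if the response function and vignetting are ignored, a tracked static point obeys I = e_i * B (exposure times scene radiance), so each frame's exposure relative to frame 0 drops out as a per-point intensity ratio that can be estimated one frame at a time. All variable names are illustrative:

```python
import numpy as np

# Noise-free toy data: 3 frames with different exposures observing the
# same 50 static scene points (outer product gives I[i, j] = e_i * B_j).
rng = np.random.default_rng(0)
true_exposures = np.array([1.0, 1.5, 0.8])       # e_i, frame 0 fixed to 1
scene_radiance = rng.uniform(0.2, 1.0, size=50)  # B_j for tracked points
I = np.outer(true_exposures, scene_radiance)     # observed intensities

# Sequential estimate: fix e_0 = 1, then recover each later exposure as
# the median intensity ratio against frame 0 over the shared points.
est = [1.0]
for i in range(1, I.shape[0]):
    est.append(float(np.median(I[i] / I[0])))
```

The per-frame update illustrates why a sequential scheme suits online use: each new frame refines the estimates without re-solving a joint problem over all past frames.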

    Real-time multi-image vignetting and exposure correction for image stitching

    Seamless image stitching depends not only on the accurate alignment of camera images, but also on the compensation of illumination inconsistencies. Even if two images are aligned perfectly, the seam is still visible if the images have distinct vignetting or different exposures. Image stitching is used to expand the field of view, but a visible seam can lead to significant errors in subsequent visual perception tasks. We therefore present a straightforward and accurate method for vignetting and exposure correction in stitched images. First, we estimate the camera response function that maps irradiance to intensity. Then, the vignetting model is determined and applied to the irradiance images. After that, the exposure of the stitched images is corrected using the irradiance values at the seam. Finally, the irradiance is converted back into intensity using the camera response function. Our approach is evaluated using data recorded by our experimental vehicle and the public nuScenes dataset. We assess the performance of our method using the IoU of the histograms as well as the mean absolute error of the intensity values in the overlapping image regions. Furthermore, we demonstrate the real-time capability of our approach.
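The four-step pipeline in the abstract can be sketched end to end. This is a hedged toy version, not the paper's implementation: a gamma curve stands in for the estimated camera response function (CRF) and a radial polynomial stands in for the fitted vignetting model; all function names are illustrative:

```python
import numpy as np

def inverse_crf(intensity, gamma=2.2):
    """Step 1: map intensity to irradiance (toy gamma CRF)."""
    return np.clip(intensity, 0.0, 1.0) ** gamma

def crf(irradiance, gamma=2.2):
    """Step 4: map corrected irradiance back to intensity."""
    return np.clip(irradiance, 0.0, 1.0) ** (1.0 / gamma)

def devignette(irr):
    """Step 2: divide out a toy radial vignetting falloff."""
    h, w = irr.shape
    y, x = np.mgrid[0:h, 0:w]
    r2 = ((x - w / 2) ** 2 + (y - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
    v = (1.0 - 0.5 * r2) ** 2
    return irr / v

def match_exposure(irr_a, irr_b, seam_a, seam_b):
    """Step 3: scale image B so its irradiance matches A along the seam."""
    gain = np.mean(seam_a) / np.mean(seam_b)
    return irr_a, irr_b * gain

# Two flat toy images with mismatched exposure, stitched left-right.
img_a = np.full((64, 64), 0.5)
img_b = np.full((64, 64), 0.4)
irr_a = devignette(inverse_crf(img_a))
irr_b = devignette(inverse_crf(img_b))
irr_a, irr_b = match_exposure(irr_a, irr_b, irr_a[:, -1], irr_b[:, 0])
out_a, out_b = crf(irr_a), crf(irr_b)
```

Working in the irradiance domain is the key design choice: vignetting and exposure are multiplicative there, so both corrections reduce to simple divisions and gains before the CRF restores displayable intensities.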

    Are we ready for beyond-application high-volume data? The Reeds robot perception benchmark dataset

    This paper presents a dataset, called Reeds, for research on robot perception algorithms. The dataset aims to provide demanding benchmark opportunities for algorithms, rather than providing an environment for testing application-specific solutions. A boat was selected as the logging platform in order to provide highly dynamic kinematics. The sensor package includes six high-performance vision sensors, two long-range lidars, and radar, as well as GNSS and an IMU. The spatiotemporal resolution of the sensors was maximized in order to provide large variation and flexibility in the data, offering evaluation at a large number of different resolution presets based on the resolutions found in other datasets. Reeds also provides a means of fair and reproducible comparison of algorithms, by running all evaluations on a common server backend. As the dataset contains massive-scale data, the evaluation principle also serves as a way to avoid moving data unnecessarily. It was also found that naive evaluation of algorithms, where each evaluation is computed sequentially, was not practical, as the fetch-and-decode task for each frame would not scale well. Instead, each frame is only decoded once and then fed to all algorithms in parallel, including GPU-based algorithms.
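The decode-once principle from the last sentence can be sketched as follows. This is a hypothetical illustration of the scheduling idea only, not the Reeds backend: `decode` and the algorithm stubs are stand-ins, and a thread pool plays the role of the parallel evaluation workers:

```python
from concurrent.futures import ThreadPoolExecutor

def decode(raw_frame):
    """Stand-in for the expensive fetch-and-decode step."""
    return [b for b in raw_frame]

def make_algorithm(name):
    """Stand-in perception algorithm; returns (name, result) per frame."""
    def run(frame):
        return (name, sum(frame))
    return run

algorithms = [make_algorithm(n) for n in ("detector", "tracker", "segmenter")]
raw_frames = [bytes([i, i + 1]) for i in range(3)]

results = []
with ThreadPoolExecutor() as pool:
    for raw in raw_frames:
        frame = decode(raw)  # decoded exactly once per frame
        futures = [pool.submit(algo, frame) for algo in algorithms]
        results.append([f.result() for f in futures])
```

With N algorithms, the naive sequential scheme pays the decode cost N times per frame; fanning the single decoded frame out to all algorithms pays it once, which is what keeps the evaluation tractable at massive scale.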