One aspect of the recent paradigm shift in the geospatial information sciences is that data acquisition and processing systems have moved away from the single-sensor model toward advanced integrated multisensor systems, which typically include both imaging and navigation sensors. Active sensors, such as Light Detection and Ranging (LiDAR) and Interferometric Synthetic Aperture Radar (IFSAR), are routinely used alongside conventional optical sensing, where a new generation of high-performance digital camera systems has been developed in the past few years. Direct georeferencing of the remote sensing platform is usually provided by state-of-the-art navigation technology based on the Global Positioning System/Inertial Measurement Unit (GPS/IMU). As the number of sensors has increased, the error budget calculations for derived geospatial products have become more complex; the number of contributing terms has increased many-fold. The focus of calibration has accordingly shifted from individual sensor calibration to system calibration, which must consider the interrelationships among multiple sensors. At the system calibration level, even the error contributions of in-scene objects play an increased role and must be considered more carefully. Research to date has focused primarily on individual sensor calibration or on calibration of a pair of sensors. The objective of this investigation is to combine the effects of all sensor and sensor-related errors to obtain the overall error budget of airborne remote sensing sensor suites. The error budget should include all navigation errors, imaging sensor modeling errors, inter-sensor calibration errors, and object space characteristics.
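To illustrate how such an overall error budget might be assembled, the sketch below combines independent 1-sigma error contributions by root-sum-square, a common first-order approach when error sources are assumed uncorrelated. The category names and magnitudes are hypothetical assumptions for illustration only, not values from this investigation.

```python
import math

# Hypothetical 1-sigma ground-coordinate error contributions (meters) for an
# airborne multisensor suite; names and values are illustrative assumptions.
error_terms = {
    "gps_position": 0.05,            # navigation: GPS positioning error
    "imu_attitude_projected": 0.08,  # navigation: attitude error projected to ground
    "imaging_sensor_model": 0.03,    # imaging sensor modeling error
    "inter_sensor_calibration": 0.04,  # e.g., boresight misalignment residual
    "object_space": 0.06,            # in-scene object characteristics (slope, roughness)
}

def total_error(terms):
    """Combine independent 1-sigma error terms by root-sum-square (RSS)."""
    return math.sqrt(sum(v ** 2 for v in terms.values()))

print(round(total_error(error_terms), 3))
```

Note that RSS combination is only valid when the contributing errors are independent; correlated terms (for example, attitude and boresight errors that project similarly into object space) would require covariance propagation instead.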