
    In-Flight CCD Distortion Calibration for Pushbroom Satellites Based on Subpixel Correlation

    We describe a method that allows for accurate in-flight calibration of the interior orientation of any pushbroom camera and that, in particular, solves the problem of modeling the distortions induced by charge-coupled device (CCD) misalignments. The distortion induced on the ground by each CCD is measured using subpixel correlation between the orthorectified image to be calibrated and an orthorectified reference image that is assumed distortion free. Distortions are modeled as camera defects, which are assumed constant over time. Our results show that in-flight interior orientation calibration reduces internal camera biases by one order of magnitude. In particular, we fully characterize and model the Satellite Pour l'Observation de la Terre (SPOT) 4-HRV1 sensor, and we conjecture that distortions mostly result from the mechanical strain produced when the satellite was launched rather than from effects of on-orbit thermal variations or aging. The derived calibration models have been integrated into the software package Coregistration of Optically Sensed Images and Correlation (COSI-Corr), freely available from the Caltech Tectonics Observatory website. Such calibration models are particularly useful in reducing biases in digital elevation models (DEMs) generated from stereo matching and in improving the accuracy of change detection algorithms.
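    A minimal sketch of the measurement step described above: estimate sub-pixel offsets between patches of the orthorectified image under calibration and a reference orthoimage. scikit-image's phase correlation stands in for the correlator used by COSI-Corr, and the patch size, step, and upsampling factor are illustrative assumptions, not values from the paper.

```python
# Sketch (assumed stand-in, not the paper's correlator): grid of sub-pixel
# offsets between an image being calibrated and a distortion-free reference.
import numpy as np
from skimage.registration import phase_cross_correlation

def patch_offsets(image, reference, patch=64, step=64, upsample=100):
    """Return (row, col, dy, dx) for a grid of patches (illustrative sizes)."""
    offsets = []
    h, w = reference.shape
    for r in range(0, h - patch, step):
        for c in range(0, w - patch, step):
            shift, _, _ = phase_cross_correlation(
                reference[r:r + patch, c:c + patch],
                image[r:r + patch, c:c + patch],
                upsample_factor=upsample)          # sub-pixel shift estimate
            offsets.append((r, c, shift[0], shift[1]))
    return np.array(offsets)

# Averaging the across-track offsets column by column would expose a per-CCD
# distortion pattern that can then be modeled as a static camera defect.
```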

    Edge Detection with Sub-pixel Accuracy Based on Approximation of Edge with Erf Function

    Edge detection is a frequently used procedure in digital image processing. For some practical applications it is desirable to detect edges with sub-pixel accuracy. In this paper we present an edge detection method for 1-D images based on approximating the real image function with an Erf function. The method is verified by simulations and experiments for various numbers of samples of simulated and real images. The results of the simulations and experiments are also used to compare the proposed edge detection scheme with two commonly used moment-based edge detectors with sub-pixel precision.
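    A minimal sketch of the idea, under assumed details: fit an Erf-shaped step model to a sampled 1-D intensity profile and read the sub-pixel edge position from the fitted parameter. The model form, parameter names, and initial guess are illustrative; the paper's exact formulation may differ.

```python
# Sketch: sub-pixel edge localization by least-squares fitting of an Erf model
# to a 1-D intensity profile (illustrative parameterization).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, a, b, x0, sigma):
    # a: base level, b: edge amplitude, x0: sub-pixel edge position,
    # sigma: edge blur (spread of the underlying Gaussian PSF)
    return a + b * 0.5 * (1.0 + erf((x - x0) / (sigma * np.sqrt(2.0))))

def subpixel_edge(profile):
    x = np.arange(len(profile), dtype=float)
    p0 = [profile.min(), profile.max() - profile.min(),
          len(profile) / 2.0, 1.0]                 # rough initial guess
    popt, _ = curve_fit(edge_model, x, profile, p0=p0)
    return popt[2]                                 # x0: edge location in pixels

# Example: a noisy blurred step edge placed at x = 10.3
x = np.arange(25, dtype=float)
profile = edge_model(x, 20.0, 80.0, 10.3, 1.2) + np.random.normal(0, 0.5, x.size)
print(subpixel_edge(profile))                      # approximately 10.3
```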

    Edge Detection in UAV Remote Sensing Images Using the Method Integrating Zernike Moments with Clustering Algorithms

    Because unmanned aerial vehicle remote sensing images (UAVRSI) contain rich texture details of ground objects and exhibit a pronounced "same objects with different spectra" phenomenon, it is difficult to acquire edge information effectively with traditional edge detection operators. To solve this problem, an edge detection method for UAVRSI that combines Zernike moments with clustering algorithms is proposed in this study. To begin with, two typical clustering algorithms, namely fuzzy c-means (FCM) and K-means, are used to cluster the original remote sensing images so as to form homogeneous regions of ground objects. Then, Zernike moments are applied to carry out edge detection on the clustered remote sensing images. Finally, visual comparison and sensitivity methods are adopted to evaluate the accuracy of the detected edge information. Two groups of experimental data are selected to verify the proposed method. Results show that the proposed method effectively improves the accuracy of the edge information extracted from remote sensing images.
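    A sketch of the first stage only, under assumed details: K-means quantization of the image into homogeneous regions, here done with OpenCV. The Zernike-moment edge operator that follows (and the FCM variant) is not reproduced; the cluster count and termination criteria are illustrative choices.

```python
# Sketch of the clustering stage: quantize a remote sensing image into k
# homogeneous regions with OpenCV's k-means (k and criteria are illustrative).
import numpy as np
import cv2

def kmeans_segments(image_bgr, k=4):
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria,
                                    attempts=5, flags=cv2.KMEANS_PP_CENTERS)
    # Replace each pixel by its cluster centre -> piecewise-homogeneous image
    clustered = centers[labels.flatten()].reshape(image_bgr.shape)
    return clustered.astype(np.uint8)

# The clustered image would then be passed to a Zernike-moment edge detector,
# which estimates sub-pixel edge location and orientation from low-order moments.
```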

    Divergence Model for Measurement of Goos-Hanchen Shift

    In this effort a new measurement technique for the lateral Goos-Hanchen shift is developed, analyzed, and demonstrated. The new technique uses classical image formation methods fused with modern detection and analysis methods to achieve higher levels of sensitivity than obtained with prior practice. Central to the effort is a new mathematical model of the dispersion seen at a step shadow when the Goos-Hanchen effect occurs near the critical angle for total internal reflection. Image processing techniques are applied to measure the intensity distribution transfer function of a new divergence model of the Goos-Hanchen phenomenon, providing verification of the model. This effort includes mathematical modeling techniques, analytical derivations of governing equations, numerical verification of models and sensitivities, optical design of the apparatus, and image processing.

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past: detecting obstacles with very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.
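    A simplified stand-in (not the paper's edge-based visual odometry): from a rectified stereo pair, keep only 3D points that lie on image edges, so thin structures such as wires survive while textureless regions are dropped. The reprojection matrix Q is assumed to come from stereo calibration, and the edge and matcher thresholds are illustrative.

```python
# Sketch: edge-gated stereo reconstruction of thin obstacles (assumed
# simplification of the stereo branch; thresholds are illustrative).
import numpy as np
import cv2

def thin_structure_points(left_gray, right_gray, Q):
    edges = cv2.Canny(left_gray, 50, 150)                    # edge mask
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,
                                    blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)          # H x W x 3
    mask = (edges > 0) & (disparity > 0)                      # valid edge pixels
    return points_3d[mask]                                    # N x 3 obstacle points
```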

    Investigation of Computer Vision Concepts and Methods for Structural Health Monitoring and Identification Applications

    This study presents a comprehensive investigation of methods and technologies for developing a computer vision-based framework for Structural Health Monitoring (SHM) and Structural Identification (St-Id) for civil infrastructure systems, with particular emphasis on various types of bridges. SHM has been implemented on various structures over the last two decades, yet there remain issues such as considerable cost, field implementation time, and excessive labor needs for the instrumentation of sensors, cable wiring work, and possible interruptions during implementation. These issues make it viable only when major investments in SHM are warranted for decision making. For other cases, a practical and effective solution is needed, and a computer vision-based framework can be a viable alternative. Computer vision-based SHM has been explored over the last decade. Unlike most vision-based structural identification studies and practices, which focus either on structural input (vehicle location) estimation or on structural output (structural displacement and strain responses) estimation, the proposed framework combines the vision-based structural input and the structural output from non-contact sensors to overcome the limitations given above. First, this study develops a series of computer vision-based displacement measurement methods for structural response (structural output) monitoring which can be applied to different infrastructures such as grandstands, stadiums, towers, footbridges, small/medium span concrete bridges, railway bridges, and long span bridges, and under different loading cases such as human crowds, pedestrians, wind, and vehicles. Structural behavior, modal properties, load carrying capacities, structural serviceability, and performance are investigated using vision-based methods and validated by comparison with conventional SHM approaches. In this study, well-known landmark structures such as long span bridges are utilized as case studies. This study also investigates the serviceability status of structures by using computer vision-based methods. Subsequently, issues and considerations for computer vision-based measurement in field applications are discussed and recommendations are provided for better results. This study also proposes a robust vision-based method for displacement measurement using spatio-temporal context learning and Taylor approximation to overcome the difficulties of vision-based monitoring under adverse environmental factors such as fog and illumination change. In addition, it is shown that the external load distribution on structures (structural input) can be estimated by using visual tracking, after which the load rating of a bridge can be determined by using the load distribution factors extracted from computer vision-based methods. By combining the structural input and output results, the unit influence line (UIL) of a structure is extracted during daily traffic using only cameras, from which external loads can subsequently be estimated using the extracted UIL. Finally, condition assessment at the global structural level can be achieved using the structural input and output, both obtained from computer vision approaches, which gives a normalized response irrespective of the type and/or load configurations of the vehicles or human loads.
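    A minimal sketch of one common ingredient of vision-based displacement monitoring, offered as an assumed illustration rather than the study's own spatio-temporal context learning / Taylor approximation method: track a high-contrast target on the structure by template matching in each video frame and convert the pixel motion to displacement with a known scale factor (e.g., millimeters per pixel from a calibration target).

```python
# Sketch (generic stand-in): per-frame template matching to measure structural
# displacement; mm_per_pixel is an assumed calibration value.
import cv2
import numpy as np

def track_displacement(frames, template, mm_per_pixel):
    """frames: iterable of grayscale images; template: grayscale target patch."""
    positions = []
    for frame in frames:
        result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(result)    # best-match corner (x, y)
        positions.append(max_loc)
    positions = np.asarray(positions, dtype=float)
    displacement_mm = (positions - positions[0]) * mm_per_pixel
    return displacement_mm                          # per-frame (dx, dy) in mm
```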