
    Design Considerations in the Development of an Automated Cartographic System

    Cartography, the art of producing maps, is an extremely tedious task that is prone to human error and requires many hours for the completion of maps and their digital databases. Cartography is a classic example of a job that needs to be automated. Recent advances in image processing and pattern recognition make the automation of this task possible, with the cartographer acting as a supervisor. This paper reviews current cartographic techniques and examines design considerations for a fully automated cartographic system. The benefits of such a system would be improvements in speed, flexibility, and accuracy. The role of the cartographer would change to that of a process supervisor rather than a mass data entry operator.

    Heterogeneous Multi-Sensor Fusion for 2D and 3D Pose Estimation

    Sensor fusion is a process in which data from different sensors is combined to acquire an output that cannot be obtained from any individual sensor. This dissertation first considers a real-world 2D image-level problem from the rail industry and proposes a novel sensor-fusion solution, then proceeds to the more complicated 3D problem of multi-sensor fusion for UAV pose estimation. One of the most important safety-related tasks in the rail industry is the early detection of defective rolling-stock components. Railway wheels and wheel bearings are two components prone to damage due to their interactions with the brakes and the railway track, which makes them a high priority when the rail industry investigates improvements to current detection processes. The main contribution of this dissertation in this area is the development of a computer vision method for automatically detecting defective wheels, one that can potentially replace the current manual inspection procedure. The algorithm fuses images taken by wayside thermal and vision cameras and uses the outcome for wheel defect detection. As a byproduct, the process also includes a method for detecting hot bearings from the same images. We evaluate our algorithm using simulated and real images from UPRR in North America, and we show that sensor fusion techniques improve the accuracy of malfunction detection. After the 2D application, the more complicated 3D application is addressed. Precise, robust, and consistent localization is an important subject in many areas of science, such as vision-based control, path planning, and SLAM. Each of the sensors employed to estimate pose has its strengths and weaknesses. Sensor fusion is a known approach that combines the data measured by different sensors to achieve a more accurate or complete pose estimate and to cope with sensor outages. In this dissertation, a new approach to 3D pose estimation for a UAV in an unknown, GPS-denied environment is presented. The proposed algorithm fuses data from an IMU, a camera, and a 2D LiDAR to achieve accurate localization. Among these sensors, the LiDAR has received little attention in the past, mostly because a 2D LiDAR can only provide pose estimation in its scanning plane and thus cannot obtain a full pose estimate in a 3D environment. A novel method is introduced that enables a 2D LiDAR to improve the accuracy of the full 3D pose estimate acquired from an IMU and a camera. To the best of our knowledge, a 2D LiDAR has never been employed for 3D localization without a prior map, and we show that our method significantly improves the precision of the localization algorithm. The proposed approach is evaluated and justified by simulation and real-world experiments.
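    The thermal/visible fusion step can be sketched generically. The snippet below (Python with OpenCV) is a minimal illustration of pixel-level fusion of co-registered thermal and visible images, followed by simple thermal thresholding to localize hotspot candidates such as overheated bearings; the function name, blend weight, and threshold are illustrative assumptions, not the dissertation's actual algorithm.

```python
import cv2

def fuse_and_flag_hotspots(thermal_path, visible_path, alpha=0.6, hot_thresh=200):
    """Pixel-level weighted fusion of co-registered thermal and visible
    images, plus simple hotspot localization on the thermal channel.
    (Illustrative sketch; parameters are assumptions.)"""
    thermal = cv2.imread(thermal_path, cv2.IMREAD_GRAYSCALE)
    visible = cv2.imread(visible_path, cv2.IMREAD_GRAYSCALE)
    # Assumes the two views are already registered; resize as a crude fallback.
    visible = cv2.resize(visible, (thermal.shape[1], thermal.shape[0]))

    # Weighted blend combines the thermal signature with visible-band detail.
    fused = cv2.addWeighted(thermal, alpha, visible, 1.0 - alpha, 0)

    # Threshold the thermal channel to localize candidate hot regions
    # (e.g. overheated bearings) and return their bounding boxes.
    _, hot = cv2.threshold(thermal, hot_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(hot, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return fused, [cv2.boundingRect(c) for c in contours]
```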

    Dense Point-Cloud Representation of a Scene using Monocular Vision

    We present a three-dimensional (3-D) reconstruction system designed to support various autonomous navigation applications. The system focuses on the 3-D reconstruction of a scene using only a single moving camera. Utilizing video frames captured at different points in time allows us to determine the depth of the scene, so the system can construct a point-cloud model of its unknown surroundings. We present the step-by-step methodology and analysis used in developing the 3-D reconstruction technique. Our reconstruction framework first generates a primitive point cloud, computed from feature matching and depth triangulation. To densify the reconstruction, we utilize optical-flow features to create an extremely dense representation model. As a third algorithmic modification, we introduce a preprocessing step of nonlinear single-image super-resolution; with this addition, the depth accuracy of the point cloud, which relies on precise disparity measurement, increases significantly. Our final contribution is a postprocessing step that filters noise points and mismatched features, yielding the complete dense point-cloud representation (DPR) technique. We measure the success of DPR by evaluating visual appeal, density, accuracy, and computational expense, and we compare it with two state-of-the-art techniques.
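    The primitive point-cloud stage, feature matching followed by depth triangulation between two time-separated frames, follows a standard two-view pipeline. The sketch below (Python with OpenCV) is a generic illustration of that stage rather than the authors' DPR code; the camera intrinsic matrix K is assumed known.

```python
import cv2
import numpy as np

def triangulate_two_views(img1, img2, K):
    """Two-view reconstruction: match ORB features, recover the relative
    camera pose, and triangulate inliers into a sparse 3-D point cloud.
    (Generic sketch, not the DPR implementation.)"""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix and relative camera pose from the matched features.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Projection matrices K[I|0] and K[R|t]; triangulate the inliers.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T   # N x 3 points, up to scale
```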

    Literature review of the remote sensing of natural resources

    Abstracts of 596 documents related to remote sensors or the remote sensing of natural resources by satellite, aircraft, or ground-based stations are presented. Topics covered include general theory, geology and hydrology, agriculture and forestry, marine sciences, urban land use, and instrumentation. Recent documents not yet cited in any of the seven information sources used for the compilation are summarized. An author/keyword index is provided.

    Using Linear Features for Aerial Image Sequence Mosaicking

    With recent advances in sensor technology and digital image processing, automatic image mosaicking has received increased attention in a variety of geospatial applications, ranging from panorama generation and video surveillance to image-based rendering. The geometric transformation used to link images in a mosaic is the subject of image orientation, a fundamental photogrammetric task and a major research area in digital image analysis. It involves determining the parameters that express the location and pose of a camera at the time it captured an image. In aerial applications the typical parameters comprise two translations (along the x and y coordinates) and one rotation (about the z-axis). Orientation typically proceeds by extracting control points from an image, i.e., points with known coordinates. Salient points such as road intersections and building corners are commonly used for this task. However, such points may carry little information beyond their radiometric uniqueness and, more importantly, in some areas (e.g., rural and arid regions) they may be impossible to obtain. To overcome this problem we introduce an alternative approach that uses linear features such as roads and rivers for image mosaicking. Such features are identified and matched to their counterparts in overlapping imagery. Our matching approach uses critical points (e.g., breakpoints) of linear features and the information they convey (e.g., local curvature values and distance metrics) to match two such features and orient the images in which they are depicted. In this manner we orient overlapping images by comparing breakpoint representations of complete or partial linear features depicted in them. By considering broader feature metrics (instead of single points) in our matching scheme, we aim to eliminate the effect of erroneous point matches in image mosaicking. Our approach does not require prior approximate parameters, which are typically an essential requirement for successful convergence of point-matching schemes. Furthermore, we show that large rotation variations about the z-axis may be recovered. With the acquired orientation parameters, image sequences are mosaicked. Experiments with synthetic aerial image sequences are included in this thesis to demonstrate the performance of our approach.
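    Once breakpoints of a linear feature are matched to their counterparts in an overlapping image, recovering the two translations and the rotation about the z-axis is a rigid alignment problem with a closed-form least-squares solution. The sketch below (Python with NumPy) illustrates that orientation step using the standard Kabsch/Procrustes estimator; it is a generic stand-in, not the thesis's matching algorithm.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Closed-form estimate of the rotation about z and the x/y translation
    that best map matched breakpoints src onto dst (both N x 2 arrays)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # SVD of the cross-covariance matrix yields the least-squares rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T

    t = dst_mean - R @ src_mean
    theta = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # rotation about z
    return R, t, theta
```

    Because the estimate uses all matched breakpoints jointly, a few erroneous correspondences perturb it less than a minimal-point solution would, in line with the motivation for broader feature metrics above.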

    Investigation of Computer Vision Concepts and Methods for Structural Health Monitoring and Identification Applications

    This study presents a comprehensive investigation of methods and technologies for developing a computer vision-based framework for Structural Health Monitoring (SHM) and Structural Identification (St-Id) of civil infrastructure systems, with particular emphasis on various types of bridges. SHM has been implemented on various structures over the last two decades; yet issues remain, such as considerable cost, field implementation time, and the excessive labor needed for sensor instrumentation, cable wiring, and possible interruptions during implementation. These issues make SHM viable only when major investments are warranted for decision making. For other cases, a practical and effective solution is needed, and a computer vision-based framework can be a viable alternative. Computer vision-based SHM has been explored over the last decade. Unlike most vision-based structural identification studies and practices, which focus either on estimating structural input (vehicle location) or structural output (structural displacement and strain responses), the proposed framework combines vision-based structural input and structural output from non-contact sensors to overcome the limitations above. First, this study develops a series of computer vision-based displacement measurement methods for structural response (structural output) monitoring, applicable to different infrastructures such as grandstands, stadiums, towers, footbridges, small/medium-span concrete bridges, railway bridges, and long-span bridges, under different loading cases such as human crowds, pedestrians, wind, and vehicles. Structural behavior, modal properties, load-carrying capacities, structural serviceability, and performance are investigated with vision-based methods and validated by comparison with conventional SHM approaches; several well-known landmark structures, such as long-span bridges, serve as case studies. The study also investigates the serviceability status of structures using computer vision-based methods. Subsequently, issues and considerations for computer vision-based measurement in the field are discussed, and recommendations are provided for better results. The study also proposes a robust vision-based method for displacement measurement, using spatio-temporal context learning and Taylor approximation, to overcome the difficulties of vision-based monitoring under adverse environmental factors such as fog and illumination change. In addition, it is shown that the external load distribution on structures (structural input) can be estimated by visual tracking, after which the load rating of a bridge can be determined from the load distribution factors extracted by the computer vision-based methods. By combining the structural input and output results, the unit influence line (UIL) of a structure is extracted during daily traffic using only cameras, and external loads can then be estimated from the cameras and the extracted UIL. Finally, condition assessment at the global structural level can be achieved using the structural input and output, both obtained from computer vision approaches, yielding a normalized response irrespective of the type and/or load configuration of the vehicles or human loads.
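    As a simple baseline for the non-contact displacement measurement described above, the sketch below (Python with OpenCV) tracks a fixed target template through video frames and converts pixel motion to physical units via a known scale factor. It is a deliberately minimal illustration; the study's spatio-temporal context learning and Taylor-approximation methods are more robust, and the template and scale factor here are assumed inputs.

```python
import cv2
import numpy as np

def track_displacement(frames, template, scale_mm_per_px):
    """Track a fixed target template across BGR video frames and return
    its displacement time history in millimeters (minimal baseline)."""
    history = []
    x0 = y0 = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Normalized cross-correlation; the best match locates the target.
        res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(res)
        if x0 is None:
            x0, y0 = x, y                      # reference position
        # Convert pixel motion to physical units via a known scale factor.
        history.append(((x - x0) * scale_mm_per_px,
                        (y - y0) * scale_mm_per_px))
    return np.array(history)   # N x 2 displacements in mm
```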

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    Efficient Algorithm for Railway Tracks Detection Using Satellite Imagery
