    Nonparametric image registration of airborne LiDAR, hyperspectral and photographic imagery of wooded landscapes

    There is much current interest in using multisensor airborne remote sensing to monitor the structure and biodiversity of woodlands. This paper addresses the application of nonparametric (NP) image-registration techniques to precisely align images obtained from multisensor imaging, which is critical for the successful identification of individual trees using object-recognition approaches. NP image registration, in particular the technique of optimizing an objective function containing similarity and regularization terms, provides a flexible approach to image registration. Here, we develop an NP registration approach in which a normalized gradient field is used to quantify similarity and curvature is used for regularization (the NGF-Curv method). Using a survey of woodlands in southern Spain as an example, we show that NGF-Curv can successfully fuse data sets when there is little prior knowledge about how they are interrelated (i.e., in the absence of ground control points). The validity of NGF-Curv in airborne remote sensing is demonstrated by a series of experiments. We show that NGF-Curv is capable of aligning images precisely, making it a valuable component of algorithms designed to identify objects, such as trees, within multisensor data sets.

    This work was supported by the Airborne Research and Survey Facility of the U.K.'s Natural Environment Research Council (NERC), which collected and preprocessed the data used in this research project [EU11/03/100], and by grants from King Abdullah University of Science and Technology and the Wellcome Trust (BBSRC). D. Coomes was supported by a grant from NERC (NE/K016377/1) and funding from DEFRA and the BBSRC to develop methods for monitoring ash dieback from aircraft.

    This is the final version. It was first published by IEEE at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7116541&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_Publication_Number%3A36%29%26pageNumber%3D5
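    The similarity term of the objective function described above can be sketched in a few lines of NumPy. This is a minimal illustration of a normalized-gradient-field (NGF) dissimilarity only, under assumed conventions: the function names and the fixed `eps` parameter are illustrative, and the paper's full NGF-Curv method additionally includes a curvature regularizer on the deformation and an optimizer, both omitted here.

    ```python
    import numpy as np

    def ngf(image, eps=1e-3):
        """Normalized gradient field: per-pixel gradients scaled toward
        unit length; eps suppresses noise in near-flat regions."""
        gy, gx = np.gradient(image.astype(float))
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        return gx / mag, gy / mag

    def ngf_distance(ref, tmpl, eps=1e-3):
        """NGF dissimilarity: 1 minus the squared inner product of the
        normalized gradients, summed over pixels. It is small where edge
        directions in the two images align, regardless of intensity
        differences between modalities."""
        rx, ry = ngf(ref, eps)
        tx, ty = ngf(tmpl, eps)
        dot = rx * tx + ry * ty
        return np.sum(1.0 - dot**2)
    ```

    Because the measure compares gradient directions rather than raw intensities, it is well suited to multisensor pairs (e.g., LiDAR-derived rasters against hyperspectral bands) whose brightness values are not directly comparable.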

    Relating Multimodal Imagery Data in 3D

    This research develops and improves the fundamental mathematical approaches and techniques required to relate imagery and imagery-derived multimodal products in 3D. Image registration, in a 2D sense, will always be limited by the 3D effects of viewing geometry on the target. Effects such as occlusion, parallax, shadowing, and terrain/building elevation can therefore often be mitigated with even a modest amount of 3D target modeling. Additionally, the imaged scene may appear radically different depending on the sensed modality of interest; this is evident from the differences among visible, infrared, polarimetric, and radar imagery of the same site. This thesis develops a 'model-centric' approach to relating multimodal imagery in a 3D environment. By correctly modeling a site of interest, both geometrically and physically, it is possible to remove or mitigate some of the most difficult challenges associated with multimodal image registration. To accomplish this, the mathematical framework necessary to relate imagery to geometric models is thoroughly examined. Since geometric models may need to be generated to apply this 'model-centric' approach, this research develops methods to derive 3D models from imagery and LIDAR data. Of critical note is the implementation of complementary techniques for relating multimodal imagery that utilize the geometric model in concert with physics-based modeling to simulate scene appearance under diverse imaging scenarios. Finally, the often-neglected final phase of mapping localized image-registration results back to the world-coordinate-system model for final data archival is addressed. In short, once a target site is properly modeled, both geometrically and physically, it is possible to orient the 3D model to the same viewing perspective as a captured image to enable proper registration.
If done accurately, the synthetic model's physical appearance can simulate the imaged modality of interest while simultaneously removing the 3D ambiguity between the model and the captured image. Once registered, the captured image can then be archived as a texture map on the geometric site model. In this way, the 3D information that was lost when the image was acquired can be regained and properly related with other data sets for data fusion and analysis.
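    The core geometric step in orienting a 3D model to a captured image's viewing perspective is projecting model points through the camera. Below is a minimal pinhole-camera sketch of that projection; the function name and the specific intrinsic/extrinsic parameterization (`K`, `R`, `t`) are illustrative assumptions, and the thesis's full pipeline additionally involves physics-based appearance simulation and the registration itself, neither of which is shown.

    ```python
    import numpy as np

    def project_points(points_3d, K, R, t):
        """Project world-frame 3D points (N x 3) to pixel coordinates
        with a pinhole camera: transform to the camera frame with the
        rotation R and translation t, apply the intrinsic matrix K,
        then perform the perspective divide by depth."""
        cam = points_3d @ R.T + t          # world -> camera frame
        uvw = cam @ K.T                    # apply intrinsics
        return uvw[:, :2] / uvw[:, 2:3]    # perspective divide
    ```

    Rendering the site model from the pose recovered this way yields a synthetic view in the captured image's geometry, so the 2D registration that follows no longer has to absorb parallax and occlusion effects.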