
    Data Driven Multispectral Image Registration Framework

    Multispectral imaging is widely used in remote sensing applications from UAVs and ground-based platforms. Multispectral cameras often use a physically separate camera for each wavelength, causing misalignment between the images of different bands. This misalignment must be corrected prior to concurrent multi-band image analysis. The traditional approach to multispectral image registration is to select a target channel and register all other image channels to it. There is no objective, evidence-based method to select a target channel. The possibility of registering to some intermediate channel before registering to the target is not usually considered, but could be beneficial if there is no target channel for which direct registration performs well for every other channel. In this paper, we propose an automatic data-driven multispectral image registration framework that determines a target channel, and possible intermediate registration steps, based on the assumptions that 1) some reasonable minimum number of control-point correspondences between two channels is needed to ensure a low-error registration, and 2) a greater number of such correspondences generally results in higher registration performance. Our prototype is tested on five multispectral datasets captured with UAV-mounted multispectral cameras. The output of the prototype is a registration scheme in the form of a directed acyclic graph (in fact a tree) that represents the target channel and the process for registering the other image channels. The resulting registration schemes had more control-point correspondences on average than the traditional register-all-to-one-target-channel approach, and they consistently showed low back-projection error across all image channel pairs in most of the experiments. Our framework generates registration schemes using the best control-point extraction algorithm for each image channel pair and registers images in a data-driven manner. The framework is dataset-independent and works on datasets with any number of image channels. With the growing need for remote sensing and the lack of a proper evidence-based method for registering multispectral image channels, a data-driven registration framework is an essential tool in the fields of image registration and multispectral imaging.
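    As a concrete illustration of the scheme-construction idea, the sketch below (Python, using OpenCV and networkx) builds a complete graph over image channels weighted by pairwise correspondence counts, extracts a maximum spanning tree, and roots it at the channel with the greatest total correspondence weight. The ORB-based match counter and the library choices are assumptions made for illustration, not the paper's actual implementation.

    import cv2
    import networkx as nx

    def count_correspondences(img_a, img_b):
        # Illustrative proxy for a control-point correspondence count:
        # the number of cross-checked ORB matches between two channels.
        orb = cv2.ORB_create(nfeatures=2000)
        _, des_a = orb.detectAndCompute(img_a, None)
        _, des_b = orb.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return 0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        return len(matcher.match(des_a, des_b))

    def build_registration_scheme(channels):
        # channels: dict of channel name -> grayscale image (numpy array).
        # Returns (target, tree) where each edge parent -> child means
        # "register the child channel to its parent", chaining to the target.
        g = nx.Graph()
        names = list(channels)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                n_matches = count_correspondences(channels[a], channels[b])
                g.add_edge(a, b, weight=n_matches)
        # Keep only the strongest pairwise links.
        tree = nx.maximum_spanning_tree(g)
        # Target channel: tree node with the largest summed edge weight.
        def total_weight(n):
            return sum(d["weight"] for *_, d in tree.edges(n, data=True))
        target = max(tree.nodes, key=total_weight)
        return target, nx.bfs_tree(tree, target)

    Registering each channel to its parent in the returned tree and composing the transformations along the path to the root reproduces the chained, possibly multi-step registration the abstract describes.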

    Multispectral Deep Neural Networks for Pedestrian Detection

    Multispectral pedestrian detection is essential for around-the-clock applications, e.g., surveillance and autonomous driving. We deeply analyze Faster R-CNN for the multispectral pedestrian detection task and then model it as a convolutional network (ConvNet) fusion problem. Further, we discover that ConvNet-based pedestrian detectors trained on color or thermal images separately provide complementary information for discriminating human instances. Thus, there is large potential to improve pedestrian detection by using color and thermal images in DNNs simultaneously. We carefully design four ConvNet fusion architectures that integrate two-branch ConvNets at different DNN stages, all of which yield better performance compared with the baseline detector. Our experimental results on the KAIST pedestrian benchmark show that the Halfway Fusion model, which performs fusion on the middle-level convolutional features, outperforms the baseline method by 11% and yields a miss rate 3.5% lower than the other proposed architectures. Comment: 13 pages, 8 figures, BMVC 2016 oral
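    A minimal PyTorch sketch of the halfway-fusion idea follows: separate color and thermal branches up to mid-level convolutional features, concatenation of the two feature maps, and a 1x1 convolution to merge them before shared higher layers. The layer sizes and block structure are illustrative placeholders rather than the paper's Faster R-CNN configuration.

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    class HalfwayFusionBackbone(nn.Module):
        def __init__(self):
            super().__init__()
            # Independent low/mid-level branches for color and thermal input.
            self.color_branch = nn.Sequential(conv_block(3, 64), conv_block(64, 128))
            self.thermal_branch = nn.Sequential(conv_block(1, 64), conv_block(64, 128))
            # Fuse mid-level features: concatenation + 1x1 conv to merge channels.
            self.fuse = nn.Conv2d(256, 128, kernel_size=1)
            # Shared high-level layers applied after fusion.
            self.shared = nn.Sequential(conv_block(128, 256), conv_block(256, 512))

        def forward(self, color, thermal):
            mid = torch.cat([self.color_branch(color),
                             self.thermal_branch(thermal)], dim=1)
            return self.shared(self.fuse(mid))

    Fusing earlier (pixel level) or later (decision level) corresponds to moving the concatenation point toward the input or the output of the two branches, which is the design axis the four architectures explore.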

    A brief description of an Earth Resources Technology Satellite (ERTS) computer data analysis and management program

    A data analysis and management procedure currently being used at Marshall Space Flight Center to analyze ERTS digital data is described. The objective is to acquaint potential users with the various computer programs that are available for analysis of multispectral digital imagery and to show how these programs are used in the overall data management plan. The report contains a brief description of each computer routine, and references are provided for obtaining more detailed information.

    High-resolution optical and SAR image fusion for building database updating

    This paper addresses the issue of cartographic database (DB) creation or updating using high-resolution synthetic aperture radar and optical images. In cartographic applications, the objects of interest are mainly buildings and roads. This paper proposes a processing chain to create or update building DBs. The approach is composed of two steps. First, if a DB is available, the presence of each DB object is checked in the images. Then, we verify whether objects coming from an image segmentation should be included in the DB. For these two steps, relevant features are extracted from the images in the neighborhood of the considered object. The object's removal from or inclusion in the DB is based on a score obtained by fusing these features in the framework of Dempster–Shafer evidence theory.
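    The fusion step described above relies on Dempster–Shafer evidence theory; the short Python sketch below applies Dempster's rule of combination to two mass functions defined over the hypotheses "building" and "not building", with the full frame of discernment carrying the remaining ignorance. The mass values and the two feature sources are invented for illustration only.

    from itertools import product

    THETA = frozenset({"building", "not_building"})  # frame of discernment

    def combine(m1, m2):
        # Dempster's rule: m(A) = sum of m1(B) * m2(C) over pairs whose
        # intersection is A, normalized by 1 - K, where K is the mass
        # falling on the empty intersection (the conflict).
        fused, conflict = {}, 0.0
        for (b, mb), (c, mc) in product(m1.items(), m2.items()):
            inter = b & c
            if inter:
                fused[inter] = fused.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
        if conflict >= 1.0:
            raise ValueError("total conflict: evidence cannot be combined")
        return {a: v / (1.0 - conflict) for a, v in fused.items()}

    # Made-up masses from one optical feature and one SAR feature.
    m_optical = {frozenset({"building"}): 0.6, THETA: 0.4}
    m_sar = {frozenset({"building"}): 0.5,
             frozenset({"not_building"}): 0.2, THETA: 0.3}
    print(combine(m_optical, m_sar))
    # {building}: ~0.77, {not_building}: ~0.09, THETA: ~0.14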

    Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy

    Objective: Surgical data science is evolving into a research field that aims to observe everything occurring within and around the treatment process to provide situation-aware, data-driven assistance. In the context of endoscopic video analysis, the accurate classification of organs in the field of view of the camera poses a technical challenge. Herein, we propose a new approach to anatomical structure classification and image tagging that features an intrinsic measure of confidence to estimate its own performance with high reliability and which can be applied to both RGB and multispectral imaging (MI) data. Methods: Organ recognition is performed using a superpixel classification strategy based on textural and reflectance information. Classification confidence is estimated by analyzing the dispersion of class probabilities. Assessment of the proposed technology is performed through a comprehensive in vivo study with seven pigs. Results: When applied to image tagging, mean accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB) and 96% (MI) with the confidence measure. Conclusion: The results showed that the confidence measure had a significant influence on the classification accuracy, and that MI data are better suited for anatomical structure labeling than RGB data. Significance: This work significantly enhances the state of the art in automatic labeling of endoscopic videos by introducing the use of the confidence metric, and by being the first study to use MI data for in vivo laparoscopic tissue classification. The data from our experiments will be released as the first in vivo MI dataset upon publication of this paper. Comment: 7 pages, 6 images, 2 tables
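    The abstract characterizes confidence through the dispersion of the class probabilities. One plausible instantiation, sketched below in Python, scores each superpixel by one minus the normalized Shannon entropy of its predicted distribution and drops low-confidence superpixels before image-level tagging; the entropy-based measure, threshold, and class names are assumptions, not necessarily the measure used in the paper.

    import numpy as np

    def confidence(probs):
        # probs: (n_superpixels, n_classes) array of class probabilities.
        # Confidence = 1 - normalized Shannon entropy (1 = certain, 0 = uniform).
        p = np.clip(probs, 1e-12, 1.0)
        entropy = -(p * np.log(p)).sum(axis=1)
        return 1.0 - entropy / np.log(probs.shape[1])

    def tag_image(probs, class_names, threshold=0.7):
        # Keep only confidently classified superpixels, then report the
        # set of organ labels they support.
        keep = confidence(probs) >= threshold
        labels = probs[keep].argmax(axis=1)
        return sorted({class_names[i] for i in labels})

    # Three superpixels, three (illustrative) classes.
    probs = np.array([[0.95, 0.03, 0.02],   # confident -> kept
                      [0.40, 0.35, 0.25],   # ambiguous -> rejected
                      [0.02, 0.03, 0.95]])  # confident -> kept
    print(tag_image(probs, ["liver", "spleen", "gallbladder"]))
    # ['gallbladder', 'liver']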

    Atmospheric and Oceanographic Information Processing System (AOIPS) system description

    The development of hardware and software for an interactive, minicomputer-based processing and display system for atmospheric and oceanographic information extraction and image data analysis is described. The major applications of the system are discussed, as well as enhancements planned for the future.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.