
    Accurate and automatic NOAA-AVHRR image navigation using a global contour matching approach

    The problem of precise and automatic AVHRR image navigation is tractable in theory, but has proved somewhat difficult in practice. The authors' work has been motivated by the need for a fully automatic, operational navigation system capable of geo-referencing NOAA-AVHRR images with high accuracy and without operator supervision. The proposed method is based on the simultaneous use of an orbital model and a contour matching approach. The latter, relying on an affine transformation model, is used to correct the errors caused by inaccuracies in orbit modeling, nonzero values of the spacecraft's roll, pitch, and yaw, inaccuracies in satellite positioning, and failures of the satellite's internal clock. The automatic global contour matching process is summarized as follows: i) estimation of the gradient energy map (edges) in the sensed image and detection of the cloudless (reliable) areas in this map; ii) initialization of the affine model parameters by minimizing the Euclidean distance between the reference and sensed image objects; iii) simultaneous optimization of all reference image contours on the sensed image by energy minimization in the domain of the global transformation parameters. The process is iterated hierarchically, reducing the parameter search space at each iteration. The proposed image navigation algorithm has proved capable of geo-referencing a satellite image to within 1 pixel.
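    The global matching step can be illustrated compactly. The sketch below is not the authors' code: it projects reference contour points through a 2D affine model, scores them against the sensed image's gradient-energy (edge) map, and refines the parameters with a coarse-to-fine search. The function names (gradient_energy, contour_energy, refine_affine) are hypothetical, cloud masking and model initialization are omitted, and only the translation terms are searched for brevity.

```python
# Hypothetical sketch of a global contour-matching refinement: reference
# contour points are mapped through a 2D affine model and scored against the
# sensed image's gradient-energy (edge) map; parameters are refined with a
# coarse-to-fine search. Names are illustrative, not taken from the paper.
import numpy as np
from scipy import ndimage
from itertools import product

def gradient_energy(image):
    """Edge-strength map of the sensed image (Sobel gradient magnitude)."""
    gx = ndimage.sobel(image, axis=1)
    gy = ndimage.sobel(image, axis=0)
    return np.hypot(gx, gy)

def contour_energy(params, contour_xy, energy_map):
    """Negative summed edge energy under the affine map (a, b, c, d, tx, ty)."""
    a, b, c, d, tx, ty = params
    x, y = contour_xy[:, 0], contour_xy[:, 1]
    u = a * x + b * y + tx
    v = c * x + d * y + ty
    # Sample the edge energy at the transformed contour positions.
    vals = ndimage.map_coordinates(energy_map, [v, u], order=1, mode="constant")
    return -vals.sum()

def refine_affine(contour_xy, energy_map, init, span=2.0, steps=5, levels=3):
    """Hierarchical grid search: shrink the translation search window each level."""
    best = np.asarray(init, dtype=float)
    for _ in range(levels):
        offsets = np.linspace(-span, span, steps)
        candidates = [best + np.array([0, 0, 0, 0, du, dv])
                      for du, dv in product(offsets, offsets)]
        best = min(candidates, key=lambda p: contour_energy(p, contour_xy, energy_map))
        span /= 2.0  # reduce the parameter search space at each iteration
    return best
```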

    Review of the mathematical foundations of data fusion techniques in surface metrology

    The recent proliferation of engineered surfaces, including freeform and structured surfaces, is challenging current metrology techniques. Measurement using multiple sensors has been proposed to provide benefits, chiefly a wider spatial frequency bandwidth, that no single sensor can offer. When using data from different sensors, a process of data fusion is required, and there is much active research in this area. In this paper, current data fusion methods and applications are reviewed, with a focus on the mathematical foundations of the subject. Common research questions in the fusion of surface metrology data are raised and potential fusion algorithms are discussed.

    Registration and Fusion of Multi-Spectral Images Using a Novel Edge Descriptor

    In this paper, we introduce a fully end-to-end approach for multi-spectral image registration and fusion. Our fusion method combines images from different spectral channels into a single fused image, treating low- and high-frequency signals with different approaches. A prerequisite of fusion is a stage of geometric alignment between the spectral bands, commonly referred to as registration. Unfortunately, common methods for registering images of a single spectral channel do not yield reasonable results on images from different modalities. To that end, we introduce a new algorithm for multi-spectral image registration, based on a novel edge descriptor of feature points. Our method achieves an alignment accurate enough to allow the images to be fused. As our experiments show, we obtain high-quality multi-spectral image registration and fusion under many challenging scenarios.
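    As a rough illustration of descriptor-based cross-spectral alignment, the hedged sketch below computes ordinary ORB descriptors on Canny edge maps of two 8-bit grayscale bands, matches them, and fits an affine transform with RANSAC. The OpenCV calls are real, but the recipe is a stand-in assumption, not the paper's learned edge descriptor or pipeline.

```python
# A minimal, hypothetical sketch of cross-spectral alignment: descriptors are
# computed on edge maps of each band, since edges are more stable across
# modalities than raw intensities. Assumes 8-bit grayscale inputs.
import cv2
import numpy as np

def register_bands(visible, infrared):
    """Estimate an affine transform mapping `infrared` onto `visible`."""
    orb = cv2.ORB_create(nfeatures=2000)
    # Describe edge structure rather than intensity, which differs per band.
    edges_a = cv2.Canny(visible, 50, 150)
    edges_b = cv2.Canny(infrared, 50, 150)
    kp_a, des_a = orb.detectAndCompute(edges_a, None)
    kp_b, des_b = orb.detectAndCompute(edges_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards the cross-modal mismatches that survive descriptor matching.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    aligned = cv2.warpAffine(infrared, M, (visible.shape[1], visible.shape[0]))
    return aligned, M
```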

    Conversational Sensing

    Recent developments in sensing technologies, mobile devices and context-aware user interfaces have made it possible to represent information fusion and situational awareness as a conversational process among actors - human and machine agents - at or near the tactical edges of a network. Motivated by use cases in the domain of security, policing and emergency response, this paper presents an approach to information collection, fusion and sense-making based on the use of natural language (NL) and controlled natural language (CNL) to support richer forms of human-machine interaction. The approach uses a conversational protocol to facilitate a flow of collaborative messages from NL to CNL and back again in support of interactions such as: turning eyewitness reports from human observers into actionable information (from both trained and untrained sources); fusing information from humans and physical sensors (with associated quality metadata); and assisting human analysts to make the best use of available sensing assets in an area of interest (governed by management and security policies). CNL is used as a common formal knowledge representation for both machine and human agents to support reasoning, semantic information fusion and generation of rationale for inferences, in ways that remain transparent to human users. Examples are provided of various alternative styles for user feedback, including NL, CNL and graphical feedback. A pilot experiment with human subjects shows that a prototype conversational agent is able to gather usable CNL information from untrained human subjects.
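    A purely illustrative sketch of the kind of exchange described here follows: a free-text (NL) report is paraphrased into a controlled-language (CNL-style) statement that the reporter confirms before it enters the fusion and reasoning store. The Message type, the message kinds, and the toy paraphrase rule are hypothetical; the paper's protocol and CNL grammar are considerably richer.

```python
# Toy sketch of an NL -> CNL -> confirmation flow. All message kinds and the
# paraphrase rule are hypothetical stand-ins, not the paper's protocol.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str    # "human" or "agent"
    kind: str      # "nl_report", "cnl_confirm", "ack", "correction"
    text: str

def paraphrase_to_cnl(nl_text: str) -> str:
    """Toy NL-to-CNL paraphrase; a real system would use NLP plus a CNL grammar."""
    return f"there is a report that states '{nl_text.strip()}'"

def converse(nl_report: str):
    """One confirmation round; a real agent would also handle corrections."""
    log = [Message("human", "nl_report", nl_report)]
    cnl = paraphrase_to_cnl(nl_report)
    log.append(Message("agent", "cnl_confirm", f"I understood: {cnl}. Is this correct?"))
    log.append(Message("human", "ack", "yes"))
    return log  # the confirmed CNL statement can now enter the fusion store

for m in converse("white van heading north on the high street"):
    print(f"[{m.sender}/{m.kind}] {m.text}")
```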

    On the possibility of automatic multisensor image registration

    Multisensor image registration is needed in a large number of remote sensing applications. The accuracy achieved with the usual methods (manual extraction of control points, estimation of an analytical deformation model) is not satisfactory for many applications where subpixel accuracy is needed for each pixel of the image (change detection or image fusion, for instance). Unfortunately, there are few works in the literature on the fine registration of multisensor images, and even fewer on extending approaches similar to the fine-correlation methods used for monomodal imagery. In this paper, we analyze the problem of automatic multisensor image registration and introduce similarity measures which can replace the correlation coefficient in a deformation map estimation scheme. We show an example where the deformation map between a radar image and an optical one is estimated fully automatically.
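    To make the idea of replacing the correlation coefficient with a multisensor-friendly similarity measure concrete, the sketch below estimates mutual information from a joint histogram of two co-located windows and maximizes it over a small search range to obtain one sample of a dense deformation map. Mutual information is used here only as a stand-in example; the paper evaluates its own measures, and the function names are illustrative.

```python
# Hedged sketch: mutual information as a drop-in replacement for the
# correlation coefficient in window-based deformation map estimation.
import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    """MI between two equally sized patches (e.g. a radar vs. an optical window)."""
    hist, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

def local_shift(reference, sensed, center, half=16, search=4):
    """Estimate one local shift by maximizing MI over a small search window."""
    r, c = center  # assumed far enough from the image borders
    ref = reference[r - half:r + half, c - half:c + half]
    best, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = sensed[r + dr - half:r + dr + half, c + dc - half:c + dc + half]
            score = mutual_information(ref, win)
            if score > best:
                best, best_shift = score, (dr, dc)
    return best_shift  # one sample of the dense deformation map
```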

    Groupwise Multimodal Image Registration using Joint Total Variation

    In medical imaging, it is common practice to acquire a wide range of modalities (MRI, CT, PET, etc.) to highlight different structures or pathologies. As patient movement between scans or scanning sessions is unavoidable, registration is often an essential step before any subsequent image analysis. In this paper, we introduce a cost function based on joint total variation for such multimodal image registration. This cost function has the advantage of enabling principled, groupwise alignment of multiple images, whilst being insensitive to strong intensity non-uniformities. We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans. This validation shows robustness to strong intensity non-uniformities and low registration errors for CT/PET to MRI alignment. Our implementation is publicly available at https://github.com/brudfors/coregistration-njtv.
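    The general form of a joint total variation term can be sketched briefly: gradients of all images are pooled under a single square root, so the value is smallest when edges coincide across modalities. This is an assumption-laden illustration of the cost's shape, not the released implementation linked above; the rigid-transformation optimization that would minimize it is omitted.

```python
# Sketch of a joint total variation value over a stack of volumes that are
# already resampled into a common space. Illustrative only; see the authors'
# repository for their actual cost and optimizer.
import numpy as np

def joint_total_variation(images, eps=1e-6):
    """images: iterable of 3D volumes of identical shape in a common space."""
    grad_sq = None
    for vol in images:
        gz, gy, gx = np.gradient(vol.astype(float))
        channel = gx ** 2 + gy ** 2 + gz ** 2
        # Pool squared gradients over all modalities before the square root.
        grad_sq = channel if grad_sq is None else grad_sq + channel
    # Sum the pooled gradient magnitude over voxels.
    return float(np.sum(np.sqrt(grad_sq + eps)))
```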