
    Weighted and filtered mutual information: A Metric for the automated creation of panoramas from views of complex scenes

    As a novel approach to image registration and panorama creation, this algorithm forgoes any scene knowledge, requiring only modest scene overlap and an acceptable amount of entropy within each overlapping view. The weighted and filtered mutual information (WFMI) algorithm has been developed for multiple stationary, color, surveillance video camera views and relies on color gradients for feature correspondence. It is a novel extension of well-established maximization of mutual information (MMI) algorithms. Where MMI algorithms are typically applied to high-altitude photography and medical imaging (scenes with relatively simple shapes and affine relationships between views), the WFMI algorithm has been designed for scenes with occluded objects and significant parallax variation between non-affine-related views. Despite these typically non-affine surveillance scenarios, searching the affine space for a homography is a practical assumption that provides computational efficiency and accurate results, even with complex scene views. The WFMI algorithm can perfectly register affine views, performs exceptionally well with near-affine-related views, and in complex scene views (well beyond affine constraints) it provides an accurate estimate of the overlap regions between the views. The WFMI algorithm uses simple calculations (vector-field color gradient, Laplacian filtering, and feature histograms) to generate the WFMI metric and provide the optimal affine relationship. The algorithm is unique when compared to typical MMI algorithms and modern registration algorithms because it avoids almost all a priori knowledge and calculations, while still providing an accurate or useful estimate for realistic scenes. 
With mutual information weighting and the Laplacian filtering operation, the WFMI algorithm overcomes the failures of typical MMI algorithms in scenes where complex or occluded shapes do not produce sufficiently large peaks in the mutual information maps to determine the overlap region. This work has so far been applied to individual video frames, and it will be shown that future work could readily extend the algorithm to exploit motion information or temporal frame registrations, enhancing scenes with smaller overlap regions, lower entropy, or even more significant parallax and occlusion variations between views.
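The heart of any MMI-style method is a histogram-based mutual information score between candidate overlap regions; WFMI additionally emphasizes edges via Laplacian filtering before that score is computed. A minimal sketch of that combination (the color-gradient vector field and the weighting step described in the abstract are omitted; function names here are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import laplace

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized grayscale patches,
    estimated from their joint gray-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1)          # marginal of patch a
    py = pxy.sum(axis=0)          # marginal of patch b
    nz = pxy > 0                  # avoid log(0) on empty histogram cells
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def wfmi_like_score(view_a, view_b):
    """Laplacian-filter both candidate overlap patches (edge emphasis),
    then score the candidate by mutual information of the filtered patches."""
    fa = laplace(view_a.astype(float))
    fb = laplace(view_b.astype(float))
    return mutual_information(fa, fb)
```

In a registration loop, this score would be evaluated over candidate affine parameters and the maximum taken as the estimated overlap relationship.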

    Non-Rigid Registration via Global to Local Transformation

    Non-rigid point set and image registration are key problems in many computer vision and pattern recognition tasks. Typically, non-rigid registration can be formulated as an optimization problem; however, registration accuracy is limited by convergence to local optima. To address this problem, we propose a global-to-local transformation method for non-rigid point set registration that can also be applied to infrared (IR) and visible (VIS) image registration. First, an objective function based on Gaussian fields is designed to cast non-rigid registration as an optimization problem. A global transformation model, which describes the regular pattern of non-linear deformation between point sets, is then proposed to achieve coarse registration at the global scale. Finally, with the result of coarse registration as the initial value, a local transformation model is employed to perform fine registration using local features. The optimal global and local transformation models estimated from edge points of IR and VIS image pairs are then used to achieve non-rigid image registration. Qualitative and quantitative comparisons demonstrate that the proposed method performs well under various types of distortion, and it also produces accurate results for IR and VIS image registration.
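A Gaussian-fields objective replaces hard point correspondences with a smooth, differentiable overlap measure that can be maximized by gradient ascent. A minimal sketch, using a plain translation as a stand-in for the paper's global transformation model (the actual model is non-linear, and all names here are illustrative assumptions):

```python
import numpy as np

def gaussian_field_energy(src, dst, sigma=1.0):
    """Gaussian-fields criterion: a smooth, differentiable measure of how
    well a (transformed) source point set overlaps the target set."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    return np.exp(-d2 / sigma**2).sum()

def coarse_global_fit(src, dst, sigma=1.0, steps=200, lr=1e-3):
    """Hypothetical coarse stage: fit a global 2D translation by gradient
    ascent on the Gaussian-fields energy (the paper fits a richer model)."""
    t = np.zeros(2)
    for _ in range(steps):
        d = (src + t)[:, None, :] - dst[None, :, :]           # (N, M, 2)
        w = np.exp(-(d ** 2).sum(-1) / sigma**2)              # (N, M) weights
        grad = -2.0 / sigma**2 * (w[..., None] * d).sum((0, 1))
        t += lr * grad                                        # ascend the energy
    return t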

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method has no requirements for a priori knowledge of the noise component. The image is decomposed using Chebyshev polynomials (CP) as basis functions, with fusion performed at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably in heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique named independent component analysis (ICA), for joint-fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information of the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is on a novel metric for image fusion evaluation that is based on texture. The conservation of background textural details is considered important in many fusion applications as they help define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications. 
Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective-based fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
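The second-order statistics underlying a GLCM-based texture measure can be sketched as follows. This is a generic GLCM computation (contrast and homogeneity over a single one-pixel offset), not the dissertation's exact derivation; the quantization scheme and the choice of features are illustrative assumptions:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized so
    entries are joint probabilities of co-occurring quantized gray levels."""
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                             # offset neighbors
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)     # accumulate co-occurrences
    return m / m.sum()

def texture_features(img, levels=8):
    """Second-order texture statistics (contrast, homogeneity) derived from
    the GLCM -- the kind of measure substituted for edge-based terms."""
    p = glcm(img, levels)
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    return contrast, homogeneity
```

A fusion metric built this way would compare such features between each input image and the fused output, rewarding fused images that preserve the inputs' texture statistics.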

    Techniques for automatic large scale change analysis of temporal multispectral imagery

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low-resolution imagery and require analyst input to provide anything more than a simple pixel-level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm, based on a set of metrics, that performs a large-area search for change in high-resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large-area images and provide useful information to an analyst about small regions that have undergone specific types of change, while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large-area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys, among others. 
By utilizing a feature-based approach, founded on applying existing statistical methods and new and existing topological methods to high-resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large-area, high-resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large-scale analysis of change.
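The simplest member of the family of per-pixel change measures that this work moves beyond is change vector analysis: a spectral difference magnitude thresholded into a binary change map. A sketch of that baseline (the thresholding rule and all names are illustrative assumptions, not the dissertation's metrics):

```python
import numpy as np

def change_vector_magnitude(t1, t2):
    """Per-band difference of two co-registered multispectral images
    (H x W x bands), collapsed to a per-pixel spectral change magnitude."""
    diff = t2.astype(float) - t1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

def change_map(t1, t2, k=2.0):
    """Flag pixels whose change magnitude exceeds mean + k * std.
    This yields exactly the kind of flat magnitude map the dissertation
    argues is insufficient on its own for large-area, high-resolution work."""
    mag = change_vector_magnitude(t1, t2)
    return mag > mag.mean() + k * mag.std()
```

The dissertation's contribution sits on top of such maps: grouping changed pixels into regions and classifying the *type* of change so an analyst is cued to specific areas rather than a wall of flagged pixels.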

    Advancing fluorescent contrast agent recovery methods for surgical guidance applications

    Fluorescence-guided surgery (FGS) utilizes fluorescent contrast agents and specialized optical instruments to assist surgeons in intraoperatively identifying tissue-specific characteristics, such as perfusion, malignancy, and molecular function. In doing so, FGS represents a powerful surgical navigation tool for solving clinical challenges not easily addressed by other conventional imaging methods. With growing translational efforts, major hurdles within the FGS field include: insufficient tools for understanding contrast agent uptake behaviors, the inability to image tissue beyond a couple of millimeters, and the performance limitations of currently approved contrast agents in accurately and rapidly labeling disease. The developments presented within this thesis aim to address these shortcomings. Current preclinical fluorescence imaging tools often sacrifice either 3D scale or spatial resolution. To address this gap in available high-resolution, whole-body preclinical imaging tools, the crux of this work lies in the development of a hyperspectral cryo-imaging system and image-processing techniques to accurately recapitulate high-resolution, 3D biodistributions in whole-animal experiments. Specifically, the goal is to correct each cryo-imaging dataset such that it becomes a useful reporter for whole-body biodistributions in relevant disease models. To investigate the potential benefits of seeing deeper during FGS, we investigated short-wave infrared (SWIR) imaging for recovering fluorescence beyond the conventional top few millimeters. Through phantom, preclinical, and clinical SWIR imaging, we were able to 1) validate the capability of SWIR imaging with conventional NIR-I fluorophores, 2) demonstrate the translational benefits of SWIR-ICG angiography in a large animal model, and 3) detect micro-dose levels of an EGFR-targeted NIR-I probe during a Phase 0 clinical trial. 
Lastly, we evaluated contrast agent performance for FGS glioma resection and breast cancer margin assessment. To evaluate the glioma-labeling performance of untargeted contrast agents, 3D agent biodistributions were compared voxel-by-voxel to gold-standard Gd-MRI and pathology slides. Finally, building on expertise in dual-probe ratiometric imaging at Dartmouth, a 10-patient clinical pilot study was carried out to assess the technique's efficacy for rapid margin assessment. In summary, this thesis serves to advance FGS by introducing novel fluorescence imaging devices, techniques, and agents which overcome challenges in understanding whole-body agent biodistributions, recovering agent distributions at greater depths, and verifying agents' performance for specific FGS applications.

    The 5th International Conference on Biomedical Engineering and Biotechnology (ICBEB 2016)
