
    Visual Task Performance Assessment using Complementary and Redundant Information within Fused Imagery

    Image fusion is the process of combining information from a set of source images to obtain a single image with more relevant information than any individual source image. The intent of image fusion is to produce a single image that renders a better description of the scene than any of the individual source images. Information within source images can be classified as either redundant or complementary. The relative amounts of complementary and redundant information within the source images provide an effective metric for quantifying the benefits of image fusion. Two common reasons for using image fusion for a particular task are to increase task reliability or to increase capability. It seems natural to associate reliability with redundancy of information between source bands, whereas increased capability is associated with complementary information between source bands. The basic idea is that the more redundant the information between the source images being fused, the less likely an increase in task performance can be realized using the fused imagery. Intuitively, the benefits of image fusion with regard to task performance are maximized when the source images contain large amounts of complementary information. This research introduces a new performance measure based on mutual information which, under the assumption that the fused imagery has been properly prepared for human perception, can be used as a predictor of human task performance using the complementary and redundant information in fused imagery. The ability of human observers to identify targets of interest using fused imagery is evaluated using human perception experiments. In the perception experiments, imagery of the same scenes containing targets of interest, captured in different spectral bands, is fused using various fusion algorithms and shown to human observers for identification. The results of the experiments show that a correlation exists between the proposed measure and human visual identification task performance. The perception experiments serve to validate the performance prediction accuracy of the new performance measure. The development of the proposed metric introduces to the image fusion community a new image fusion evaluation measure that has the potential to fill many voids within the image fusion literature.
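
    Since the measure is built on mutual information between source and fused images, a compact sketch may help make the quantities concrete. The following is a minimal illustration, assuming joint-histogram MI estimation; the exact formulation of the complementary/redundant split in the thesis is not reproduced here, and all function names are illustrative.

```python
# Sketch: histogram-based mutual information between source and fused
# images (illustrative; not the thesis' exact measure).
import numpy as np

def mutual_information(x, y, bins=64):
    # Joint histogram -> joint and marginal probability estimates.
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def fusion_information(src_a, src_b, fused):
    # MI(A;F) and MI(B;F) quantify what the fused image carries from
    # each source; MI(A;B) indicates inter-source redundancy.
    return {"MI(A;F)": mutual_information(src_a, fused),
            "MI(B;F)": mutual_information(src_b, fused),
            "MI(A;B)": mutual_information(src_a, src_b)}
```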

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method has no requirements for a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably on heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural details is considered important in many fusion applications, as they help define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective-based fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
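
    The texture-based metric substitutes GLCM statistics for edge terms; the sketch below shows how such second-order features might be extracted with scikit-image. The retention score here is a simple illustrative ratio, not the dissertation's actual metric.

```python
# Sketch: GLCM texture features for fusion evaluation (illustrative;
# the dissertation plugs such features into an objective fusion
# measure in place of edge-based terms).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(img_u8):
    # Second-order statistics from a symmetric, normalized GLCM
    # computed over an 8-bit grayscale image.
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

def texture_retention(source_u8, fused_u8):
    # Crude retention score: fused-to-source contrast ratio
    # (1.0 means textural contrast fully preserved).
    s = texture_features(source_u8)["contrast"]
    f = texture_features(fused_u8)["contrast"]
    return f / max(s, 1e-12)
```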

    Vision Sensors and Edge Detection

    The book Vision Sensors and Edge Detection reflects a selection of recent developments within the area of vision sensors and edge detection. There are two sections in this book. The first section presents vision sensors, with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second presents image processing techniques, such as image measurements, image transformations, filtering, and parallel computing.

    Perceptual modelling for 2D and 3D

    Deliverable D1.1 of the ANR PERSEE project. This report was produced within the framework of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D1.1 of the project.

    A Low-cost Depth Imaging Mobile Platform for Canola Phenotyping

    To meet the high demand for supporting and accelerating progress in the breeding of novel traits, plant scientists and breeders have to measure a large number of plants and their characteristics accurately. A variety of imaging methodologies are being deployed to acquire data for quantitative studies of complex traits. When applied to a large number of plants such as canola plants, however, building a complete three-dimensional (3D) model is time-consuming and expensive for high-throughput phenotyping, producing an enormous amount of data. In some contexts, a full rebuild of entire plants may not be necessary. In recent years, many 3D plant phenotyping techniques requiring high cost and large-scale facilities have been introduced to extract plant phenotypic traits, but these applications may be constrained by limited research budgets and cross environments. This thesis proposed a low-cost depth and high-throughput phenotyping mobile platform to measure canola plant traits in cross environments. Methods included detecting and counting canola branches and seedpods, monitoring canola growth stages, and fusing color images to improve image resolution and achieve higher accuracy. Canola plant traits were examined in both controlled environment and field scenarios. These methodologies were enhanced by different imaging techniques. Results revealed that this phenotyping mobile platform can be used to investigate canola plant traits in cross environments with high accuracy. The results also show that algorithms for counting canola branches and seedpods enable crop researchers to analyze the relationship between canola genotypes and phenotypes and estimate crop yields. In addition to counting algorithms, fusing techniques can give plant breeders more convenient access to plant characteristics by improving the definition and resolution of color images. These findings add value to automated, low-cost depth and high-throughput phenotyping for canola plants. These findings also contribute a novel multi-focus image fusion method, based on visual saliency maps and a gradient-domain fast guided filter, that exhibits competitive performance and outperforms some other state-of-the-art methods. This proposed platform and counting algorithms can be applied not only to canola plants but also to other closely related species. The proposed fusing technique can be extended to other fields, such as remote sensing and medical image fusion.
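
    Multi-focus fusion of this kind rests on a per-pixel focus decision. The sketch below uses a simple local Laplacian-energy focus measure as a stand-in; the thesis' actual pipeline uses visual saliency maps and a gradient-domain fast guided filter, which this toy rule only approximates.

```python
# Sketch: per-pixel multi-focus fusion with a Laplacian-energy focus
# measure (a stand-in for the saliency + guided-filter pipeline).
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_map(img, win=9):
    # Local energy of the Laplacian: higher where the image is sharper.
    return uniform_filter(laplace(img.astype(np.float64)) ** 2, size=win)

def fuse_multifocus(img_a, img_b, win=9):
    # Keep, per pixel, whichever source image is more in focus.
    mask = focus_map(img_a, win) >= focus_map(img_b, win)
    return np.where(mask, img_a, img_b)
```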

    Thermal Cameras and Applications: A Survey


    Color in scientific visualization: Perception and image-based data display

    Visualization is the transformation of information into a visual display that enhances users' understanding and interpretation of the data. This thesis project has investigated the use of color and human vision modeling for visualization of image-based scientific data. Two preliminary psychophysical experiments were first conducted on uniform color patches to analyze the perception and understanding of different color attributes, which provided psychophysical evidence and guidance for the choice of color space/attributes for color encoding. Perceptual color scales were then designed for univariate and bivariate image data display, and their effectiveness was evaluated through three psychophysical experiments. Some general guidelines were derived for effective color scale design. Extending to high-dimensional data, two visualization techniques were developed for hyperspectral imagery. The first approach takes advantage of the underlying relationships between PCA/ICA of hyperspectral images and the human opponent color model, and maps the first three PCs or ICs to several opponent color spaces, including CIELAB, HSV, YCbCr, and YUV. The gray world assumption was adopted to automatically set the mapping origins. The rendered images are well color balanced and can offer a first-look capability or initial classification for a wide variety of spectral scenes. The second approach combines a true color image and a PCA image based on a biologically inspired visual attention model that simulates the center-surround structure of visual receptive fields as the difference between fine and coarse scales. The model was extended to take into account human contrast sensitivity and to include high-level information, such as the second-order statistical structure in the form of a local variance map, in addition to low-level features such as color, luminance, and orientation. It generates a topographic saliency map for both the true color image and the PCA image; a difference map is then derived and used as a mask to select interesting locations where the PCA image has more salient features than available in the visible bands. The resulting representations preserve the consistent natural appearance of the scene, while the selected attentional locations may be analyzed by more advanced algorithms.
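
    The first approach reduces the spectral cube to three channels before color mapping. A minimal sketch of that projection step follows, assuming an SVD-based PCA; the centering and per-channel rescaling stand in for the gray-world origin setting and are illustrative, not the thesis' exact procedure.

```python
# Sketch: project a hyperspectral cube onto its first three principal
# components for color display (illustrative; the thesis maps such
# channels into opponent color spaces such as CIELAB or YCbCr).
import numpy as np

def pca_false_color(cube):
    # cube: (rows, cols, bands) -> (rows, cols, 3) in [0, 1].
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    X -= X.mean(axis=0)                    # center each band
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pcs = (X @ vt[:3].T).reshape(r, c, 3)  # first three PCs as channels
    lo, hi = pcs.min((0, 1)), pcs.max((0, 1))
    return (pcs - lo) / (hi - lo + 1e-12)  # per-channel rescale to [0, 1]
```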

    Assessing the Impact of Game Day Schedule and Opponents on Travel Patterns and Route Choice using Big Data Analytics

    The transportation system is crucial for transferring people and goods from point A to point B. However, its reliability can be decreased by unanticipated congestion resulting from planned special events. For example, sporting events collect large crowds of people at specific venues on game days and disrupt normal traffic patterns. The goal of this study was to understand issues related to road traffic management during major sporting events by using widely available INRIX data to compare travel patterns and behaviors on game days against those on normal days. A comprehensive analysis was conducted on the impact of all Nebraska Cornhuskers football games over five years on traffic congestion on five major routes in Nebraska. We attempted to identify hotspots: unusually high-risk zones of traffic congestion in a spatiotemporal space that occur on almost all game days. For hotspot detection, we utilized a method called Multi-EigenSpot, which is able to detect multiple hotspots in a spatiotemporal space. With this algorithm, we were able to detect traffic hotspot clusters on the five chosen routes in Nebraska. After detecting the hotspots, we identified the factors affecting the sizes of hotspots and other parameters. The start time of the game and the Cornhuskers' opponent for a given game are two important factors affecting the number of people coming to Lincoln, Nebraska, on game days. Finally, the Dynamic Bayesian Networks (DBN) approach was applied to forecast the start times and locations of hotspot clusters in 2018 with a weighted mean absolute percentage error (WMAPE) of 13.8%.
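
    WMAPE, the error measure quoted above, has a standard closed form, sketched below; the example numbers are invented for illustration and are not from the study.

```python
# Sketch: weighted mean absolute percentage error (WMAPE).
import numpy as np

def wmape(actual, forecast):
    # WMAPE = sum(|actual - forecast|) / sum(|actual|)
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.abs(actual - forecast).sum() / np.abs(actual).sum()

print(wmape([100, 250, 400], [90, 260, 350]))  # 0.0933... (about 9.3%)
```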

    X-Ray Image Processing and Visualization for Remote Assistance of Airport Luggage Screeners

    X-ray technology is widely used for airport luggage inspection nowadays. However, the ever-increasing sophistication of threat-concealment measures and types of threats, together with the natural complexity inherent in the contents of each piece of luggage, makes raw x-ray images obtained directly from inspection systems unsuitable for clearly showing various luggage and threat items, particularly low-density objects, which poses a great challenge for airport screeners. This thesis presents efforts spent in improving the rate of threat detection using image processing and visualization technologies. The principles of x-ray imaging for airport luggage inspection and the characteristics of single-energy and dual-energy x-ray data are first introduced. The image processing and visualization algorithms, selected and proposed for improving single-energy and dual-energy x-ray images, are then presented in four categories: (1) gray-level enhancement, (2) image segmentation, (3) pseudo coloring, and (4) image fusion. The major contributions of this research include the identification of optimum combinations of common segmentation and enhancement methods, HSI-based color-coding approaches, and dual-energy image fusion algorithms (spatial-information-based and wavelet-based image fusion). Experimental results generated with these image processing and visualization algorithms are shown and compared. Objective image quality measures are also explored in an effort to reduce the overhead of human subjective assessments and to provide more reliable evaluation results. Two software applications were developed: an x-ray image processing application (XIP) and a wireless tablet PC-based remote supervision system (RSS). In XIP, we implemented the preceding image processing and visualization algorithms in a user-friendly GUI. In RSS, we ported the available image processing and visualization methods to a wireless mobile supervisory station for screener assistance and supervision. Quantitative and on-site qualitative evaluations of various processed and fused x-ray luggage images demonstrate that using the proposed image processing and visualization algorithms constitutes an effective and feasible means of improving airport luggage inspection.
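
    Of the four categories, wavelet-based dual-energy fusion admits a compact sketch. Assuming PyWavelets, the rule below averages approximation coefficients and keeps the larger-magnitude detail coefficients; this max-absolute rule is a common choice, not necessarily the one adopted in the thesis.

```python
# Sketch: wavelet-based fusion of a dual-energy x-ray image pair
# (common max-absolute detail rule; illustrative only).
import numpy as np
import pywt

def wavelet_fuse(low_energy, high_energy, wavelet="db2", level=3):
    ca = pywt.wavedec2(low_energy.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(high_energy.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]        # average the approximations
    for da, db in zip(ca[1:], cb[1:]):     # (H, V, D) detail tuples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```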

    Spectral-spatial Feature Extraction for Hyperspectral Image Classification

    As an emerging technology, hyperspectral imaging provides huge opportunities in both remote sensing and computer vision. The advantage of hyperspectral imaging comes from the high resolution and wide range in the electromagnetic spectral domain, which reflect the intrinsic properties of object materials. By combining spatial and spectral information, it is possible to extract a more comprehensive and discriminative representation for objects of interest than with traditional methods, thus facilitating basic pattern recognition tasks such as object detection, recognition, and classification. With advanced imaging technologies gradually available to universities and industry, there is an increased demand to develop new methods which can fully explore the information embedded in hyperspectral images. In this thesis, three spectral-spatial feature extraction methods are developed for salient object detection, hyperspectral face recognition, and remote sensing image classification. Object detection is an important task for many applications based on hyperspectral imaging. While most traditional methods rely on the pixel-wise spectral response, many recent efforts have been put into extracting spectral-spatial features. In the first approach, we extend Itti's visual saliency model to the spectral domain and introduce a spectral-spatial distribution based saliency model for object detection. This procedure enables the extraction of salient spectral features in the scale space, which are related to the material property and spatial layout of objects. Traditional 2D face recognition has been studied for many years and has achieved great success. Nonetheless, there is high demand to explore unrevealed information beyond the structures and textures in the spatial domain of faces. Hyperspectral imaging meets such requirements by providing additional spectral information on objects, complementing the traditional spatial features extracted from 2D images. In the second approach, we propose a novel 3D high-order texture pattern descriptor for hyperspectral face recognition, which effectively exploits both spatial and spectral features in hyperspectral images. Based on the local derivative pattern, our method encodes hyperspectral faces with multi-directional derivatives and a binarization function in spectral-spatial space. Compared to traditional face recognition methods, our method can describe distinctive micro-patterns which integrate the spatial and spectral information of faces. Mathematical morphology operations are limited to extracting spatial features from two-dimensional data and cannot cope with hyperspectral images due to the so-called ordering problem. In the third approach, we propose a novel multi-dimensional morphology descriptor, the tensor morphology profile (TMP), for hyperspectral image classification. TMP is a general framework to extract multi-dimensional structures in high-dimensional data. The n-order morphology profile is proposed to work with the n-order tensor, which can capture the inner high-order structures. By treating a hyperspectral image as a tensor, it is possible to extend morphology to high-dimensional data so that powerful morphological tools can be used to analyze hyperspectral images with fused spectral-spatial information. Finally, we discuss the sampling strategy for the evaluation of spectral-spatial methods in remote sensing hyperspectral image classification. We find that the traditional pixel-based random sampling strategy for spectral processing will lead to unfair or biased performance evaluation in the spectral-spatial processing context. When training and testing samples are randomly drawn from the same image, the dependence caused by overlap between them may be artificially enhanced by some spatial processing methods. It is hard to determine whether the improvement of classification accuracy is caused by incorporating spatial information into the classifier or by increasing the overlap between training and testing samples. To partially solve this problem, we propose a novel controlled random sampling strategy for spectral-spatial methods. It can significantly reduce the overlap between training and testing samples and provide a more objective and accurate evaluation.
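
    The sampling concern is easy to make concrete: assigning whole spatial blocks, rather than individual pixels, to one split keeps training and testing samples apart. The block-based scheme below illustrates the idea only; it is not the controlled random sampling algorithm proposed in the thesis, and the block size and split ratio are arbitrary.

```python
# Sketch: spatially blocked train/test split for a labeled image,
# illustrating why pixel-wise random sampling inflates spatial overlap
# (not the thesis' actual controlled random sampling algorithm).
import numpy as np

def block_split(labels, block=32, train_frac=0.2, seed=0):
    # labels: (rows, cols) class map, 0 = unlabeled background.
    rng = np.random.default_rng(seed)
    rows, cols = labels.shape
    train = np.zeros(labels.shape, dtype=bool)
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            if rng.random() < train_frac:   # whole block goes to training
                train[r:r + block, c:c + block] = True
    labeled = labels > 0
    return train & labeled, ~train & labeled  # train mask, test mask
```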