285 research outputs found

    Wavelets and Face Recognition

    Get PDF

    Fusion-based impairment modelling for an intelligent radar sensor architecture

    Get PDF
    An intelligent radar sensor concept has been developed using a modelling approach for prediction of sensor performance, based on application of sensor and environment models. Land clutter significantly affects the operation of radar sensors at low-grazing angles. The clutter modelling technique developed in this thesis for the prediction of land clutter forms the clutter model for the intelligent radar sensor. Fusion of remote sensing data is integral to the clutter modelling approach and is addressed by considering fusion of radar remote sensing data, and mitigation of speckle noise and data transmission impairments. The advantages of the intelligent sensor approach for predicting radar performance are demonstrated for several applications using measured data. Predicting site-specific land radar performance is an important task, complicated by the characteristics of the radar sensor, electromagnetic wave propagation, and the environment in which the radar is deployed. Airborne remote sensing data can provide information about the environment and terrain, which can be used to predict land radar performance more accurately. This thesis investigates how fusion of remote sensing data can be used in conjunction with a sensor modelling approach to enable site-specific prediction of land radar performance. The application of a radar sensor model together with a priori information about the environment gives rise to the notion of an intelligent radar sensor that can adapt to dynamically changing environments through intelligent processing of this a priori knowledge. This thesis advances the field of intelligent radar sensor design through an approach based on fusion of a priori knowledge provided by remote sensing data, and application of a modelling approach to enable prediction of radar sensor performance.
Original contributions are made in the areas of intelligent radar sensor development, improved estimation of land surface clutter intensity for site-specific low-grazing angle radar, and fusion and mitigation of sensor and data transmission impairments in radar remote sensing data.
EThOS - Electronic Theses Online Service (United Kingdom)

    Streaming visualisation of quantitative mass spectrometry data based on a novel raw signal decomposition method

    Get PDF
    As data rates rise, there is a danger that informatics for high-throughput LC-MS becomes more opaque and inaccessible to practitioners. It is therefore critical that efficient visualisation tools are available to facilitate quality control, verification, validation, interpretation, and sharing of raw MS data and the results of MS analyses. Currently, MS data is stored as contiguous spectra. Recall of individual spectra is quick, but panoramic views, zooming, and panning across whole datasets necessitate processing and memory overheads that are impractical for interactive use. Moreover, visualisation is challenging if significant quantification data is missing due to data-dependent acquisition of MS/MS spectra. To tackle these issues, we leverage our seaMass technique for novel signal decomposition. LC-MS data is modelled as a 2D surface through selection of a sparse set of weighted B-spline basis functions from an over-complete dictionary. By ordering and spatially partitioning the weights with an R-tree data model, efficient streaming visualisations are achieved. In this paper, we describe the core MS1 visualisation engine and the overlay of MS/MS annotations. This enables the mass spectrometrist to quickly inspect whole runs for ionisation/chromatographic issues, MS/MS precursors for coverage problems, or putative biomarkers for interferences, for example. The open-source software is available from http://seamass.net/viz/
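    The core decomposition idea can be illustrated with a minimal 1D sketch (not the seaMass implementation itself): a signal is approximated by weighted B-spline basis functions from an over-complete dictionary, and only significant weights are kept, giving a sparse representation. All signal shapes, knot positions, and thresholds below are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

# Toy "chromatogram": two Gaussian peaks on a fine axis.
x = np.linspace(0.0, 10.0, 400)
signal = np.exp(-((x - 3.0) ** 2) / 0.1) + 0.5 * np.exp(-((x - 7.0) ** 2) / 0.2)

# Over-complete dictionary of uniform cubic B-spline basis elements.
degree = 3
knots = np.linspace(-1.0, 11.0, 60)
n_basis = len(knots) - degree - 1
basis = np.column_stack([
    BSpline.basis_element(knots[i:i + degree + 2], extrapolate=False)(x)
    for i in range(n_basis)
])
basis = np.nan_to_num(basis)  # basis elements are NaN outside their support

# Least-squares weights, then sparsify by dropping insignificant weights.
weights, *_ = np.linalg.lstsq(basis, signal, rcond=None)
sparse = np.where(np.abs(weights) > 1e-3, weights, 0.0)

recon = basis @ sparse
err = np.max(np.abs(recon - signal))
print(f"{np.count_nonzero(sparse)}/{n_basis} weights kept, max error {err:.4f}")
```

    In seaMass itself the decomposition is 2D over m/z and retention time, and the retained weights are spatially indexed with an R-tree so that panoramas, zooms, and pans stream only the weights visible in the current viewport; the sketch shows only the sparse basis-selection step.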

    Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing

    No full text
    Remote sensing is one of the most common ways to extract relevant information about the Earth and our environment. Remote sensing acquisitions can be made by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. Depending on the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR), and material content (multi- and hyperspectral) of the objects in the image. Considered together, their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), detecting damage (e.g., in natural disasters such as floods, hurricanes, earthquakes, or oil spills at sea), and giving insights into potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropogenic effects (urban sprawl, deforestation), and climate changes (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements and new opportunities offered by the fusion? What were the objectives to be addressed and the reported solutions? And from this, what will be the next challenges?

    Multi-Modal Enhancement Techniques for Visibility Improvement of Digital Images

    Get PDF
    Image enhancement techniques for visibility improvement of 8-bit color digital images based on spatial domain, wavelet transform domain, and multiple image fusion approaches are investigated in this dissertation research. In the category of spatial domain approaches, two enhancement algorithms are developed to deal with problems associated with images captured from scenes with high dynamic ranges. The first technique is based on an illuminance-reflectance (I-R) model of the scene irradiance. The dynamic range compression of the input image is achieved by a nonlinear transformation of the estimated illuminance based on a windowed inverse sigmoid transfer function. A single-scale neighborhood-dependent contrast enhancement process is proposed to enhance the high frequency components of the illuminance, which compensates for the contrast degradation of the mid-tone frequency components caused by dynamic range compression. The intensity image obtained by integrating the enhanced illuminance and the extracted reflectance is then converted to an RGB color image through linear color restoration utilizing the color components of the original image. The second technique, named AINDANE, is a two-step approach comprised of adaptive luminance enhancement and adaptive contrast enhancement. An image-dependent nonlinear transfer function is designed for dynamic range compression, and a multiscale image-dependent neighborhood approach is developed for contrast enhancement. Real-time processing of video streams is realized with the I-R model based technique due to its high speed processing capability, while AINDANE produces higher quality enhanced images due to its multi-scale contrast enhancement property. Both algorithms exhibit balanced luminance, contrast enhancement, higher robustness, and better color consistency when compared with conventional techniques.
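    The illuminance-reflectance pipeline can be sketched in a few lines. This is a simplified illustration, not the dissertation's algorithm: Gaussian smoothing stands in for the illuminance estimator, a square-root curve stands in for the windowed inverse sigmoid transfer function, and the image and parameters are made up.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur used as a simple illuminance estimator.
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def enhance(intensity, sigma=5.0, eps=1e-6):
    # Illuminance-reflectance decomposition: I = L * R.
    illum = gaussian_blur(intensity, sigma)   # estimated illuminance L
    refl = intensity / (illum + eps)          # extracted reflectance R = I / L
    # Nonlinear dynamic range compression of L only; the reflectance
    # (scene detail) passes through unchanged and is recombined.
    compressed = np.sqrt(illum)
    return np.clip(compressed * refl, 0.0, 1.0)

# Toy high-dynamic-range scene: a dark left half next to a bright right half.
img = np.concatenate([np.full((32, 16), 0.05), np.full((32, 16), 0.90)], axis=1)
out = enhance(img)
print(f"dark half mean: {img[:, :16].mean():.2f} -> {out[:, :16].mean():.2f}")
```

    The design point the sketch captures is that compression is applied to the illuminance alone, so shadow regions are lifted while the reflectance-borne detail and relative contrast are preserved.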
In the transform domain approach, wavelet transform based image denoising and contrast enhancement algorithms are developed. The denoising is treated as a maximum a posteriori (MAP) estimation problem; a bivariate probability density function model is introduced to exploit the interlevel dependency among the wavelet coefficients. In addition, an approximate solution to the MAP estimation problem is proposed to avoid the use of complex iterative computations to find a numerical solution. This relatively low complexity image denoising algorithm, implemented with the dual-tree complex wavelet transform (DT-CWT), produces high quality denoised images.
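    A well-known closed-form rule in this family is the Sendur-Selesnick bivariate shrinkage, which shrinks each wavelet coefficient jointly with its parent under a bivariate prior. The sketch below, with made-up coefficient and variance values, illustrates how the interlevel dependency enters the estimate; it is not necessarily the exact approximate MAP estimator derived in the dissertation.

```python
import numpy as np

def bivariate_shrink(w, w_parent, sigma_n, sigma):
    # Closed-form bivariate MAP shrinkage (Sendur-Selesnick form):
    # each coefficient w is shrunk jointly with its parent w_parent,
    # so a strong parent helps a coefficient survive thresholding.
    r = np.sqrt(w**2 + w_parent**2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma, 0.0) / np.maximum(r, 1e-12)
    return gain * w

# Made-up values: noise std sigma_n and (assumed known) signal std sigma.
sigma_n, sigma = 1.0, 2.0
# A strong coefficient with a strong parent is only mildly shrunk...
strong = bivariate_shrink(np.array([5.0]), np.array([4.0]), sigma_n, sigma)
# ...while a weak, noise-like pair is suppressed to zero.
weak = bivariate_shrink(np.array([0.5]), np.array([0.3]), sigma_n, sigma)
print(strong, weak)
```

    In a full denoiser, this rule would be applied to every subband of the (here, DT-CWT) decomposition, with sigma estimated locally from a neighbourhood of each coefficient.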

    Signal processing algorithms for enhanced image fusion performance and assessment

    Get PDF
    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method has no requirement for a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably for heavily corrupted images. The approach is further improved by incorporating the advantages of CP with a state-of-the-art fusion technique based on independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high frequency information of the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural details is considered important in many fusion applications as they help define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications.
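    Why a truncated Chebyshev-polynomial approximation acts as a denoiser can be shown in 1D (a toy sketch, not the dissertation's blockwise image fusion: the signal, noise level, and polynomial degree are all illustrative): projecting a noisy signal onto a low-order Chebyshev basis keeps the smooth structure and discards much of the noise.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
clean = np.sin(2.0 * x) + 0.3 * x**2
noisy = clean + rng.normal(0.0, 0.3, x.size)

# Least-squares fit of a low-order Chebyshev series; the truncation to
# degree 8 is the smoothing/denoising step.
coeffs = C.chebfit(x, noisy, deg=8)
approx = C.chebval(x, coeffs)

rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_approx = np.sqrt(np.mean((approx - clean) ** 2))
print(f"RMSE noisy: {rmse_noisy:.3f}, after CP approximation: {rmse_approx:.3f}")
```

    The same smoothing that removes noise also attenuates genuine high-frequency detail, which is exactly the sharpness limitation the abstract notes and the motivation for pairing CP with ICA fusion.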
Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective-based fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
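    The second-order GLCM statistics underlying such a texture measure can be sketched in pure NumPy. This is a minimal illustration using the standard GLCM contrast feature on toy images, not the proposed fusion-assessment metric itself.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # Gray-level co-occurrence matrix for one offset (dx, dy), normalised
    # to a joint probability table of neighbouring gray-level pairs.
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    # Second-order GLCM contrast feature: sum over (i - j)^2 * p(i, j).
    i, j = np.indices(p.shape)
    return np.sum((i - j) ** 2 * p)

# A checkerboard (strong texture) versus a flat patch (no texture).
checker = np.indices((8, 8)).sum(axis=0) % 2 * 7   # alternating levels 0 and 7
flat = np.full((8, 8), 3)
c_checker = contrast(glcm(checker))
c_flat = contrast(glcm(flat))
print(c_checker, c_flat)
```

    A texture-based fusion metric in this spirit would compute such features on the input and fused images and score how well the fused output preserves them, in place of the edge-strength terms of edge-based metrics.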