
    Image fusion for the detection of camouflaged people

    Thermal imaging offers clear benefits to the Armed Forces and has a large number of applications, including the detection of camouflaged people. Results improve when the thermal information is merged with colour information, which preserves greater detail and therefore yields a greater degree of security. The present study implemented three pixel-level image fusion methods: Principal Component Analysis (PCA), the Laplacian Pyramid, and the Discrete Wavelet Transform. A qualitative analysis concluded that the wavelet-based method performs best, followed by the Laplacian Pyramid and finally PCA. A quantitative analysis used the following performance metrics: Standard Deviation, Entropy, Spatial Frequency, Mutual Information, Fusion Quality Index, and Structural Similarity Index. The values obtained support the conclusions drawn from the qualitative analysis. Mutual Information, the Fusion Quality Index, and the Structural Similarity Index are the most appropriate metrics for measuring the quality of image fusion, as they take into account the relationship between the fused image and the input images.
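    As a concrete illustration of the simplest of the three methods, PCA-based pixel-level fusion weights each input by the components of the dominant eigenvector of the two bands' covariance, and entropy is one of the quality metrics listed above. This is a minimal sketch under assumptions (two co-registered grayscale arrays; function names are illustrative, not from the paper):

```python
import numpy as np

def pca_fuse(a, b):
    """Pixel-level PCA fusion of two registered grayscale images."""
    data = np.stack([a.ravel(), b.ravel()]).astype(float)
    cov = np.cov(data)                      # 2x2 covariance of the two bands
    vals, vecs = np.linalg.eigh(cov)
    pc = np.abs(vecs[:, np.argmax(vals)])   # dominant eigenvector
    w = pc / pc.sum()                       # normalised fusion weights
    return w[0] * a + w[1] * b

def entropy(img, bins=256):
    """Shannon entropy of the intensity histogram, one fusion metric."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```

    Note that fusing an image with itself returns the image unchanged, since the weights then reduce to 0.5 each.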

    Intensity-Based Image Registration Using Robust Correlation Coefficients

    The ordinary sample correlation coefficient is a popular similarity measure for aligning images from the same or similar modalities. However, it can be sensitive to “outlier” objects that appear in one image but not the other, such as surgical instruments or the patient table, which can lead to biased registrations. This paper describes an intensity-based image registration technique that uses a robust correlation coefficient as its similarity measure. Relative to the ordinary sample correlation coefficient, the proposed measure reduces the influence of outliers. We investigated the properties of the proposed method by theoretical analysis, computer simulations, a phantom experiment, and functional magnetic resonance imaging data, and also compared its performance with that of a mutual information-based method. The robust correlation-based method should be useful for image registration in radiotherapy (keV to MeV X-ray images) and image-guided surgery applications.

    Change Detection in Mangrove Forest Area Using Local Mutual Information

    This thesis unveils the potential of similarity measures for forest change detection. A new, simple similarity approach based on local mutual information is used to detect significant changes in images of forest areas. A point similarity measure calculates the similarity of individual pixels; the basic idea of the proposed method is that any changed pixel will be maximally dissimilar, i.e. its similarity value will be low. The method was tested on changes caused by plant growth and plant loss in four sub-areas of the Matang Mangrove Forest, Perak. SPOT 5 satellite images in band 1, band 2, band 3, and band 4, at 10-metre resolution and dated 5 August 2005 and 13 June 2007, were used to test the method, and the results were compared with those of the first principal component (PCA 1). Plant-loss areas were successfully identified as pixels whose local mutual information is less than or equal to zero. The method was refined to accurately detect changes caused by growth by thresholding the histogram of the average percentage difference between the joint probability and the marginal probabilities. The experiments showed that a threshold value of zero best separates changed from unchanged areas in all cases, and that band 3 gives the best change-detection results overall. Compared with image differencing and the normalised difference vegetation index (NDVI), the proposed method not only avoids the problem of selecting a threshold value but also provides the highest percentage of successful classification in the fourth, second, and first study areas, with values of 95.07%, 89.47%, and 87.66% respectively.
From these results, it is concluded that local mutual information can be used effectively not only for change detection but also to classify plant growth and plant loss areas.
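    The per-pixel quantity underlying this approach is pointwise mutual information, lmi = log(p(a,b) / (p(a)p(b))), estimated from the joint histogram of the two dates; changed pixels land in rare joint-histogram cells and score low. A minimal sketch under assumptions (global quantisation, no spatial windowing; the thesis's exact estimator may differ):

```python
import numpy as np

def local_mi(img1, img2, bins=32):
    """Pointwise mutual information image:
    lmi_i = log( p(a_i, b_i) / (p(a_i) * p(b_i)) ).
    Changed pixels fall in rare joint cells, so their lmi is low."""
    def quantize(img):
        z = (img - img.min()) / (np.ptp(img) + 1e-12)
        return (z * (bins - 1)).astype(int).ravel()
    a, b = quantize(img1), quantize(img2)
    joint = np.zeros((bins, bins))
    np.add.at(joint, (a, b), 1)          # joint histogram of the two dates
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    lmi = np.log(joint[a, b] / (pa[a] * pb[b]))
    return lmi.reshape(img1.shape)
```

    Thresholding this map at zero, as the thesis does for plant loss, flags pixels whose joint occurrence is no more likely than chance.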

    Efficient Image Registration using Fast Principal Component Analysis

    Incorporating spatial features into mutual information (MI) has demonstrated superior image registration performance compared with traditional MI-based methods, particularly in the presence of noise and intensity non-uniformities (INU). This paper presents a new efficient MI-based similarity measure which applies Expectation Maximisation for Principal Component Analysis (EMPCA-MI) to afford significantly lower computational complexity, while providing image registration performance analogous to other feature-based MI solutions. Experimental analysis corroborates both the improved robustness and the faster runtimes of EMPCA-MI on test datasets containing both INU and noise artefacts.
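    The efficiency gain comes from estimating the first principal component of local neighbourhood features by EM iteration rather than a full covariance eigendecomposition, then taking MI between the resulting score images. The sketch below illustrates that pipeline under assumptions (3x3 neighbourhoods, Roweis-style EM for one component; details such as feature windows and bin counts are illustrative, not the paper's exact configuration):

```python
import numpy as np

def empca_first_pc(Y, iters=30, seed=0):
    """Roweis-style EM for the first principal component of the data
    matrix Y (features x samples); avoids forming and decomposing the
    full covariance matrix."""
    rng = np.random.default_rng(seed)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    w = rng.standard_normal(Y.shape[0])
    for _ in range(iters):
        x = w @ Yc / (w @ w)   # E-step: scores given current direction
        w = Yc @ x / (x @ x)   # M-step: refit direction given scores
    w /= np.linalg.norm(w)
    return w, w @ Yc           # unit direction, per-sample scores

def patches3x3(img):
    """Stack each interior pixel's 3x3 neighbourhood as a 9-vector."""
    h, wd = img.shape
    return np.stack([img[i:h - 2 + i, j:wd - 2 + j].ravel()
                     for i in range(3) for j in range(3)])

def mutual_info(a, b, bins=32):
    h, _, _ = np.histogram2d(a, b, bins=bins)
    p = h / h.sum()
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

def empca_mi(img1, img2):
    """Similarity: MI between the first-PC score images of the two
    inputs' 3x3 neighbourhood features."""
    _, f1 = empca_first_pc(patches3x3(img1))
    _, f2 = empca_first_pc(patches3x3(img2))
    return mutual_info(f1, f2)
```

    Each EM sweep costs only matrix-vector products in the data, which is where the lower computational complexity relative to covariance-based PCA comes from.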

    Enhanced retinal image registration accuracy using expectation maximisation and variable bin-sized mutual information

    While retinal images (RI) assist in the diagnosis of various eye conditions and diseases such as glaucoma and diabetic retinopathy, their innate features, including low-contrast homogeneous regions and non-uniformly illuminated regions, present a particular challenge for retinal image registration (RIR). Recently, the hybrid similarity measure Expectation Maximisation for Principal Component Analysis with Mutual Information (EMPCA-MI) has been proposed for RIR. This paper investigates incorporating various fixed and adaptive bin-size selection strategies to estimate the probability distribution in the mutual information (MI) stage of EMPCA-MI, and analyses their corresponding effect upon RIR performance. Experimental results using a clinical mono-modal RI dataset confirm that adaptive bin-size selection consistently provides both lower RIR errors and superior robustness compared to empirically determined fixed bin sizes.
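    One widely used adaptive bin-size strategy of the kind this paper evaluates is the Freedman–Diaconis rule, which sets the bin width from the data's interquartile range and sample count rather than a fixed constant. A minimal sketch (the paper's specific strategies are not stated here, so this rule serves only as an example):

```python
import numpy as np

def fd_bins(x):
    """Freedman-Diaconis rule: bin width = 2 * IQR * n^(-1/3), one
    common adaptive bin-size strategy."""
    x = np.asarray(x, float).ravel()
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    width = 2.0 * iqr * len(x) ** (-1.0 / 3.0)
    if width <= 0:
        return 1
    return max(1, int(np.ceil(np.ptp(x) / width)))

def adaptive_mi(a, b):
    """Mutual information with a per-image adaptive bin count."""
    a, b = np.ravel(a), np.ravel(b)
    h, _, _ = np.histogram2d(a, b, bins=(fd_bins(a), fd_bins(b)))
    p = h / h.sum()
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())
```

    Because the bin count adapts to each image's intensity spread, the MI estimate does not have to be re-tuned empirically for images with differing contrast.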

    Multimodal retinal image registration using a fast principal component analysis hybrid-based similarity measure

    Multimodal retinal images (RI) are extensively used for analysing various eye diseases and conditions such as myopia and diabetic retinopathy. Incorporating two or more RI modalities provides complementary structural information in the presence of non-uniform illumination and low-contrast homogeneous regions, but also presents significant challenges for retinal image registration (RIR). This paper investigates how the Expectation Maximisation for Principal Component Analysis with Mutual Information (EMPCA-MI) algorithm can effectively achieve multimodal RIR. This iterative hybrid similarity measure combines spatial features with mutual information to provide enhanced registration without recourse to either segmentation or feature extraction. Experimental results for clinical multimodal RI datasets comprising colour fundus and scanning laser ophthalmoscope images confirm that EMPCA-MI consistently affords superior numerical and qualitative registration performance compared with existing RIR techniques, such as the bifurcation structures method.

    Methods for multi-spectral image fusion: identifying stable and repeatable information across the visible and infrared spectra

    Fusion of images captured from different viewpoints is a well-known challenge in computer vision with many established approaches and applications; however, if the observations are captured by sensors also separated by wavelength, this challenge is compounded significantly. This dissertation presents an investigation into the fusion of visible and thermal image information from two front-facing sensors mounted side-by-side. The primary focus of this work is the development of methods that enable us to map and overlay multi-spectral information; the goal is to establish a combined image in which each pixel contains both colour and thermal information. Pixel-level fusion of these distinct modalities is approached using computational stereo methods; the focus is on the viewpoint alignment and correspondence search/matching stages of processing. Frequency domain analysis is performed using a method called phase congruency. An extensive investigation of this method is carried out with two major objectives: to identify predictable relationships between the elements extracted from each modality, and to establish a stable representation of the common information captured by both sensors. Phase congruency is shown to be a stable edge detector and repeatable spatial similarity measure for multi-spectral information; this result forms the basis for the methods developed in the subsequent chapters of this work. The feasibility of automatic alignment with sparse feature-correspondence methods is investigated. It is found that conventional methods fail to match inter-spectrum correspondences, motivating the development of an edge orientation histogram (EOH) descriptor which incorporates elements of the phase congruency process. A cost function, which incorporates the outputs of the phase congruency process and the mutual information similarity measure, is developed for computational stereo correspondence matching. 
An evaluation of the proposed cost function shows it to be an effective similarity measure for multi-spectral information.
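    The edge orientation histogram (EOH) idea rests on the observation that edge orientation, unlike intensity polarity, tends to survive between the visible and thermal modalities. The sketch below is a generic magnitude-weighted orientation histogram, not the dissertation's exact descriptor (window scheme and names are illustrative):

```python
import numpy as np

def eoh_descriptor(img, x, y, radius=8, nbins=8):
    """Edge orientation histogram around (x, y). Angles are folded into
    [0, pi): a dark-to-light edge in one modality may appear
    light-to-dark in the other, so polarity is discarded."""
    patch = img[y - radius:y + radius, x - radius:x + radius].astype(float)
    gy, gx = np.gradient(patch)             # gradients along rows, cols
    mag = np.hypot(gx, gy)                  # gradient-magnitude weights
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=nbins, range=(0, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist
```

    Matching such descriptors across spectra then compares the distribution of edge directions, which both sensors observe, rather than raw intensities, which they do not share.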

    Towards Identification of Relevant Variables in the observed Aerosol Optical Depth Bias between MODIS and AERONET observations

    Measurements made by the satellite-borne Moderate Resolution Imaging Spectroradiometer (MODIS) and by the globally distributed Aerosol Robotic Network (AERONET) are compared. Comparison of the aerosol optical depth values from the two datasets shows that there are biases between the two data products. In this paper, we present a general framework for identifying the set of variables responsible for the observed bias. Candidate factors are associated with the measurement conditions, such as the solar and sensor zenith angles, the solar and sensor azimuth angles, the scattering angles, and the surface reflectivity at the various measured wavelengths. Specifically, we analysed the remote sensing Aqua-Land dataset and used a machine learning technique, in this case a neural network, to perform multivariate regression between the ground truth and the training data sets. Finally, we used the mutual information between the observed and the predicted values as the measure of similarity to identify the most relevant set of variables. The search is a brute-force method, since all possible combinations of variables must be considered; the computation is a substantial number-crunching exercise, which we implemented as a job-parallel program.
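    The selection loop described above, fit a model on each variable subset and score it by the mutual information between observed and predicted values, can be sketched as follows. To keep the sketch self-contained, a linear least-squares fit stands in for the paper's neural network, and all names are illustrative:

```python
import itertools
import numpy as np

def mi_score(obs, pred, bins=16):
    """MI between observed and predicted values, the similarity measure
    used to rank variable subsets."""
    h, _, _ = np.histogram2d(obs, pred, bins=bins)
    p = h / h.sum()
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

def best_subset(X, y, names):
    """Brute-force search over all non-empty variable subsets; a linear
    least-squares fit stands in for the paper's neural network."""
    best_score, best_vars = -np.inf, None
    for k in range(1, X.shape[1] + 1):
        for idx in itertools.combinations(range(X.shape[1]), k):
            A = np.column_stack([X[:, idx], np.ones(len(y))])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            score = mi_score(y, A @ coef)
            if score > best_score:   # strict: keep the smallest tied subset
                best_score, best_vars = score, [names[i] for i in idx]
    return best_score, best_vars
```

    Since every one of the 2^d - 1 subsets is fitted and scored independently, the outer loop parallelises trivially, which is why a job-parallel implementation is the natural fit for this search.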