
    Accelerated genetic algorithm based on search-space decomposition for change detection in remote sensing images

    Detecting changed areas between two or more remote sensing images is a key technique in remote sensing. It usually consists of generating and analyzing a difference image to produce a change map. Analyzing the difference image to obtain the change map is essentially a binary classification problem and can be solved by optimization algorithms. This paper proposes an accelerated genetic algorithm based on search-space decomposition (SD-aGA) for change detection in remote sensing images. Firstly, the BM3D algorithm is used to preprocess the remote sensing images to enhance useful information and suppress noise. The difference image is then obtained using the logarithmic ratio method. Secondly, after saliency detection, the fuzzy c-means algorithm is applied to the salient region of the difference image to identify changed, unchanged and undetermined pixels. Only the undetermined pixels are considered by the optimization algorithm, which reduces the search space significantly. Inspired by the divide-and-conquer strategy, the difference image is decomposed into sub-blocks with a method similar to down-sampling, and the undetermined pixels in each sub-block are analyzed and optimized by SD-aGA in parallel. The category labels of the undetermined pixels in each sub-block are optimized according to an improved objective function that incorporates neighborhood information. Finally, the category labels of all the pixels in the sub-blocks are remapped to their original positions in the difference image and merged globally. Decision fusion is conducted on each pixel based on the decision results in its local neighborhood to produce the final change map. The proposed method is tested on six diverse remote sensing image benchmark datasets and compared against six state-of-the-art methods. Segmentations of a synthetic image and a natural image corrupted by different types of noise are also carried out for comparison. Results demonstrate the excellent performance of the proposed SD-aGA in handling noise and detecting changed areas accurately. In particular, compared with the traditional genetic algorithm, SD-aGA achieves much higher detection accuracy with much less computational time.
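    A minimal sketch of the front end of this pipeline, assuming grayscale images that are already co-registered: a logarithmic-ratio difference image followed by a plain three-class fuzzy c-means labelling into unchanged, undetermined and changed pixels. The 1-D FCM implementation and the membership thresholds `low`/`high` are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Logarithmic-ratio difference image of two co-registered images."""
    return np.abs(np.log((img1 + eps) / (img2 + eps)))

def fuzzy_cmeans_1d(values, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on 1-D intensities; returns memberships and centres."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                                   # normalise memberships per pixel
    for _ in range(iters):
        um = u ** m
        centres = (um @ values) / um.sum(axis=1)         # weighted cluster centres
        dist = np.abs(values[None, :] - centres[:, None]) + 1e-12
        new_u = 1.0 / (dist ** (2 / (m - 1)))
        new_u /= new_u.sum(axis=0)
        if np.max(np.abs(new_u - u)) < tol:
            u = new_u
            break
        u = new_u
    return u, centres

def three_way_labels(diff, low=0.4, high=0.6):
    """Label pixels as unchanged (0), undetermined (1) or changed (2)."""
    u, centres = fuzzy_cmeans_1d(diff.ravel().astype(float))
    order = np.argsort(centres)                          # dark -> bright cluster order
    changed_membership = u[order[-1]]                    # membership in brightest cluster
    labels = np.full(diff.size, 1, dtype=np.uint8)       # undetermined by default
    labels[changed_membership >= high] = 2               # confidently changed
    labels[changed_membership <= low] = 0                # confidently unchanged
    return labels.reshape(diff.shape)
```

    Only the pixels left as undetermined would then be handed to the genetic algorithm for optimization.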

    Change detection in SAR images based on the salient map guidance and an accelerated genetic algorithm

    This paper proposes a change detection algorithm for synthetic aperture radar (SAR) images based on salient map guidance and an accelerated genetic algorithm (S-aGA). The difference image is first generated by the logarithm ratio operator from bi-temporal SAR images acquired over the same region. A saliency detection model is then applied to the difference image to extract the salient regions containing the changed-class pixels. The salient regions are further divided by the fuzzy c-means (FCM) clustering algorithm into three categories: changed class (pixels with high gray values), unchanged class (pixels with low gray values) and undetermined class (pixels with intermediate gray values, which are difficult to classify). Finally, the proposed accelerated GA is applied to explore the reduced search space formed by the undetermined-class pixels according to an objective function that considers neighborhood information. In S-aGA, an efficient mutation operator is designed that uses the neighborhood information of undetermined-class pixels as heuristic information to determine the mutation probability of each undetermined-class pixel adaptively, which accelerates the convergence of the GA significantly. Experimental results on two datasets demonstrate the efficiency of the proposed S-aGA. Overall, S-aGA outperforms five existing methods, including the simple GA, in terms of detection accuracy. In addition, S-aGA obtains a satisfactory solution within a limited number of generations, converging much faster than the simple GA.
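    The adaptive mutation idea can be sketched as below, assuming a binary label image (0 = unchanged, 1 = changed), a 3x3 neighbourhood as the heuristic information, and made-up probability bounds `base_p`/`max_p`; this illustrates the concept rather than reproducing the S-aGA operator itself.

```python
import numpy as np

def adaptive_mutation(labels, undetermined_mask, rng, base_p=0.02, max_p=0.5):
    """Flip undetermined pixels with a probability that grows with the
    disagreement between the pixel's label and its 3x3 neighbourhood."""
    padded = np.pad(labels, 1, mode='edge')
    out = labels.copy()
    for y, x in zip(*np.nonzero(undetermined_mask)):
        window = padded[y:y + 3, x:x + 3]                # 3x3 window centred on (y, x)
        changed_frac = (window == 1).sum() / 9.0         # fraction of "changed" neighbours
        # disagreement: how far this pixel's label is from the local consensus
        disagreement = abs(labels[y, x] - changed_frac)
        p = base_p + (max_p - base_p) * disagreement
        if rng.random() < p:
            out[y, x] = 1 - labels[y, x]                 # flip the binary label
    return out
```

    Pixels whose labels already agree with their neighbourhood mutate rarely, while inconsistent pixels mutate often, which is what drives the faster convergence described above.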

    Advances in Motion Estimators for Applications in Computer Vision

    Motion estimation is a core task in computer vision, and many applications use optical flow methods as fundamental tools to analyze motion in images and videos. Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. Today, optical flow fields are used to solve problems in areas such as object detection and tracking, interpolation, and visual odometry. In this dissertation, three problems from different areas of computer vision and the solutions that make use of modified optical flow methods are explained. The contributions of this dissertation are approaches and frameworks that introduce i) a new optical flow-based interpolation method to achieve minimally divergent velocimetry data, ii) a framework that improves the accuracy of change detection algorithms in synthetic aperture radar (SAR) images, and iii) a set of new methods to integrate proton magnetic resonance spectroscopy (1H-MRSI) data into three-dimensional (3D) neuronavigation systems for tumor biopsies. In the first application, an optical flow-based approach for the interpolation of minimally divergent velocimetry data is proposed. The velocimetry data of incompressible fluids contain signals that describe the flow velocity; the approach uses this additional velocity information to guide the interpolation towards reduced divergence in the interpolated data. In the second application, a framework consisting mainly of optical flow methods and other image processing and computer vision techniques is proposed to improve object extraction from SAR images. The framework distinguishes between actual motion and motion detected due to misregistration in SAR image sets, leading to more accurate and meaningful change detection and improved object extraction from SAR datasets. In the third application, a set of new methods is proposed that aims to improve upon the current state of the art in neuronavigation through the use of detailed three-dimensional (3D) 1H-MRSI data. The result is a progressive form of online MRSI-guided neuronavigation that is demonstrated through phantom validation and clinical application. (Doctoral Dissertation, Electrical Engineering, 201)
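    As a small aside on the first contribution, the divergence that the interpolation tries to minimise can be measured with finite differences; the sketch below is a generic diagnostic, not the dissertation's method, and `some_interpolator` in the usage comment is hypothetical.

```python
import numpy as np

def divergence_2d(u, v, dx=1.0, dy=1.0):
    """Finite-difference divergence of a 2-D velocity field (u, v).

    For incompressible flow the divergence should be near zero, so this
    quantity can score how well an interpolated field respects the physics."""
    du_dx = np.gradient(u, dx, axis=1)      # d(u)/dx along columns
    dv_dy = np.gradient(v, dy, axis=0)      # d(v)/dy along rows
    return du_dx + dv_dy

# Example usage: mean absolute divergence as a quality score.
# u_interp, v_interp = some_interpolator(sparse_samples)   # hypothetical interpolator
# score = np.mean(np.abs(divergence_2d(u_interp, v_interp)))
```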

    Comparative model for classification of forest degradation

    The challenges of forest degradation, together with its related effects, have attracted research from diverse disciplines, resulting in different definitions of the concept. According to a number of researchers, however, the central element of this issue is human intrusion that destroys the state of the environment. The focus of this research is therefore to develop a comparative model using a large amount of multi-spectral remote sensing data, such as IKONOS, QUICKBIRD, SPOT, WORLDVIEW-1, TerraSAR-X, and fused data, to detect forest degradation in Cameron Highlands. The output of this method is aligned with a performance measurement model in order to identify the best data, fused data and technique to be employed. Eleven techniques were used to develop the Comparative technique by applying them to fifteen sets of data. The output of the Comparative technique was used to feed the Performance Measurement Model in order to enhance the accuracy of each classification technique; the Performance Measurement Model was in turn used to verify the results of the Comparative technique, and these outputs were validated using a reflectance library. In addition, the conceptual hybrid model proposed in this research will give researchers the opportunity to establish a fully automatic intelligent model in future work. The results demonstrated the Neural Network (NN) to be the best Intelligent Technique (IT), with a Kappa coefficient of 0.912 and an overall accuracy of 96%; Mahalanobis distance achieved a Kappa coefficient of 0.795 and an overall accuracy of 88%, and Maximum Likelihood (ML) a Kappa coefficient of 0.598 and an overall accuracy of 72%. These results were obtained on the best fused image used in this research, produced by fusing the IKONOS image with the QUICKBIRD image, as finally employed in the Comparative technique for improving the detectability of forest change.
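    The reported figures (overall accuracy and the Kappa coefficient) are standard measures computed from a classification confusion matrix; a minimal sketch follows, using a purely hypothetical 3-class matrix for the usage example.

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy from a confusion matrix (rows: reference, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def kappa_coefficient(cm):
    """Cohen's Kappa: observed agreement corrected for chance agreement."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical 3-class confusion matrix, just to show usage:
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 44]]
print(overall_accuracy(cm), kappa_coefficient(cm))
```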

    Computational Techniques of Oil Spill Detection in Synthetic Aperture Radar Data: Review Cases

    This chapter addresses a major role of environmental assessment: the identification and detection of oil spills on coastal surfaces and in marine surroundings. Oil spills in coastal regions typically affect the characteristics of environmental activities there, and these activities are monitored through several radar satellites and sensors. Many researchers have developed approaches for detecting and identifying such events. In particular, this chapter discusses the detection of oil spills and their current operational effects on coastal region surfaces. In addition, current research on oil spill characterization, the quality of its impacts, the effects on environmental bio-systems, control and measurement strategies, and surveillance operations is discussed. Finally, oil spill detection is performed through SAR image region classification based on feature extraction; candidate spills can be monitored by selecting dark regions in the image using remote sensing techniques.
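    The dark-region selection step mentioned above can be sketched as a simple global threshold on a calibrated SAR image in dB; this is a generic illustration rather than a method from the chapter, and the factor `k` is an assumed parameter.

```python
import numpy as np

def dark_region_mask(sar_db, k=1.5):
    """Flag dark regions (oil spill candidates) in a calibrated SAR image.

    Pixels darker than mean - k*std (in dB) are selected; `k` is illustrative."""
    threshold = sar_db.mean() - k * sar_db.std()
    return sar_db < threshold

# Features such as area, perimeter or mean contrast of each dark region could
# then feed a classifier to separate oil spills from look-alikes.
```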

    Artificial Neural Networks and Evolutionary Computation in Remote Sensing

    Artificial neural networks (ANNs) and evolutionary computation methods have been applied successfully in remote sensing because they offer unique advantages for the analysis of remotely sensed images. ANNs are effective in finding underlying relationships and structures within multidimensional datasets. Thanks to new sensors, images now have more spectral bands at higher spatial resolutions, which clearly poses big data problems, and evolutionary algorithms provide effective tools for analysing them. This book includes eleven high-quality papers, selected after a careful reviewing process, addressing current remote sensing problems. In the chapters of the book, superstructural optimization was suggested for the optimal design of feedforward neural networks, CNNs were deployed on a nanosatellite payload to select images eligible for transmission to ground, a new weight feature value convolutional neural network (WFCNN) was applied to fine remote sensing image segmentation and the extraction of improved land-use information, mask region-based convolutional neural networks (Mask R-CNN) were employed for extracting valley-fill faces, state-of-the-art convolutional neural network (CNN)-based object detection models were applied to automatically detect airplanes and ships in VHR satellite images, a coarse-to-fine detection strategy was employed to detect ships of different sizes, and a deep quadruplet network (DQN) was proposed for hyperspectral image classification.

    The use of contextual techniques and textural analysis of satellite imagery in geological studies of arid regions

    This thesis examines the problem of extracting spatial information (context and texture) of use to the geologist from satellite imagery. Part of the Arabian Shield was chosen as the study area. Two new contextual techniques, (a) the Ripping Membrane and (b) the Rolling Ball, were developed and examined in this study. Both proved to be excellent tools for the visual detection and analysis of lineaments, and were clearly better than the 'traditional' spatial filtration technique. This study revealed structural lineaments, mostly mapped for the first time, which are clearly related to the regional tectonic history of the area. The contextual techniques were also used to perform image segmentation. Two different image segmentation methods were developed and examined: automatic watershed segmentation and a ripping membrane/Laserscan system method (the latter used here for the first time). The second method produced high-accuracy results for four selected test sites. A new automatic lineament extraction method using the above contextual techniques was developed, with the aim of producing an automatic lineament map and the azimuth direction of these lineaments in each rock type, as defined by the segmented regions. 75-85% of the visually traced lineaments were extracted by the automatic method, which appears to give a dominant trend slightly different (10°-15°) from the visually determined trend. It was demonstrated that not all rock types could be discriminated using spectral image enhancement techniques (band ratio, principal components and decorrelation stretch). Therefore, the spatial grey level dependency matrix (SGLDM) was used to produce texture feature images intended to make these distinctions and overcome the limitations of the spectral enhancement techniques. The SGLDM did not produce texture features capable of discriminating between every rock type in the selected test sites; it did, however, show acceptable texture discrimination between some rock types. The remote sensing data examined in this thesis were Landsat (Multispectral Scanner and Thematic Mapper), SPOT, and Shuttle Imaging Radar (SIR-B) imagery.
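    SGLDM texture features of the kind described above are today usually computed as grey-level co-occurrence matrix (GLCM) statistics; the sketch below uses scikit-image, and the offsets, angles and chosen statistics are illustrative assumptions rather than the thesis settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def sgldm_features(gray_uint8, distances=(1,), angles=(0, np.pi / 2)):
    """Texture features from the spatial grey level dependency matrix (GLCM).

    `gray_uint8` is assumed to be a single-band image quantised to 8 bits."""
    glcm = graycomatrix(gray_uint8, distances=list(distances), angles=list(angles),
                        levels=256, symmetric=True, normed=True)
    # Average each statistic over the chosen offsets and angles.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
```

    Computing these statistics over a sliding window would yield the per-pixel texture feature images described in the abstract.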

    BagStack Classification for Data Imbalance Problems with Application to Defect Detection and Labeling in Semiconductor Units

    Despite the fact that machine learning supports the development of computer vision applications by shortening the development cycle, finding a general learning algorithm that solves a wide range of applications is still bounded by the "no free lunch" theorem. The search for the right algorithm to solve a specific problem is driven by the problem itself, data availability and many other requirements. Automated visual inspection (AVI) systems represent a major part of these challenging computer vision applications and are gaining growing interest in the manufacturing industry as a way to detect defective products and keep them from reaching customers. The process of defect detection and classification in semiconductor units is challenging because of the acceptable variations that the manufacturing process introduces. Further variations are typically introduced by optical inspection systems through changes in lighting conditions and misalignment of the imaged units, which makes defect detection more challenging still. In this thesis, a BagStack classification framework is proposed, which makes use of stacking and bagging concepts to handle both variance and bias errors. The classifier is designed to handle the data imbalance and overfitting problems by adaptively transforming the multi-class classification problem into multiple binary classification problems, applying a bagging approach to train a set of base learners for each binary problem, adaptively specifying the number of base learners assigned to each problem and the number of samples to use from each class, applying a novel data-imbalance-aware cross-validation technique to generate the meta-data while accounting for the imbalance at the meta-data level, and finally using a multi-response random forest regression model as a meta-classifier. The BagStack classifier makes use of multiple features to solve the defect classification problem. To detect defects, a locally adaptive statistical background modeling approach is proposed. The proposed BagStack classifier outperforms state-of-the-art image classification techniques on our dataset in terms of overall classification accuracy and average per-class classification accuracy, and the proposed detection method achieves high performance on the considered dataset in terms of recall and precision. (Doctoral Dissertation, Computer Engineering, 201)
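    A minimal sketch of the BagStack structure, assuming scikit-learn as the toolkit: decompose the multi-class problem into per-class binary problems, bag a set of base learners for each, and train a multi-output random forest regressor on out-of-fold base predictions as the meta-model. The adaptive learner counts, per-class sampling and imbalance-aware cross-validation described above are omitted, so this is an outline of the idea, not the thesis implementation.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

def fit_bagstack(X, y, n_classes, n_estimators=25):
    base_models, meta_features = [], []
    for c in range(n_classes):
        y_bin = (y == c).astype(int)                     # one binary problem per class
        bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_estimators)
        # Out-of-fold probabilities become this class's meta-feature column.
        oof = cross_val_predict(bag, X, y_bin, cv=5, method='predict_proba')[:, 1]
        meta_features.append(oof)
        base_models.append(bag.fit(X, y_bin))
    meta_X = np.column_stack(meta_features)
    meta_y = np.eye(n_classes)[y]                        # one-hot targets for regression
    meta_model = RandomForestRegressor(n_estimators=200).fit(meta_X, meta_y)
    return base_models, meta_model

def predict_bagstack(base_models, meta_model, X):
    meta_X = np.column_stack([m.predict_proba(X)[:, 1] for m in base_models])
    return meta_model.predict(meta_X).argmax(axis=1)     # class with highest response
```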

    Automatic analysis of medical images for change detection in prostate cancer

    Prostate cancer is the most common cancer and the second most common cause of cancer death in men in the UK. However, patient risk from the cancer can vary considerably, and the widespread use of prostate-specific antigen (PSA) screening has led to over-diagnosis and over-treatment of low-grade tumours. It is therefore important to be able to differentiate high-grade prostate cancer from slowly-growing, low-grade cancer. Many men with low-grade cancer are placed on active surveillance (AS), which involves constant monitoring and intervention for risk reclassification, relying increasingly on magnetic resonance imaging (MRI) to detect disease progression in addition to transrectal ultrasound (TRUS)-guided biopsies, the routine clinical standard. This creates a need for new tools to process these images. For this purpose, it is important to have a good TRUS-MR registration so that corresponding anatomy can be located accurately across the two modalities. Automatic segmentation of the prostate gland in both modalities reduces some of the challenges of the registration, such as patient motion, tissue deformation, and the duration of the procedure. This thesis focuses on the use of deep learning methods, specifically convolutional neural networks (CNNs), for prostate cancer management. Chapters 4 and 5 investigate the use of CNNs for both TRUS and MRI prostate gland segmentation and report high segmentation accuracies for both: Dice Similarity Coefficients (DSC) of 0.89 for TRUS segmentation and DSCs between 0.84 and 0.89 for MRI prostate gland segmentation using a range of networks. Chapter 5 also investigates the impact of these segmentation scores on more clinically relevant measures, such as MRI-TRUS registration errors and volume measures, showing that a statistically significant difference in DSCs did not lead to a statistically significant difference in the clinical measures derived from these segmentations. The potential of these algorithms in commercial and clinical systems is summarised, and the use of the MRI prostate gland segmentation for radiological prostate cancer progression prediction in AS patients is investigated and discussed in Chapter 8, which shows statistically significant improvements in accuracy when using spatial priors in the form of prostate segmentations (0.63 ± 0.16 vs. 0.82 ± 0.18 when comparing whole-prostate MRI vs. only the prostate gland region, respectively).
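    For reference, the DSC values quoted above are overlap scores between binary segmentation masks; a minimal sketch, with the mask arrays assumed to be boolean numpy arrays of the same shape:

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice score between a predicted and a reference binary segmentation.

    Returns 2*|A ∩ B| / (|A| + |B|); eps avoids division by zero on empty masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```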

    Deep Learning Methods for Remote Sensing

    Remote sensing is a field in which important physical characteristics of an area are extracted from reflected or emitted radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, and similar platforms. The captured data help researchers develop solutions for sensing and detecting characteristics such as forest fires, flooding, changes in urban areas, crop diseases, and soil moisture. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches and led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.