
    A CNN based hybrid approach towards automatic image registration

    Image registration is a key component of spatial analyses that involve different data sets of the same area. Automatic approaches in this domain have seen the application of several intelligent methodologies over the past decade; however, the accuracy of these approaches has been limited by the inability to properly model shape as well as contextual information. In this paper, we investigate the possibility of an evolutionary-computing-based framework for automatic image registration. The Cellular Neural Network (CNN) has been found effective in improving both the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. A CNN-Prolog based approach has been adopted to dynamically use spectral and spatial information for representing contextual knowledge. The salient features of this work are feature point optimisation, adaptive resampling and intelligent object modelling. Investigations over various satellite images revealed that considerable success has been achieved with the procedure. The methodology also proved effective in providing intelligent interpretation and adaptive resampling.
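    The resampling stage mentioned above ultimately reduces to estimating and applying a geometric transform from matched control points. A minimal, library-free sketch of that sub-step for the affine case, with made-up point pairs (this illustrates the general registration step, not the paper's CNN pipeline):

```python
# Estimate a 2D affine transform from three matched control-point
# pairs, as a registration/resampling step might do after feature
# matching. Point pairs below are illustrative.

def solve3(a, b):
    """Solve a 3x3 linear system a @ x = b by Gaussian elimination."""
    m = [row[:] + [v] for row, v in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    x = [0.0] * 3
    for i in reversed(range(3)):
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return x

def estimate_affine(src, dst):
    """Return (a, b, c, d, e, f) with x' = a*x + b*y + c, y' = d*x + e*y + f."""
    A = [[x, y, 1.0] for x, y in src]
    abc = solve3(A, [x for x, _ in dst])
    def_ = solve3(A, [y for _, y in dst])
    return tuple(abc + def_)

# A pure translation by (+5, -2) should be recovered exactly.
params = estimate_affine([(0, 0), (10, 0), (0, 10)],
                         [(5, -2), (15, -2), (5, 8)])
```

    In practice many more matched points are used with a least-squares fit, but the three-point case shows the structure of the problem.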

    Clearing the Clouds: Extracting 3D information from amongst the noise

    Advancements permitting the rapid extraction of 3D point clouds from a variety of imaging modalities across the global landscape have provided a vast collection of high-fidelity digital surface models. This has created a situation with an unprecedented overabundance of 3D observations which greatly outstrips our current capacity to manage and infer actionable information. While years of research have removed some of the manual analysis burden for many tasks, human analysis is still a cornerstone of 3D scene exploitation. This is especially true for complex tasks which necessitate comprehension of scale, texture and contextual learning. To ameliorate the interpretation burden and enable scientific discovery from this volume of data, new processing paradigms are necessary to keep pace. With this context, this dissertation advances fundamental and applied research in 3D point cloud pre-processing and deep learning from a variety of platforms. We show that the representation of 3D point data is often not ideal and sacrifices fidelity, context or scalability. First, ground-scanning terrestrial Light Detection And Ranging (LiDAR) models are shown to have an inherent statistical bias, and a state-of-the-art method for correcting it is presented that preserves data fidelity and maintains semantic structure. This technique is assessed in the dense canopy of Micronesia, where it proved best at retaining high levels of detail under extreme down-sampling (< 1%). Airborne systems are then explored, and a method is presented to pre-process data so as to preserve global contrast and semantic content for deep learners. This approach is validated on a building footprint detection task using airborne imagery captured in eastern Tennessee from the 3D Elevation Program (3DEP); our approach was found to achieve significant accuracy improvements over traditional techniques.
    Finally, topography data spanning the globe is used to assess past and present global land cover change. Utilizing Shuttle Radar Topography Mission (SRTM) and Moderate Resolution Imaging Spectroradiometer (MODIS) data, paired with the airborne pre-processing technique described previously, a model for predicting land-cover change from topography observations is described. The culmination of these efforts has the potential to enhance the capabilities of automated 3D geospatial processing, substantially lightening the burden of analysts, with implications for improving our responses to global security, disaster response, climate change, structural design and extraplanetary exploration.
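    A familiar baseline for the extreme down-sampling discussed above is voxel-grid thinning, where each occupied voxel is replaced by the centroid of its points. A minimal sketch under that assumption (the dissertation's bias-correcting method is more sophisticated than this):

```python
# Voxel-grid down-sampling: bucket points into cubic cells and keep
# one centroid per occupied cell, thinning the cloud while
# preserving coarse spatial structure. Toy coordinates below.
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Average all points falling in the same voxel into one point."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells[key].append((x, y, z))
    out = []
    for pts in cells.values():
        n = len(pts)
        out.append((sum(p[0] for p in pts) / n,
                    sum(p[1] for p in pts) / n,
                    sum(p[2] for p in pts) / n))
    return out

cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (5.0, 5.0, 5.0)]
thinned = voxel_downsample(cloud, voxel_size=1.0)  # 2 points remain
```

    The dissertation's point is precisely that such naive reductions can sacrifice fidelity and semantic structure, which motivates its bias-aware alternative.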

    New techniques for the automatic registration of microwave and optical remotely sensed images

    Remote sensing is a remarkable tool for monitoring and mapping the land and ocean surfaces of the Earth. Recently, with the launch of many new Earth observation satellites, there has been an increase in the amount of data being acquired, and the potential for mapping is greater than ever before. Furthermore, sensors which are currently operational are acquiring data in many different parts of the electromagnetic spectrum. It has long been known that by combining images that have been acquired at different wavelengths, or at different times, the ability to detect and recognise features on the ground is greatly increased. This thesis investigates the possibilities for automatically combining radar and optical remotely sensed images. The process of combining images, known as data integration, is a two-step procedure: geometric integration (image registration) and radiometric integration (data fusion). Data fusion is essentially an automatic procedure, but the problems associated with automatic registration of multisource images have not, in general, been resolved. This thesis proposes a method of automatic image registration based on the extraction and matching of common features which are visible in both images. The first stage of the registration procedure uses patches as the matching primitives in order to determine the approximate alignment of the images. The second stage refines the registration results by matching edge features. Throughout the development of the proposed registration algorithm, reliability, robustness and automation were always considered priorities. Tests with both small images (512x512 pixels) and full scene images showed that the algorithm could successfully register images to an acceptable level of accuracy.
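    The first-stage patch matching described above can be illustrated with normalized cross-correlation (NCC), a standard similarity score for aligning multisource imagery; the thesis may use a different measure, so treat this 1D toy as a sketch of the idea only:

```python
# Score candidate offsets of a template patch against a signal with
# normalized cross-correlation (NCC), which tolerates the gain and
# offset differences typical between radar and optical intensities.
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity lists."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_offset(signal, patch):
    """Slide the patch along a 1D signal and return the best-scoring offset."""
    scores = [ncc(signal[i:i + len(patch)], patch)
              for i in range(len(signal) - len(patch) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

signal = [0, 0, 1, 3, 7, 3, 1, 0, 0]
patch = [1, 3, 7, 3, 1]
offset = best_offset(signal, patch)  # → 2
```

    In two dimensions the same score is computed over image patches, and the best-scoring displacement gives the approximate alignment that the edge-matching stage then refines.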

    Multi-Fusion algorithms for Detecting Land Surface Pattern Changes Using Multi-High Spatial Resolution Images and Remote Sensing Analysis

    Producing accurate Land-Use and Land-Cover (LU/LC) maps using low-spatial-resolution images is a difficult task, and pan-sharpening is crucial for estimating LU/LC patterns. This study aimed to identify the most precise procedure for estimating LU/LC by adopting two fusion approaches, namely the Color Normalized Brovey Method (BM) and Gram-Schmidt Spectral Sharpening (GS), on high-spatial-resolution multi-sensor and multi-spectral images, such as (1) the Unmanned Aerial Vehicle (UAV) system and (2) the WorldView-2 satellite system, and on low-spatial-resolution images like (3) the Sentinel-2 satellite, to generate six levels of fused images alongside the three original multi-spectral images. The Maximum Likelihood method (ML) was used for classifying all nine images, and a confusion matrix was used to evaluate the accuracy of each classified image. The results were statistically compared to determine the most reliable, accurate and appropriate LU/LC map and procedure. Applying GS to the fused image that integrated the WorldView-2 and Sentinel-2 satellite images, classified by the ML method, produced the most accurate results, with an overall accuracy of 88.47% and a kappa coefficient of 0.85. By comparison, the overall accuracies of the three classified multispectral images range between 76.49% and 86.84%, and the accuracies of the remaining fused images (by the Brovey method and the rest of the GS fusions) classified by the ML method range between 76.68% and 85.75%. This proposed procedure shows considerable promise for mapping LU/LC. Previous researchers have mostly used satellite images or datasets with similar spatial and spectral resolution, at least for tropical areas like the study area of this research, to detect land surface patterns.
    However, the use of datasets with different spectral and spatial resolutions, and their accuracy for mapping LU/LC, had not previously been investigated. This study successfully adopted datasets provided by different sensors with varying spectral and spatial levels to investigate this. Doi: 10.28991/ESJ-2023-07-04-013
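    The Brovey (Color Normalized) transform used as one of the two fusion approaches has a simple closed form: each multispectral band is scaled by the ratio of the panchromatic value to the sum of the bands, injecting the pan band's spatial detail. A single-pixel sketch with illustrative values (not the study's data):

```python
# Brovey fusion for one multispectral pixel: each band is multiplied
# by pan / (sum of bands), so the fused pixel inherits the
# panchromatic brightness while keeping the band ratios.

def brovey(red, green, blue, pan):
    """Fuse one multispectral pixel (R, G, B) with a panchromatic value."""
    total = red + green + blue
    if total == 0:
        return 0.0, 0.0, 0.0
    scale = pan / total
    return red * scale, green * scale, blue * scale

fused = brovey(red=60, green=90, blue=50, pan=220)  # → (66.0, 99.0, 55.0)
```

    Gram-Schmidt sharpening, the study's other approach, instead orthogonalizes a simulated low-resolution pan band against the multispectral bands and substitutes the real pan band, which tends to preserve spectral fidelity better than the ratio-based Brovey method.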

    The use of remotely sensed data and Polish NFI plots for prediction of growing stock volume using different predictive methods

    Forest growing stock volume (GSV) is an important parameter in the context of forest resource management. National Forest Inventories (NFIs) are routinely used to estimate forest parameters, including GSV, for national or international reporting. Remotely sensed data are increasingly used as a source of auxiliary information for NFI data to improve the spatial precision of forest parameter estimates. In this study, we combine data from the NFI in Poland with satellite images of Landsat 7 and 3D point clouds collected with airborne laser scanning (ALS) technology to develop predictive models of GSV. We applied an area-based approach using 13,323 sample plots measured within the second cycle of the NFI in Poland (2010–2014), with poor positional accuracy ranging from several metres up to 15 m. Four predictive approaches were evaluated: multiple linear regression, k-Nearest Neighbours, Random Forest and a fully connected deep-learning neural network. For each of these predictive methods, three sets of predictors were tested: ALS-derived, Landsat-derived and a combination of both. The developed models were validated at the stand level using field measurements from 360 reference forest stands. The best accuracy (RMSE% = 24.2%) and lowest systematic error (bias% = −2.2%) were obtained with the deep learning approach when both ALS- and Landsat-derived predictors were used. However, the differences between the evaluated predictive approaches were marginal when using the same set of predictor variables, and only a slight increase in model performance was observed when adding the Landsat-derived predictors to the ALS-derived ones. The obtained results showed that GSV can be predicted at the stand level with relatively low bias and reasonable accuracy for coniferous species, even using field sample plots with poor positional accuracy for model development.
    Our findings are especially important in the context of GSV prediction in areas where NFI data are available but the collection of accurate positions of field plots is not possible or justified for economic reasons.
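    The area-based k-Nearest Neighbours approach evaluated above can be sketched very compactly: a stand's GSV is predicted as the mean GSV of the k most similar NFI plots in predictor space. The features and volumes below are invented for illustration (real predictors would be ALS height metrics, Landsat reflectances, etc.):

```python
# k-NN regression for growing stock volume (GSV): find the k training
# plots closest to the query in feature space and average their GSV.
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, gsv); returns mean GSV of k nearest."""
    nearest = sorted(train, key=lambda fv: math.dist(fv[0], query))[:k]
    return sum(gsv for _, gsv in nearest) / k

# Hypothetical plots: features = (mean ALS height [m], canopy cover),
# target = GSV in m^3/ha.
plots = [((10.0, 0.20), 150.0), ((11.0, 0.25), 160.0),
         ((25.0, 0.80), 420.0), ((24.0, 0.75), 400.0)]
pred = knn_predict(plots, query=(10.5, 0.22), k=2)  # mean of 150 and 160
```

    The study's finding that the four methods performed similarly with the same predictors suggests that predictor choice (ALS vs. Landsat) mattered more than the regression machinery.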

    Applicability of Artificial Neural Network for Automatic Crop Type Classification on UAV-Based Images

    Recent advances in optical remote sensing, especially the development of machine learning models, have made it possible to automatically classify different crop types based on their unique spectral characteristics. In this article, a simple feed-forward artificial neural network (ANN) was implemented for the automatic classification of various crop types. A DJI Mavic Air drone was used to collect about 549 images of a mixed-crop farmland belonging to the Federal University of Technology Minna, Nigeria. The images were annotated, and the ANN algorithm was implemented using custom-designed Python scripts with libraries and tools such as NumPy, Labelbox and Segmentation Mask for the classification. The algorithm was designed to automatically classify maize, rice, soya beans, groundnut, yam and a non-crop feature into different land spectral classes. The model training performance, using 70% of the dataset, shows that the loss curve flattened out with minimal over-fitting, indicating that the model was improving as it trained. Finally, the accuracy of the automatic crop-type classification was evaluated with the aid of the recorded loss function and a confusion matrix: the implemented ANN gave an overall training classification accuracy of 87.7%, and an overall accuracy of 0.9393 as computed from the confusion matrix, which attests to the robustness of the ANN when applied to high-resolution image data for automatic classification of crop types in a mixed farmland. The overall accuracy, including the user accuracy, showed that only a few images were incorrectly classified, demonstrating that the errors of omission and commission were minimal.
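    The per-pixel inference performed by a simple feed-forward ANN like the one above can be sketched in a few lines: one hidden layer with sigmoid activations, then a softmax over class scores. The weights and feature values below are arbitrary illustrative numbers, not the trained model:

```python
# Forward pass of a tiny feed-forward network: hidden layer with
# sigmoid activations, softmax output over crop-class scores.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, w_out):
    """One hidden layer, then a numerically stable softmax over classes."""
    hidden = [sigmoid(sum(w * f for w, f in zip(ws, features))) for ws in w_hidden]
    scores = [sum(w * h for w, h in zip(ws, hidden)) for ws in w_out]
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = forward([0.8, 0.1],                  # e.g. two spectral features
                [[1.0, -1.0], [-1.0, 1.0]],  # 2 hidden units (made-up weights)
                [[2.0, 0.0], [0.0, 2.0]])    # 2 classes (e.g. maize vs. rice)
```

    Training adjusts the weight matrices by backpropagation to minimize the loss curve the abstract describes; the forward pass itself is all that is needed at classification time.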

    Hybrid Image Classification Technique for Land-Cover Mapping in the Arctic Tundra, North Slope, Alaska

    Remotely sensed image classification techniques are very useful for understanding vegetation patterns and species combinations in the vast and mostly inaccessible arctic region. Previous research mapping land cover and vegetation in the remote areas of northern Alaska achieved considerably lower accuracies than in other biomes. The unique arctic tundra environment, with its short growing season, cloud cover, low sun angles, and snow and ice cover, hinders the effectiveness of remote sensing studies. The majority of image classification research in this area reported in the literature used traditional unsupervised clustering with Landsat MSS data. Previous researchers also emphasized that SPOT/HRV-XS data lacked the spectral resolution to identify the small arctic tundra vegetation parcels. Thus, there is a motivation and research need to apply a new classification technique to develop an updated, detailed and accurate vegetation map at a higher spatial resolution, i.e. from SPOT-5 data. Traditional classification techniques in remotely sensed image interpretation are based on spectral reflectance values, with the assumption that the training data are normally distributed; this makes it difficult to add ancillary data to the classification procedure to improve accuracy. The purpose of this dissertation was to develop a hybrid image classification approach that effectively integrates ancillary information into the classification process and combines ISODATA clustering, a rule-based classifier and the Multilayer Perceptron (MLP) classifier, which uses an artificial neural network (ANN). The main goal was to find the best possible combination or sequence of classifiers for classifying tundra-type vegetation that yields higher accuracy than the existing classified vegetation map from SPOT data.
    Unsupervised ISODATA clustering and rule-based classification were combined to produce an intermediate classified map, which was used as input to the MLP classifier. The result from the MLP classifier was compared to the previous classified map, and for pixels where the class allocations disagreed, the class with the higher kappa value was assigned to the pixel in the final classified map. The results were compared to standard classification techniques: simple unsupervised clustering and supervised classification with Feature Analyst. The proposed hybrid classification method achieved higher classification accuracy (75.6%, with a kappa value of 0.6840) than the standard techniques: unsupervised clustering (68.3%, kappa 0.5904) and supervised classification with Feature Analyst (62.44%, kappa 0.5418). The results were statistically significant at the 95% confidence level.
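    The per-class kappa comparisons used above to arbitrate between classifiers come from the confusion matrix: kappa measures agreement beyond what chance class proportions would produce. A sketch with an illustrative two-class matrix (not the study's data):

```python
# Cohen's kappa from a square confusion matrix (rows = reference,
# columns = classified): observed agreement corrected for the
# agreement expected by chance from the row/column totals.

def kappa(cm):
    """Cohen's kappa for a square confusion matrix."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    expected = sum(sum(cm[i]) * sum(row[i] for row in cm)
                   for i in range(len(cm))) / (n * n)
    return (observed - expected) / (1 - expected)

cm = [[40, 5],
      [10, 45]]
k = kappa(cm)  # observed 0.85, chance 0.50 → kappa 0.70
```

    Values like the study's 0.6840 versus 0.5904 and 0.5418 are therefore directly comparable measures of chance-corrected agreement, which is why kappa rather than raw accuracy was used to resolve per-pixel disagreements.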