Spectral-spatial classification of n-dimensional images in real-time based on segmentation and mathematical morphology on GPUs
The objective of this thesis is to develop efficient schemes for spectral-spatial n-dimensional image
classification. By efficient schemes, we mean schemes that produce good classification results in
terms of accuracy, as well as schemes that can be executed in real-time on low-cost computing
infrastructures, such as the Graphics Processing Units (GPUs) shipped in personal computers. The
n-dimensional images include images with two and three dimensions, such as images coming from
the medical domain, and also images ranging from ten to hundreds of dimensions, such as the multi- and hyperspectral images acquired in remote sensing.
In image analysis, classification is a regularly used method for information retrieval in areas such as
medical diagnosis, surveillance, manufacturing and remote sensing, among others. In addition, as
the hyperspectral images have been widely available in recent years owing to the reduction in the
size and cost of the sensors, the number of applications at lab scale, such as food quality control, art
forgery detection, disease diagnosis and forensics has also increased. Although there are many
spectral-spatial classification schemes, most are computationally inefficient in terms of execution
time. In addition, the need for efficient computation on low-cost computing infrastructures is
increasing in line with the incorporation of technology into everyday applications.
In this thesis we have proposed two spectral-spatial classification schemes: one based on
segmentation and the other based on wavelets and mathematical morphology. These schemes were
designed to produce good classification results, and in terms of accuracy they outperform other
segmentation- and morphology-based schemes found in the literature. Additionally, it was
necessary to develop techniques and strategies for efficient GPU computing, for example, a
block-asynchronous strategy, resulting in an efficient GPU implementation of the
aforementioned spectral-spatial classification schemes. The optimal GPU parameters
were analyzed and different data partitioning and thread block arrangements were studied to exploit
the GPU resources. The results show that the GPU is an adequate computing platform for on-board
processing of hyperspectral information.
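The thread block arrangements the thesis studies are ultimately an exercise in covering the image with a grid of fixed-size blocks. A minimal sketch of that arithmetic, with a hypothetical 32×8 block shape (the thesis evaluates several arrangements; this is only the covering calculation):

```python
import math

def launch_config(width, height, block=(32, 8)):
    """Compute a CUDA-style grid size that covers a width x height image.

    block is a hypothetical thread-block shape (threads in x, threads in y).
    The grid rounds up so every pixel is covered; kernels must then bounds-check
    the threads that fall outside the image.
    """
    bx, by = block
    grid = (math.ceil(width / bx), math.ceil(height / by))
    threads_per_block = bx * by
    return grid, threads_per_block
```

For a 610×340 scene (the size of a common benchmark image) this yields a 20×43 grid of 256-thread blocks; which block shape best exploits the GPU depends on the access pattern of each kernel, which is exactly what the parameter study in the thesis explores.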
Efficient multitemporal change detection techniques for hyperspectral images on GPU
Hyperspectral images contain hundreds of reflectance values for each pixel.
Detecting regions of change in multiple hyperspectral images of the same
scene taken at different times is of widespread interest for a large number of
applications. For remote sensing, in particular, a very common application is
land-cover analysis. The high dimensionality of the hyperspectral images
makes the development of computationally efficient processing schemes
critical. This thesis focuses on the development of change detection
approaches at object level, based on supervised direct multidate
classification, for hyperspectral datasets. The proposed approaches improve
the accuracy of current state-of-the-art algorithms, and their projection onto
Graphics Processing Units (GPUs) allows their execution in real-time
scenarios.
A Novel Methodology for Calculating Large Numbers of Symmetrical Matrices on a Graphics Processing Unit: Towards Efficient, Real-Time Hyperspectral Image Processing
Hyperspectral imagery (HSI) is often processed to identify targets of interest. Many of the quantitative analysis techniques developed for this purpose mathematically manipulate the data to derive information about the target of interest based on local spectral covariance matrices. The calculation of a local spectral covariance matrix for every pixel in a given hyperspectral data scene is so computationally intensive that real-time processing with these algorithms is not feasible with today’s general purpose processing solutions. Specialized solutions are cost prohibitive, inflexible, inaccessible, or not feasible for on-board applications.
Advances in graphics processing unit (GPU) capabilities and programmability offer an opportunity for general purpose computing with access to hundreds of processing cores in a system that is affordable and accessible. The GPU also offers flexibility, accessibility and feasibility that other specialized solutions do not offer. The architecture for the NVIDIA GPU used in this research is significantly different from the architecture of other parallel computing solutions. With such a substantial change in architecture it follows that the paradigm for programming graphics hardware is significantly different from traditional serial and parallel software development paradigms.
In this research a methodology for mapping an HSI target detection algorithm to the NVIDIA GPU hardware and Compute Unified Device Architecture (CUDA) Application Programming Interface (API) is developed. The RX algorithm is chosen as a representative stochastic HSI algorithm that requires the calculation of a spectral covariance matrix. The developed methodology is designed to calculate a local covariance matrix for every pixel in the input HSI data scene.
A characterization of the limitations imposed by the chosen GPU is given, and a path forward toward optimization of a GPU-based method for real-time HSI data processing is defined.
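The statistic at the heart of RX is a squared Mahalanobis distance of each pixel's spectrum from the background. The research above computes a *local* covariance per pixel, which is what makes it so expensive; the sketch below uses a single scene-global covariance only to show the underlying computation:

```python
import numpy as np

def rx_detector(cube):
    """Global-covariance RX anomaly scores for a hyperspectral cube (H, W, B).

    Simplified stand-in: one covariance for the whole scene instead of the
    per-pixel local covariances the dissertation maps to the GPU. The score
    is the squared Mahalanobis distance to the background mean.
    """
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    inv = np.linalg.inv(cov)  # np.linalg.pinv is safer for rank-deficient data
    d = x - mu
    scores = np.einsum('ij,jk,ik->i', d, inv, d)  # d_i @ inv @ d_i per pixel
    return scores.reshape(h, w)
```

Replacing the single `cov` with one covariance per pixel neighbourhood multiplies the work by the number of pixels, which is why the dissertation's methodology for batching many small symmetric matrix computations on the GPU is the crux of the problem.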
Commodity Computing Clusters at Goddard Space Flight Center
The purpose of commodity cluster computing is to utilize large numbers of readily available computing components for parallel computing, obtaining the greatest amount of useful computation for the least cost. The cost of a computational resource is key to computational science and data processing at GSFC, as it is at most other places, the difference being that the need at GSFC far exceeds any expectation of meeting that need. Therefore, Goddard scientists need as much computing capacity as is available for the provided funds. This is exemplified in the following brief history of low-cost high-performance computing at GSFC.
GPU Accelerated FFT-Based Registration of Hyperspectral Scenes
Registration is a fundamental preliminary task in many applications of hyperspectrometry. Most of the registration algorithms developed are designed to work with RGB images and disregard execution time. This paper presents a phase correlation algorithm on GPU to register two remote sensing hyperspectral images. The proposed algorithm is based on principal component analysis, the multilayer fractional Fourier transform, combination of log-polar maps, and peak processing. It is fully developed in CUDA for NVIDIA GPUs. Different techniques, such as the efficient use of the memory hierarchy, the use of CUDA libraries, and the maximization of occupancy, have been applied to reach the best performance on GPU. The algorithm is robust, achieving speedups on GPU of up to 240.6×.
This work was supported in part by the Consellería de Cultura, Educación e Ordenación Universitaria under Grant GRC2014/008 and Grant ED431G/08, and in part by the Ministry of Education, Culture and Sport, Government of Spain, under Grant TIN2013-41129-P and Grant TIN2016-76373-P, both cofunded by the European Regional Development Fund. The work of A. Ordóñez was supported by the Ministry of Education, Culture and Sport, Government of Spain, under FPU Grant FPU16/03537S.
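The peak-processing step the abstract mentions rests on classic phase correlation: the normalised cross-power spectrum of two images has an inverse FFT that peaks at their relative shift. A minimal translation-only sketch (the paper's pipeline adds PCA, the multilayer fractional Fourier transform, and log-polar maps to handle rotation and scale, none of which is shown here):

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer (dy, dx) shift such that mov == roll(ref, (dy, dx)),
    via the peak of the inverse FFT of the normalised cross-power spectrum."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(mov)
    R = G * np.conj(F)
    R /= np.maximum(np.abs(R), 1e-12)  # normalise; eps guards zero division
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the image wrap around; map them back to negatives
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

On the GPU this maps naturally onto cuFFT plus a small element-wise kernel and a reduction for the peak search, which is consistent with the CUDA-library and occupancy optimisations the paper describes.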
Techniques for the extraction of spatial and spectral information in the supervised classification of hyperspectral imagery for land-cover applications
The objective of this PhD thesis is the development of spatial-spectral
information extraction techniques for supervised
classification tasks, both by means of classical models and
those based on deep learning, to be used in the classification
of land use or land cover (LULC) multi- and hyper-spectral
images obtained by remote sensing. The main goal is the
efficient application of these techniques, so that they are able
to obtain satisfactory classification results with a low use of
computational resources and low execution time.
CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data
Acoustic images present views of underwater dynamics, even in high depths. With multi-beam echo sounders (SONARs), it
is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent
estimation of fish abundance and fish species identification is highly desirable for planning sustainable fisheries. Main hurdles
in analysing acoustic images are the presence of speckle noise and the vast amount of acoustic data. This paper presents a level
set formulation for simultaneous fish reconstruction and noise suppression from raw acoustic images. Despite the presence of
speckle noise blobs, actual fish intensity values can be distinguished by extremely high values, varying exponentially from the
background. Edge detection generally gives excessive false edges that are not reliable. Our approach to reconstruction is based
on level set evolution using Mumford-Shah segmentation functional that does not depend on edges in an image. We use the
implicit function in conjunction with the image to robustly estimate a threshold for suppressing noise in the image by solving
a second differential equation. We provide details of our estimation of suppressing threshold and show its convergence as the
evolution proceeds. We also present a GPU based streaming computation of the method using NVIDIA’s CUDA framework to
handle large volume data-sets. Our implementation is optimised for memory usage to handle large volumes.
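The suppression threshold estimated alongside the level set evolution exploits the abstract's key observation: fish returns sit far above the speckle background. A much-simplified stand-in for that idea is an iterative two-mean (Ridler–Calvard) threshold, shown below; the paper itself derives its threshold by solving a second differential equation coupled to the level set, which this sketch does not reproduce:

```python
import numpy as np

def two_mean_threshold(img, tol=1e-3, max_iter=100):
    """Iterate t <- midpoint of the mean intensity below t and the mean above t
    until it converges. Pixels above the final t are kept as candidate fish
    returns; pixels below are treated as speckle background."""
    t = img.mean()
    for _ in range(max_iter):
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t
```

Like the Mumford-Shah formulation, this separates the image by region statistics rather than edges, which is why it tolerates the noisy gradients that make edge detection unreliable in acoustic data.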
Performance-Aware High-Performance Computing for Remote Sensing Big Data Analytics
The incredible increase in the volume of data emerging alongside recent technological developments has made analysis processes that use traditional approaches more difficult for many organizations. In particular, applications involving big data that require timely processing, such as satellite imagery, sensor data, bank operations, web servers, and social networks, require efficient mechanisms for collecting, storing, processing, and analyzing these data. At this point, big data analytics, which encompasses data mining, machine learning, statistics, and similar techniques, comes to the aid of organizations for end-to-end management of the data. In this chapter, we introduce a novel high-performance computing system on a geo-distributed private cloud for remote sensing applications, which takes advantage of network topology, exploits the utilization and workloads of CPU, storage, and memory resources in a distributed fashion, and optimizes resource allocation to realize big data analytics efficiently.