494 research outputs found

    Spectral-spatial classification of n-dimensional images in real-time based on segmentation and mathematical morphology on GPUs

    The objective of this thesis is to develop efficient schemes for spectral-spatial n-dimensional image classification. By efficient schemes, we mean schemes that produce good classification results in terms of accuracy, and that can be executed in real-time on low-cost computing infrastructures, such as the Graphics Processing Units (GPUs) shipped in personal computers. The n-dimensional images include images with two and three dimensions, such as images from the medical domain, as well as images ranging from ten to hundreds of dimensions, such as the multi- and hyperspectral images acquired in remote sensing. In image analysis, classification is a regularly used method for information retrieval in areas such as medical diagnosis, surveillance, manufacturing and remote sensing, among others. In addition, as hyperspectral images have become widely available in recent years owing to the reduction in the size and cost of the sensors, the number of applications at lab scale, such as food quality control, art forgery detection, disease diagnosis and forensics, has also increased. Although there are many spectral-spatial classification schemes, most are computationally inefficient in terms of execution time. In addition, the need for efficient computation on low-cost computing infrastructures is increasing in line with the incorporation of technology into everyday applications. In this thesis we propose two spectral-spatial classification schemes: one based on segmentation and the other based on wavelets and mathematical morphology. These schemes were designed to produce good classification results, and they outperform other segmentation- and morphology-based schemes in the literature in terms of accuracy.
    Additionally, it was necessary to develop techniques and strategies for efficient GPU computing, for example a block-asynchronous strategy, resulting in efficient GPU implementations of the aforementioned spectral-spatial classification schemes. The optimal GPU parameters were analyzed, and different data partitionings and thread block arrangements were studied to exploit the GPU resources. The results show that the GPU is an adequate computing platform for on-board processing of hyperspectral information.
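The abstract describes segmentation-based spectral-spatial classification only at a high level, but the core idea of fusing a pixel-wise (spectral) classification with a segmentation map can be sketched in a few lines. The NumPy example below is an illustrative assumption, not the thesis's exact algorithm: it applies a simple majority vote, giving every pixel the most frequent class of its segment.

```python
import numpy as np

def majority_vote_fusion(pixel_labels, segments):
    """Fuse a per-pixel classification map with a segmentation map:
    every pixel in a segment receives that segment's most frequent
    class label (majority vote), adding spatial context to the
    purely spectral decision."""
    fused = np.empty_like(pixel_labels)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        votes = np.bincount(pixel_labels[mask])
        fused[mask] = votes.argmax()
    return fused

# Toy 4x4 example: two segments, a few noisy per-pixel labels.
pixel_labels = np.array([[0, 0, 1, 1],
                         [0, 1, 1, 1],
                         [0, 0, 1, 0],
                         [0, 0, 1, 1]])
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1]])
fused = majority_vote_fusion(pixel_labels, segments)
# The two noisy pixels are corrected by their segments' majorities.
```

In a full scheme the pixel labels would come from a spectral classifier and the segments from, e.g., a watershed transform; on a GPU the per-segment vote is a natural fit for one thread block per segment.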

    Efficient multitemporal change detection techniques for hyperspectral images on GPU

    Hyperspectral images contain hundreds of reflectance values for each pixel. Detecting regions of change in multiple hyperspectral images of the same scene taken at different times is of widespread interest for a large number of applications. For remote sensing in particular, a very common application is land-cover analysis. The high dimensionality of hyperspectral images makes the development of computationally efficient processing schemes critical. This thesis focuses on the development of change detection approaches at the object level, based on supervised direct multidate classification, for hyperspectral datasets. The proposed approaches improve on the accuracy of current state-of-the-art algorithms, and their projection onto Graphics Processing Units (GPUs) allows their execution in real-time scenarios.
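The "direct multidate" idea is to stack the spectral vectors of both acquisition dates so that a single supervised classifier sees the joint vector, letting classes encode "from-to" transitions. The sketch below illustrates this with a minimal nearest-centroid classifier on toy data; the classifier and the numbers are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def stack_dates(img_t1, img_t2):
    """Direct multidate approach: concatenate the spectral vectors of
    the two acquisition dates so one classifier sees both at once."""
    return np.concatenate([img_t1, img_t2], axis=-1)

def nearest_centroid_classify(pixels, centroids):
    """Minimal stand-in classifier: assign each stacked pixel vector
    to the closest class centroid (Euclidean distance)."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy data: 2 bands per date -> 4-band stacked vectors.
t1 = np.array([[0.1, 0.2], [0.9, 0.8]])
t2 = np.array([[0.1, 0.2], [0.1, 0.2]])   # second pixel changed
stacked = stack_dates(t1, t2)
centroids = np.array([[0.1, 0.2, 0.1, 0.2],   # class 0: stable
                      [0.9, 0.8, 0.1, 0.2]])  # class 1: changed
labels = nearest_centroid_classify(stacked, centroids)
# labels -> [0, 1]: the second pixel is flagged as a change
```

With hundreds of bands per date the stacked vectors become very long, which is exactly why the dimensionality and the GPU projection matter.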

    Massively parallel landscape-evolution modelling using general purpose graphical processing units

    As our expectations of what computer systems can do and our ability to capture data improve, the desire to perform ever more computationally intensive tasks increases. Often these tasks, comprising vast numbers of repeated computations, are highly interdependent on each other – a closely coupled problem. Landscape-Evolution Modelling is an example of such a problem. In order to produce realistic models it is necessary to process landscapes containing millions of data points over time periods extending up to millions of years. This leads to intractable execution times, often on the order of years. Researchers therefore seek multiple orders of magnitude reduction in the execution time of these models. The massively parallel programming environment offered by General Purpose Graphical Processing Units offers the potential for multiple orders of magnitude speedup in code execution times. In this paper we demonstrate how the time-dominant parts of a Landscape-Evolution Model can be recoded for a massively parallel architecture, providing a two orders of magnitude reduction in execution time.
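Landscape-evolution models advance a gridded elevation field through very many stencil updates. A minimal, generic example of such a step (not the paper's actual model) is linear hillslope diffusion, where each cell is updated from its four neighbours independently – exactly the one-thread-per-cell structure that maps well onto a GPU.

```python
import numpy as np

def diffusion_step(z, D=0.1, dt=1.0, dx=1.0):
    """One explicit time step of linear hillslope diffusion,
    dz/dt = D * laplacian(z), on a regular grid with periodic
    boundaries (via np.roll).  Every cell depends only on its four
    neighbours, so all cells can be updated in parallel."""
    lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
           np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z) / dx**2
    return z + dt * D * lap

# A single bump of elevation spreads to its neighbours while total
# material is conserved.
z = np.zeros((8, 8))
z[4, 4] = 1.0
z1 = diffusion_step(z)
```

Real models couple such stencils with flow routing and erosion laws over millions of steps, which is where the closely coupled, long-running nature of the problem comes from.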

    Medical image segmentation using GPU-accelerated variational level set methods

    Medical imaging techniques such as CT, MRI and X-ray imaging are a crucial component of modern diagnostics and treatment. As a result, many automated methods involving digital image processing have been developed for the medical field. Image segmentation is the process of finding the boundaries of one or more objects or regions of interest in an image. This thesis focuses on accelerating image segmentation for the localization of cancerous lung nodules in two-dimensional radiographs. This process is used during radiation treatment to minimize radiation exposure to healthy tissue. The variational level set method is used to segment out the lung nodules. This method represents an evolving segmentation boundary as the zero level set of a function on a two-dimensional grid. The calculus of variations is employed to minimize a set of energy equations and find the nodule's boundary. Although this approach is flexible, it comes at significant computational cost, and is not able to run in real time on a general-purpose workstation. Modern graphics processing units offer a high-performance platform for accelerating the variational level set method, which, in its simplest sense, consists of a large number of parallel computations over a grid. NVIDIA's CUDA framework for general-purpose computation on GPUs was used in conjunction with three different NVIDIA GPUs to reduce processing time by 11x–20x. This speedup was sufficient to allow real-time segmentation at moderate cost.
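A minimal sketch of a level set update illustrates why the method parallelizes so well: every grid point of the level set function is updated independently. The example below evolves a contour with a constant speed F – a deliberate simplification; in the thesis's variational formulation the speed term is derived from image energies, so treat the details here as illustrative assumptions.

```python
import numpy as np

def level_set_step(phi, F=1.0, dt=0.5):
    """One explicit update of the level set equation
    phi_t + F * |grad phi| = 0 with constant speed F, using simple
    central differences.  The segmentation boundary is the zero level
    set of phi; each grid point updates independently, which is why
    the method maps well onto a GPU."""
    gx = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / 2.0
    gy = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / 2.0
    grad = np.sqrt(gx**2 + gy**2)
    return phi - dt * F * grad

# phi = signed distance to a circle of radius 3; with F > 0 the zero
# level set (the contour) moves outward each step.
n = 21
y, x = np.mgrid[:n, :n] - n // 2
phi = np.sqrt(x**2 + y**2) - 3.0
phi1 = level_set_step(phi)
```

In the full variational method F is replaced by data and regularization terms (e.g., edge stopping and curvature), but the per-grid-point update pattern is the same.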

    Efficient Irregular Wavefront Propagation Algorithms on Hybrid CPU-GPU Machines

    In this paper, we address the problem of efficient execution of a computation pattern, referred to here as the irregular wavefront propagation pattern (IWPP), on hybrid systems with multiple CPUs and GPUs. The IWPP is common in several image processing operations. In the IWPP, data elements in the wavefront propagate waves to their neighboring elements on a grid if a propagation condition is satisfied. Elements receiving the propagated waves become part of the wavefront. This pattern results in irregular data accesses and computations. We develop and evaluate strategies for efficient computation and propagation of wavefronts using a multi-level queue structure. This queue structure improves the utilization of fast memories in a GPU and reduces synchronization overheads. We also develop a tile-based parallelization strategy to support execution on multiple CPUs and GPUs. We evaluate our approaches on a state-of-the-art GPU-accelerated machine (equipped with 3 GPUs and 2 multicore CPUs) using IWPP implementations of two widely used image processing operations: morphological reconstruction and Euclidean distance transform. Our results show significant performance improvements on GPUs. The use of multiple CPUs and GPUs cooperatively attains speedups of 50x and 85x with respect to single-core CPU executions for morphological reconstruction and Euclidean distance transform, respectively.
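A serial CPU sketch of morphological reconstruction makes the wavefront pattern concrete: pixels propagate values to neighbours when the propagation condition holds, and updated neighbours rejoin the queue. This illustration seeds the queue with every pixel for simplicity; the paper's multi-level GPU queue and tiling are far more refined.

```python
import numpy as np
from collections import deque

def morphological_reconstruction(marker, mask):
    """Grayscale morphological reconstruction by dilation using a FIFO
    queue -- a small serial illustration of the irregular wavefront
    propagation pattern (IWPP)."""
    out = np.minimum(marker, mask).astype(float)
    h, w = out.shape
    queue = deque((y, x) for y in range(h) for x in range(w))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Propagation condition: the neighbour can still grow,
                # capped by both the wavefront value and its mask value.
                new = min(out[y, x], mask[ny, nx])
                if out[ny, nx] < new:
                    out[ny, nx] = new        # neighbour joins the wavefront
                    queue.append((ny, nx))
    return out

# Marker with one peak: reconstruction floods only the connected
# plateau of the mask reachable from that peak.
mask = np.array([[0, 4, 4, 0],
                 [0, 4, 4, 0],
                 [0, 0, 0, 0],
                 [0, 3, 3, 0]], float)
marker = np.zeros_like(mask)
marker[0, 1] = 4.0
rec = morphological_reconstruction(marker, mask)
```

The data-dependent queue growth is exactly what makes the access pattern irregular and the GPU mapping non-trivial.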