6 research outputs found

    Performance improvements of an atmospheric radiative transfer model on GPU-based platform using CUDA

    Classical applications of an Atmospheric Radiative Transfer Model (ARTM) for line-by-line modeling of the atmospheric absorption coefficient require long computation times, from seconds up to a few minutes depending on the chosen atmospheric characterization. ARTM is used together with ground-based or satellite measurements to retrieve atmospheric parameters such as ozone, water vapour, and temperature profiles. At the Atmospheric Observatory of Southern Patagonia (OAPA), in the Patagonian city of Río Gallegos, a spectral millimeter-wave radiometer belonging to Nagoya University (Japan) has been deployed with the aim of retrieving stratospheric ozone profiles between 20 and 80 km. The instrument records around 2 GB of data per day, and ozone profiles are retrieved from one-hour integrated spectra, yielding 24 profiles per day. Data reduction is currently performed by the Laser and Application Research Center (CEILAP) group using the Matlab package ARTS/QPACK2. With the classical procedure, the estimated computation time per profile is 4-5 minutes, dominated by the ARTM evaluation and matrix operations. In this work we propose a novel scheme to accelerate the ARTM using the multi-threading capabilities of GPGPUs through the Compute Unified Device Architecture (CUDA) and compare it with existing schemes. ARTM performance was measured under various settings on an NVIDIA GeForce GTX 560 graphics card (Compute Capability 2.1), and execution times in sequential mode, OpenMP, and CUDA are compared. XV Workshop de Procesamiento Distribuido y Paralelo (WPDP). Red de Universidades con Carreras en Informática (RedUNCI).
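    The parallelization opportunity the abstract describes comes from the structure of line-by-line absorption: each frequency point of the spectrum is computed independently, so the work maps directly onto OpenMP threads or CUDA thread blocks. A minimal Python sketch of that structure follows; the Lorentzian line shape and the line parameters are invented for illustration and are not taken from the paper (a real retrieval would read line data from a spectroscopic catalogue).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hypothetical spectroscopic lines (centre in GHz, strength, half-width in GHz);
# these values are illustrative only, not from the paper.
LINES = [(110.836, 1.0, 0.1), (111.6, 0.2, 0.1), (112.3, 0.3, 0.1)]

def absorption_at(nu):
    """Sum Lorentzian line shapes at one frequency (the per-point kernel).

    Every frequency point is independent of the others, which is why the
    line-by-line computation maps naturally onto OpenMP threads or CUDA
    thread blocks: one worker per frequency, no synchronisation needed.
    """
    return sum(s * (g / np.pi) / ((nu - nu0) ** 2 + g ** 2)
               for nu0, s, g in LINES)

def absorption_spectrum(grid, workers=4):
    """Evaluate the absorption coefficient over a frequency grid in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return np.array(list(ex.map(absorption_at, grid)))

grid = np.linspace(110.0, 112.0, 1001)   # frequency grid in GHz
spectrum = absorption_spectrum(grid)      # one absorption value per point
```

    The thread pool here stands in for the GPU grid: on CUDA the same per-point kernel would be launched once per frequency, which is where the speedup over the sequential Matlab path comes from.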

    High-performance time-series quantitative retrieval from satellite images on a GPU cluster

    The quality and accuracy of remote sensing instruments continue to increase, allowing geoscientists to perform various quantitative retrieval applications to observe geophysical variables of the land, atmosphere, ocean, etc. The explosive growth of time-series remote sensing (RS) data over large scales poses great challenges for managing, processing, and interpreting RS "Big Data." To explore these time-series RS data efficiently, in this paper we design and implement a high-performance framework that addresses the time-consuming time-series quantitative retrieval problem on a graphics processing unit cluster, taking aerosol optical depth (AOD) retrieval from satellite images as a case study. The framework exploits multilevel parallelism in time-series quantitative RS retrieval to improve efficiency. At the coarse-grained level, the AOD time-series retrieval is represented as multiple directed acyclic graph workflows and scheduled with a list-based heuristic algorithm, heterogeneous earliest finish time (HEFT), which takes idle slots and the priorities of retrieval jobs into account. At the fine-grained level, parallel strategies for the major remote sensing image processing algorithms are summarized in three categories: point or pixel-based operations, local operations, and global or irregular operations. The framework was implemented with the Message Passing Interface (MPI) and the Compute Unified Device Architecture (CUDA), and experimental results for the AOD retrieval case verify its effectiveness.
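    The coarse-grained scheduling step the abstract names can be sketched as follows: tasks are ordered by upward rank (mean computation cost plus the most expensive downstream path), then each task is placed on the processor giving the earliest finish time, scanning idle slots between already-scheduled jobs. The DAG, cost matrix, and communication costs below are invented for illustration and are not the paper's workload.

```python
import numpy as np

# Toy retrieval-job DAG: edges task -> successors (illustrative, not from the paper).
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
pred = {0: [], 1: [0], 2: [0], 3: [1, 2]}
W = np.array([[14, 16], [13, 19], [11, 13], [7, 9]])  # W[t][p]: cost of task t on proc p
C = {(0, 1): 4, (0, 2): 3, (1, 3): 5, (2, 3): 4}      # communication cost per DAG edge

def upward_rank(t, memo={}):
    """rank_u(t) = mean cost of t + max over successors of (comm + rank_u(successor))."""
    if t not in memo:
        memo[t] = W[t].mean() + max(
            (C[(t, s)] + upward_rank(s, memo) for s in succ[t]), default=0.0)
    return memo[t]

def heft(n_tasks, n_procs):
    """List-based HEFT: schedule by decreasing upward rank, insertion-based placement."""
    order = sorted(range(n_tasks), key=upward_rank, reverse=True)
    slots = {p: [] for p in range(n_procs)}   # per-proc sorted (start, end, task)
    finish, where = {}, {}
    for t in order:
        best = None
        for p in range(n_procs):
            # earliest time all predecessor data has arrived on processor p
            ready = max((finish[u] + (0 if where[u] == p else C[(u, t)])
                         for u in pred[t]), default=0.0)
            # insertion policy: try to fit the job into an idle slot
            start, dur = ready, W[t][p]
            for (s, e, _) in slots[p]:
                if start + dur <= s:
                    break                     # fits in the idle gap before this job
                start = max(start, e)
            if best is None or start + dur < best[0]:
                best = (start + dur, p, start)
        eft, p, start = best
        slots[p].append((start, eft, t)); slots[p].sort()
        finish[t], where[t] = eft, p
    return finish, where

finish, where = heft(4, 2)
makespan = max(finish.values())
```

    In the full framework each "task" would be one stage of an AOD retrieval workflow and each "processor" a GPU node, with MPI carrying the inter-node data whose cost `C` models here.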

    GPU-Accelerated Multi-Profile Radiative Transfer Model for the Infrared Atmospheric Sounding Interferometer

    No full text

    CACIC 2015 : XXI Congreso Argentino de Ciencias de la Computación. Libro de actas

    Proceedings of the XXI Congreso Argentino de Ciencias de la Computación (CACIC 2015), held at Sede UNNOBA, Junín, from October 5 to 9, 2015. Red de Universidades con Carreras en Informática (RedUNCI).