8 research outputs found

    GPU-Accelerated Algorithms for Compressed Signals Recovery with Application to Astronomical Imagery Deblurring

    Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting the parallel computation capabilities of GPUs to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
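
    The memory saving described above rests on a standard property of circulant matrices: they are diagonalised by the DFT, so a matrix-vector product (and a linear solve) only needs the first column of the matrix. The following is a minimal NumPy sketch of that property, presumably along the lines the paper exploits, not the authors' GPU implementation; the function names and the FFT-based solve are illustrative assumptions.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix C (defined by its first column c)
    by a vector x via the FFT: C @ x == ifft(fft(c) * fft(x)).
    Only the first column is stored, so memory is O(n) instead of O(n^2)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_solve(c, b, eps=1e-12):
    """Solve C y = b by diagonalising C in the Fourier basis,
    i.e. y = ifft(fft(b) / fft(c)); eps guards near-zero eigenvalues."""
    fc = np.fft.fft(c)
    return np.real(np.fft.ifft(np.fft.fft(b) / (fc + eps)))

# Quick check against an explicitly built circulant matrix.
n = 8
c = np.random.rand(n)
x = np.random.rand(n)
C = np.array([np.roll(c, k) for k in range(n)]).T   # column k is c cyclically shifted by k
assert np.allclose(C @ x, circulant_matvec(c, x))
```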

    Exploring information retrieval using image sparse representations: from circuit designs and acquisition processes to specific reconstruction algorithms

    New advances in the field of image sensors (especially in CMOS technology) call into question the conventional methods used to acquire images. Compressive Sensing (CS) plays a major role here, especially in unclogging the analog-to-digital converters that generally represent the bottleneck of this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The interest is twofold: it both considerably reduces the amount of data to be converted and suppresses the digital processing performed outside the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration, as well as the intended applications, aims at reducing the power consumption related to these components (the ADC and DSP represent 99% of the total power consumption). More broadly, the CS paradigm makes it possible to question, or at least extend, the Nyquist-Shannon sampling theory. This thesis presents developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Indeed, advances are presented in the fields of hyperspectral imaging, super-resolution, high dynamic range, high-speed imaging and non-uniform sampling. In particular, three research axes have been explored in depth, aiming to design appropriate architectures and acquisition processes, together with their associated reconstruction techniques, that take advantage of sparse image representations. How can the on-chip implementation of compressed sensing relax sensor constraints and improve the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and improve the reconstructed image quality at a given compression ratio? Finally, how can CS overcome physical limitations (i.e. spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved? A CMOS image sensor was developed and manufactured during this Ph.D. to validate concepts such as High Dynamic Range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisition in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling has been developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of Liquid Crystal Devices to enable hyperspectral imaging combined with spatial super-resolution. The conclusion of this study can be summarized as follows: CS should now be considered as a toolbox for more easily defining trade-offs between the different characteristics of a sensor: integration time, converter speed, dynamic range, resolution and digital processing resources. However, while CS relaxes some hardware constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, involving massive computational resources compared to so-called conventional techniques. The application field is wide, implying that, for a targeted application, an accurate characterization of the constraints on both the sensor (encoder) and the decoder needs to be defined.
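
    As background to the acquisition-versus-decoder trade-off discussed above, the sketch below illustrates the basic CS pipeline in NumPy: a sparse signal is measured through a random projection y = Phi x, then recovered off-sensor by an iterative soft-thresholding solver (ISTA). This is a generic toy example, not the thesis's sensor architecture or reconstruction algorithm; the dimensions and the choice of ISTA are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy acquisition: a length-n signal with k nonzeros, observed through
# m < n random measurements y = Phi @ x (the compressed-sensing encoding step).
n, m, k = 256, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true

def ista(Phi, y, lam=0.01, n_iter=500):
    """Iterative soft thresholding: one of many sparse-recovery solvers
    a CS decoder could use off-sensor."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = Phi.T @ (Phi @ x - y)                # gradient of 0.5 * ||Phi x - y||^2
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

x_hat = ista(Phi, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```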

    Optimizing Hyperspectral Image Processing with GPUs and Accelerators

    Bachelor's thesis (Trabajo de Fin de Grado) in Ingeniería de Computadores, Facultad de Informática UCM, Departamento de Arquitectura de Computadores y Automática, academic year 2021/2022. For centuries, various theories have been created about the planet we live on, and with the development of technology many of its characteristics have been discovered. The techniques used for this purpose have been improved and perfected, so that satellites can now take hyperspectral images for a pixel-by-pixel analysis of the materials they contain. The analysis of hyperspectral images is an arduous task: capturing the material an image is composed of through a single pixel becomes difficult when that pixel contains more than one material, which is known as the spectral mixing problem. For this reason, spectral unmixing is performed through a processing chain. Depending on the image, the algorithms selected for the spectral unmixing chain and the technological factor can lead to different performance results. The spectral unmixing chain has three phases: the first phase obtains the number of materials (or endmembers) present in the hyperspectral image, the second phase extracts the different materials that make up the hyperspectral image, and the third phase generates an abundance map for each material. The VD, VCA and ISRA algorithms have been selected, in that order, to build the spectral unmixing chain. In this project, all the phases have been implemented in parallel, contributing to the optimization of these hyperspectral image processing algorithms in different parallel programming paradigms: OpenACC, OpenMP and SYCL (oneAPI). These paradigms are used because another objective of this work is to be able to execute all the algorithms on heterogeneous systems; with all the results obtained, a performance comparison is made, looking for the best combination among them.
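
    As an illustration of why this unmixing chain maps well onto parallel paradigms, the sketch below implements the abundance-estimation stage with ISRA-style multiplicative updates in NumPy: every pixel's update is independent, which is what lets OpenACC, OpenMP or SYCL distribute the work. This is a generic formulation of ISRA, not the project's code; the function name, initialisation and iteration count are assumptions.

```python
import numpy as np

def isra_abundances(Y, E, n_iter=200, eps=1e-12):
    """ISRA multiplicative updates for non-negative abundance estimation.
    Y: (bands, pixels) hyperspectral image; E: (bands, endmembers) matrix,
    e.g. produced by an endmember-extraction step such as VCA."""
    p = E.shape[1]
    A = np.full((p, Y.shape[1]), 1.0 / p)        # non-negative initial abundances
    EtY = E.T @ Y                                # numerator term, computed once
    EtE = E.T @ E
    for _ in range(n_iter):
        A *= EtY / (EtE @ A + eps)               # elementwise update, independent per pixel
    return A

# Tiny synthetic check: 5 bands, 3 endmembers, 100 pixels.
rng = np.random.default_rng(1)
E = np.abs(rng.standard_normal((5, 3)))
A_true = rng.dirichlet(np.ones(3), size=100).T   # abundances sum to one per pixel
Y = E @ A_true
A_est = isra_abundances(Y, E)
print("max abundance error:", np.abs(A_est - A_true).max())
```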

    Molecular Imaging

    The present book gives an exceptional overview of molecular imaging. A practical approach is the common thread running through the whole book, which at the same time covers detailed background information reaching deep into the molecular and cellular level. Ideas about how molecular imaging will develop in the near future are a particular highlight. This should be of special interest, as the contributors are members of leading research groups from all over the world.

    A fast parallel hyperspectral coded aperture algorithm for compressive sensing using OpenCL

    In this paper, we develop a fast implementation of a hyperspectral coded aperture (HYCA) algorithm on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems, which covers a wide variety of devices, from dense multicore systems by major manufacturers such as Intel or ARM to accelerators such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), the Intel Xeon Phi and other custom devices. Our proposed implementation of HYCA significantly reduces its computational cost. Our experiments have been conducted using simulated data and reveal considerable acceleration factors. Implementations of this kind, written in the same descriptive language on different architectures, are very important in order to properly assess the feasibility of using heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
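
    For readers unfamiliar with the portability argument, the sketch below shows the kind of single-source OpenCL pattern the paper relies on, here through the PyOpenCL bindings: the same kernel source can be built and launched on whatever CPU, GPU or accelerator the OpenCL runtime exposes. The kernel itself is a placeholder (a per-element squared error), not the HYCA algorithm.

```python
import numpy as np
import pyopencl as cl

# Build a context on whatever OpenCL device is available (CPU, GPU, accelerator).
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Placeholder element-wise kernel; the real HYCA kernels are more involved,
# this only shows the portable build-and-launch pattern.
src = """
__kernel void sq_err(__global const float *a,
                     __global const float *b,
                     __global float *out) {
    int gid = get_global_id(0);
    float d = a[gid] - b[gid];
    out[gid] = d * d;
}
"""
prg = cl.Program(ctx, src).build()

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg.sq_err(queue, (n,), None, a_buf, b_buf, out_buf)   # one work-item per element
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print("mean squared error:", out.mean())
```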