21 research outputs found

    FPGA Acceleration of Domain-specific Kernels via High-Level Synthesis

    The abstract is provided in the attachment.

    Recent Advances in Embedded Computing, Intelligence and Applications

    The latest proliferation of Internet of Things deployments and edge computing, combined with artificial intelligence, has led to exciting new application scenarios in which embedded digital devices are essential enablers. Moreover, new powerful and efficient devices are appearing to cope with workloads formerly reserved for the cloud, such as deep learning. These devices allow processing close to where data are generated, avoiding bottlenecks due to communication limitations. The efficient integration of hardware, software, and artificial intelligence capabilities deployed in real sensing contexts empowers the edge intelligence paradigm, which will ultimately contribute to fostering the offloading of processing functionalities to the edge. In this Special Issue, researchers have contributed nine peer-reviewed papers covering a wide range of topics in the area of edge intelligence. Among them are hardware-accelerated implementations of deep neural networks, IoT platforms for extreme edge computing, neuro-evolvable and neuromorphic machine learning, and embedded recommender systems.

    Técnicas de compresión de imágenes hiperespectrales sobre hardware reconfigurable (Hyperspectral Image Compression Techniques on Reconfigurable Hardware)

    Doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, defended on 18-12-2020. Sensors are nowadays present in all aspects of human life. When possible, sensors are used remotely: this is less intrusive, avoids interference in the measuring process, and is more convenient for the scientist. One of the most recurrent concerns of recent decades has been the sustainability of the planet and how the changes it is facing can be monitored. Remote sensing of the Earth has seen an explosion in activity, with satellites now being launched on a weekly basis to perform remote analysis of the Earth, and planes surveying vast areas for closer analysis...

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In restricted scenarios, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot, so practical implementation aspects have to be taken into account. The Special Issue paper collection on which this book is based touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute data arrays of extremely large size with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.
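
    The inter-band redundancy that lossless hyperspectral coders exploit can be illustrated in a few lines of Python. Below is a minimal sketch assuming a synthetic cube and a simple previous-band predictor; the empirical entropy of the residuals stands in for the rate an entropy coder could achieve. Real coders of the kind discussed in the book are considerably more elaborate.

    import numpy as np

    def interband_residuals(cube):
        """cube: (bands, rows, cols) integer array. Predict each band
        from the previous one and return the prediction residuals."""
        residuals = np.empty_like(cube)
        residuals[0] = cube[0]                # first band stored as-is
        residuals[1:] = cube[1:] - cube[:-1]  # previous-band predictor
        return residuals

    def empirical_entropy(x):
        """Bits per sample of the value distribution, a proxy for the
        rate an entropy coder could achieve on the decorrelated data."""
        _, counts = np.unique(x, return_counts=True)
        p = counts / x.size
        return float(-(p * np.log2(p)).sum())

    # Synthetic cube: one spatial pattern shared across 50 strongly
    # correlated bands (an illustrative assumption, not real data).
    rng = np.random.default_rng(0)
    base = rng.integers(0, 4096, size=(1, 64, 64))
    cube = (base + rng.integers(-8, 8, size=(50, 64, 64))).astype(np.int32)

    print("raw entropy (bits/sample)     :", empirical_entropy(cube))
    print("residual entropy (bits/sample):", empirical_entropy(interband_residuals(cube)))

    The residual entropy drops far below the raw entropy; that gap is exactly the redundancy a lossless coder exploits.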

    Machine-Learning Based Microwave Sensing: A Case Study for the Food Industry

    Despite the meticulous attention of food industries to prevent hazards in packaged goods, some contaminants may still elude the controls. Indeed, standard methods, like X-rays, metal detectors and near-infrared imaging, cannot detect low-density materials. Microwave sensing is an alternative method that, combined with machine learning classifiers, can tackle these deficiencies. In this paper we present a design methodology applied to a case study in the food sector. Specifically, we offer a complete flow from microwave dataset acquisition to deployment of the classifiers on real-time hardware, and we show the effectiveness of this method in terms of detection accuracy. In the case study, we apply the machine-learning-based microwave sensing approach to food jars flowing at high speed on a conveyor belt. First, we collected a dataset from hazelnut-cocoa spread jars which were uncontaminated or contaminated with various intrusions, including low-density plastics. Then, we performed a design space exploration to choose the best MLPs as binary classifiers, which proved to be exceptionally accurate. Finally, we selected the two most lightweight models for implementation on both an ARM-based CPU and an FPGA SoC, to cover a wide range of possible latency requirements, from loose to strict, in order to detect contaminants in real time. The proposed design flow facilitates the design of the FPGA accelerator that might be required to meet the timing requirements by using a high-level approach, which may suit microwave domain experts without specific digital hardware skills.
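
    The end of the flow described above (microwave responses in, a contaminated/uncontaminated decision out) can be sketched with a small MLP. The feature dimension, layer sizes, and synthetic data below are illustrative assumptions; the paper's actual dataset and topologies come from its design space exploration.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    n_jars, n_features = 1000, 64  # e.g. magnitude samples across frequency
    X_clean = rng.normal(0.0, 1.0, (n_jars // 2, n_features))
    X_contam = rng.normal(0.3, 1.0, (n_jars // 2, n_features))  # shifted response
    X = np.vstack([X_clean, X_contam])
    y = np.r_[np.zeros(n_jars // 2), np.ones(n_jars // 2)]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # A small topology keeps the model light enough for an ARM CPU or FPGA SoC.
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32, 16),
                                      max_iter=500, random_state=0))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))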

    Synthetic Aperture Radar Image Formation and Processing on an MPSoC

    Satellite remote sensing acquisitions are usually processed after downlink to a ground station. The satellite travel time to the ground station adds to the total latency, increasing the time until a user can obtain the processing results. Performing the processing and information extraction on board the satellite can significantly reduce this time. In this study, synthetic aperture radar (SAR) image formation as well as ship detection and extreme weather detection were implemented on a multiprocessor system on a chip (MPSoC). Processing steps with high computational complexity were ported to run on the programmable logic (PL), achieving significant speed-up through a high degree of parallelization and pipelining as well as efficient memory accesses. Steps with lower complexity run on the processing system (PS), allowing for higher flexibility and reducing the need for resources in the PL. The achieved processing times for an area covering 375 km² were approximately 4 s for image formation, 16 s for ship detection, and 31 s for extreme weather detection. These developments, combined with new downlink concepts for low-rate information data streams, show that the provision of satellite remote sensing results to end users in less than 5 min after acquisition is possible using an adequately equipped satellite.
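
    The compute-heavy core of SAR image formation is matched filtering (range compression), which maps naturally onto the FFT/multiply pipelines that the study places in the PL. The following minimal sketch uses assumed chirp parameters and a toy two-target echo; it illustrates the structure of the operation, not the study's implementation.

    import numpy as np

    fs, T, B = 100e6, 10e-6, 50e6  # sample rate, pulse length, bandwidth (assumed)
    t = np.arange(int(T * fs)) / fs
    chirp = np.exp(1j * np.pi * (B / T) * t**2)  # linear-FM reference pulse

    # Simulated raw echo: two point targets at different range delays plus noise.
    n = 4096
    raw = np.zeros(n, dtype=complex)
    for delay in (500, 1500):
        raw[delay:delay + chirp.size] += chirp
    raw += 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))

    # Matched filtering in the frequency domain: forward FFT, pointwise
    # multiply with the conjugate reference spectrum, inverse FFT. This
    # FFT/multiply structure parallelizes and pipelines well in hardware.
    H = np.conj(np.fft.fft(chirp, n))
    compressed = np.fft.ifft(np.fft.fft(raw) * H)
    print("target peaks at bins:", sorted(np.argsort(np.abs(compressed))[-2:]))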

    Design Techniques of Parallel Accelerator Architectures for Real-Time Processing of Learning Algorithms

    The current doctoral thesis focuses on Convolutional Neural Networks (CNNs) for computer vision applications and particularly on the deployment of the inference process of CNNs to embedded accelerators suitable for edge computing. The objective of the thesis is to address several challenges regarding the optimization techniques of CNNs towards their edge deployment, as well as challenges in the field of CNN accelerator architecture design techniques. In this direction, the thesis focuses on different deep learning applications, including on-board payload data processing as well as solar irradiance forecasting, and makes distinct contributions to four different challenges in the fields of CNN optimization and CNN accelerator design. First, the thesis contributes to the existing literature regarding image processing techniques and deep learning-based image regression for solar irradiance estimation and forecasting. It proposes an image processing method which is based on accurate sun localization in sky images and which utilizes the solar angles and the mapping functions of the lens of the sky imager camera. When the proposed method is applied to the sky images before these are processed by the image regression CNNs, the results from the extensive study that the thesis conducts show that the method can improve the accuracy of the irradiance values that the CNNs produce in all cases, while introducing only minimal computational overhead. Next, the thesis focuses on the task of deep learning-based semantic segmentation in order to enable cloud detection from satellite imagery in on-board payload data processing applications. In particular, the thesis proposes a lightweight CNN model architecture, based on the U-Net architecture, which aims at providing an improved trade-off between model size and binary semantic segmentation performance. The proposed model utilizes several CNN techniques in order to reduce the number of parameters and operations required for inference while at the same time maintaining satisfactory performance. The thesis conducts a study among CNN models for cloud detection, which are evaluated on the same test dataset as the proposed model, and thus showcases the advantages of the proposed model. Then, the thesis targets the efficient porting of the inference process of image processing CNNs to edge-oriented embedded accelerator devices. The thesis opts for CNN acceleration based on Field-Programmable Gate Arrays (FPGAs) and contributes the adopted development flow, which utilizes the Xilinx Vitis AI framework.
Apart from exploring the capabilities of Vitis AI, including its advanced quantization solutions, the thesis also showcases an approach for accelerating the different processes of a single computer vision task by taking advantage of the heterogeneous resources of the FPGA. The execution time and throughput results of the CNN models on the FPGA, for the tasks of binary semantic segmentation for cloud detection as well as image regression for irradiance estimation, showcase the real-time processing capabilities of the accelerator. Finally, the thesis contributes the design details of a bi-directional interfacing system for high-throughput and fault-tolerant image transfers between deep learning embedded accelerators, in the context of on-board payload data processing architectures. The interfacing system is developed for interfacing an FPGA with the Intel Movidius Myriad 2, and the extensive testing campaign, based on both commercial and prototype hardware platforms, shows that it can achieve duplex image data transfer rates of up to 2.4 Gbps.
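
    The sun-localization preprocessing can be pictured with a short sketch: given the solar zenith and azimuth angles, the lens mapping function of an upward-looking fisheye camera predicts where the Sun falls in the image. The equidistant mapping r = f_pix * zenith and the camera constants below are assumptions for illustration; the thesis relies on the calibrated mapping functions of its own sky imager.

    import numpy as np

    def sun_pixel(zenith_deg, azimuth_deg, cx=960, cy=960, f_pix=610.0):
        """Map solar zenith/azimuth angles to (x, y) pixel coordinates for
        an upward-looking fisheye camera centered at (cx, cy)."""
        zen = np.radians(zenith_deg)
        azi = np.radians(azimuth_deg)
        r = f_pix * zen  # equidistant fisheye: radius grows with zenith angle
        return cx + r * np.sin(azi), cy - r * np.cos(azi)

    # Example: mid-morning sun at 55 deg zenith, 120 deg azimuth (clockwise from north).
    x, y = sun_pixel(55.0, 120.0)
    print(f"sun expected near pixel ({x:.0f}, {y:.0f})")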

    Towards Complete Emulation of Quantum Algorithms using High-Performance Reconfigurable Computing

    Quantum computing is a promising technology that can potentially demonstrate supremacy over classical computing in solving specific classically-intractable problems. However, in its current nascent stage, quantum computing faces major challenges. Two of the main challenges are quantum state decoherence and the low scalability of current quantum devices. Decoherence is a process in which the state of the quantum computer is destroyed by interaction with the environment. Decoherence places constraints on the realistic applicability of quantum algorithms, as real-life applications usually require complex equivalent quantum circuits to be realized. For example, encoding classical data on quantum computers for solving I/O- and data-intensive applications generally requires complex quantum circuits that violate decoherence constraints. In addition, current quantum devices are of intermediate scale, having low quantum bit (qubit) counts and often producing inaccurate or noisy measurements. Consequently, benchmarking of existing quantum algorithms and the investigation of new applications are heavily dependent on classical simulations that use costly, resource-intensive computing platforms. Hardware-based emulation has been proposed as a more cost-effective and power-efficient alternative. Hardware-based emulation methods can take advantage of hardware parallelism and acceleration to produce results at higher throughput and with lower power requirements. This work proposes a hardware-based emulation methodology for quantum algorithms, using cost-effective Field Programmable Gate Array (FPGA) technology. The proposed methodology consists of three components that are required for complete emulation of quantum algorithms: the first component models classical-to-quantum (C2Q) data encoding, the second emulates the behavior of quantum algorithms, and the third models the process of measuring the quantum state and extracting classical information, i.e., quantum-to-classical (Q2C) data decoding. The proposed emulation methodology is used to investigate and optimize methods for C2Q/Q2C data encoding/decoding, as well as several important quantum algorithms such as the Quantum Fourier Transform (QFT), the Quantum Haar Transform (QHT), and Quantum Grover’s Search (QGS). This work delivers contributions in terms of reducing the complexity of quantum circuits, extending and optimizing quantum algorithms, and developing new quantum applications. For example, decoherence-optimized circuits for C2Q/Q2C data encoding/decoding are proposed and evaluated using the proposed emulation methodology. Multi-level decomposable forms of optimized QHT circuits are presented and used to demonstrate dimension reduction of high-resolution data. Additionally, a novel extension to the QGS algorithm is proposed to enable search for dynamically changing multi-patterns in unordered data. Finally, a novel quantum application is presented that combines QHT and dynamic multi-pattern QGS to perform pattern recognition using dimension reduction on high-resolution spatio-spectral data. For higher emulation performance and scalability of the framework, hardware design techniques and hardware architectural optimizations are investigated and proposed. The emulation architectures are designed and implemented on a high-performance reconfigurable computer (HPRC). For reference and comparison, implementations of the proposed quantum circuits are also performed on a state-of-the-art quantum computer.
Experimental results show that the proposed hardware architectures enable emulation of quantum algorithms with higher scalability, higher accuracy, and higher throughput compared to existing hardware-based emulators. As a case study, quantum image processing using multi-spectral images is considered for the experimental evaluations. The analysis and results of this work demonstrate that quantum computers and methodologies based on quantum algorithms will be highly useful in realistic data-intensive domains such as remote-sensing hyperspectral imagery and high-energy physics (HEP).
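
    The C2Q and Q2C endpoints that the methodology models can be illustrated in plain numpy: amplitude-encode a classical vector into a quantum state, then recover classical information by sampling measurement outcomes. This is a minimal sketch of the data path only, assuming amplitude encoding; the framework itself emulates these steps in FPGA hardware.

    import numpy as np

    def c2q_amplitude_encode(x):
        """Normalize a classical vector into state amplitudes, zero-padding
        to the next power of two (n qubits span 2**n amplitudes)."""
        n = int(np.ceil(np.log2(len(x))))
        state = np.zeros(2**n, dtype=complex)
        state[:len(x)] = x
        return state / np.linalg.norm(state)

    def q2c_measure(state, shots=10_000, seed=0):
        """Sample measurement outcomes; relative counts estimate the
        probabilities |amplitude|**2 (the Q2C decoding step)."""
        rng = np.random.default_rng(seed)
        probs = np.abs(state) ** 2
        outcomes = rng.choice(len(state), size=shots, p=probs)
        return np.bincount(outcomes, minlength=len(state)) / shots

    state = c2q_amplitude_encode([3.0, 1.0, 2.0])  # 3 values -> 2 qubits
    print("amplitudes squared:", np.round(np.abs(state) ** 2, 3))
    print("shot estimate     :", q2c_measure(state))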