
    Software Porting of a 3D Reconstruction Algorithm to Razorcam Embedded System on Chip

    A method is presented to calculate depth information for a UAV navigation system from keypoints in two consecutive image frames, using a monocular camera sensor as input and the OpenCV library. The method was first implemented in software and run on a general-purpose Intel CPU, then ported to the RazorCam Embedded Smart-Camera System and run on an ARM CPU onboard the Xilinx Zynq-7000. The results of performance and accuracy testing of the software implementation are shown and analyzed, demonstrating a successful port of the software to the RazorCam embedded system on chip that could potentially be used onboard a UAV with tight constraints on size, weight, and power. The potential impacts will be seen through the continuation of this research in the Smart ES lab at the University of Arkansas.
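    The abstract names OpenCV but includes no code; a minimal sketch of the general approach it describes, matching keypoints across two consecutive monocular frames and triangulating depth, might look as follows (the intrinsic matrix K, the choice of ORB features, and all parameter values are illustrative assumptions, not details from the thesis):

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics; in practice these come from calibration.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def sparse_depth(img1, img2):
    """Estimate sparse (up-to-scale) depth from two consecutive monocular frames."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match binary ORB descriptors with Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover relative camera motion between frames via the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the RANSAC inliers; monocular depth is only known up to scale.
    inl = mask.ravel() > 0
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return pts4d[2] / pts4d[3]  # z-coordinates after dehomogenization
```

    Note the inherent monocular limitation the sketch makes explicit: without an external scale reference, the triangulated depths are correct only up to a global scale factor.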

    Optical simulation study for high resolution monolithic detector design for TB-PET

    Background: The main limitations in positron emission tomography (PET) are the limited sensitivity and relatively poor spatial resolution. The administered radioactive dose and scan time could be reduced by increasing system sensitivity with a total-body (TB) PET design. The second limitation, spatial resolution, mainly originates from the specific design of the detectors that are implemented. In state-of-the-art scanners, the detectors consist of pixelated crystal arrays, where each individual crystal is isolated from its neighbors with reflector material. To obtain higher spatial resolution the crystals can be made narrower, which inevitably leads to more inter-crystal scatter and larger dead space between the crystals. A monolithic detector design shows superior characteristics in (i) light collection efficiency (no gaps), (ii) timing, as it significantly reduces the number of reflections and therefore the path length of each scintillation photon, and (iii) spatial resolution (including better depth-of-interaction (DOI) estimation). The aim of this work is to develop a precise simulation model based on measured crystal data and use this tool to find the limits in spatial resolution of a monolithic detector for use in TB-PET.

    Materials and methods: A detector (Fig. 1) based on a monolithic 50x50x16 mm3 lutetium-(yttrium) oxyorthosilicate (L(Y)SO) scintillation crystal coupled to an 8x8 array of 6x6 mm2 silicon photomultipliers (SiPMs) is simulated with GATE. A recently implemented reflection model for scintillation light allows simulations based on measured surface data (1). The modeled surfaces include a black painted rough finish on the crystal sides (16x50 mm2) and a specular reflector attached to a polished crystal top (50x50 mm2). Maximum likelihood estimation (MLE) is used for positioning the events. Calibration data is therefore obtained by generating 3,000 photoelectric events at given calibration positions (Fig. 1); Compton scatter is not (yet) included. In a next step, the calibration data is organized in three layers based on the exact depth coordinate in the crystal (i.e. DOI assumed to be known). For evaluating the resolution, the full width at half maximum (FWHM) is estimated at the irradiated positions of Fig. 2 as a mean of all profiles in the vertical and horizontal directions. Next, uniformity is evaluated by simulating 200k events from a flood source placed in the calibrated area.

    Results: For the irradiation pattern in Fig. 2, the resolution in terms of FWHM when applying MLE is 0.86±0.13 mm (Fig. 3a). Nevertheless, there are major artifacts, also at non-irradiated positions. By positioning the events based on three DOI-based layers, it can be seen that the events closest to the photodetector introduce the largest artifacts (Fig. 3b-d). The FWHM improves for Layers 1 and 2, to 0.69±0.04 mm and 0.59±0.02 mm, respectively. Layer 3 introduces major artifacts into the flood map, as events are positioned at completely different locations from the initial irradiation; a FWHM estimate is thus not useful. The uniformity (Fig. 4) degrades with proximity to the photodetector. The map in Fig. 4c shows that the positioning accuracy depends not only on DOI but also on the position in the plane parallel to the photodetector array.

    Conclusions: A simulation model for a monolithic PET detector with good characteristics for TB-PET systems was developed with GATE. A first estimate of the spatial resolution and uniformity was given, pointing out the importance of depth-dependent effects. Future studies will include several steps towards more realistic simulations, e.g. surface measurements of our specific crystals for the optical surface model and inclusion of the Compton effect.
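    The event positioning step above compares each measured light distribution to the calibration data by maximum likelihood. A minimal sketch of that idea, assuming the calibration events have been reduced to a mean light distribution per position and assuming Poisson-distributed SiPM counts (both assumptions are illustrative, not from the paper):

```python
import numpy as np

def mle_position(event_counts, calib_means, calib_positions):
    """Position a scintillation event by maximum likelihood.

    event_counts   : (64,) photon counts on the 8x8 SiPM array for one event
    calib_means    : (P, 64) mean light distribution at each calibration position
    calib_positions: (P, 2) known (x, y) of each calibration position
    """
    # Poisson log-likelihood of the event under each calibration position,
    # dropping the log(k!) term, which is constant across positions.
    mu = np.clip(calib_means, 1e-9, None)        # avoid log(0)
    loglik = (event_counts * np.log(mu) - mu).sum(axis=1)
    return calib_positions[np.argmax(loglik)]
```

    Organizing the calibration data into DOI layers, as the paper does, would amount to running this search against the layer-specific `calib_means` only.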

    Study and Realization of a VLSI Architecture of a Processor for the Implementation of the FFT Algorithm

    Since the 5G connection standard is utilized by a rising number of devices and is evolving to meet new needs and requirements, it has become crucial to study and design new, faster, and more efficient transmitters and receivers. A fundamental role in the 5G connection is played by Orthogonal frequency-division multiplexing (OFDM), a modulation scheme. Since the demodulation is based on the Fourier transform, the purpose of this thesis is to realize a processor capable of implementing FFT and DFT algorithms on variable-length sequences that complies with the 5G standard criteria. In order to do so, an analysis of the International Telecommunication Union report ITU-R M.2410-0 was first conducted to define the minimum requirements for the processor. Then, a study of the state of the art for similar devices led to the development of a VLSI architecture suitable for the application. An RTL version of the architecture was implemented in VHDL and tested.
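    As background on the core operation such a processor implements, here is a minimal recursive radix-2 Cooley-Tukey FFT in Python (purely illustrative; the thesis's actual design is an RTL architecture in VHDL and is not reproduced here):

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Twiddle factors W_n^k = exp(-2j*pi*k/n) combine the two half-size DFTs.
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

# Example: 8-point transform of a rectangular pulse.
print(fft([1, 1, 1, 1, 0, 0, 0, 0]))
```

    Hardware FFT processors typically unroll this recursion into pipelined butterfly stages, which is what makes variable-length, high-throughput 5G demodulation feasible.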

    Heterogeneous Acceleration for 5G New Radio Channel Modelling Using FPGAs and GPUs

    The abstract is in the attachment.

    Massively-parallel and concurrent SVM architectures

    This work presents several Support Vector Machine (SVM) architectures developed by the Author with the intent of exploiting the inherent parallel structures and potential concurrency underpinning the SVM's mathematical operation. Two SVM training subsystem prototypes are presented: a brute-force search classification training architecture, and Artificial Neural Network (ANN)-mapped optimisation architectures for both SVM classification training and SVM regression training. This work also proposes and prototypes a set of parallelised SVM Digital Signal Processor (DSP) pipeline architectures. The parallelised SVM DSP pipeline architectures have been modelled in C and implemented in VHDL for synthesis and fitting on an Altera Stratix V FPGA. Each system presented in this work has been applied to a problem domain appropriate to the SVM system's architectural limitations, including the novel application of the SVM as a chaotic and non-linear system parameter-identification tool.

    The SVM brute-force search classification training architecture has been modelled for datasets of 2 dimensions composed of linear and non-linear problems requiring only 4 support vectors, utilising the linear kernel and the polynomial kernel respectively. The system has been implemented in Matlab and non-exhaustively verified using the holdout method with a trivial linearly separable classification problem dataset and a trivial non-linear XOR classification problem dataset. While the architecture was a feasible design for software-based implementations targeting 2-dimensional datasets, the architectural complexity and unmanageable number of parallelisable operations introduced by increasing data dimensionality and the number of support vectors subsequently led the Author to pursue different parallelised-architecture strategies.

    Two distinct ANN-mapped optimisation strategies developed and proposed for SVM classification training and SVM regression training have been modelled in Matlab; the architectures have been designed such that a dataset of any dimensionality can be applied by configuring the appropriate dimensionality and support vector parameters. Through Monte-Carlo testing using the datasets examined in this work, the gain parameters inherent in the architectural design of the systems were found to be difficult to tune, and system convergence to acceptable sets of training support vectors was not achieved. The ANN-mapped optimisation strategies were thus deemed inappropriate for SVM training with the applied datasets without further design effort and architectural modification work.

    The parallelised SVM DSP pipeline architecture prototypes' dataset dimensionality, support vector set counts, and latency ranges follow. In each case the Field Programmable Gate Array (FPGA) pipeline prototype latency unsurprisingly outclassed the corresponding C software model execution times by at least 3 orders of magnitude. The SVM classification training DSP pipeline FPGA prototypes are compatible with datasets spanning 2 to 8 dimensions and support vector sets of up to 16 support vectors, with a pipeline latency ranging from 0.18 to 0.28 microseconds. The SVM classification function evaluation DSP pipeline FPGA prototypes are compatible with datasets spanning 2 to 8 dimensions and support vector sets of up to 32 support vectors, with a pipeline latency ranging from 0.16 to 0.24 microseconds. The SVM regression training DSP pipeline FPGA prototypes are compatible with datasets spanning 2 to 8 dimensions and support vector sets of up to 16 support vectors, with a pipeline latency ranging from 0.20 to 0.30 microseconds. The SVM regression function evaluation DSP pipeline FPGA prototypes are compatible with datasets spanning 2 to 8 dimensions and support vector sets of up to 16 support vectors, with a pipeline latency ranging from 0.20 to 0.30 microseconds.

    Finally, utilising LIBSVM training and the parallelised SVM DSP pipeline function evaluation architecture prototypes, SVM classification and SVM regression were successfully applied to Rajkumar's oil and gas pipeline fault detection and failure system legacy dataset, yielding excellent results. Also utilising LIBSVM training and the parallelised SVM DSP pipeline function evaluation architecture prototypes, both SVM classification and SVM regression were applied to several chaotic systems as a feasibility study into the application of the SVM machine learning paradigm for chaotic and non-linear dynamical system parameter identification. SVM classification was applied to the Lorenz Attractor and an ANN-based chaotic oscillator to a reasonably acceptable degree of success. SVM classification was applied to the Mackey-Glass attractor, yielding poor results. SVM regression was applied to the Lorenz Attractor and an ANN-based chaotic oscillator, yielding average but encouraging results. SVM regression was applied to the Mackey-Glass attractor, yielding poor results.
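    The function evaluation pipelines above all revolve around the standard SVM decision function f(x) = Σᵢ αᵢyᵢK(xᵢ, x) + b, whose per-support-vector terms are mutually independent and therefore pipeline-friendly. A minimal Python sketch of that evaluation (the kernel choice and all names are illustrative; the thesis's implementations are in C and VHDL):

```python
import numpy as np

def poly_kernel(sv, x, degree=2, coef0=1.0):
    """Polynomial kernel K(sv, x) = (sv . x + coef0) ** degree."""
    return (sv @ x + coef0) ** degree

def svm_decision(x, support_vectors, dual_coefs, bias):
    """Evaluate f(x) = sum_i dual_coefs[i] * K(sv_i, x) + bias.

    dual_coefs folds the labels into the multipliers (alpha_i * y_i),
    matching LIBSVM's convention. Each kernel term is independent of the
    others, which is what a parallel hardware pipeline exploits.
    """
    terms = [c * poly_kernel(sv, x) for sv, c in zip(support_vectors, dual_coefs)]
    return sum(terms) + bias

# Classification takes the sign of f(x); regression uses the value directly.
```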

    Real-Time Trigger and Online Data Reduction Based on Machine Learning Methods for Particle Detector Technology

    Modern particle accelerator experiments generate immense volumes of data at runtime. Storing the entire data stream quickly exceeds the available budget for the data readout infrastructure. This problem is usually addressed by a combination of trigger and data reduction mechanisms, both placed as close as possible to the detectors so that the desired reduction of the outgoing data rates takes effect as early as possible. The methods traditionally used in such systems, however, struggle to achieve efficient reduction in modern experiments, partly because of the complex distributions of the background events that occur. When designing the detector readout, this situation is aggravated by the fact that the properties of the accelerator and detector under high-luminosity operation are unknown in advance. A robust and flexible algorithmic alternative is therefore needed, and machine learning methods can provide one. Since such trigger and data reduction systems must operate under demanding conditions, such as a tight latency budget, a large number of data transmission links, and general real-time requirements, FPGAs are often used as the technological basis for their implementation. Within this thesis, several FPGA-based approaches addressing these problems for the Belle II experiment were developed and implemented; they are presented and discussed throughout this work.
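    As a schematic illustration of the trigger idea described above (a hypothetical toy, not the thesis's Belle II design), a per-event classifier score can gate which events are read out, so that only a fraction of the raw data ever leaves the detector front-end:

```python
import numpy as np

def ml_trigger(event_features, weights, bias, threshold=0.5):
    """Toy single-layer ML trigger: keep an event only if the classifier
    score exceeds a threshold, reducing the outgoing data rate."""
    score = 1.0 / (1.0 + np.exp(-(event_features @ weights + bias)))  # sigmoid
    return score > threshold

# Example with made-up parameters: one event with three summary features.
keep = ml_trigger(np.array([0.9, 0.1, 0.4]), np.array([2.0, -1.0, 0.5]), -0.3)
```

    In an FPGA deployment the same computation would be fixed-point and fully pipelined to meet the latency budget; the Python form only shows the decision logic.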

    Full stack development toward a trapped ion logical qubit

    Quantum error correction is a key step toward the construction of a large-scale quantum computer, preventing small infidelities in quantum gates from accumulating over the course of an algorithm. Detecting and correcting errors is achieved by using multiple physical qubits to form a smaller number of robust logical qubits. The physical implementation of a logical qubit requires multiple qubits on which high-fidelity gates can be performed. The project aims to realize a logical qubit based on ions confined on a microfabricated surface trap. Each physical qubit will be a microwave dressed-state qubit based on 171Yb+ ions. Gates are intended to be realized through RF and microwave radiation in combination with magnetic field gradients. The project vertically integrates the software stack down to the hardware compilation layers in order to deliver, in the near future, a fully functional small-scale device demonstrator. This thesis presents novel results on multiple layers of a full-stack quantum computer model. On the hardware level, a robust quantum gate is studied and ion displacement over the X-junction geometry is demonstrated. The experimental organization is optimized through automation and compressed waveform data transmission. A new quantum assembly language dedicated purely to trapped-ion quantum computers is introduced. The demonstrator is aimed at testing implementations of quantum error correction codes while preparing for larger-scale iterations.

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of development well. They include biometric sample quality, privacy-preserving and cancelable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters, divided into four sections: biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    An Image Processing Approach Toward a Visual Intra-Cortical Stimulator

    Visual impairment may be caused by various factors, ranging from trauma and birth defects to disease. To date there are no viable medical treatments for this condition; hence biomedical approaches are being employed to overcome it. The Cortivision team has been working on an intra-cortical implant that can bypass the retina and optic nerve and directly stimulate the visual cortex. In this work we aimed to implement a modular, reusable, and parameterizable object recognition system that "simplifies" video data prior to stimulation, thereby opening new horizons for partial vision restoration, navigational, and even recognition abilities. We identified the Scale Invariant Feature Transform (SIFT) algorithm as a robust candidate for our application's needs. A multithreaded software prototype of the SIFT and a Lucas-Kanade tracker was implemented to ensure proper overall operation. The feature extractor, the difference-of-Gaussians (DoG) part of the SIFT, being the most computationally expensive, was migrated to an FPGA implementation due to real-time constraints that cannot be met on a host machine. The VHDL implementation is highly parameterizable for different application needs and tradeoffs. We introduce a novel architecture employing the sub-kernel trick to reduce resource usage compared to preexisting architectures while remaining comparably accurate to a software floating-point implementation. In order to alleviate transmission bottlenecks, the system also includes a new parallel Huffman encoder design capable of lossless compression of both images and scale-space image pyramids, taking spatial and scale data correlations into account during the predictor phase. The encoder achieved compression ratios of 27.3% on the Caltech-256 dataset. Furthermore, a new camera and fiducial marker setup based on image processing is proposed to address the phosphene map estimation problem, which affects the quality of the final stimulation perceived by the patient.

    Résumé — Introduction and objectives: Visual impairment, defined as the total or partial loss of vision, is currently not medically treatable. Modern biomedical approaches are used to electrically stimulate vision; these approaches can be divided into three main groups: the first targeting retinal implants Humayun et al. (2003), Kim et al. (2004), Chow et al. (2004), Palanker et al. (2005), Toledo et al. (2005), Yanai et al. (2007), Winter et al. (2007), Zrenner et al. (2011); the second targeting optic nerve implants Veraart et al. (2003), Sakaguchi et al. (2009); and the third targeting intra-cortical implants Doljanu and Sawan (2007), Coulombe et al. (2007), Srivastava et al. (2007). The main drawback of the first two groups is that they are not generic enough to overcome the majority of visual impairment diseases, since they depend on the patient having an intact optic nerve and/or a partially functional retina, which is not the case for the third group. The Polystim Neurotechnologies Laboratory team is currently working on an intra-cortical implant that directly stimulates the primary visual cortex (region V1); the name of the overall project is Cortivision. The system uses a camera, an image processing module, an RF (radio frequency) transmitter, and an implantable stimulator. This method is robust and generic because it bypasses the eye and the optic nerve. One of the major challenges is the image processing required to "simplify" the data prior to stimulation, extracting the useful information while discarding superfluous data. The pixels captured by the camera do not map one-to-one onto the visual cortex as in a rectangular image; rather, they are mapped onto a complex map of "phosphenes" Coulombe et al. (2007); Srivastava et al. (2007). Phosphenes are points of light that appear in the patient's visual field when the brain is electrically stimulated. These points change in size, brightness, and location depending on how the electrical stimulation is performed (i.e. changes in frequency, voltage, duration, etc.) and even on the physical placement of the electrodes in the visual cortex. Current approaches aim to stimulate monochrome phosphene images at low resolution. Given this, we expect a rather low-quality vision that makes activities such as navigating, interpreting objects, or reading difficult for the patient. This is mainly due to the complexity of calibrating the phosphene map and its correspondence, and also to the non-triviality of simplifying the data from the camera images so that only the relevant information is retained. Figure 1.1 is an example demonstrating the non-triviality of transforming a grayscale image into a phosphene stimulation.
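    The stage this work moves to the FPGA is SIFT's difference-of-Gaussians feature extractor; a minimal Python sketch of that computation (sigma values and the use of OpenCV's GaussianBlur are illustrative assumptions, not the thesis's VHDL design):

```python
import cv2
import numpy as np

def dog_octave(gray, num_scales=4, sigma0=1.6, k=2 ** 0.5):
    """Build one octave of SIFT's difference-of-Gaussians (DoG) scale space.

    Blurs the image at geometrically spaced sigmas and subtracts adjacent
    levels; local extrema of the resulting DoG stack are SIFT keypoint
    candidates.
    """
    gray = gray.astype(np.float32)
    blurred = [cv2.GaussianBlur(gray, (0, 0), sigma0 * (k ** i))
               for i in range(num_scales + 1)]
    return [blurred[i + 1] - blurred[i] for i in range(num_scales)]
```

    Because every level is a separable Gaussian convolution followed by a subtraction, the computation maps naturally onto parallel multiply-accumulate hardware, which is why this stage is the usual candidate for FPGA offload.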