
    Structural and Functional Biomedical Imaging Using Polarization-Based Optical Coherence Tomography

    University of Minnesota Ph.D. dissertation. August 2015. Major: Biomedical Engineering. Advisor: Taner Akkin. 1 computer file (PDF); x, 103 pages. Biomedical imaging has had an enormous impact on medicine and research. Numerous imaging modalities cover a wide range of spatial and temporal scales and penetration depths, along with indicators of function and disease. As these imaging technologies mature, the quality of the images they produce increases, resolving finer details with greater contrast at higher speeds, which aids faster, more accurate diagnosis in the clinic. In this dissertation, polarization-based optical coherence tomography (OCT) systems are developed and used to image biological structure and function with greater speed, signal-to-noise ratio (SNR), and stability. OCT images with micrometer-scale spatial resolution and microsecond-scale temporal resolution. When imaging any sample, feedback is essential to verify the fidelity of the image and the location on the sample being imaged. To increase display frame rates as well as data throughput, field-programmable gate arrays (FPGAs) running custom algorithms were used to realize real-time display and streaming output for continuous acquisition of large datasets from swept-source OCT systems. For spectral-domain (SD) OCT systems, significant increases in SNR were achieved with a custom balanced-detection (BD) OCT system, which doubled the measured interference signal while rejecting common-mode terms. For functional imaging, a real-time directed scanner was introduced to visualize the 3D image of a sample and identify regions of interest prior to recording. With the aid of simulations elucidating the characteristics of functional OCT signals, novel processing methods were also developed to stabilize the imaged sample and to identify possible origins of the measured functional signals. Polarization-sensitive OCT was used to image cardiac tissue before and after clearing to identify the regions of vascular perfusion from a coronary artery. The resulting 3D image visualizes the perfusion boundaries of the tissue that would be damaged by a myocardial infarction, to possibly identify features that lead to fatal cardiac arrhythmias. 3D functional imaging was used to measure retinal activity in response to a light stimulus. In some cases, single-trial responses, measured at the outer segment of the photoreceptor layer, were possible. The morphology and time course of these signals are similar to the intrinsic optical signals reported from phototransduction. Assessing function in the retina could aid in early detection of degenerative diseases of the retina, such as glaucoma and macular degeneration.
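    The SNR gain from balanced detection can be seen in a simple two-detector interferometer model (a sketch for illustration, not taken from the dissertation). With reference and sample arm powers I_r and I_s and interferometric phase \phi, the two detector outputs and their difference are

        \[
        I_{1,2} = I_{\mathrm{DC}} \pm 2\sqrt{I_r I_s}\,\cos\phi,
        \qquad
        I_{\mathrm{BD}} = I_1 - I_2 = 4\sqrt{I_r I_s}\,\cos\phi .
        \]

    Subtracting the two outputs doubles the interference term while the common DC and excess-noise terms cancel, which is the mechanism behind the reported doubling of the measured signal.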

    Applying Artificial Intelligence Planning to Optimise Heterogeneous Signal Processing for Surface and Dimensional Measurement Systems

    The need for in-process measurement has surpassed the processing capability of traditional computer hardware. As Industry 4.0 changes the way modern manufacturing occurs, researchers and industry are turning to hardware acceleration to increase the performance of their signal processing and allow real-time process and quality control. This thesis reviewed Industry 4.0 and the challenges that have arisen from transitioning towards a connected smart factory. It investigated the different hardware acceleration techniques available and the bespoke nature of the software that industry and researchers are being forced towards in the pursuit of greater performance. In addition, the application of hardware acceleration within surface and dimensional instrument signal processing was researched, along with the extent to which it benefits researchers. The collection of algorithms the field uses was examined, revealing significant commonality across multiple instrument types, with work being repeated many times over by different groups. The first use of the Planning Domain Definition Language (PDDL) to optimise heterogeneous signal processing within surface and dimensional measurements is proposed. Optical Signal Processing Workspace (OSPW) is presented as a self-optimising software package using GPGPU acceleration via the Compute Unified Device Architecture (CUDA) for Nvidia GPUs. OSPW was designed from scratch to be easy to use with little to no programming experience, unlike other popular systems such as LabVIEW and MATLAB. It provides an intuitive, easy-to-navigate user interface (UI) that allows a user to select the required signal processing algorithms, display system outputs, control actuation devices, and modify capture device properties. OSPW automatically profiles the execution time of the signal processing algorithms selected by the user, then creates and executes a fully optimised version by using PDDL, an AI planning language, to select the optimum architecture for each signal processing function. OSPW was then evaluated against two case studies, Dispersed Reference Interferometry (DRI) and Line-Scanning Dispersed Interferometry (LSDI). These case studies demonstrated that OSPW achieves at least 21x greater performance than an identical MATLAB implementation, with a further 13% improvement found using PDDL's heterogeneous solution. This novel approach of providing a configurable, self-optimising signal processing library driven by AI planning will provide considerable performance gains to researchers and industrial engineers. With some additional development work, it will save both academia and industry time and money that can be reinvested to further advance surface and dimensional instrumentation research.
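    The profile-then-plan idea can be illustrated with a toy scheduler (a minimal Python sketch under assumed names; OSPW's actual planner is PDDL-based and also accounts for data-transfer costs between architectures):

        # Sketch: profile each pipeline stage on each available backend,
        # then assign every stage to its fastest measured backend.
        import time

        def profile(backends, data, runs=5):
            """Mean execution time of each backend implementation."""
            timings = {}
            for name, impl in backends.items():
                start = time.perf_counter()
                for _ in range(runs):
                    impl(data)
                timings[name] = (time.perf_counter() - start) / runs
            return timings

        def plan(pipeline, data):
            """Greedy plan: fastest backend per stage (ignores transfers)."""
            assignment = {}
            for stage, backends in pipeline.items():
                timings = profile(backends, data)
                assignment[stage] = min(timings, key=timings.get)
            return assignment

        # Hypothetical two-stage pipeline; the "gpu" entries stand in for
        # CUDA kernels that a real system would dispatch.
        data = list(range(10_000))
        pipeline = {
            "window":    {"cpu": lambda d: [x * 0.5 for x in d],
                          "gpu": lambda d: d[:]},
            "magnitude": {"cpu": lambda d: [abs(x) for x in d],
                          "gpu": lambda d: d[:]},
        }
        print(plan(pipeline, data))

    A full planner, as described for OSPW, would encode per-function runtimes and host-device transfer costs as action costs and let the PDDL solver pick the globally cheapest assignment rather than this per-stage greedy choice.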

    FPGA-Cluster – Anwendungsgebiete und Kommunikationsstrukturen (FPGA Clusters: Application Areas and Communication Structures)

    Romoth J. FPGA-Cluster – Anwendungsgebiete und Kommunikationsstrukturen. Bielefeld: Universität Bielefeld; 2018. Advances in silicon semiconductor fabrication enable high integration densities and thus the design of powerful digital logic-processing elements. Highly parallel, adaptable, flexible architectures such as field-programmable gate arrays (FPGAs) can solve a wide variety of problems. Thanks to this parallelism, FPGAs can meet hard real-time bounds in their computations even at clock rates that are comparatively low next to the highly specialized dedicated circuits of other systems. Moreover, because the clock rate contributes proportionally to a circuit's dynamic power dissipation, their energy efficiency is considerably higher. Nevertheless, some FPGA application scenarios demand such a large number of logic resources that only bundling several FPGAs into a networked cluster allows efficient processing. This thesis works out the requirements for an FPGA-cluster solution. A survey of the typical application fields of reconfigurable logic systems identifies the basic prerequisites that a universally applicable FPGA-cluster architecture must fulfil. The communication infrastructure between the individual FPGAs in the cluster, in particular, faces high demands on flexibility. Adaptability to the individual requirements of the deployed algorithms is therefore, alongside data rate and latency, a core element in the development of the FPGA cluster. To evaluate system designs, a model is developed that enables comparisons on the basis of communication structures. A further optimization of the graph describing the cluster's interconnect minimizes the latency of data transfers and thus increases overall system performance. The identified requirements for a flexible, modular, and scalable FPGA-cluster system are implemented in this work, yielding the RAPTOR-XPress FPGA cluster, which is additionally designed for multi-user operation to increase resource efficiency: FPGAs left unused by one application can be employed in parallel for other tasks. In a joint effort of several projects of the Cognitronics and Sensor Systems group at Bielefeld University, an example setup with 16 RAPTOR-XPress carrier systems and 64 FPGAs, totalling 44,359,680 logic-cell equivalents and 256 GB of local memory, has been realized. The topology-optimized interconnect structures achieve a logic density 28% higher than comparable systems, which, together with the achievable data rate of 16 x 11.5 Gbit/s, demonstrates the performance of the FPGA cluster's communication infrastructure.
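    The graph-based comparison of interconnect topologies can be approximated with standard graph metrics (a Python sketch assuming the networkx package; not the thesis's actual model, which also weighs data rates):

        # Compare candidate 64-FPGA interconnect topologies by hop counts,
        # a rough proxy for worst-case and average communication latency.
        import networkx as nx

        def metrics(g):
            return {"diameter": nx.diameter(g),
                    "avg_hops": round(nx.average_shortest_path_length(g), 2)}

        ring  = nx.cycle_graph(64)
        torus = nx.grid_2d_graph(8, 8, periodic=True)  # 8x8 2D torus
        cube  = nx.hypercube_graph(6)                  # 2^6 = 64 nodes

        for name, g in [("ring", ring), ("torus", torus), ("hypercube", cube)]:
            print(name, metrics(g))

    Lower diameter and average hop count translate into lower transfer latency, which is the quantity the topology optimization in the thesis minimizes.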

    Understanding Quantum Technologies 2022

    Understanding Quantum Technologies 2022 is a creative-commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including quantum annealing and quantum simulation paradigms, history, science, research, implementation and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, quantum technologies' societal impact, and even quantum fake sciences. The main audiences are computer science engineers, developers and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021. Comment: 1132 pages, 920 figures, Letter format

    Software for Exascale Computing - SPPEXA 2016-2019

    This open access book summarizes the research done and results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer's series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA's first funding phase, and provides an overview of SPPEXA's contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. Because the DWT provides both octave-scale frequency information and the spatial or temporal localization of the analyzed signal, it is constantly applied to increasingly advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in DWT algorithms and applications. It covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low-bit-rate image compression, a low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters consist of both tutorial and highly advanced material; the book is therefore intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
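    As a concrete illustration of the octave-band split underlying all of these methods, a single DWT level separates a signal into half-rate approximation (low-pass) and detail (high-pass) coefficients (a minimal sketch using the PyWavelets package, which is not part of the book):

        # One-level discrete wavelet transform of a two-tone test signal.
        import numpy as np
        import pywt

        t = np.linspace(0, 1, 512, endpoint=False)
        x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 64 * t)

        # Daubechies-4 analysis: cA carries the low-frequency octave,
        # cD the high-frequency octave, each at roughly half the length.
        cA, cD = pywt.dwt(x, "db4")
        print(len(x), len(cA), len(cD))  # 512 259 259 (boundary extension)

    Repeating the split on cA (e.g. with pywt.wavedec) yields the multi-scale, octave-spaced analysis referred to above.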

    XSEDE: eXtreme Science and Engineering Discovery Environment Third Quarter 2012 Report

    The Extreme Science and Engineering Discovery Environment (XSEDE) is the most advanced, powerful, and robust collection of integrated digital resources and services in the world. It is an integrated cyberinfrastructure ecosystem with singular interfaces for allocations, support, and other key services that researchers can use to interactively share computing resources, data, and expertise. This is a report of project activities and highlights from the third quarter of 2012. National Science Foundation, OCI-105357