
    Highly Efficient Spectral Calibration Methods for Swept-Source Optical Coherence Tomography

    Recent techniques in optical coherence tomography (OCT) make use of specialized light sources that sweep across a broad optical bandwidth, allowing for longer depth ranges at higher resolutions. The produced light source signal can be described as a Gaussian-damped sinusoid that sweeps non-uniformly across a narrow frequency band. When this interferometric signal is sampled uniformly in time, the generated images show considerable distortion, because the spectral information is a function of wavenumber "k", not time. To solve this problem, a "calibration" step needs to be performed, in which the acquired interferogram is linearized into k-space. The process usually involves estimating the phase-frequency change profile of the SS-OCT system via a Hilbert transformation, an inverse tangent, and phase unwrapping. In this thesis, a multitude of low-complexity, computationally efficient methods for the real-time calibration of swept-source optical coherence tomography (SS-OCT) systems are implemented, and the results are evaluated against commonly performed calibration techniques such as the Hilbert transformation. Simulations show execution times improved by up to a factor of ten, depending on the technique used. Axial resolution was also slightly improved across all the tested techniques. Moreover, the inverse tangent and phase unwrapping steps necessary for Hilbert-transform calibration are eliminated, vastly reducing circuit implementation complexity and making the system suitable for future inexpensive, power-efficient, on-chip solutions for SS-OCT post-processing.
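
    The baseline procedure described above (Hilbert transform, phase unwrapping, then resampling onto a uniform wavenumber grid) can be sketched in a few lines of NumPy/SciPy. This is only an illustration against a simulated fringe; the signal parameters, array sizes, and variable names such as `fringe` are assumptions, not values from the thesis.

```python
import numpy as np
from scipy.signal import hilbert

# Simulated, non-linearly swept calibration fringe (illustrative values only).
n_samples = 2048
t = np.linspace(0.0, 1.0, n_samples)
k_sweep = 2 * np.pi * (100 * t + 20 * t**2)                 # non-uniform sweep in wavenumber
fringe = np.exp(-((t - 0.5) / 0.3) ** 2) * np.cos(k_sweep)  # Gaussian-damped sinusoid

# Baseline Hilbert-transform calibration: analytic signal -> phase -> unwrap.
phase = np.unwrap(np.angle(hilbert(fringe)))                # monotonic phase, proportional to k

# Linearize into k-space: resample the fringe onto a uniform phase (wavenumber) grid.
k_uniform = np.linspace(phase[0], phase[-1], n_samples)
fringe_k = np.interp(k_uniform, phase, fringe)

# After linearization, an FFT of fringe_k yields an undistorted depth profile (A-scan).
a_scan = np.abs(np.fft.fft(fringe_k))
```

    The low-complexity methods evaluated in the thesis replace the Hilbert/arctangent/unwrapping chain in this sketch with cheaper estimates of the same phase profile.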

    Systems and Methods for the Spectral Calibration of Swept Source Optical Coherence Tomography Systems

    This dissertation addresses the transition of state-of-the-art swept-source optical coherence tomography (SS-OCT) systems to a new realm in which image acquisition speed is improved by an order of magnitude. Together with better image quality, this speed-up will considerably shorten eye-exam clinical visits, which in turn improves the patient-doctor interaction experience. These improvements will directly lower associated medical costs for eye clinics and patients worldwide. Several other modalities closely related to optical coherence tomography (OCT) could benefit from the ideas presented in this dissertation, including optical coherence microscopy (OCM), full-field OCT (FF-OCT), optical coherence elastography (OCE), optical coherence tomography angiography (OCT-A), anatomical OCT (aOCT), optical coherence photoacoustic microscopy (OC-PAM), and micro optical coherence tomography (µOCT), among others. In recent decades, OCT has established itself as the de facto imaging modality that most ophthalmologists rely on in their clinical practice. In a broader sense, optical coherence tomography is used in applications where high resolution is required and shallow penetration depth suffices. These applications span several fields of biomedical science, including cardiology, dermatology, and pulmonology. Many industrial applications related to OCT technology, such as quality control and precision measurement, have also been reported. Each new iteration of OCT technology has been accompanied by advances in signal processing and data acquisition algorithms built on mixed-signal architectures, calibration, and signal processing techniques. Existing industrial practice in data acquisition, processing, and image creation relies on conventional signal processing design flows, which extensively employ continuous/discrete techniques that are both time-consuming and costly. The ideas presented in this dissertation can take the technology to a new dimension of quality of service.

    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on the use of an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses all important requirements of future wireless standards: a great capacity increase, the support of many simultaneous users, and improvement in energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains and computational operations on large matrices. The complexity of the digital processing has in the past been viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in interconnecting many signals. For example, prototype ASIC implementations have demonstrated zero-forcing precoding in real time at 55 mW power consumption (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse, and even error-prone, digital processing in the antenna paths permits a reduction in power consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating with low-complexity RF and analog hardware chains are clarified, and it is illustrated how terminals can benefit from improved energy efficiency. The status of technology and real-life prototypes is discussed, and open challenges and directions for future research are suggested. Comment: submitted to IEEE Transactions on Signal Processing.
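
    As a point of reference for the precoding workload mentioned above, here is a minimal NumPy sketch of zero-forcing precoding; the 128-antenna, 8-terminal dimensions echo the prototype figures quoted in the abstract, while the i.i.d. random channel and all variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative dimensions echoing the prototype: 128 base-station antennas, 8 terminals.
M, K = 128, 8
rng = np.random.default_rng(0)

# Downlink channel matrix (K x M); in practice estimated from uplink pilots via reciprocity.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder W = H^H (H H^H)^{-1}, normalized to unit total transmit power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W)

# The effective channel H @ W is proportional to the identity, i.e. each terminal
# receives its own stream with inter-user interference nulled.
print(np.round(np.abs(H @ W), 4))
```

    The Gram-matrix inversion and the M x K matrix products in this sketch are the kinds of matrix operations whose hardware cost the surveyed implementations target.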

    A Novel Approach To Intelligent Navigation Of A Mobile Robot In A Dynamic And Cluttered Indoor Environment

    The need and rationale for improved solutions to indoor robot navigation are increasingly driven by the influx of domestic and industrial mobile robots into the market. This research has developed and implemented a novel navigation technique for a mobile robot operating in a cluttered and dynamic indoor environment. It divides the indoor navigation problem into three distinct but interrelated parts, namely localization, mapping, and path planning. The localization part has been addressed using dead reckoning (odometry). A least-squares numerical approach has been used to calibrate the odometer parameters to minimize the effect of systematic errors on performance, and an intermittent resetting technique, which employs RFID tags placed at known locations in the indoor environment in conjunction with door markers, has been developed and implemented to mitigate the errors remaining after calibration. A mapping technique that employs a laser measurement sensor as the main exteroceptive sensor has been developed and implemented for building a binary occupancy grid map of the environment. A-r-Star pathfinder, a new path planning algorithm capable of high performance in both cluttered and sparse environments, has been developed and implemented. Its properties, challenges, and the solutions to those challenges are also highlighted in this research. An incremental version of A-r-Star has been developed to handle dynamic environments. Simulation experiments highlighting the properties and performance of the individual components have been developed and executed using MATLAB. A prototype world has been built using the Webots™ robotic prototyping and 3-D simulation software. An integrated version of the system, comprising the localization, mapping, and path planning techniques, has been executed in this prototype workspace to produce validation results.
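
    As a concrete illustration of the dead-reckoning localization component, a minimal Python sketch of a differential-drive odometry update is given below; the wheel displacements, track width, and function name are placeholders rather than parameters from this research.

```python
import math

def odometry_update(x, y, theta, d_left, d_right, track_width):
    """Dead-reckoning pose update for a differential-drive robot.

    d_left / d_right are the distances travelled by the left/right wheels
    since the last update (wheel radius times encoder increment), in metres.
    """
    d_center = (d_left + d_right) / 2.0           # forward displacement of the robot centre
    d_theta = (d_right - d_left) / track_width    # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi  # wrap to (-pi, pi]
    return x, y, theta

# One update step with placeholder readings and a 0.40 m track width.
pose = (0.0, 0.0, 0.0)
pose = odometry_update(*pose, d_left=0.051, d_right=0.049, track_width=0.40)
print(pose)
```

    The least-squares calibration and RFID-based resetting described above exist precisely because the systematic and accumulated errors of this kind of update grow without bound.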

    Blind Demixing for Low-Latency Communication

    In next-generation wireless networks, low-latency communication is critical to support emerging diversified applications, e.g., the Tactile Internet and Virtual Reality. In this paper, a novel blind demixing approach is developed to reduce the channel signaling overhead, thereby supporting low-latency communication. Specifically, we develop a low-rank approach to recover the original information based only on a single observed vector, without any channel estimation. Unfortunately, this problem turns out to be a highly intractable non-convex optimization problem due to the multiple non-convex rank-one constraints. To address the unique challenges, the quotient manifold geometry of the product of complex asymmetric rank-one matrices is exploited by equivalently reformulating the original complex asymmetric matrices as Hermitian positive semidefinite matrices. We further generalize the geometric concepts of the complex product manifolds via element-wise extension of the geometric concepts of the individual manifolds. A scalable Riemannian trust-region algorithm is then developed to solve the blind demixing problem efficiently, with fast convergence rates and low iteration cost. Numerical results demonstrate the algorithmic advantages and admirable performance of the proposed algorithm compared with state-of-the-art methods. Comment: 14 pages, accepted by IEEE Transactions on Wireless Communications.
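
    To make the problem setting concrete, the following NumPy sketch generates the kind of single observed vector that blind demixing works from: each source contributes a rank-one matrix formed by an unknown channel and an unknown signal, and only their mixture is observed. All dimensions, symbols, and variable names here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

# Illustrative sizes: s source signals, length-L observation, K-/N-dimensional factors.
s, L, K, N = 3, 64, 8, 8
rng = np.random.default_rng(1)

def cgauss(*shape):
    """Complex Gaussian entries (illustrative sensing/encoding vectors and unknowns)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

A = cgauss(s, L, N)   # known encoding vectors a_{k,j}
B = cgauss(s, L, K)   # known sensing vectors b_{k,j}
h = cgauss(s, K)      # unknown channels h_k
x = cgauss(s, N)      # unknown signals x_k

# Single observed vector: y_j = sum_k  b_{k,j}^H (h_k x_k^H) a_{k,j}.
# Blind demixing recovers the rank-one matrices X_k = h_k x_k^H from y alone.
y = np.zeros(L, dtype=complex)
for k in range(s):
    X_k = np.outer(h[k], x[k].conj())            # the rank-one unknown for source k
    for j in range(L):
        y[j] += B[k, j].conj() @ X_k @ A[k, j]
```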

    Noise-Adaptive Compiler Mappings for Noisy Intermediate-Scale Quantum Computers

    A massive gap exists between current quantum computing (QC) prototypes and the size and scale required by many proposed QC algorithms. Current QC implementations are prone to noise and variability, which affect their reliability, and yet with fewer than 80 quantum bits (qubits) in total, they are too resource-constrained to implement error correction. The term Noisy Intermediate-Scale Quantum (NISQ) refers to these current and near-term systems of 1000 qubits or fewer. Given NISQ's severe resource constraints, low reliability, and high variability in physical characteristics such as coherence time or error rates, it is of pressing importance to map computations onto them in ways that use resources efficiently and maximize the likelihood of successful runs. This paper proposes and evaluates backend compiler approaches to map and optimize high-level QC programs to execute with high reliability on NISQ systems with diverse hardware characteristics. Our techniques all start from an LLVM intermediate representation of the quantum program (such as would be generated from high-level QC languages like Scaffold) and generate QC executables runnable on the IBM Q public QC machine. We then use this framework to implement and evaluate several optimal and heuristic mapping methods. These methods vary in how they account for the availability of dynamic machine calibration data, the relative importance of various noise parameters, the different possible routing strategies, and the relative importance of compile-time scalability versus runtime success. Using real-system measurements, we show that fine-grained spatial and temporal variations in hardware parameters can be exploited to obtain an average 2.9x (and up to 18x) improvement in program success rate over the industry-standard IBM Qiskit compiler. Comment: To appear in ASPLOS'19.
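
    The essence of noise-adaptive mapping, i.e., using per-qubit and per-link calibration data to choose where a program runs, can be sketched as a small scoring problem. The calibration numbers, program weights, and cost model below are invented placeholders for illustration; they are not the paper's method and not real IBM Q data.

```python
from itertools import permutations

# Invented calibration snapshot: two-qubit (CNOT) error per physical link and
# readout error per physical qubit (placeholder numbers only).
cnot_error = {(0, 1): 0.012, (1, 2): 0.031, (2, 3): 0.009, (3, 4): 0.047}
readout_error = {0: 0.02, 1: 0.05, 2: 0.01, 3: 0.02, 4: 0.08}

# Program-level weights extracted from the compiler IR (also placeholders):
# how often each logical pair interacts and how often each qubit is measured.
pair_weight = {("q0", "q1"): 14}
measure_weight = {"q0": 1, "q1": 1}

def mapping_cost(mapping):
    """Score a logical->physical mapping using the calibration data (lower is better)."""
    cost = 0.0
    for (a, b), w in pair_weight.items():
        link = tuple(sorted((mapping[a], mapping[b])))
        cost += w * cnot_error.get(link, 1.0)   # non-adjacent placement is heavily penalised
    for q, w in measure_weight.items():
        cost += w * readout_error[mapping[q]]
    return cost

# Exhaustively score every placement of the two logical qubits (tiny example only;
# the paper's optimal and heuristic methods scale far beyond brute force).
best = min(({"q0": p, "q1": q} for p, q in permutations(readout_error, 2)),
           key=mapping_cost)
print(best, round(mapping_cost(best), 3))
```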

    Robot kinematic structure classification from time series of visual data

    In this paper we present a novel algorithm to solve the robot kinematic structure identification problem. Given a time series of data, typically obtained by processing a set of visual observations, the proposed approach identifies the ordered sequence of links associated with the kinematic chain, the joint type interconnecting each pair of consecutive links, and the input signal influencing the relative motion. Compared to the state of the art, the proposed algorithm has reduced computational costs and is also able to identify the sequence of joint types.
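
    As a toy illustration of one sub-problem in kinematic structure identification (not the algorithm proposed in the paper), the sketch below classifies the joint between two observed links from a time series of their relative poses: a constant relative orientation with varying translation suggests a prismatic joint, while a varying orientation suggests a revolute one. The thresholds, array shapes, and names are assumptions.

```python
import numpy as np

def classify_joint(rel_rotations, rel_translations, tol=1e-3):
    """Classify the joint between two links from a time series of relative poses.

    rel_rotations:    (T, 3, 3) relative rotation matrices between the two links.
    rel_translations: (T, 3)    relative translation vectors between the two links.
    Toy heuristic: constant rotation with varying translation -> prismatic;
    varying rotation -> revolute; neither varies -> rigidly fixed.
    """
    rot_change = np.max(np.linalg.norm(rel_rotations - rel_rotations[0], axis=(1, 2)))
    trans_change = np.max(np.linalg.norm(rel_translations - rel_translations[0], axis=1))
    if rot_change < tol and trans_change >= tol:
        return "prismatic"
    if rot_change >= tol:
        return "revolute"
    return "fixed"

# Synthetic example: pure translation along x, as produced by a prismatic joint.
T = 50
rotations = np.repeat(np.eye(3)[None, :, :], T, axis=0)
translations = np.stack([np.linspace(0.0, 0.3, T), np.zeros(T), np.zeros(T)], axis=1)
print(classify_joint(rotations, translations))   # -> "prismatic"
```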