544 research outputs found

    A spread spectrum approach to time-domain near-infrared diffuse optical imaging using inexpensive optical transceiver modules

    We introduce a compact time-domain system for near-infrared spectroscopy using a spread spectrum technique. The proof-of-concept single-channel instrument utilises a low-cost, commercially available optical transceiver module as a light source, controlled by a Kintex 7 field-programmable gate array (FPGA). The FPGA modulates the optical transceiver with maximum-length sequences at line rates up to 10 Gb/s, allowing us to achieve an instrument response function with a full width at half maximum under 600 ps. The instrument is characterised through a set of detailed phantom measurements as well as proof-of-concept in vivo measurements, demonstrating performance comparable with conventional pulsed time-domain near-infrared spectroscopy systems.
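    The core of the spread spectrum approach can be illustrated in a few lines: modulate the source with a maximum-length sequence (MLS) and recover the temporal response by circular cross-correlation with the known sequence. The sketch below is illustrative only; the sequence length, the simulated tissue response, and the noise level are assumptions, not parameters from the paper.

```python
# Minimal sketch (not the authors' code): recover a time-domain response by
# cross-correlating the detected signal with the maximum-length sequence (MLS)
# used to modulate the source. Sequence length, the simulated response and the
# noise level are illustrative assumptions.
import numpy as np
from scipy.signal import max_len_seq

nbits = 10                                  # MLS register length -> period 2**10 - 1
mls, _ = max_len_seq(nbits)                 # 0/1 sequence
probe = 2.0 * mls - 1.0                     # map to +/-1 for near-ideal autocorrelation

# Simulated channel: an exponential-decay impulse response plus noise stands in
# for the diffuse optical response of tissue (purely illustrative).
t = np.arange(probe.size)
h = np.exp(-t / 25.0) * (t < 200)
received = np.real(np.fft.ifft(np.fft.fft(probe) * np.fft.fft(h)))
received += 0.05 * np.random.randn(probe.size)

# Circular cross-correlation with the known MLS collapses the spread-spectrum
# measurement back into an estimate of the temporal response.
irf_est = np.real(np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(probe))))
irf_est /= probe.size
print("peak at sample", int(np.argmax(irf_est)))
```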

    Hardware implementation algorithm and error analysis of high-speed fluorescence lifetime sensing systems using center-of-mass method

    A new, simple, high-speed, hardware-only, integration-based fluorescence-lifetime-sensing algorithm using the center-of-mass method (CMM) is proposed to implement lifetime calculations, and its signal-to-noise ratio is derived from statistical theory. Compared to the commonly used iterative least-squares method or maximum-likelihood-estimation-based general-purpose fluorescence lifetime imaging microscopy (FLIM) analysis software, the proposed hardware lifetime-calculation algorithm with CMM offers direct calculation of the fluorescence lifetime from the collected photon counts and the timing information provided by in-pixel circuitry, and therefore delivers faster analysis for real-time applications such as clinical diagnosis. A real-time hardware implementation of this CMM FLIM algorithm, suitable for a single-photon avalanche diode array in CMOS imaging technology, is now proposed for implementation on a field-programmable gate array. The performance of the proposed methods has been tested on Fluorescein, Coumarin 6, and 1,8-anilinonaphthalenesulfonate in a water/methanol mixture.
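    The center-of-mass method itself reduces to a simple estimator: for a mono-exponential decay, the lifetime is the mean photon arrival time after the excitation pulse, with a correction for the finite measurement window. The sketch below is a software illustration of that estimator, not the in-pixel hardware described in the paper; the lifetime, window length, and photon count are assumed values.

```python
# Minimal sketch (illustrative, not the paper's hardware design): the center-of-mass
# method (CMM) estimates a mono-exponential lifetime as the mean photon arrival time
# after the excitation pulse. Lifetime, window and photon count are assumptions.
import numpy as np

rng = np.random.default_rng(0)
tau_true = 4.0e-9                       # assumed 4 ns lifetime
window = 25.0e-9                        # assumed TCSPC measurement window

# Simulate photon arrival times (exponential decay truncated to the window).
arrivals = rng.exponential(tau_true, 200_000)
arrivals = arrivals[arrivals < window]

# Raw CMM estimate: centre of mass of the arrival-time distribution.
cm = arrivals.mean()

# Finite-window correction: for a truncated exponential,
#   cm = tau - window / (exp(window / tau) - 1),
# solved here for tau by simple fixed-point iteration.
tau = cm
for _ in range(50):
    tau = cm + window / (np.exp(window / tau) - 1.0)

print(f"raw CMM: {cm*1e9:.2f} ns, corrected lifetime: {tau*1e9:.2f} ns")
```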

    Definition of a FPGA-based SoC architecture for PRBS transmission in optical spectroscopy

    Optical spectroscopy is a well-known tool typically employed for characterizing the properties of materials by analyzing their interaction with light. One of the most widespread techniques is dual-comb spectroscopy, since it achieves ultra-high-resolution, high-sensitivity measurements with a relatively simple platform that includes a single, relatively narrowband photodetector. The optical dual comb can be implemented through electro-optical (EO) modulation driven by pseudo-random binary sequences (PRBS) at high data rates, commonly in the range of tens of Gbps. For that purpose, the runtime generation and transmission of adaptive PRBS is still an open challenge, often involving expensive and inflexible high-speed digital systems, with a few commercially available solutions that do not always match the application requirements efficiently. In this context, this work describes the definition and implementation of a System-on-Chip (SoC) architecture, based on an FPGA device, capable of generating and transmitting two PRBS for a dual comb at data rates up to 5 Gbps. The architecture can be configured and its operation modified at run time thanks to the general-purpose processor involved, which manages an Ethernet link to receive new PRBS to be transmitted or to set certain parameters. The proposed design has been validated experimentally on a dual-comb spectroscopy measurement, in which the absorption of a hydrogen cyanide (HCN) gas cell has been successfully characterized. Funding: Agencia Estatal de Investigación; Ministerio de Ciencia e Innovación.
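    The PRBS themselves are conventionally produced by linear-feedback shift registers (LFSRs). The sketch below shows a PRBS-7 generator with feedback polynomial x^7 + x^6 + 1 as a plain-software illustration; it is not the SoC gateware, which serializes such sequences through the FPGA transceivers at multi-Gbps rates.

```python
# Minimal sketch (assumption: a software Fibonacci LFSR, not the actual SoC gateware):
# generate a PRBS-7 bit stream, the kind of sequence the FPGA transceivers would
# serialize to drive the EO modulators.
def prbs7(seed=0x7F, nbits=254):
    """Return `nbits` bits of a PRBS-7 sequence from a non-zero 7-bit seed."""
    state = seed & 0x7F
    out = []
    for _ in range(nbits):
        out.append((state >> 6) & 1)                  # output the MSB of the register
        newbit = ((state >> 6) ^ (state >> 5)) & 1    # taps on the two top stages (standard PRBS-7)
        state = ((state << 1) | newbit) & 0x7F
    return out

bits = prbs7()
assert bits[:127] == bits[127:254]   # maximal-length: period is 2**7 - 1 = 127
print("first 16 bits:", bits[:16])
```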

    Strategies towards high performance (high-resolution/linearity) time-to-digital converters on field-programmable gate arrays

    Time-correlated single-photon counting (TCSPC) technology has become popular in scientific research and industrial applications such as high-energy physics, bio-sensing, non-invasive health monitoring, and 3D imaging. Because of the increasing demand for high-precision time measurements, time-to-digital converters (TDCs) have attracted attention since the 1970s. As a fully digital solution, TDCs are portable and have great potential for multichannel applications compared to bulky and expensive time-to-amplitude converters (TACs). A TDC can be implemented in ASIC or FPGA devices; due to their low cost, flexibility, and short development cycle, FPGA-TDCs have become promising. Starting with a literature review, three original FPGA-TDCs with outstanding performance are introduced. The first design is the first efficient wave-union (WU) based TDC implemented in Xilinx UltraScale (20 nm) FPGAs with a bubble-free sub-TDL structure; combined with other existing methods, the resolution is further enhanced to 1.23 ps. The second TDC has been designed for LiDAR applications, especially in driverless vehicles. Using the proposed new calibration method, the resolution is adjustable (50, 80, and 100 ps) and the linearity is exceptionally high (peak-to-peak INL lower than 0.05 LSB). Meanwhile, a software tool with a graphical user interface (GUI) has been open-sourced to predict TDCs' performance. In the third TDC, an on-board automatic calibration (AC) function has been realized by exploiting Xilinx ZYNQ SoC architectures. The test results show the robustness of the proposed method: without manual calibration, the AC function enables FPGA-TDCs to be applied in commercial products where mass production is required.
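    A standard way to calibrate (and to quantify the linearity of) a tapped-delay-line TDC is the code-density test: uncorrelated hits populate each bin in proportion to its width, so a histogram of output codes yields the bin widths, DNL/INL, and a code-to-time lookup table. The sketch below illustrates that procedure on simulated data; the number of bins and the spread of delay-cell widths are assumptions, not values from the thesis.

```python
# Minimal sketch (illustrative, not the thesis gateware): code-density calibration
# of a tapped-delay-line TDC. Random hits give a histogram proportional to bin
# widths, from which DNL, INL and a code-to-picoseconds lookup table follow.
import numpy as np

rng = np.random.default_rng(1)
n_bins = 64
true_widths = rng.uniform(5.0, 25.0, n_bins)        # assumed non-uniform delay cells (ps)
clock_period = true_widths.sum()

# Code-density test: many uncorrelated hit times, record which bin each falls into.
hits = rng.uniform(0.0, clock_period, 2_000_000)
edges = np.concatenate(([0.0], np.cumsum(true_widths)))
codes = np.searchsorted(edges, hits, side="right") - 1
counts = np.bincount(codes, minlength=n_bins)

# Estimated widths, DNL and INL in LSB (LSB = average bin width).
est_widths = counts / counts.sum() * clock_period
lsb = clock_period / n_bins
dnl = est_widths / lsb - 1.0
inl = np.cumsum(dnl)
bin_centres = np.cumsum(est_widths) - est_widths / 2   # code -> picoseconds lookup

print(f"DNL pk-pk: {dnl.max() - dnl.min():.2f} LSB, INL pk-pk: {inl.max() - inl.min():.2f} LSB")
```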

    Design and Test of an Event Detector and Locator for the ReflectoActive™ Seals System

    The purpose of this thesis was to research, design, develop, and test a novel instrument for detecting fiber-optic loop continuity and spatially locating fiber-optic breaches. The work is for an active seal system called ReflectoActive™ Seals, whose purpose is to provide real-time container tamper indication. A field-programmable gate array was used to implement a loop continuity detector and a spatial breach locator based on a high-acquisition-speed single-photon-counting optical time-domain reflectometer. Communication and other control features were added in order to create a usable instrument that met defined requirements. A host graphical user interface was developed to illustrate system use and performance. The resulting device meets performance specifications, exhibiting a dynamic range of 27 dB and a spatial resolution of 1.5 ft. The communication scheme used expands installation options and allows the device to communicate with a central host via existing local area networks and/or the Internet.
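    Locating a breach with a photon-counting OTDR comes down to converting the measured round-trip time of a reflected photon into a distance along the fiber. The sketch below shows that conversion; the fiber group index is an assumed typical value, not a figure from the thesis.

```python
# Minimal sketch (illustrative): converting the round-trip time measured by a
# photon-counting OTDR into a breach location along the fiber. The group index
# is an assumed typical value for silica fiber, not a figure from the thesis.
C = 299_792_458.0          # speed of light in vacuum, m/s
N_GROUP = 1.468            # assumed group index of the fiber

def breach_distance_m(round_trip_s: float) -> float:
    """Distance to a reflection (breach) from the photon's round-trip time."""
    return C * round_trip_s / (2.0 * N_GROUP)

# The quoted 1.5 ft (~0.46 m) spatial resolution corresponds to roughly 4.5 ns
# of round-trip timing resolution:
dt = 2.0 * N_GROUP * 0.457 / C
print(f"100 ns round trip -> {breach_distance_m(100e-9):.1f} m; 1.5 ft <-> {dt*1e9:.1f} ns")
```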

    All-Silicon-Based Photonic Quantum Random Number Generators

    Random numbers are fundamental elements in different fields of science and technology, such as computer simulation (e.g. Monte Carlo methods), statistical sampling, cryptography, games and gambling, and other areas where unpredictable results are necessary. Random number generators (RNGs) are generally classified as "pseudo" random number generators (PRNGs) and "truly" random number generators (TRNGs). Pseudo-random numbers are generated by computer algorithms from a (random) seed and a specific formula. The numbers produced in this way (with a small degree of unpredictability) are good enough for some applications, such as computer simulation. For other applications, like cryptography, they are not completely reliable: once the seed is revealed, the entire sequence of numbers can be reproduced. Periodicity is another undesirable property of PRNGs, which can be disregarded for most practical purposes if the sequence recurs only after a very long period; predictability, however, remains a fundamental disadvantage of this type of generator. Truly random numbers, on the other hand, can be generated from physical sources of randomness, like flipping a coin. However, approaches exploiting classical motion and classical physics possess a deterministic nature that is transferred to the generated random numbers. The best solution is to benefit from the indeterminacy and randomness of quantum physics. According to quantum theory, the properties of a particle cannot be determined with arbitrary precision until a measurement is carried out; the result of a measurement therefore remains unpredictable and random. Optical phenomena, with photons as the quanta of light, offer various random, non-deterministic properties, including the polarization of the photons, the exact number of photons impinging on a detector, and the photon arrival times. Such intrinsically random properties can be exploited to generate truly random numbers. Silicon (Si) is an interesting material for integrated optics: microelectronic chips made from Si are cheap, easy to mass-fabricate, and can be densely integrated. Si integrated optical chips, which can generate, modulate, process, and detect light signals, exploit the benefits of Si while being fully compatible with electronics. Since many electronic components can be integrated into a single chip, Si is an ideal candidate for the production of small, powerful devices, and complementary metal-oxide-semiconductor (CMOS) technology makes the fabrication of compact, mass-manufacturable devices with integrated components on the Si platform achievable. In this thesis we aim to model, study, and fabricate a compact photonic quantum random number generator (QRNG) on the Si platform that is able to generate high-quality, "truly" random numbers. The proposed QRNG is based on a Si light source (LED) coupled with a Si single-photon avalanche diode (SPAD) or an array of SPADs, called a Si photomultiplier (SiPM). Various implementations of the QRNG have been developed, reaching an ultimate geometry where both the source and the SPAD are integrated on the same chip and fabricated by the same process. This activity was performed within the project SiQuro—on-Si-chip quantum optics for quantum computing and secure communications—which aims to bring the quantum world into integrated photonics.

    By using the same successful paradigm of microelectronics—the study and design of very small electronic devices typically made from semiconductor materials—the vision is to have low-cost, mass-manufacturable integrated quantum photonic circuits for a variety of applications in quantum computing, measurement, sensing, secure communications, and services. The Si platform permits, in a natural way, the integration of quantum photonics with electronics. Two methodologies are presented to generate random numbers: one based on photon-counting measurements and one based on photon arrival-time measurements. The latter is robust, masks the drawbacks of afterpulsing, dead time, and jitter of the Si SPAD, and is effectively insensitive to ageing of the LED and to emission drifts related to temperature variations. The raw data pass all the statistical tests in the National Institute of Standards and Technology (NIST) test suite and the TestU01 Alphabit battery without any post-processing algorithm. The maximum demonstrated bit rate is 1.68 Mbps, with an efficiency of 4 bits per detected photon. In order to realize a small, portable QRNG, we have produced a compact configuration consisting of a Si nanocrystal (Si-NC) LED and a SiPM; all the statistical tests in the NIST test suite pass for the raw data, with a maximum bit rate of 0.5 Mbps. We also prepared and studied a compact chip consisting of a Si-NC LED and an array of detectors, and an integrated chip composed of a Si p+/n junction working in the avalanche region and a Si SPAD was produced as well. High-quality random numbers are produced through our robust methodology at rates up to 100 kcps. Integration of the source of entropy and the detector on a single chip is an efficient way to produce a compact RNG, and a small RNG is an essential element in guaranteeing the security of our everyday life: it can be readily implemented into electronic devices for data encryption. The idea of "utmost security" would no longer be limited to particular organizations holding sensitive information; it would be accessible to everyone in everyday life.
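    One common arrival-time extraction scheme, shown below purely as an illustration (it is stated here as an assumption, not as the exact method of the thesis), digitizes the arrival time of each detected photon within a repeating reference frame into 2^k uniform slots, yielding k raw bits per photon; with k = 4 this matches the quoted 4 bits per detected photon.

```python
# Minimal sketch (one common arrival-time scheme, assumed rather than taken from
# the thesis): digitize each photon's position within a repeating reference frame
# into 2**k uniform slots, giving k raw bits per detected photon.
import numpy as np

rng = np.random.default_rng(2)
k = 4
frame = 1.0e-6                               # assumed reference frame length, s
rate = 2.0e5                                 # assumed mean detection rate, photons/s

# Poissonian photon arrivals: exponential inter-arrival times, accumulated.
arrivals = np.cumsum(rng.exponential(1.0 / rate, 100_000))
phase = np.mod(arrivals, frame) / frame      # position within the frame, in [0, 1)
symbols = (phase * 2**k).astype(np.uint8)    # 4-bit symbol per photon

bits = np.unpackbits(symbols[:, None], axis=1)[:, -k:].ravel()
print("bits generated:", bits.size, " ones fraction:", bits.mean().round(4))
```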

    Development of Trigger and Control Systems for CMS

    During the year of 2007, the Large Hadron Collider (LHC) and its four main detectors will begin operation with a view to answering the most pressing questions in particle physics. However, before one can analyse the data produced to find the rare phenomena being looked for, both the detector and the readout electronics must be thoroughly tested to ensure that the system will operate in a consistent way. The Compact Muon Solenoid (CMS) is one of the two general-purpose detectors at CERN. The tracking component of the design produces more data than any previous detector used in particle physics, with approximately ten million detector channels. The data from the detector are processed by the tracker Front End Driver (FED). The large data volume necessitated the development of a buffering and throttling system to prevent buffer overflow both on and off the detector. A critical component of this system is the APV emulator (APVe), which vetoes trigger decisions based on buffer status in the tracker. The commissioning of these components, along with a large part of the Timing, Trigger and Control (TTC) system, is discussed, including the various modifications that were made to improve the robustness of the full system. Another key piece of the CMS electronics is the calorimeter trigger system, responsible for identifying 'interesting' physical events against a background of well-understood phenomena using calorimetric information. Calorimeter information is processed to identify various trigger objects by the Global Calorimeter Trigger (GCT). The first component of this system is the Source card, which has been developed to transfer data from the Regional Calorimeter Trigger (RCT) to the Leaf card, the processing engine of the GCT. The use of modern programmable logic with high-speed optical links is discussed, emphasising its use for data concentration and the benefit it confers on the processing algorithms. Looking forward to the Super-LHC, a possible addition to the CMS Level-1 trigger system is discussed, incorporating information from a new pixel detector with an alternative stacked geometry that allows the possibility of on-detector data rate reduction by means of a transverse momentum cut. A toy Monte Carlo was developed to study detector performance. Issues with high-speed reconstruction and the complications of on-detector data rate reduction are also discussed.
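    The transverse-momentum cut enabled by a stacked pixel geometry can be sketched with the standard relation between track curvature and pT: two closely spaced layers see an r-phi offset between hits that grows as pT falls, so thresholding the offset rejects low-pT hits on the detector. The layer radius, spacing, field, and threshold below are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch (illustrative assumptions throughout: layer radius, layer spacing
# and threshold are not taken from the thesis): a stacked-layer pT cut compares the
# local r-phi offset of hits in two closely spaced layers; low-pT tracks bend more,
# give a larger offset, and are rejected to reduce the on-detector data rate.
B_FIELD_T = 3.8          # CMS solenoid field, tesla
LAYER_R_M = 0.25         # assumed radius of the inner stacked layer, metres
LAYER_SEP_M = 1.0e-3     # assumed radial separation between the stacked layers, metres

def hit_offset_m(pt_gev: float) -> float:
    """Approximate r-phi offset between the two stacked layers for a track of pT."""
    radius_of_curvature = pt_gev / (0.3 * B_FIELD_T)          # metres, pT in GeV
    return LAYER_SEP_M * LAYER_R_M / (2.0 * radius_of_curvature)

def passes_pt_cut(offset_m: float, pt_threshold_gev: float = 2.0) -> bool:
    """Keep the hit pair only if its offset is consistent with pT above threshold."""
    return offset_m <= hit_offset_m(pt_threshold_gev)

for pt in (0.5, 1.0, 2.0, 5.0):
    off = hit_offset_m(pt)
    print(f"pT = {pt:4.1f} GeV -> offset {off*1e6:5.1f} um, kept: {passes_pt_cut(off)}")
```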