
    Ultrafast Radiographic Imaging and Tracking: An overview of instruments, methods, data, and applications

    Ultrafast radiographic imaging and tracking (U-RadIT) use state-of-the-art ionizing particle and light sources to experimentally study sub-nanosecond dynamic processes in physics, chemistry, biology, geology, materials science and other fields. These processes, fundamental to nuclear fusion energy, advanced manufacturing, green transportation and others, often involve one mole or more of atoms, and thus are challenging to compute by using the first principles of quantum physics or other forward models. One of the central problems in U-RadIT is to optimize information yield through, e.g., high-luminosity X-ray and particle sources, efficient imaging and tracking detectors, novel methods to collect data, and large-bandwidth online and offline data processing, regulated by the underlying physics, statistics, and computing power. We review and highlight recent progress in: a.) Detectors; b.) U-RadIT modalities; c.) Data and algorithms; and d.) Applications. Hardware-centric approaches to U-RadIT optimization are constrained by detector material properties, low signal-to-noise ratio, high cost and long development cycles of critical hardware components such as ASICs. Interpretation of experimental data, including comparisons with forward models, is frequently hindered by sparse measurements, model and measurement uncertainties, and noise. Alternatively, U-RadIT makes increasing use of data science and machine learning algorithms, including experimental implementations of compressed sensing. Machine learning and artificial intelligence approaches, refined by physics and materials information, may also contribute significantly to data interpretation, uncertainty quantification and U-RadIT optimization.
    Comment: 51 pages, 31 figures; overview of ultrafast radiographic imaging and tracking as a part of the ULITIMA 2023 conference, Mar. 13-16, 2023, Menlo Park, CA, US
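    The abstract's mention of experimental compressed sensing invites a small illustration. The sketch below is a generic, minimal example of sparse recovery by iterative soft thresholding (ISTA), not the authors' pipeline; the random sensing matrix, sparsity level, regularization weight and iteration count are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative soft-thresholding: minimize 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Illustrative example: recover a 10-sparse signal from 80 random measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 80, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # assumed random sensing matrix
y = A @ x_true
x_hat = ista(A, y, lam=0.01, n_iter=500)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    In a U-RadIT setting the measurement operator would come from the imaging physics rather than a random matrix; the sketch only shows the reconstruction idea.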

    Analog VLSI Circuits for Biosensors, Neural Signal Processing and Prosthetics

    Stroke, spinal cord injury and neurodegenerative diseases such as ALS and Parkinson's debilitate their victims by suffocating, cleaving communication between, and/or poisoning entire populations of geographically correlated neurons. Although the damage associated with such injury or disease is typically irreversible, recent advances in implantable neural prosthetic devices offer hope for the restoration of lost sensory, cognitive and motor functions by remapping those functions onto healthy cortical regions. The research presented in this thesis is directed toward developing enabling technology for totally implantable neural prosthetics that could one day restore lost sensory, cognitive and motor function to the victims of debilitating neural injury or disease. There are three principal components to this work. First, novel integrated biosensors have been designed and implemented to transduce weak extracellular electrical potentials and optical signals from cells cultured directly on the surface of the sensor chips, as well as to manipulate cells on the surface of these chips. Second, a method of detecting and identifying stereotyped neural signals, or action potentials, has been mapped into silicon circuits which operate at the very low power levels suitable for implantation. Third, as one small step towards the development of cognitive neural implants, a learning silicon synapse has been implemented and a neural network application demonstrated. The original contributions of this dissertation include:
    * A contact image sensor that adapts to background light intensity and can asynchronously detect statistically significant optical events in real time;
    * Programmable electrode arrays for enhanced electrophysiological recording, for directing cellular growth, for site-specific in situ bio-functionalization, and for analyte and particulate collection;
    * Ultra-low-power, programmable floating-gate template matching circuits for the detection and classification of neural action potentials;
    * A two-transistor synapse that exhibits spike-timing-dependent plasticity and can implement adaptive pattern classification and silicon learning.
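    As a software illustration of the template-matching idea behind the spike-detection circuits (a generic sketch, not the dissertation's floating-gate implementation), the example below flags action potentials wherever a sliding window of a recorded trace correlates strongly with a spike template; the template shape, noise level and 0.8 threshold are assumptions.

```python
import numpy as np

def detect_spikes(trace, template, threshold=0.8):
    """Slide a zero-mean, unit-norm template over the trace and report indices
    where the normalized correlation exceeds the threshold."""
    t = template - template.mean()
    t = t / (np.linalg.norm(t) + 1e-12)
    n = len(t)
    scores = np.empty(len(trace) - n + 1)
    for i in range(len(scores)):
        w = trace[i:i + n] - trace[i:i + n].mean()
        scores[i] = np.dot(w / (np.linalg.norm(w) + 1e-12), t)  # cosine similarity
    return np.where(scores > threshold)[0]

# Illustrative use: a noisy trace containing two embedded copies of the template.
rng = np.random.default_rng(1)
template = np.exp(-0.5 * ((np.arange(32) - 10) / 3.0) ** 2)   # stylized spike shape
trace = 0.1 * rng.standard_normal(1000)
for pos in (200, 640):
    trace[pos:pos + 32] += template
print("detections near:", detect_spikes(trace, template))
```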

    Challenges and Solutions to Next-Generation Single-Photon Imagers

    Detecting and counting single photons is useful in an increasingly large number of applications. Most applications require large formats, approaching and even far exceeding 1 megapixel. In this thesis, we look at the challenges of massively parallel photon-counting cameras from all performance angles. The thesis deals with a number of performance issues that emerge when the number of pixels exceeds about a quarter of a megapixel, proposing characterization techniques and solutions to mitigate performance degradation and non-uniformity. Two cameras were created to validate the proposed techniques. The first camera, SwissSPAD, comprises an array of 512 × 128 SPAD pixels, each with a one-bit memory and a gating mechanism to achieve 5ns high-precision time windows with high uniformity across the array. With a massively parallel readout of over 10 Gigabit/s and positioning of the integration time window accurate to the picosecond range, fluorescence lifetime imaging and fluorescence correlation spectroscopy imaging achieve a speedup of several orders of magnitude while ensuring high precision in the measurements. Other possible applications include wide-field time-of-flight imaging and the generation of quantum random numbers at very high bit rates. Lately, super-resolution microscopy techniques have also used SwissSPAD. The second camera, LinoSPAD, takes the concepts of SwissSPAD one step further by moving even more 'intelligence' to the FPGA and reducing the sensor complexity to the bare minimum. This allows the optimization of the sensor to focus on the most important metrics of photon efficiency and fill factor. As such, the sensor consists of one line of SPADs, each with a direct connection to the FPGA, where complex photon-processing algorithms can be implemented. As a demonstration of the capabilities of current low-cost FPGAs, we implemented an array of time-to-digital converters that can handle up to 8.5 billion photons per second, measuring each one of them and accumulating them into high-precision histograms. Using simple laser diodes and a circuit to generate light pulses in the picosecond range, we demonstrate a ubiquitous 3D time-of-flight sensor. The thesis intends to be a first step towards achieving the world's first megapixel SPAD camera, which, we believe, is within grasp thanks to the architectural and circuit techniques proposed in this thesis. In addition, we believe that the applications proposed here offer a wide variety of uses for the sensors presented in this thesis and for future ones to come.
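    To make the FLIM/TCSPC use case concrete, here is a minimal, generic sketch of turning per-photon timestamps into a TCSPC-style histogram and a mono-exponential lifetime estimate via the histogram centroid. It is not the SwissSPAD or LinoSPAD processing chain; the bin width, time window and simulated 3 ns lifetime are assumptions.

```python
import numpy as np

def lifetime_from_timestamps(timestamps_ps, bin_ps=50, window_ps=25_000):
    """Build a TCSPC-style histogram from photon arrival times (in picoseconds,
    relative to the laser pulse) and estimate a mono-exponential lifetime from
    the histogram centroid (mean arrival time)."""
    bins = np.arange(0, window_ps + bin_ps, bin_ps)
    hist, edges = np.histogram(timestamps_ps, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # For an ideal mono-exponential decay the mean arrival time equals the lifetime.
    tau_est = np.sum(hist * centers) / np.sum(hist)
    return hist, centers, tau_est

# Illustrative use: 100k photons drawn from a 3 ns (3000 ps) exponential decay.
rng = np.random.default_rng(2)
arrivals = rng.exponential(scale=3000.0, size=100_000)
hist, centers, tau = lifetime_from_timestamps(arrivals)
print(f"estimated lifetime: {tau:.0f} ps")
```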

    High speed event-based visual processing in the presence of noise

    Standard machine vision approaches are challenged in applications where large amounts of noisy temporal data must be processed in real time. This work aims to develop neuromorphic event-based processing systems for such challenging, high-noise environments. The novel event-based, application-focused algorithms developed here are primarily designed for implementation in digital neuromorphic hardware, with a focus on noise robustness, ease of implementation, operationally useful ancillary signals and processing speed in embedded systems.
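    A minimal sketch of the kind of event-based noise handling discussed here is a spatiotemporal correlation (background-activity) filter: an event is kept only if a neighbouring pixel fired recently. This is a generic baseline, not the specific algorithms developed in the thesis; the correlation window and the (t, x, y, polarity) event layout are assumptions.

```python
import numpy as np

def filter_events(events, width, height, dt_us=5000):
    """Spatiotemporal correlation filter: keep an event only if one of its 8
    neighbouring pixels fired within the last dt_us microseconds.
    `events` is an iterable of (t_us, x, y, polarity) tuples sorted by time."""
    last_ts = np.full((height, width), -np.inf)
    kept = []
    for t, x, y, p in events:
        supported = False
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue                      # ignore the pixel itself
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height and t - last_ts[ny, nx] <= dt_us:
                    supported = True
        last_ts[y, x] = t                         # record the event either way
        if supported:
            kept.append((t, x, y, p))
    return kept

# Illustrative use: one correlated pair of events and one isolated (noise) event.
stream = [(1000, 10, 10, 1), (1500, 11, 10, 1), (9000, 50, 50, 0)]
print(filter_events(stream, width=64, height=64))   # keeps only the supported event
```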

    Miniature high dynamic range time-resolved CMOS SPAD image sensors

    Since their integration in complementary metal-oxide-semiconductor (CMOS) technology in 2003, single-photon avalanche diodes (SPADs) have inspired a new era of low-cost, highly integrated quantum-level image sensors. Their unique feature of discerning single-photon detections, their ability to retain temporal information on every collected photon and their amenability to high-speed image sensor architectures make them prime candidates for low-light and time-resolved applications. From the biomedical field of fluorescence lifetime imaging microscopy (FLIM) to extreme physical phenomena such as quantum entanglement, all the way to time-of-flight (ToF) consumer applications such as gesture recognition and, more recently, automotive light detection and ranging (LIDAR), huge steps in detector and sensor architectures have been made to address the design challenges of the pixel sensitivity and functionality trade-off, scalability and the handling of large data rates. The goal of this research is to explore the hypothesis that, given state-of-the-art CMOS nodes and fabrication technologies, it is possible to design miniature SPAD image sensors for time-resolved applications with a small pixel pitch while maintaining both sensitivity and built-in functionality. Three key approaches are pursued to that purpose: leveraging the innate area reduction of logic gates and finer design rules of advanced CMOS nodes to balance the pixel's fill factor and processing capability, smarter pixel designs with configurable functionality, and novel system architectures that lift the processing burden off the pixel array and mediate data flow. Two pathfinder SPAD image sensors were designed and fabricated: a 96 × 40 planar front-side-illuminated (FSI) sensor with 66% fill factor at 8.25μm pixel pitch in an industrialised 40nm process, and a 128 × 120 3D-stacked backside-illuminated (BSI) sensor with 45% fill factor at 7.83μm pixel pitch. Both designs rely on a digital, configurable, 12-bit ripple counter pixel allowing for time-gated, shot-noise-limited photon counting. The FSI sensor was operated as a quanta image sensor (QIS), achieving an extended dynamic range in excess of 100dB by utilising triple exposure windows and in-pixel data compression, which reduces data rates by a factor of 3.75×. The stacked sensor is the first demonstration of a wafer-scale SPAD imaging array with a 1-to-1 hybrid bond connection. Characterisation results of the detector and sensor performance are presented. Two other time-resolved 3D-stacked BSI SPAD image sensor architectures are proposed. The first is a fully integrated 5-wire-interface system on chip (SoC), with built-in power management and off-focal-plane data processing and storage for high dynamic range as well as autonomous video-rate operation. Preliminary images and bring-up results of the fabricated 2mm² sensor are shown. The second is a highly configurable design capable of simultaneous multi-bit oversampled imaging and programmable region-of-interest (ROI) time-correlated single-photon counting (TCSPC) with on-chip histogram generation. The 6.48μm pitch array has been submitted for fabrication. In-depth design details of both architectures are discussed.
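    As a rough illustration of how multiple exposure windows can extend dynamic range in a photon-counting (QIS-style) pixel, the sketch below combines counts from several exposures while discarding samples that saturate a 12-bit counter. The combination rule and exposure ratios are assumptions for illustration, not the sensor's actual on-chip compression scheme.

```python
import numpy as np

def hdr_flux(counts_by_exposure, exposure_times, full_scale=4095):
    """Estimate per-pixel photon flux (counts per unit time) from multiple
    exposures of a photon-counting pixel with a 12-bit counter. Saturated
    samples are excluded; the rest are combined as total counts over total
    exposure time, which suits shot-noise-limited (Poisson) data."""
    counts = np.stack(counts_by_exposure).astype(float)        # (n_exp, H, W)
    times = np.asarray(exposure_times, dtype=float)[:, None, None]
    valid = counts < full_scale                                 # drop saturated samples
    total_counts = np.where(valid, counts, 0.0).sum(axis=0)
    total_time = np.where(valid, times, 0.0).sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        flux = np.where(total_time > 0, total_counts / total_time, np.nan)
    return flux

# Illustrative use: a bright and a dim pixel measured with 1x, 10x and 100x exposures.
short, mid, long_ = np.array([[50, 1]]), np.array([[500, 12]]), np.array([[4095, 98]])
print(hdr_flux([short, mid, long_], exposure_times=[1.0, 10.0, 100.0]))
```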

    A Comparative Evaluation of the Detection and Tracking Capability Between Novel Event-Based and Conventional Frame-Based Sensors

    Traditional frame-based technology continues to suffer from motion blur, low dynamic range, speed limitations and high data storage requirements. Event-based sensors offer a potential solution to these challenges. This research centers on a comparative assessment of frame-based and event-based object detection and tracking. A basic frame-based algorithm is used to compare against two different event-based algorithms: first, event-based pseudo-frames were parsed through standard frame-based algorithms, and second, target tracks were constructed directly from filtered events. The findings show there is significant value in pursuing the technology further.
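    A minimal sketch of the first approach, accumulating events into pseudo-frames so that a conventional frame-based detector or tracker can be applied, is shown below; the accumulation window and the (t, x, y, polarity) event layout are assumptions, not the study's exact parameters.

```python
import numpy as np

def events_to_pseudo_frame(events, width, height, t_start_us, window_us=10_000):
    """Accumulate events from a fixed time window into a 2-D pseudo-frame,
    adding +1 for positive polarity and -1 for negative polarity."""
    frame = np.zeros((height, width), dtype=np.int32)
    t_end = t_start_us + window_us
    for t, x, y, p in events:
        if t_start_us <= t < t_end:
            frame[y, x] += 1 if p > 0 else -1     # signed accumulation by polarity
    return frame

# Illustrative use: three events inside the window, one outside.
stream = [(100, 5, 5, 1), (2000, 5, 6, -1), (7000, 6, 5, 1), (20_000, 9, 9, 1)]
frame = events_to_pseudo_frame(stream, width=16, height=16, t_start_us=0)
print(frame.sum())   # only the three in-window events contribute
```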

    Development of a low-cost multi-camera star tracker for small satellites

    This thesis presents a novel small satellite star tracker that uses an array of low-cost, off-the-shelf imaging sensors to achieve high-accuracy attitude determination performance. A theoretical analysis of the improvement in star detectability achieved by stacking images from multiple cameras is presented. An image processing algorithm is developed to combine images from multiple cameras with arbitrary focal lengths, principal point offsets, distortions, and misalignments. The star tracker also implements other algorithms, including a region-growing algorithm, an intensity-weighted centroid algorithm, the geometric voting algorithm for star identification, and the singular value decomposition algorithm for attitude determination. A star tracker software simulator is used to test the algorithms by generating star images with sensor noise, lens defocusing, and lens distortion. A hardware prototype is being assembled for eventual night sky testing to verify simulated performance levels. Star tracker flight hardware is being developed in the Laboratory for Advanced Space Systems at Illinois (LASSI) at the University of Illinois at Urbana-Champaign for future CubeSat missions.
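    As a generic illustration of the attitude determination step (a textbook SVD solution to Wahba's problem, not necessarily the thesis' exact implementation), the sketch below recovers the rotation that best maps catalogue star vectors onto measured body-frame vectors; the perfect, noise-free measurements are an assumption for the check.

```python
import numpy as np

def svd_attitude(body_vecs, ref_vecs, weights=None):
    """Solve Wahba's problem with the SVD method: find the rotation matrix that
    best maps catalogue (reference) unit vectors onto measured body-frame unit
    vectors. Inputs are (n, 3) arrays; weights are optional."""
    b = np.asarray(body_vecs, dtype=float)
    r = np.asarray(ref_vecs, dtype=float)
    w = np.ones(len(b)) if weights is None else np.asarray(weights, dtype=float)
    B = (w[:, None, None] * b[:, :, None] * r[:, None, :]).sum(axis=0)  # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))   # enforce a proper rotation (det = +1)
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Illustrative check: recover a known rotation from three perfect measurements.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
ref = np.eye(3)                       # three orthogonal catalogue directions
body = ref @ R_true.T                 # what the star tracker would measure
print(np.allclose(svd_attitude(body, ref), R_true))
```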

    Program Annual Technology Report: Physics of the Cosmos Program Office

    From ancient times, humans have looked up at the night sky and wondered: Are we alone? How did the universe come to be? How does the universe work? PCOS focuses on that last question. Scientists investigating this broad theme use the universe as their laboratory, investigating its fundamental laws and properties. They test Einstein's General Theory of Relativity to see if our current understanding of space-time is borne out by observations. They examine the behavior of the most extreme environments (supermassive black holes, active galactic nuclei, and others) and the farthest reaches of the universe to expand our understanding. With instruments sensitive across the spectrum, from radio through infrared (IR), visible light, and ultraviolet (UV) to X-rays and gamma rays, as well as gravitational waves (GWs), they peer across billions of light-years, observing echoes of events that occurred instants after the Big Bang. Last year, the LISA Pathfinder (LPF) mission exceeded expectations in proving the maturity of technologies needed for the Laser Interferometer Space Antenna (LISA) mission, and the Laser Interferometer Gravitational-Wave Observatory (LIGO) recorded the first direct measurements of long-theorized GWs. Another surprising recent discovery is that the universe is expanding at an ever-accelerating rate, the first hint of so-called dark energy, estimated to account for 75% of the mass-energy in the universe. Dark matter, so called because we can only observe its effects on regular matter, is thought to account for another 20%, leaving only 5% for regular matter and energy. Scientists now also search for special polarization in the cosmic microwave background to support the notion that in the split-second after the Big Bang, the universe inflated faster than the speed of light! The most exciting aspect of this grand enterprise today is the extraordinary rate at which we can harness technologies to enable these key discoveries.

    Center for Space Microelectronics Technology. 1993 Technical Report

    The 1993 Technical Report of the Jet Propulsion Laboratory Center for Space Microelectronics Technology summarizes the technical accomplishments, publications, presentations, and patents of the Center during the past year. The report lists 170 publications, 193 presentations, and 84 New Technology Reports and patents.