Implantable Fluorescence Imager for Deep Neuronal Imaging
This thesis describes the design, fabrication, and characterization of the Implantable Fluorescence Imager (IFI): a camera chip with a needle-like form factor designed for imaging neuronal activity in the deep brain. It is fabricated in a complementary metal-oxide-semiconductor (CMOS) process, allowing hundreds or thousands of single-photon-sensitive photodetectors to be densely packed onto a device whose width is comparable to that of a single-channel fiber-optic cannula (~100 μm). The IFI uses a combination of spectral and temporal filters as a fluorescence emission filter, and per-pixel Talbot gratings for 3D light-field imaging.
The IFI has the potential to overcome the imaging depth limit of multi-photon microscopes, imposed by the scattering and absorption of photons in brain tissue, and the resolution limit of noninvasive imaging techniques such as functional magnetic resonance imaging and photoacoustic imaging. It competes with graded-index lens-based miniaturized microscopes in imaging depth, but offers several comparative advantages. First, its cross-sectional area is at least an order of magnitude smaller for an equal field of view. Second, the distribution of pixels along its entire length allows the study of multi-layer or multi-region dynamics. Finally, the scalability advantage of silicon integrated circuit technology in system miniaturization and data bandwidth may allow thousands of such imaging shanks to be simultaneously deployed for large-scale volumetric recording.
Fast fluorescence lifetime imaging and sensing via deep learning
Error on title page: the year of award is 2023.
Fluorescence lifetime imaging microscopy (FLIM) has become a valuable tool in diverse disciplines. This thesis presents deep learning (DL) approaches to addressing two major challenges in FLIM: slow and complex data analysis, and the high photon budget required to precisely quantify fluorescence lifetimes. DL's ability to extract high-dimensional features from data has revolutionized optical and biomedical imaging analysis. This thesis contributes several novel DL FLIM algorithms that significantly expand FLIM's scope.
Firstly, a hardware-friendly pixel-wise DL algorithm is proposed for fast FLIM data analysis. The algorithm has a simple architecture yet can effectively resolve multi-exponential decay models. The calculation speed and accuracy outperform conventional methods significantly.
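For context on the conventional analysis these methods compete with, the decay model behind FLIM fitting can be written as I(t) = Σᵢ aᵢ·exp(−t/τᵢ). The sketch below fits only the simplest (mono-exponential) case with a log-linear least-squares fit; the function name and all values are illustrative, not taken from the thesis.

```python
import numpy as np

def fit_mono_exponential(t, counts):
    """Estimate tau from I(t) = a * exp(-t / tau) by fitting a line to
    log(counts) versus t; the slope of that line is -1/tau."""
    slope, _intercept = np.polyfit(t, np.log(counts), 1)
    return -1.0 / slope

# Synthetic, noiseless decay histogram: tau = 2.5 ns over a 10 ns window.
t = np.linspace(0.0, 10.0, 256)   # time bins, ns
counts = 1000.0 * np.exp(-t / 2.5)
tau_hat = fit_mono_exponential(t, counts)
print(round(tau_hat, 3))          # ~2.5 ns
```

With photon noise and multi-exponential mixtures this simple approach degrades quickly, which is the gap the pixel-wise DL estimator is meant to close.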
Secondly, a DL algorithm is proposed to improve FLIM image spatial resolution, obtaining high-resolution (HR) fluorescence lifetime images from low-resolution (LR) images. A computational framework is developed to generate large-scale semi-synthetic FLIM datasets to address the challenge of the lack of sufficient high-quality FLIM datasets. This algorithm offers a practical approach to obtaining HR FLIM images quickly for FLIM systems.
Thirdly, a DL algorithm is developed to analyze FLIM images with only a few photons per pixel, named Few-Photon Fluorescence Lifetime Imaging (FPFLI) algorithm. FPFLI uses spatial correlation and intensity information to robustly estimate the fluorescence lifetime images, pushing this photon budget to a record-low level of only a few photons per pixel.
Finally, a time-resolved flow cytometry (TRFC) system is developed by integrating an advanced CMOS single-photon avalanche diode (SPAD) array and a DL processor. The SPAD array, using a parallel light detection scheme, shows excellent photon-counting throughput. A quantized convolutional neural network (QCNN) algorithm is designed and implemented on a field-programmable gate array as an embedded processor. The processor resolves fluorescence lifetimes against disturbing noise, showing unparalleled accuracy, fast analysis speed, and low power consumption.
Fluorescence lifetime imaging with a megapixel SPAD camera and neural network lifetime estimation
Fluorescence lifetime imaging microscopy (FLIM) is a key technology that provides direct insight into cell metabolism, cell dynamics, and protein activity. However, determining the lifetimes of different fluorescent proteins requires the detection of a relatively large number of photons, hence slowing down total acquisition times. Moreover, there are many cases, for example in studies of cell collectives, where wide-field imaging is desired. We report scan-less wide-field FLIM based on a 0.5 MP resolution, time-gated single-photon avalanche diode (SPAD) camera, with acquisition rates up to 1 Hz. Fluorescence lifetime estimation is performed via a pre-trained artificial neural network with a 1000-fold improvement in processing times compared to standard least-squares fitting techniques. We utilised our system to image the HT1080 human fibrosarcoma cell line as well as Convallaria. The results show promise for real-time FLIM and a viable route towards multi-megapixel fluorescence lifetime images; a proof-of-principle 3.6 MP mosaic image is shown.
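As background on how time-gated SPAD data can yield lifetimes without iterative fitting, a classic fast alternative is rapid lifetime determination (RLD) from two equal-width gates: for a mono-exponential decay, τ = Δt / ln(N₁/N₂). This is a generic textbook method, not the neural-network estimator of the paper; all values below are hypothetical.

```python
import numpy as np

def rld_lifetime(n1, n2, gate_separation):
    """Rapid lifetime determination: two equal-width gates separated by
    gate_separation on a mono-exponential decay give tau = dt / ln(N1/N2)."""
    return gate_separation / np.log(n1 / n2)

tau_true = 3.0   # ns, lifetime used to synthesize the gate counts
dt = 2.0         # ns between the two gate openings
width = 1.0      # ns gate width

def gate(t0):
    # Analytic integral of exp(-t/tau) over [t0, t0 + width]
    return tau_true * (np.exp(-t0 / tau_true) - np.exp(-(t0 + width) / tau_true))

n1, n2 = gate(0.0), gate(dt)
print(round(rld_lifetime(n1, n2, dt), 3))  # 3.0
```

RLD is exact for noiseless mono-exponential decays but biased under Poisson noise and multi-exponential mixtures, which motivates the learned estimators discussed in this abstract.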
Low-Rate Smartphone Videoscopy for Microsecond Luminescence Lifetime Imaging with Machine Learning
Time-resolved techniques have been widely used in time-gated and luminescence lifetime imaging. However, traditional time-resolved systems require expensive lab equipment, such as high-speed excitation sources and detectors, or complicated mechanical choppers to achieve high repetition rates. Here, we present a cost-effective and miniaturized smartphone lifetime imaging system integrated with a pulsed UV LED for 2D luminescence lifetime imaging, using a videoscopy-based virtual chopper (V-chopper) mechanism combined with machine learning. The V-chopper method generates a series of time-delayed images between excitation pulses and smartphone gating so that the luminescence lifetime can be measured at each pixel using a relatively low acquisition frame rate (e.g., 30 fps) without the need for excitation synchronization. Europium (Eu) complex dyes with luminescence lifetimes ranging from microseconds to seconds were used to demonstrate and evaluate the principle of the V-chopper on a 3D-printed smartphone microscopy platform. A convolutional neural network (CNN) model was developed to automatically distinguish the gated images in different decay cycles with an accuracy of >99.5%. The current smartphone V-chopper system can detect lifetimes down to ~75 microseconds by utilizing the default phase shift between the smartphone video rate and the excitation pulses, and in principle it can detect much shorter lifetimes by accurately programming the time delay. This V-chopper methodology eliminates the need for the expensive and complicated instruments used in traditional time-resolved detection and can greatly expand the applications of time-resolved lifetime technologies.
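The virtual-chopper principle can be illustrated with a few lines of arithmetic: when the camera frame period Tf and the excitation pulse period Tp are slightly mismatched, each successive frame samples the decay at a slowly sweeping delay, much like equivalent-time sampling in an oscilloscope. The periods below are hypothetical, chosen only to show the effect; they are not the paper's values.

```python
# Equivalent-time sampling behind the V-chopper idea (illustrative numbers).
fps = 30.0
Tf = 1.0 / fps            # camera frame period, s
Tp = 0.0333               # excitation pulse period, s (deliberately offset)

# Per-frame advance of the camera gate within the excitation/decay cycle.
step = Tf % Tp

# Effective delay of frames 0..4 relative to the excitation pulse.
delays = [(k * Tf) % Tp for k in range(5)]
print(step, delays)
```

Here the delay advances by roughly 33 µs per frame, so a 30 fps camera sweeps through the entire decay over many frames; a finer effective time step follows from programming a smaller period mismatch.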
Time domain functional NIRS imaging for human brain mapping
This review is aimed at presenting the state of the art of time-domain (TD) functional near-infrared spectroscopy (fNIRS). We first introduce the physical principles and the basics of modeling and data analysis. The basic instrumentation components (light sources, detection techniques, and delivery and collection systems) of a TD fNIRS system are described. A survey of past, existing, and next-generation TD fNIRS systems used for research and clinical studies is presented. Performance assessment of TD fNIRS systems and standardization issues are also discussed. The main strengths and weaknesses of TD fNIRS are highlighted, also in comparison with continuous-wave (CW) fNIRS. Issues such as quantification of the hemodynamic response, penetration depth, depth selectivity, spatial resolution, and contrast-to-noise ratio are critically examined with the help of experimental results obtained on phantoms or in vivo. Finally, we give an account of the technological developments that would pave the way for a broader use of TD fNIRS in the neuroimaging community.
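For context on the quantification issue raised above, fNIRS analyses commonly start from the modified Beer-Lambert law; a standard generic form (not specific to TD systems) relates the measured change in optical density to chromophore concentration changes:

```latex
\Delta \mathrm{OD}(\lambda) =
  \left( \varepsilon_{\mathrm{HbO}}(\lambda)\,\Delta[\mathrm{HbO}]
       + \varepsilon_{\mathrm{HbR}}(\lambda)\,\Delta[\mathrm{HbR}] \right)
  \cdot r \cdot \mathrm{DPF}(\lambda)
```

where ε are the molar extinction coefficients of oxy- and deoxy-hemoglobin, r is the source-detector separation, and DPF(λ) is the differential pathlength factor. A key strength of TD fNIRS discussed in this review is that the photon time-of-flight distribution gives direct access to pathlength information that CW systems must assume.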
Miniaturized Optical Probes for Near Infrared Spectroscopy
The study of light propagation in highly diffusive media such as biological tissues (diffuse optical imaging) is highly appealing due to the possibility of exploring the medium non-invasively, deep beneath the surface, and of recovering information both on absorption (related to chemical composition) and on scattering (related to microstructure). In the 600–1000 nm spectral range, also known as the near-infrared (NIR) range, light attenuation by the biological tissue constituents (i.e., water, lipid, and hemoglobin) is relatively low and allows for penetration through several centimeters of tissue. In near-infrared spectroscopy (NIRS), a light signal is injected into the tissues and the emitted signal carrying information on tissue constituents is measured. The measurement of very faint light signals in the visible and near-infrared wavelength range with picosecond timing resolution has proven to be an effective technique for studying biological tissues in functional brain imaging, optical mammography, and molecular imaging, not to mention fluorescence lifetime imaging, fluorescence correlation spectroscopy, quantum information, and many others. Time-domain (TD) NIRS employs a pulsed light source, typically a laser providing light pulses with a duration of a few tens of picoseconds, and a detection circuit with sub-nanosecond temporal resolution. The key point of these measurements is the need to increase sensitivity at greater penetration depths, in particular for functional brain imaging, where the skin, skull, and cerebrospinal fluid (CSF) heavily mask the brain signal.
To date, the widespread adoption of these non-invasive optical monitoring techniques has mainly been hampered by traditional bulky, expensive, complex, and fragile components, which significantly impact the overall cost and dimensions of the system. Our goal is the development of a miniaturized, compact NIRS probe that can be put in direct contact with the sample under test to obtain high diffused-photon harvesting efficiency, without the need for cumbersome optical fibers and lenses for light injection and collection. The proposed system is composed of two parts: i) a pulsed light emission unit and ii) a gated single-photon detection module. The light emission unit will employ a laser source pulsed at over 80 MHz with a picosecond pulse-width generator embedded into the probe, alongside the light detection unit, which comprises single-photon detectors integrated with other peripheral control circuitry. Pairing the source and detector at short distance, most preferably on a single chip, has the potential to greatly expedite the traditional approach to portable brain imaging.
A 64x64 SPAD array for portable colorimetric sensing, fluorescence and X-ray imaging
We present the design and application of a 64×64-pixel SPAD array for portable colorimetric sensing, fluorescence imaging, and X-ray imaging. The device was fabricated in an unmodified 180 nm CMOS process and is based on a square p+/n active-junction SPAD geometry suitable for detecting green fluorescence emission. The stand-alone SPAD shows a photon detection probability greater than 60% at 5 V excess bias, with a dark count rate of less than 4 cps/µm² and sub-ns timing jitter. It has a global shutter with an in-pixel 8-bit counter; four 5-bit decoders and two 64-to-1 multiplexer blocks allow the data to be read out. The array of sensors was able to detect fluorescence from a fluorescein isothiocyanate (FITC) solution down to a concentration of 900 pM with an SNR of 9.8 dB. A colorimetric assay was performed on top of the sensor array with a limit of quantification of 3.1 µM. X-ray images of a lead grating mask, using energies ranging from 10 kVp to 100 kVp, were acquired without a scintillation crystal.
CMOS-Based Lensless Imaging Systems and Support Circuits
While much progress has been made in various fields of study over the past few decades, leading to a better understanding of science as well as a better quality of life, the role of optical sensing has grown among electrical, chemical, optical, and other physical signal modalities. As an example, fluorescence microscopy has become one of the most important methods in modern biology. However, broader implementation of optical sensing has been limited by the expensive and bulky optical and mechanical components of conventional optical sensor systems. To address this bottleneck, this dissertation presents several cost-effective, compact optical sensor arrays based on solid-state devices that can replace the conventional components. As an example, in chapter 2 we demonstrate a chip-scale (<1 mm²) sensor, the Planar Fourier Capture Array (PFCA), capable of imaging the far field without any off-chip optics. The PFCA consists of an array of angle-sensitive pixels manufactured in a standard semiconductor process, each of which reports one component of a spatial two-dimensional (2D) Fourier transform of the local light field. Thus, the sensor directly captures 2D Fourier transforms of scenes. The effective resolution of our prototype is approximately 400 pixels. My work on this project [15] includes circuit design and layout and the overall testing of the imaging system. In chapter 3 we present a fully integrated single-photon avalanche diode (SPAD) using only standard low-voltage (1.8 V) CMOS devices in a 0.18 µm process. The system requires one high-voltage AC signal, which alternately reverse-biases the SPADs into avalanche breakdown and then resets them with a forward bias. The proposed self-quenching circuit intrinsically suppresses after-pulsing, improving the signal-to-noise ratio while still permitting fine time resolution. The required high-voltage AC signal can be generated by resonant structures and can be shared across arrays of SPADs [24].
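The PFCA concept above can be summarized computationally: if each angle-sensitive pixel reports one component of the scene's 2D Fourier transform, the far-field image is recovered with an inverse transform. The sketch below stands in for the optics with an explicit FFT; array sizes are arbitrary and the idealized noise-free "readout" is an illustration, not a model of the actual chip.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((20, 20))          # toy far-field intensity pattern

# "Sensor readout": one complex Fourier coefficient per angle-sensitive pixel.
readout = np.fft.fft2(scene)

# Reconstruction: inverse 2D Fourier transform of the per-pixel components.
recovered = np.fft.ifft2(readout).real
print(np.allclose(recovered, scene))  # True
```

In the real device the pixels sample the Fourier plane incompletely and noisily, so reconstruction involves regularized inversion rather than a plain inverse FFT; the round trip above only illustrates why Fourier-domain sampling suffices for lensless imaging.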
An ideal light sensor providing the precise intensity, location, and angle of incident photons is shown in chapter 4. Single-photon avalanche diodes (SPADs) provide the desired high (single-photon) sensitivity with precise time information, and can be implemented at pixel scale to form an array that extracts spatial information. Furthermore, recent work has demonstrated photodiode-based structures (combined with micro-lenses and diffraction gratings) that are capable of encoding both the spatial and angular information of incident light. In this chapter, we describe the implementation of such a grating structure on a SPAD to realize a pixel-scale angle-sensitive single-photon avalanche diode (A-SPAD) in a standard CMOS process. While the underlying SPAD structure provides the high sensitivity, the diffraction gratings, consisting of two sets of metal layers, offer the angle sensitivity. This unique combination of the SPAD and the diffraction gratings expands the sensing dimensions to pave a path toward lens-less 3-D imaging and light-field time-of-flight imaging. In chapter 5, we present a 72 × 60 angle-sensitive single-photon avalanche diode (A-SPAD) array for lens-less 3-D fluorescence lifetime imaging. A-SPAD pixels comprise (1) a SPAD to resolve precise timing information, reject the high-powered UV stimulus, and map the lifetimes of different fluorescent sources, and (2) diffraction gratings integrated on top of the SPAD to extract the incident angles of incoming light, enabling 3-D localization at the micrometer scale. The chip presented in this work also integrates pixel-level counters as well as shared timing circuitry, and is implemented in conventional 180 nm CMOS technology without any post-processing. Contact-based read-out from a revolving MEMS accelerometer is problematic; therefore, contactless (optical) read-out is preferred. The optical readout requires an image sensor able to resolve nanometer-scale shifts of the MEMS image.
Traditional imagers record on a rectangular grid, which is not well suited for efficiently imaging rotating objects due to the significant processing overhead required to translate Cartesian coordinates to angular position. Therefore, in chapter 6 we demonstrate a high-speed (~1 kfps), circular CMOS imaging array for contactless optical measurement of rotating inertial sensors. The imager is designed for real-time optical readout and calibration of a MEMS accelerometer revolving at greater than 1000 rpm. The imager uses a uniform circular arrangement of pixels to enable rapid imaging of rotating objects. Furthermore, each photodiode is itself circular to maintain a uniform response throughout the entire revolution. Combining a high frame rate and a uniform response to motion, the imager achieves sub-pixel resolution (25 nm) of the displacement of micro-scale features. To avoid fixed-pattern noise arising from non-uniform routing within the array, we implemented a new global shutter technique that is insensitive to parasitic capacitance. To ease integration with various MEMS platforms, the system has SPI control, on-chip bias generation, sub-array imaging, and digital data read-out. My work on this project [20] includes circuit design and layout and some testing, including an FPGA-based controller design for the imaging system. In the previous chapters, compact and cost-effective imaging systems have been introduced. These imaging systems show great potential for wireless implantable systems. A power rectifier for the implant provides a DC supply voltage from a small AC input voltage using a small inductor, to keep the volume small. In the last chapter we demonstrate an inductively powered, orthogonal current-reuse multi-channel amplifier for power-efficient neural recording. The power rectifier uses the input swing as a self-synchronous charge pump, making it a fully passive, full-wave ladder rectifier.
The rectifier supplies 10.37 µW at 1.224 V to the multi-channel amplifier, which includes bias generation. The prototype device is fabricated in a TSMC 65 nm CMOS process, with an active area of 0.107 mm². The maximum measured power conversion efficiency (PCE) is 16.58% with a 184 mV input amplitude. My work on this project [25] includes the rectifier design and the overall testing needed to combine it with the "orthogonal current-reuse neural amplifier" designed by Ben Johnson.
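The reported figures can be cross-checked with simple arithmetic: PCE = P_out / P_in, so the quoted output power and peak efficiency imply the input power and average load current below. This is a sanity check on the abstract's own numbers, not additional measured data.

```python
# Arithmetic check on the reported rectifier figures.
p_out = 10.37e-6          # W, delivered to the multi-channel amplifier
v_out = 1.224             # V, rectifier output voltage
pce = 0.1658              # peak measured power conversion efficiency

i_load = p_out / v_out    # average load current, ~8.5 uA
p_in = p_out / pce        # implied input power at peak efficiency, ~62.5 uW
print(round(i_load * 1e6, 2), round(p_in * 1e6, 1))
```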
Deep Learning in Single-Cell Analysis
Single-cell technologies are revolutionizing the entire field of biology. The large volumes of data generated by single-cell technologies are high-dimensional, sparse, heterogeneous, and have complicated dependency structures, making analyses using conventional machine learning approaches challenging and impractical. In tackling these challenges, deep learning often demonstrates superior performance compared to traditional machine learning methods. In this work, we give a comprehensive survey of deep learning in single-cell analysis. We first introduce background on single-cell technologies and their development, as well as fundamental concepts of deep learning, including the most popular deep architectures. We present an overview of the single-cell analytic pipeline pursued in research applications, while noting divergences due to data sources or specific applications. We then review seven popular tasks spanning different stages of the single-cell analysis pipeline, including multimodal integration, imputation, clustering, spatial domain identification, cell-type deconvolution, cell segmentation, and cell-type annotation. Under each task, we describe the most recent developments in classical and deep learning methods and discuss their advantages and disadvantages. Deep learning tools and benchmark datasets are also summarized for each task. Finally, we discuss future directions and the most recent challenges. This survey will serve as a reference for biologists and computer scientists, encouraging collaborations.
Comment: 77 pages, 11 figures, 15 tables