169 research outputs found

    Neural Network Methods for Radiation Detectors and Imaging

    Recent advances in image data processing through machine learning and especially deep neural networks (DNNs) allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware through data-endowed artificial intelligence. We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration. Most existing deep learning approaches are trained offline, typically using large amounts of computational resources. However, once trained, DNNs can achieve fast inference speeds and can be deployed to edge devices. A new trend is edge computing with lower energy consumption (hundreds of watts or less) and real-time analysis potential. While popularly used for edge computing, electronic hardware accelerators ranging from general-purpose processors such as central processing units (CPUs) to application-specific integrated circuits (ASICs) are constantly reaching performance limits in latency, energy consumption, and other physical constraints. These limits give rise to next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to boost deep learning acceleration.
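    A minimal sketch of the offline-training/edge-inference split described above: a tiny, already-trained network is reduced to a pure forward pass over patches of a simulated detector frame, and the per-frame inference time is measured. The layer sizes, weights, frame dimensions, and patch size are illustrative assumptions, not values from the paper.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "offline-trained" weights for a tiny two-layer patch scorer
# (64-pixel patches -> 32 hidden units -> 1 score). Real weights would come
# from an offline training run on labelled detector data.
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def infer(patches):
    """Forward pass only: this is all an edge device has to execute."""
    h = np.maximum(patches @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2                       # per-patch score

# Simulated 512x512 detector frame split into 8x8 patches (illustrative sizes).
frame = rng.poisson(lam=3.0, size=(512, 512)).astype(np.float64)
patches = frame.reshape(64, 8, 64, 8).transpose(0, 2, 1, 3).reshape(-1, 64)

t0 = time.perf_counter()
scores = infer(patches)
dt = time.perf_counter() - t0
print(f"{patches.shape[0]} patches scored in {dt * 1e3:.2f} ms")
```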

    Visualization and Quantitative Analysis of Reconstituted Tight Junctions Using Localization Microscopy

    Tight junctions (TJ) regulate the paracellular permeability of tissue barriers. Claudins (Cld) form the backbone of TJ-strands. Pore-forming claudins determine the permeability for ions, whereas that for solutes and macromolecules is assumed to be crucially restricted by the strand morphology (i.e., density, branching, and continuity). To investigate determinants of the morphology of TJ-strands, we established a novel approach using localization microscopy.

    Inferring Biological Structures from Super-Resolution Single Molecule Images Using Generative Models

    Localization-based super-resolution imaging is presently limited by sampling requirements for dynamic measurements of biological structures. Generating an image requires serial acquisition of individual molecular positions at sufficient density to define a biological structure, increasing the acquisition time. Efficient analysis of biological structures from sparse localization data could substantially improve the dynamic imaging capabilities of these methods. Using a feature extraction technique called the Hough transform, simple biological structures are identified from both simulated and real localization data. We demonstrate that these generative models can efficiently infer biological structures from far fewer localizations than are required for complete spatial sampling. Analysis at partial data densities revealed efficient recovery of clathrin vesicle size distributions and microtubule orientation angles with as little as 10% of the localization data. This approach significantly increases the temporal resolution for dynamic imaging and provides quantitatively useful biological information.
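    To make the Hough-transform step concrete, the sketch below casts votes for circle parameters using only a sparse set of simulated ring localizations (roughly the 10% sampling regime mentioned above) and reads off the best-supported center and radius. The ring radius, grid spacing, noise level, and vote tolerance are hypothetical and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate sparse localizations on a vesicle-like ring: hypothetical true
# center (5.0, 3.0), radius 2.5, only 10% of a "full" sampling, plus noise.
true_c, true_r, n_full = np.array([5.0, 3.0]), 2.5, 400
theta = rng.uniform(0, 2 * np.pi, int(0.10 * n_full))
pts = true_c + true_r * np.c_[np.cos(theta), np.sin(theta)]
pts += rng.normal(scale=0.05, size=pts.shape)        # localization error

# Circle Hough transform: each localization votes for all (center, radius)
# combinations consistent with it, on a coarse parameter grid.
cx = np.linspace(3, 7, 41)
cy = np.linspace(1, 5, 41)
radii = np.linspace(1.5, 3.5, 21)
acc = np.zeros((cx.size, cy.size, radii.size))
for x, y in pts:
    d = np.sqrt((x - cx[:, None]) ** 2 + (y - cy[None, :]) ** 2)
    acc += np.abs(d[:, :, None] - radii[None, None, :]) < 0.1

i, j, k = np.unravel_index(acc.argmax(), acc.shape)
print(f"estimated center ({cx[i]:.2f}, {cy[j]:.2f}), radius {radii[k]:.2f} "
      f"from {len(pts)} localizations (true radius {true_r})")
```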

    Single molecule localization microscopy of the distribution of chromatin using Hoechst and DAPI fluorescent probes

    Several approaches have been described to fluorescently label and image DNA and chromatin in situ at the single-molecule level. These superresolution microscopy techniques are based on detecting optically isolated, fluorescently tagged anti-histone antibodies, fluorescently labeled DNA precursor analogs, or fluorescent dyes bound to DNA. Presently, they suffer from various drawbacks such as low labeling efficiency or interference with DNA structure. In this report, we demonstrate that DNA minor groove binding dyes, such as Hoechst 33258, Hoechst 33342, and DAPI, can be effectively employed in single molecule localization microscopy (SMLM) with high optical and structural resolution. Upon illumination with low-intensity 405 nm light, a small subpopulation of these molecules stochastically undergoes photoconversion from the original blue-emitting form to a green-emitting form. Using 491 nm laser excitation, fluorescence of these green-emitting, optically isolated molecules was registered until “bleached”. This procedure substantially facilitated the optical isolation and localization of large numbers of individual dye molecules bound to DNA in situ, in nuclei of fixed mammalian cells, or in mitotic chromosomes, and enabled the reconstruction of high-quality DNA density maps. We anticipate that this approach will provide new insights into DNA replication, DNA repair, gene transcription, and other nuclear processes.
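    A schematic version of the reconstruction step described above, assuming the simplest possible localizer: isolated emitters are detected as local maxima, localized by an intensity-weighted centroid, and the accumulated positions are binned into a super-resolved density map. Emitter brightness, background, pixel sizes, and frame counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

PIX = 100.0    # camera pixel size in nm (illustrative)
SR_PIX = 10.0  # super-resolution rendering pixel in nm (illustrative)

def centroid_localize(frame, threshold=50.0, win=3):
    """Localize optically isolated emitters by intensity-weighted centroid."""
    locs = []
    ys, xs = np.where(frame > threshold)           # candidate bright pixels
    for y, x in zip(ys, xs):
        sub = frame[max(y - win, 0):y + win + 1, max(x - win, 0):x + win + 1]
        if sub.max() != frame[y, x]:
            continue                               # keep only local maxima
        yy, xx = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
        cy = (yy * sub).sum() / sub.sum() + max(y - win, 0)
        cx = (xx * sub).sum() / sub.sum() + max(x - win, 0)
        locs.append((cy * PIX, cx * PIX))          # convert to nm
    return locs

# Accumulate localizations over many sparse frames into a density map.
all_locs = []
for _ in range(200):                               # 200 simulated frames
    frame = rng.normal(10, 2, size=(64, 64))       # camera background
    for _ in range(5):                             # a few isolated emitters per frame
        ey, ex = rng.uniform(10, 54, size=2)
        yy, xx = np.mgrid[0:64, 0:64]
        frame += 300 * np.exp(-((yy - ey) ** 2 + (xx - ex) ** 2) / (2 * 1.3 ** 2))
    all_locs += centroid_localize(frame)

ys, xs = np.array(all_locs).T
density_map, _, _ = np.histogram2d(ys, xs, bins=int(64 * PIX / SR_PIX))
print(f"{len(all_locs)} localizations rendered into a {density_map.shape} map")
```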

    Neural network methods for radiation detectors and imaging

    Recent advances in image data processing through deep learning allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware. This enables data-endowed artificial intelligence for radiation experiments, including photon science at synchrotron and X-ray free-electron laser facilities as a subclass. We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration. Most existing deep learning approaches are trained offline, typically using large amounts of computational resources. However, once trained, deep neural networks (DNNs) can achieve fast inference speeds and can be deployed to edge devices. A new trend is edge computing with lower energy consumption (hundreds of watts or less) and real-time analysis potential. While popularly used for edge computing, electronic hardware accelerators ranging from general-purpose processors such as central processing units (CPUs) to application-specific integrated circuits (ASICs) are constantly reaching performance limits in latency, energy consumption, and other physical constraints. These limits give rise to next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to boost deep learning acceleration (LA-UR-23-32395).

    High-Resolution Label Free Imaging of Endogenous Chromophore via Non-Linear Photoacoustic Microscopy

    Molecular-specific subcellular imaging of biological tissues is vital for understanding the mechanisms of various pathologies. Current technologies for subcellular absorption contrast imaging, such as fluorescence confocal microscopy, require exogenous contrast agents to gain access to relevant biomolecules. All non-fluorescing biomolecules must therefore be tagged by a fluorescent marker to be visible in fluorescence confocal images. While these markers are effective, they can change the local environment, and any exogenous contrast agent must first achieve FDA approval for widespread use in humans. Photoacoustic microscopy (PAM) is a hybrid imaging modality combining optical absorption imaging with ultrasonic detection that is capable of endogenous absorption contrast. Unfortunately, traditional photoacoustic microscopy suffers from poor axial resolution, precluding it from three-dimensional subcellular imaging. High axial resolution may be lent to PAM through the addition of a pump-probe spectroscopy technique known as transient absorption. This high-resolution PAM technique, known as transient absorption ultrasonic microscopy (TAUM), enables three-dimensional subcellular imaging of endogenous biomolecules. The pump-probe spectroscopy properties inherent to TAUM provide optically resolved point spread functions, access to ground-state recovery times, and access to transient absorption spectrum measurements. This manuscript describes the author’s efforts to improve the processing capabilities of both PAM and TAUM. In this manuscript, various TAUM systems are designed and characterized in detail. A second-generation TAUM system improves the processing speed of TAUM to enable processing in parallel with data acquisition. Following the improvements to processing, a novel optical schematic for TAUM is developed, greatly simplifying the design requirements of TAUM imaging. This system is validated by collecting volumetric images of erythrocytes in blood smears. This work enables any PAM system to be converted to a TAUM system through the addition of an optical modulator. The culmination of this work is a multispectral TAUM system hybridized with a confocal microscope to enable high-resolution imaging with both scattering and absorption contrast of biological tissues. The capabilities of this PAM and TAUM system are demonstrated by obtaining high-resolution images of the endogenous chromophores: hemoglobin, melanin, and cytochrome c.
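    The pump-probe detection principle behind TAUM can be illustrated with a toy signal model: when the pump and probe intensities are modulated at two different frequencies, the transient-absorption (nonlinear) part of the photoacoustic response appears at the beat frequency and can be recovered by lock-in-style demodulation. The modulation frequencies, amplitudes, noise level, and the signal model itself are deliberately simplified assumptions rather than a description of the actual instrument.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 1.0e6                                  # sample rate, Hz (illustrative)
t = np.arange(0, 0.05, 1 / fs)              # 50 ms record
f_pump, f_probe = 70e3, 81e3                # modulation frequencies (hypothetical)

pump = 0.5 * (1 + np.sin(2 * np.pi * f_pump * t))    # modulated pump intensity
probe = 0.5 * (1 + np.sin(2 * np.pi * f_probe * t))  # modulated probe intensity

# Toy photoacoustic response: linear absorption of each beam plus a small
# nonlinear (transient-absorption) term proportional to pump*probe.
a_lin, a_nl = 1.0, 0.02
pa = a_lin * (pump + probe) + a_nl * pump * probe + rng.normal(0, 0.02, t.size)

# Demodulation at the difference frequency isolates the nonlinear component,
# which carries the TAUM contrast.
f_diff = f_probe - f_pump
ref_i = np.cos(2 * np.pi * f_diff * t)
ref_q = np.sin(2 * np.pi * f_diff * t)
amp = 2 * np.hypot((pa * ref_i).mean(), (pa * ref_q).mean())

print(f"recovered beat amplitude {amp:.4f} (expected about {a_nl / 8:.4f})")
```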

    Hardware acceleration using FPGAs for adaptive radiotherapy

    Adaptive radiotherapy (ART) seeks to improve the accuracy of radiotherapy by adapting the treatment based on up-to-date images of the patient's anatomy captured at the time of treatment delivery. The amount of image data, combined with the clinical time requirements for ART, necessitates automatic image analysis to adapt the treatment plan. Currently, the computational effort of the image processing and plan adaptation means they cannot be completed in a clinically acceptable timeframe. This thesis aims to investigate the use of hardware acceleration on Field Programmable Gate Arrays (FPGAs) to accelerate algorithms for segmenting bony anatomy in Computed Tomography (CT) scans, to reduce the plan adaptation time for ART. An assessment was made of the overhead incurred by transferring image data to an FPGA-based hardware accelerator using the industry-standard DICOM protocol over an Ethernet connection. The transfer rate was found to be likely to limit the performance of hardware accelerators for ART, highlighting the need for an alternative method of integrating hardware accelerators with existing radiotherapy equipment. A clinically validated segmentation algorithm was adapted for implementation in hardware. This was shown to process three-dimensional CT images up to 13.81 times faster than the original software implementation. The segmentations produced by the two implementations showed strong agreement. Modifications to the hardware implementation were proposed for segmenting four-dimensional CT scans. The modified implementation was shown to process image volumes 14.96 times faster than the original software implementation, and the segmentations produced by the two implementations showed strong agreement in most cases. A second, novel method for segmenting four-dimensional CT data was also proposed. Its hardware implementation executed 1.95 times faster than the software implementation. However, the algorithm was found to be unsuitable for the global segmentation task examined here, although it may be suitable as a refining segmentation in the context of a larger ART algorithm.
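    The conclusion that DICOM transfer over Ethernet can limit accelerator performance can be made concrete with a back-of-envelope model in which transfer and processing are serialized: the end-to-end speedup is then capped well below the raw kernel speedup. Apart from the 13.81x kernel speedup quoted above, all numbers (volume size, effective link rate, baseline segmentation time) are illustrative assumptions.

```python
# Back-of-envelope model of how image-transfer overhead caps the end-to-end
# speedup of an FPGA accelerator (all inputs illustrative except the 13.81x
# kernel speedup quoted in the abstract; transfer and compute assumed serial).
volume_bytes = 512 * 512 * 200 * 2        # hypothetical CT volume, 16-bit voxels
link_bytes_per_s = 20e6                   # hypothetical effective DICOM/Ethernet rate
t_transfer = volume_bytes / link_bytes_per_s

t_sw_segment = 30.0                       # hypothetical software segmentation time, s
kernel_speedup = 13.81                    # hardware vs software kernel time (from abstract)
t_hw_segment = t_sw_segment / kernel_speedup

t_sw_total = t_sw_segment                 # baseline assumed to already hold the data
t_hw_total = t_transfer + t_hw_segment    # accelerator must first receive the volume

print(f"transfer {t_transfer:.1f} s + hw segment {t_hw_segment:.1f} s "
      f"-> end-to-end speedup {t_sw_total / t_hw_total:.2f}x (kernel {kernel_speedup}x)")
```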