
    Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification

    Measurement of blood oxygen saturation (sO2) by optical imaging oximetry provides invaluable insight into local tissue functions and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques utilize the same principle of sO2-dependent spectral contrast from haemoglobin. Traditional approaches for quantifying sO2 often rely on analytical models that are fitted to the spectral measurements. In practice, these approaches suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in the experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), to achieve oximetry that is highly robust to experimental variations and, more importantly, able to provide uncertainty quantification for each sO2 prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible-light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variabilities as well as to the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method. The DSL-predicted sO2 shows significantly lower mean-square errors than those of the LSF. For the first time, we have demonstrated en face maps of retinal oximetry along with a pixel-wise confidence assessment. Our DSL overcomes several limitations of traditional approaches and provides a more flexible, robust, and reliable deep-learning approach for in vivo, non-invasive, label-free optical oximetry.
    Funding: R01 CA224911 - NCI NIH HHS; R01 CA232015 - NCI NIH HHS; R01 NS108464 - NINDS NIH HHS; R21 EY029412 - NEI NIH HHS. Accepted manuscript.
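    The entry above describes mapping a measured spectrum directly to an sO2 value together with a per-prediction uncertainty. Below is a minimal sketch of that general idea, assuming a simple heteroscedastic-regression setup: a small network outputs both a mean sO2 and a log-variance and is trained with a Gaussian negative log-likelihood. The architecture, layer sizes, and random stand-in data are illustrative assumptions, not the authors' DSL models.

```python
# Minimal sketch (not the authors' code): a network maps a backscattering
# spectrum to an sO2 estimate plus a per-prediction uncertainty, trained with
# a heteroscedastic Gaussian negative log-likelihood. Sizes are illustrative.
import torch
import torch.nn as nn

class SpectralOximetryNet(nn.Module):
    def __init__(self, n_wavelengths=32, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_wavelengths, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)    # predicted sO2
        self.logvar_head = nn.Linear(hidden, 1)  # predicted log-variance (uncertainty)

    def forward(self, spectra):
        h = self.backbone(spectra)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # Negative log-likelihood of a Gaussian with a learned, input-dependent variance.
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

model = SpectralOximetryNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
spectra = torch.rand(128, 32)   # stand-in for measured spectra
so2 = torch.rand(128, 1)        # stand-in for ground-truth sO2
for _ in range(100):
    mean, logvar = model(spectra)
    loss = gaussian_nll(mean, logvar, so2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```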

    EFFECTS OF DRIVERLESS VEHICLES ON THE COMPETITIVENESS OF BUS TRANSIT SERVICES

    The advent of driverless vehicles, including automobiles and buses, may considerably affect the competitiveness and ridership of public transportation services in negative as well as positive ways. Since driverless vehicles may be widely used in the fairly near future, public transit operators and transportation planners should prepare to deal with their anticipated effects. In this thesis the author (1) formulates modular optimization models for both human-driven and automated bus services with fixed as well as flexible routes; (2) develops preliminary quantitative assessments of those effects, showing that without drivers the competitiveness of public transportation relative to private transportation decreases; (3) conducts sensitivity analyses to explore how changes in input parameters affect the results; and (4) identifies insights that transit operators, transportation planners, and other transportation system stakeholders may use in effectively adapting to the introduction of driverless vehicles.
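    As a rough illustration of the kind of cost model and sensitivity sweep the thesis describes (not its actual formulation), the toy below varies demand and compares an operator's per-passenger cost with and without a driver wage. Every parameter value is hypothetical, and the sketch covers only the operator-cost side, not the full competitiveness comparison against private automated vehicles.

```python
# Illustrative only (not the thesis's model): toy per-passenger operator cost
# for one bus in service, with a simple sensitivity sweep over demand.
def avg_cost_per_passenger(vehicle_cost_per_hour, driver_wage_per_hour,
                           passengers_per_hour):
    """Operator cost per boarding for one vehicle-hour of service."""
    return (vehicle_cost_per_hour + driver_wage_per_hour) / passengers_per_hour

# Hypothetical parameter values; sweep boardings per vehicle-hour.
for demand in (10, 20, 40, 80):
    human = avg_cost_per_passenger(55.0, 35.0, demand)
    driverless = avg_cost_per_passenger(55.0, 0.0, demand)
    print(f"demand={demand:3d}/h  human-driven=${human:5.2f}  driverless=${driverless:5.2f}")
```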

    DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization

    A DL compiler's primary function is to translate DNN programs written in high-level DL frameworks such as PyTorch and TensorFlow into portable executables, which can then be flexibly executed by the deployed host programs. However, existing DL compilers rely on a tracing mechanism: a runtime input is fed to a neural network program and the program's execution paths are traced to generate the computational graph necessary for compilation. Unfortunately, this mechanism falls short when dealing with modern dynamic neural networks (DyNNs), whose computational graphs vary with the input. Consequently, conventional DL compilers struggle to accurately compile DyNNs into executable code. To address this limitation, we propose DyCL, a general approach that enables any existing DL compiler to successfully compile DyNNs. DyCL tackles the dynamic nature of DyNNs by introducing a compilation mechanism that redistributes the control and data flow of the original DNN programs during the compilation process. Specifically, DyCL develops program analysis and program transformation techniques to convert a dynamic neural network into multiple sub-neural networks. Each sub-neural network is free of conditional statements and is compiled independently. Furthermore, DyCL synthesizes a host module that models the control flow of the DyNN and facilitates the invocation of the sub-neural networks. Our evaluation demonstrates the effectiveness of DyCL, achieving a 100% success rate in compiling all dynamic neural networks. Moreover, the compiled executables generated by DyCL run between 1.12× and 20.21× faster than the original DyNNs executed on general-purpose DL frameworks.
    Comment: This paper has been accepted to ISSTA 202
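    A minimal sketch of the compilation idea described above, not DyCL's actual output: a network with input-dependent control flow is split into branch-free sub-networks that can each be traced and compiled on their own, while a small host function reproduces the original control flow and dispatches to the compiled pieces. The example model and the use of torch.jit.trace as the stand-in per-sub-network compiler are illustrative assumptions.

```python
# Sketch of the split-and-dispatch idea: branch-free sub-networks are traced
# (i.e., compiled) separately, and a host module carries the control flow.
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    # Original DyNN: the executed graph depends on the input.
    def __init__(self):
        super().__init__()
        self.small = nn.Linear(8, 4)
        self.large = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

    def forward(self, x):
        if x.norm() < 1.0:          # data-dependent control flow
            return self.small(x)
        return self.large(x)

net = DynamicNet()

# Each sub-network is branch-free, so tracing it is safe.
sub_a = torch.jit.trace(net.small, torch.zeros(1, 8))
sub_b = torch.jit.trace(net.large, torch.zeros(1, 8))

def host_module(x):
    # Host module: reproduces the original control flow and invokes the
    # separately compiled sub-networks.
    return sub_a(x) if x.norm() < 1.0 else sub_b(x)

print(host_module(torch.zeros(1, 8)).shape)  # takes the "small" path
print(host_module(torch.ones(1, 8)).shape)   # takes the "large" path
```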

    Flexible coherent control of plasmonic spin-Hall effect

    The surface plasmon polariton is an emerging candidate for miniaturizing optoelectronic circuits. Recent demonstrations of polarization-dependent splitting using metasurfaces, including focal-spot shifting and unidirectional propagation, allow us to exploit the spin degree of freedom in plasmonics. However, further progress has been hampered by the inability to generate more complicated and independent surface plasmon profiles for the two incident spins that work coherently together for more flexible and tunable functionalities. Here, by matching the geometric phases of the nano-slots on silver to specific superpositions of the inward and outward surface plasmon profiles for the two spins, arbitrary spin-dependent orbitals can be generated in a slot-free region. Furthermore, motion pictures with a series of picture frames can be assembled and played by varying the linear polarization angle of the incident light. This spin-enabled control of orbitals is potentially useful for tip-free near-field scanning microscopy, holographic data storage, tunable plasmonic tweezers, and integrated optical components.
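    A small numerical sketch of the phase-matching step described above, under one common geometric-phase convention (an assumption, not taken from the paper): a nano-slot rotated by an angle theta adds roughly +2*theta of phase to one spin and -2*theta to the other, on top of a spin-independent positional phase delta, so two target spin phases at a slot determine theta and delta.

```python
# Toy solver for the assumed slot parameters given the two target spin phases.
import numpy as np

def slot_parameters(phi_plus, phi_minus):
    """Return (theta, delta) with delta + 2*theta = phi_plus and
    delta - 2*theta = phi_minus (radians; sign convention assumed)."""
    theta = (phi_plus - phi_minus) / 4.0
    delta = (phi_plus + phi_minus) / 2.0
    return theta % np.pi, delta % (2 * np.pi)

# Placeholder target phases for the two spins at one slot position.
theta, delta = slot_parameters(phi_plus=0.7 * np.pi, phi_minus=-0.3 * np.pi)
print(f"slot rotation ~ {np.degrees(theta):.1f} deg, positional phase ~ {np.degrees(delta):.1f} deg")
```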

    Spin-dependent optics with metasurfaces


    Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling

    Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI). These models generally perform cross-attention over input pairs, leading to prohibitive computational costs. Recent studies propose dual-encoder and late-interaction architectures for faster computation. However, the balance between the expressiveness of cross-attention and the computational speedup still needs to be better coordinated. To this end, this paper introduces MixEncoder, a novel paradigm for efficient sentence pair modeling. MixEncoder involves a light-weight cross-attention mechanism: it encodes the query only once while modeling the query-candidate interaction in parallel. Extensive experiments conducted on four tasks demonstrate that MixEncoder can speed up sentence pairing by over 113x while achieving performance comparable to the more expensive cross-attention models.
    Comment: Accepted to EMNLP 202
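    A minimal sketch of the once-encoded-query idea described above (not MixEncoder's released code): the query is encoded a single time, its token states are cached, and many candidates attend to that cache in one batched, light-weight cross-attention. The module choices and dimensions are illustrative assumptions.

```python
# Sketch: encode the query once, then score many candidates in parallel
# against the cached query states with a single cross-attention call.
import torch
import torch.nn as nn

d_model, n_heads = 64, 4
query_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
candidate_embed = nn.Linear(d_model, d_model)   # cheap candidate encoding
cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
scorer = nn.Linear(d_model, 1)

# Encode the query once and cache its token states.
query_tokens = torch.randn(1, 16, d_model)      # 1 query, 16 tokens
query_states = query_encoder(query_tokens)

# Score many candidates in parallel against the cached query states.
candidates = torch.randn(32, 20, d_model)       # 32 candidates, 20 tokens
cand_states = candidate_embed(candidates)
q_states = query_states.expand(32, -1, -1)      # reuse the same cache for all candidates
interaction, _ = cross_attn(cand_states, q_states, q_states)
scores = scorer(interaction.mean(dim=1)).squeeze(-1)   # one score per candidate
print(scores.shape)                                     # torch.Size([32])
```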