
    Focusing light through scattering media by transmission matrix inversion

    Focusing light through scattering media has broad applications in optical imaging, manipulation and therapy. The contrast of the focus can be quantified by the peak-to-background intensity ratio (PBR). Here, we show theoretically and numerically that, by using a transmission matrix inversion method to achieve focusing, within a limited field of view and under low-noise conditions in transmission matrix measurements, the PBR of the focus can be higher than that achieved by conventional methods such as optical phase conjugation or feedback-based wavefront shaping. Experimentally, using a phase-modulation spatial light modulator, we increase the PBR by 66% over that achieved by conventional phase-conjugation methods. In addition, we demonstrate that, under the same conditions, our matrix inversion method enables light focusing to multiple foci with greater fidelity than conventional methods.
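
A rough numerical illustration of the abstract's core claim, using a hypothetical 2x2 complex matrix rather than the authors' measured data: phase conjugation injects the conjugated row of the transmission matrix T, while matrix inversion injects T^{-1} applied to the target output, which nulls the background within the controlled field of view.

```python
# Toy comparison: phase conjugation vs. transmission-matrix inversion
# for focusing onto output mode 0 through a 2x2 complex "scattering"
# matrix T. Numbers are hypothetical; the point is that inversion
# suppresses the background modes inside the controlled field of view.

T = [[1 + 2j, 0.5 - 1j],
     [-1 + 0.5j, 2 + 1j]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

# Phase conjugation: input field is the conjugate of T's first row.
pc_in = [T[0][0].conjugate(), T[0][1].conjugate()]
pc_out = matvec(T, pc_in)

# Matrix inversion: input field is T^{-1} @ e0, so the output is exactly e0.
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
Tinv = [[T[1][1] / det, -T[0][1] / det],
        [-T[1][0] / det, T[0][0] / det]]
inv_in = matvec(Tinv, [1 + 0j, 0 + 0j])
inv_out = matvec(T, inv_in)

def pbr(out):
    # Peak-to-background intensity ratio with mode 0 as the focus.
    peak = abs(out[0]) ** 2
    background = abs(out[1]) ** 2
    return peak / background if background > 1e-30 else float("inf")
```

Here `pbr(inv_out)` diverges because the inverse exactly cancels the background mode, while `pbr(pc_out)` stays finite; with measurement noise or a restricted field of view, the real trade-off studied in the paper appears.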

    Securing the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples

    Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance. However, unlike traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remains relatively underdeveloped. In this work we advance the field of adversarial machine learning through experimentation and analysis of three important SNN security attributes. First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique. Second, we analyze the transferability of adversarial examples generated by SNNs and other state-of-the-art architectures such as Vision Transformers and Big Transfer CNNs. We demonstrate that SNNs are not often deceived by adversarial examples generated by Vision Transformers and certain types of CNNs. Lastly, we develop a novel white-box attack that generates adversarial examples capable of fooling both SNN and non-SNN models simultaneously. Our experiments and analyses are broad and rigorous, covering two datasets (CIFAR-10 and CIFAR-100), five different white-box attacks and twelve different classifier models.
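
The first finding, that white-box attacks on SNNs depend on the surrogate gradient, can be sketched as follows. The spiking nonlinearity has zero gradient almost everywhere, so gradient-based attacks must differentiate through a chosen surrogate; two common surrogate shapes (fast sigmoid and arctan, with a hypothetical slope parameter, not the paper's exact settings) yield different gradient magnitudes for the same membrane potential, and hence different attack updates.

```python
import math

def surrogate_grad(v, thresh=1.0, kind="fast_sigmoid", slope=25.0):
    # The true spike function is a Heaviside step of (v - thresh), whose
    # derivative is zero almost everywhere; training and white-box attacks
    # both substitute a smooth surrogate derivative at the spike.
    x = v - thresh
    if kind == "fast_sigmoid":
        return 1.0 / (1.0 + slope * abs(x)) ** 2
    if kind == "arctan":
        return 1.0 / (1.0 + (math.pi * slope * x) ** 2)
    raise ValueError(f"unknown surrogate: {kind}")

# Same membrane potential, different surrogates: an FGSM-style attack
# built on each would scale (and, over multiple steps, steer) its
# perturbation differently.
v = 1.2
g_fast = surrogate_grad(v, kind="fast_sigmoid")
g_atan = surrogate_grad(v, kind="arctan")
```

This is only a gradient-shape illustration; the paper's point is that the attack's end-to-end success rate changes with the surrogate used to backpropagate through the whole network.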

    Cooperative Spin Amplification

    Quantum amplification is recognized as a key resource for precision measurements. However, most conventional paradigms employ an ensemble of independent particles, which usually limits the performance of quantum amplification in gain, spectral linewidth, etc. Here we demonstrate a new form of signal amplification using cooperative 129Xe nuclear spins embedded within a feedback circuit, where the noble-gas spin coherence time is enhanced by at least one order of magnitude. Using this technique, a magnetic field can be substantially pre-enhanced by more than three orders of magnitude and read out in situ with an embedded 87Rb magnetometer. We realize an ultrahigh magnetic sensitivity of 4.0 fT/Hz^{1/2} that surpasses the photon-shot noise and even lies below the spin-projection noise of the embedded atomic magnetometer, allowing for exciting applications including searches for dark matter with sensitivity well beyond supernova constraints. Our findings extend the physics of quantum amplification to cooperative spin systems and can be generalized to a wide variety of existing sensors, enabling a new class of cooperative quantum sensors.

    Web-Based Engine For Discovery Of Observations Using Landscape Units

    Investigations of natural-resources processes related to the water cycle are best conducted using a commensurate landscape unit for the spatial extent of the process. Consequently, the capability to efficiently delineate the watershed extent, along with the main hydrological characteristics and behavior of the river network and its drainage area, is essential. The watershed search engine discussed in the present paper is designed to identify various observations acquired in the drainage area upstream of a point specified by the user. The point can be selected on or outside the stream network using a web mapping interface. The discovered variables and attributes are those stored in the geodatabase associated with the application (e.g., stream flow gages, water quality observation points, weather stations, etc.). The base map for the search engine is the National Hydrography Dataset Plus V2.0 (NHD Plus), and geometric network analysis is applied to develop the model on the GIS platform. In the application, the user input defines the point of interest for the search. Subsequently, the drainage area upstream of the point of interest is identified and visualized using mapping functions available in the NHD Plus library. Ancillary information provided by the NHD database and other relevant attributes of the discovered observation points are also provided. Given that the variety of activities in the drainage area upstream of the specified point of interest has direct impacts at that location, the engine enhances the information available for efficiently documenting various aspects of water quantity and quality. The application holds promise to benefit watershed-management communities and watershed-resources researchers.
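
The core operation described above, finding every observation point upstream of a user-selected reach, reduces to a graph traversal against the flow direction. A minimal sketch with hypothetical reach IDs and observation names (the real engine works on the NHD Plus geometric network, not a hand-built dictionary):

```python
from collections import deque

# Toy river network: each edge points downstream (reach -> next reach).
downstream = {
    "A": "C", "B": "C",   # A and B drain into C
    "C": "E", "D": "E",   # C and D drain into E (the outlet)
    "E": None,
}
# Observation points attached to reaches (gages, weather stations, ...).
observations = {"A": ["gage-01"], "D": ["wx-07"], "E": ["gage-02"]}

def upstream_of(start):
    # Invert the downstream mapping once, then BFS against the flow
    # direction to collect the full upstream drainage of `start`.
    ups = {}
    for reach, down in downstream.items():
        if down is not None:
            ups.setdefault(down, []).append(reach)
    found, queue = {start}, deque([start])
    while queue:
        for u in ups.get(queue.popleft(), []):
            if u not in found:
                found.add(u)
                queue.append(u)
    return found

reaches = upstream_of("C")  # the user clicked on reach C
hits = sorted(o for r in reaches for o in observations.get(r, []))
```

Selecting reach C discovers only `gage-01` (on upstream reach A); `wx-07` and `gage-02` lie outside C's drainage area and are correctly excluded.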

    Distilling Temporal Knowledge with Masked Feature Reconstruction for 3D Object Detection

    Striking a balance between precision and efficiency presents a prominent challenge in bird's-eye-view (BEV) 3D object detection. Although previous camera-based BEV methods have achieved remarkable performance by incorporating long-term temporal information, most of them still suffer from low efficiency. One potential solution is knowledge distillation. Existing distillation methods focus only on reconstructing spatial features, while overlooking temporal knowledge. To this end, we propose TempDistiller, a Temporal knowledge Distiller, to acquire long-term memory from a teacher detector when provided with a limited number of frames. Specifically, a reconstruction target is formulated by integrating long-term temporal knowledge through a self-attention operation applied to the teacher features. Subsequently, novel features are generated for masked student features via a generator. Ultimately, we utilize this reconstruction target to reconstruct the student features. In addition, we also explore temporal relational knowledge when inputting full frames to the student model. We verify the effectiveness of the proposed method on the nuScenes benchmark. The experimental results show that our method obtains an improvement of +1.6 mAP and +1.1 NDS over the baseline, a speed improvement of approximately 6 FPS after compressing temporal knowledge, and the most accurate velocity estimation.
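
The mask-reconstruct-compare loop described above can be sketched in miniature. This is an illustrative stand-in only: TempDistiller operates on BEV feature maps with a learned generator and self-attention over multi-frame teacher features, whereas here the "features" are a short list of floats and the "generator" is a neighbor average.

```python
import random

random.seed(0)

# Teacher features act as the reconstruction target (in the paper, they
# are fused over many frames via self-attention); student features come
# from a limited number of frames.
teacher = [0.9, -0.3, 1.2, 0.1, -0.7, 0.4]
student = [0.8, -0.2, 1.0, 0.0, -0.6, 0.5]

# Randomly mask student positions that must be reconstructed.
mask = [random.random() < 0.5 for _ in teacher]
masked_student = [0.0 if m else s for s, m in zip(student, mask)]

def generate(feats):
    # Stand-in for the generator network: fill each masked slot from its
    # unmasked neighbors (a real generator is learned end to end).
    out = list(feats)
    for i, m in enumerate(mask):
        if m:
            nbrs = [feats[j] for j in (i - 1, i + 1)
                    if 0 <= j < len(feats) and not mask[j]]
            out[i] = sum(nbrs) / len(nbrs) if nbrs else 0.0
    return out

pred = generate(masked_student)
# Distillation loss: MSE between reconstructed student features and the
# teacher target, evaluated only at the masked positions.
loss = sum((p - t) ** 2 for p, t, m in zip(pred, teacher, mask) if m) \
       / max(1, sum(mask))
```

Minimizing such a masked reconstruction loss forces the student to recover teacher-level (temporally enriched) features from partial evidence, which is the distillation signal the paper exploits.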

    Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration

    Biologically inspired Spiking Neural Networks (SNNs) have attracted significant attention for their ability to provide extremely energy-efficient machine intelligence through event-driven operation and sparse activities. As artificial intelligence (AI) becomes ever more democratized, there is an increasing need to execute SNN models on edge devices. Existing works adopt weight pruning to reduce SNN model size and accelerate inference. However, these methods mainly focus on how to obtain a sparse model for efficient inference, rather than on training efficiency. To overcome these drawbacks, we propose a Neurogenesis Dynamics-inspired Spiking Neural Network training acceleration framework, NDSNN. Our framework is computationally efficient and trains a model from scratch with dynamic sparsity without sacrificing model fidelity. Specifically, we design a new drop-and-grow strategy with a decreasing number of non-zero weights, to maintain extremely high sparsity together with high accuracy. We evaluate NDSNN using VGG-16 and ResNet-19 on CIFAR-10, CIFAR-100 and TinyImageNet. Experimental results show that NDSNN achieves up to a 20.52% improvement in accuracy on TinyImageNet using ResNet-19 (at 99% sparsity) compared to other SOTA methods (e.g., Lottery Ticket Hypothesis (LTH), SET-SNN, RigL-SNN). In addition, the training cost of NDSNN is only 40.89% of the LTH training cost on ResNet-19 and 31.35% on VGG-16 on CIFAR-10.
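
One drop-and-grow update with a shrinking non-zero budget can be sketched as below. The drop and grow criteria here (smallest-magnitude drop, random regrowth with a small init) are generic dynamic-sparse-training choices assumed for illustration, not necessarily NDSNN's exact rules.

```python
import random

random.seed(1)

# A tiny "layer" of weights; zeros are inactive connections.
weights = [0.9, -0.05, 0.0, 0.4, 0.0, -0.8, 0.02, 0.0]

def drop_and_grow(w, n_drop, n_grow):
    w = list(w)
    nonzero = [i for i, x in enumerate(w) if x != 0.0]
    # Drop: deactivate the n_drop smallest-magnitude active weights.
    for i in sorted(nonzero, key=lambda i: abs(w[i]))[:n_drop]:
        w[i] = 0.0
    # Grow: reactivate n_grow currently-inactive weights with a small init.
    zeros = [i for i, x in enumerate(w) if x == 0.0]
    for i in random.sample(zeros, n_grow):
        w[i] = 0.01
    return w

# Dropping more than we grow decreases the non-zero count, pushing the
# network toward higher sparsity as training proceeds.
new_w = drop_and_grow(weights, n_drop=2, n_grow=1)
```

Starting from five non-zero weights, this step ends with four, illustrating how the active-weight budget decays over training while the topology keeps adapting.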