
    Correlations and Pair Formation in a Repulsively Interacting Fermi Gas

    A degenerate Fermi gas is rapidly quenched into the regime of strong effective repulsion near a Feshbach resonance. The spin fluctuations are monitored using speckle imaging and, contrary to several theoretical predictions, the samples remain in the paramagnetic phase for arbitrarily large scattering length. Over a wide range of interaction strengths, a rapid decay into bound pairs is observed over times on the order of 10ℏ/E_F, preventing the study of equilibrium phases of strongly repulsive fermions. Our work suggests that a Fermi gas with strong short-range repulsive interactions does not undergo a ferromagnetic phase transition.
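
    As a quick order-of-magnitude check of that timescale, the sketch below evaluates 10ℏ/E_F in SI units. The Fermi energy used (k_B × 1 μK) is an assumed typical value for such experiments, not a number taken from this work.

        # Order-of-magnitude estimate of the pair-formation timescale 10*hbar/E_F.
        # The Fermi energy (k_B * 1 microkelvin) is an assumed typical value for a
        # degenerate Fermi gas experiment, not a value from the paper.
        HBAR = 1.054571817e-34   # reduced Planck constant, J*s
        KB = 1.380649e-23        # Boltzmann constant, J/K
        E_F = KB * 1e-6          # assumed Fermi energy ~ k_B * 1 uK
        tau = 10 * HBAR / E_F
        print(f"pair-formation timescale ~ {tau * 1e6:.0f} microseconds")  # ~76 us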

    Spin-Orbit Coupling and Spin Textures in Optical Superlattices

    We propose and demonstrate a new approach for realizing spin-orbit coupling with ultracold atoms. We use orbital levels in a double-well potential as pseudospin states. Two-photon Raman transitions between left and right wells induce spin-orbit coupling. This scheme does not require near-resonant light, features adjustable interactions by shaping the double-well potential, and does not depend on special properties of the atoms. A pseudospinor Bose-Einstein condensate spontaneously acquires an antiferromagnetic pseudospin texture which breaks the lattice symmetry, similar to a supersolid.
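
    For intuition, the sketch below diagonalizes the generic single-particle Hamiltonian of a Raman-coupled two-level system, whose lower band develops the characteristic double-minimum dispersion. The coupling and detuning values are illustrative choices, and this generic model does not capture the paper's specific double-well orbital pseudospins.

        # Minimal sketch: bands of a generic 1D Raman spin-orbit-coupled two-level
        # system in units hbar = m = k_R = 1 (recoil energy = 1/2). Coupling W and
        # detuning d are illustrative assumptions, not values from the paper.
        import numpy as np

        def soc_bands(q, W=1.0, d=0.0):
            """Eigen-energies of the 2x2 Raman SOC Hamiltonian at quasimomentum q."""
            H = np.array([[0.5 * (q - 1) ** 2 + d / 2, W / 2],
                          [W / 2, 0.5 * (q + 1) ** 2 - d / 2]])
            return np.linalg.eigvalsh(H)

        qs = np.linspace(-3, 3, 301)
        lower = np.array([soc_bands(q)[0] for q in qs])
        # Below the critical coupling the lower band has two degenerate minima --
        # the double-well dispersion that is the hallmark of spin-orbit coupling.
        print("minima near q =", qs[np.argsort(lower)[:2]])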

    Speckle Imaging of Spin Fluctuations in a Strongly Interacting Fermi Gas

    Spin fluctuations and density fluctuations are studied for a two-component gas of strongly interacting fermions along the BEC-BCS crossover. This is done by in-situ imaging of dispersive speckle patterns. Compressibility and magnetic susceptibility are determined from the measured fluctuations. This new sensitive method easily resolves a tenfold suppression of spin fluctuations below shot noise due to pairing, and can be applied to novel magnetic phases in optical lattices.
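
    The link between measured noise and thermodynamics is the fluctuation-dissipation theorem: fluctuations of the spin difference probe the magnetic susceptibility, while fluctuations of the total atom number probe the compressibility, each normalized so that the Poissonian shot-noise level is 1. A toy illustration with synthetic counts (not experimental data):

        # Normalized variances of (N_up - N_down) and (N_up + N_down) across many
        # probe volumes; 1.0 is the atom shot-noise level. Counts are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n_mean, shots = 1000, 5000

        def norm_vars(n_up, n_dn):
            """(spin, density) fluctuations normalized to atom shot noise."""
            n = n_up.mean() + n_dn.mean()
            return np.var(n_up - n_dn) / n, np.var(n_up + n_dn) / n

        # Uncorrelated two-component gas: both channels sit at shot noise (~1, ~1).
        print(norm_vars(rng.poisson(n_mean, shots), rng.poisson(n_mean, shots)))

        # Fully paired toy gas: spin fluctuations vanish, density fluctuations double.
        pairs = rng.poisson(n_mean, shots)
        print(norm_vars(pairs, pairs))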

    Suppression of Density Fluctuations in a Quantum Degenerate Fermi Gas

    We study density profiles of an ideal Fermi gas and observe Pauli suppression of density fluctuations (atom shot noise) for cold clouds deep in the quantum degenerate regime. Strong suppression is observed for probe volumes containing more than 10,000 atoms. Measuring the level of suppression provides sensitive thermometry at low temperatures. Having validated this method of sensitive noise measurement with an ideal Fermi gas, we can now apply it to characterize phase transitions in strongly correlated many-body systems.
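
    For an ideal Fermi gas the suppression itself is the thermometer: fluctuation-dissipation gives Var(N) = k_B T V (∂n/∂μ), and the low-temperature ideal-gas result ∂n/∂μ ≈ 3n/(2E_F) yields Var(N)/⟨N⟩ ≈ (3/2)(T/T_F). A minimal sketch, with an invented suppression value for illustration:

        # Thermometry from noise suppression, valid to leading order for T << T_F:
        # Var(N)/<N> ~ (3/2)*(T/T_F), so inverting the measured suppression gives T.
        def t_over_tf_from_suppression(var_over_mean):
            """Estimate T/T_F from the measured Var(N)/<N> (ideal Fermi gas, low T)."""
            return (2.0 / 3.0) * var_over_mean

        measured = 0.3  # hypothetical: fluctuations at 30% of the shot-noise level
        print(f"T/T_F ~ {t_over_tf_from_suppression(measured):.2f}")  # ~0.20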

    Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration

    Biologically inspired Spiking Neural Networks (SNNs) have attracted significant attention for their ability to provide extremely energy-efficient machine intelligence through event-driven operation and sparse activities. As artificial intelligence (AI) becomes ever more democratized, there is an increasing need to execute SNN models on edge devices. Existing works adopt weight pruning to reduce SNN model size and accelerate inference. However, these methods mainly focus on how to obtain a sparse model for efficient inference rather than on training efficiency. To overcome these drawbacks, we propose NDSNN, a Neurogenesis Dynamics-inspired Spiking Neural Network training acceleration framework. Our framework is computationally efficient and trains a model from scratch with dynamic sparsity without sacrificing model fidelity. Specifically, we design a new drop-and-grow strategy with a decreasing number of non-zero weights to maintain extremely high sparsity and high accuracy. We evaluate NDSNN using VGG-16 and ResNet-19 on CIFAR-10, CIFAR-100, and Tiny-ImageNet. Experimental results show that NDSNN achieves up to 20.52% improvement in accuracy on Tiny-ImageNet using ResNet-19 (with a sparsity of 99%) compared to other state-of-the-art methods (e.g., Lottery Ticket Hypothesis (LTH), SET-SNN, RigL-SNN). In addition, the training cost of NDSNN is only 40.89% of the LTH training cost on ResNet-19 and 31.35% of the LTH training cost on VGG-16 on CIFAR-10.
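
    The abstract does not spell out NDSNN's drop-and-grow criteria, so the sketch below uses a common dynamic-sparse-training rule (magnitude-based drop, gradient-based grow, as in RigL) with a shrinking non-zero budget; the regrowth fraction and all other specifics are assumptions.

        # One drop-and-grow topology update with a decreasing non-zero budget.
        # Magnitude-drop / gradient-grow is an assumed stand-in for NDSNN's rule.
        import torch

        def drop_and_grow(weight, grad, mask, n_next):
            """Prune low-|w| active weights and regrow high-|grad| inactive ones,
            so the returned binary mask has exactly n_next non-zeros."""
            w, g, m = weight.flatten(), grad.flatten(), mask.flatten().bool()
            n_grow = max(1, n_next // 10)  # assumed regrowth fraction (10%)
            act_score = torch.where(m, w.abs(), torch.tensor(float("-inf")))
            kept = torch.topk(act_score, n_next - n_grow).indices
            grow_score = torch.where(m, torch.tensor(float("-inf")), g.abs())
            grown = torch.topk(grow_score, n_grow).indices
            new_mask = torch.zeros_like(w)
            new_mask[kept] = 1.0
            new_mask[grown] = 1.0
            return new_mask.view_as(mask)

        w, g = torch.randn(64, 64), torch.randn(64, 64)
        mask = (torch.rand(64, 64) < 0.05).float()       # start at ~5% density
        mask = drop_and_grow(w * mask, g, mask, int(mask.sum().item()) - 20)
        print("non-zeros after update:", int(mask.sum().item()))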

    PolyMPCNet: Towards ReLU-free Neural Architecture Search in Two-party Computation Based Private Inference

    The rapid growth and deployment of deep learning (DL) has raised emerging privacy and security concerns. To mitigate these issues, secure multi-party computation (MPC) has been proposed to enable privacy-preserving DL computation. In practice, MPC protocols often come with very high computation and communication overhead, which can prohibit their adoption in large-scale systems. Two orthogonal research trends have attracted enormous interest in addressing the energy efficiency of secure deep learning: overhead reduction for the MPC comparison protocol, and hardware acceleration. However, existing works either achieve a low reduction ratio and suffer from high latency due to limited computation and communication savings, or are power-hungry because they mainly target general computing platforms such as CPUs and GPUs. In this work, as a first attempt, we develop PolyMPCNet, a systematic framework for joint overhead reduction of the MPC comparison protocol and hardware acceleration, which integrates the hardware latency of the cryptographic building blocks into the DNN loss function to achieve high energy efficiency, accuracy, and security guarantees. Instead of heuristically checking model sensitivity after a DNN is well trained (by deleting or dropping some non-polynomial operators), our key design principle is to enforce exactly what is assumed in the DNN design: training a DNN that is both hardware-efficient and secure, while escaping local minima and saddle points and maintaining high accuracy. More specifically, we propose a straight-through polynomial activation initialization method for a cryptographic-hardware-friendly trainable polynomial activation function to replace the expensive 2P-ReLU operator. We also develop a cryptographic hardware scheduler and the corresponding performance model for the field-programmable gate array (FPGA) platform.
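
    Polynomials are MPC-friendly because they avoid the expensive secure comparison that ReLU requires. The abstract does not specify PolyMPCNet's exact activation, so the sketch below is one plausible reading: a trainable quadratic initialized near a standard ReLU surrogate; the quadratic form and its coefficients are assumptions.

        # Hedged sketch of a trainable polynomial activation replacing ReLU in an
        # MPC-friendly network. Form and initial coefficients are assumptions.
        import torch
        import torch.nn as nn

        class PolyAct(nn.Module):
            def __init__(self):
                super().__init__()
                # 0.25*x^2 + 0.5*x is a common least-squares-style ReLU stand-in;
                # all three coefficients remain trainable during fine-tuning.
                self.a = nn.Parameter(torch.tensor(0.25))
                self.b = nn.Parameter(torch.tensor(0.5))
                self.c = nn.Parameter(torch.tensor(0.0))

            def forward(self, x):
                return self.a * x * x + self.b * x + self.c

        act = PolyAct()
        x = torch.linspace(-2, 2, 5)
        print(act(x))           # smooth, comparison-free surrogate
        print(torch.relu(x))    # the operator it is meant to replace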

    AutoReP: Automatic ReLU Replacement for Fast Private Network Inference

    The growth of the Machine-Learning-as-a-Service (MLaaS) market has highlighted clients' data privacy and security concerns. Private inference (PI) techniques using cryptographic primitives offer a solution but often have high computation and communication costs, particularly with non-linear operators like ReLU. Many attempts to reduce ReLU operations exist, but they may need heuristic threshold selection or cause substantial accuracy loss. This work introduces AutoReP, a gradient-based approach to reducing non-linear operators and alleviating these issues. It automates the selection of ReLU and polynomial functions to speed up PI applications, and introduces a distribution-aware polynomial approximation (DaPa) to maintain model expressivity while accurately approximating ReLUs. Our experimental results demonstrate significant accuracy improvements of 6.12% (94.31%, 12.9K ReLU budget, CIFAR-10), 8.39% (74.92%, 12.9K ReLU budget, CIFAR-100), and 9.45% (63.69%, 55K ReLU budget, Tiny-ImageNet) over current state-of-the-art methods, e.g., SNL. Moreover, applying AutoReP to EfficientNet-B2 on the ImageNet dataset achieves 75.55% accuracy with a 176.1-fold reduction in ReLU budget. (Accepted for publication at ICCV 2023.)
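
    A minimal reading of "gradient-based, automated selection" is a differentiable per-neuron gate that mixes ReLU with a polynomial, trained alongside the weights under a penalty that enforces the ReLU budget. The sigmoid relaxation, the particular polynomial, and the penalty weight below are all assumptions, not AutoReP's actual construction.

        # Hedged sketch: per-neuron differentiable choice between ReLU and a
        # polynomial, with a budget penalty pushing gates toward the cheap branch.
        import torch
        import torch.nn as nn

        class ReluOrPoly(nn.Module):
            def __init__(self, num_features):
                super().__init__()
                self.logit = nn.Parameter(torch.zeros(num_features))

            def forward(self, x):
                m = torch.sigmoid(self.logit)       # soft per-neuron gate in (0,1)
                poly = 0.25 * x * x + 0.5 * x       # assumed polynomial stand-in
                return m * torch.relu(x) + (1 - m) * poly

            def budget_penalty(self):
                return torch.sigmoid(self.logit).sum()  # expected ReLU count

        layer = ReluOrPoly(128)
        x = torch.randn(4, 128)
        loss = layer(x).pow(2).mean() + 1e-3 * layer.budget_penalty()
        loss.backward()  # gradients flow to the gate logits, selecting per neuron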