161 research outputs found

    Electric field excitation suppression in cold atoms

    Full text link
    In this article, atom excitation suppression is studied in two ways. The first is suppression by an external DC electric field. The second is suppression caused by the electric field of free charges created by ionizing atoms; this suppression is called Coulomb blockade. Here the Coulomb forces come from ions produced by ionizing atoms with a UV laser. The theory shows that the interaction responsible for the suppression is primarily the charge-dipole interaction, where the charge is the ion and the dipole is an atom: the valence electron and the ion core form the two poles of an electric dipole. The experiment uses 85Rb atoms. The interaction potential energy between the ion and the atom is proportional to 1/R^2, and the frequency shift caused by this interaction is proportional to 1/R^4, where R is the distance between the ion and the dipole. This research can be applied to quantum information storage, remote control, creating hot plasmas from cold atoms, and electronic devices. Comment: 12 pages, 7 figures
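    The two scaling laws in the abstract above can be sketched numerically. This is an illustrative toy only; the constants and the simple second-order form are assumptions, not values or formulas from the paper.

```python
# Sketch of the charge-dipole scaling described above: a point charge q at
# distance R from a dipole of moment p gives an interaction energy
# U ~ q*p / (4*pi*eps0*R**2), i.e. proportional to 1/R^2, and a
# second-order level shift proportional to U^2, i.e. 1/R^4.
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def charge_dipole_energy(q, p, R):
    """First-order charge-dipole interaction energy, proportional to 1/R^2."""
    return q * p / (4 * math.pi * EPS0 * R**2)

def second_order_shift(q, p, R, delta_E):
    """Second-order shift, proportional to 1/R^4.
    delta_E is the energy separation of the coupled levels (assumed)."""
    return charge_dipole_energy(q, p, R)**2 / delta_E

# Doubling R reduces the interaction energy 4x and the shift 16x:
ratio_U = charge_dipole_energy(1.0, 1.0, 1.0) / charge_dipole_energy(1.0, 1.0, 2.0)
ratio_shift = second_order_shift(1.0, 1.0, 1.0, 1.0) / second_order_shift(1.0, 1.0, 2.0, 1.0)
```

    The factor-of-4 versus factor-of-16 falloff is why the frequency shift dies much faster with ion-atom distance than the raw interaction energy.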

    The Density Broadening in a Sodium F=2 Condensate Detected by a Pulse Train

    Get PDF
    The dipole-blockaded sodium clock transition has been detected by high-resolution microwave spectroscopy, namely multiple-pulse spectroscopy. This spectroscopic technique has been used for the first time to detect the density broadening and shift in a sodium Bose-Einstein condensate (BEC) by probing the sodium clock transition. Moreover, by narrowing the width of the pulses, some of the broadening mechanisms can be partially reduced. The results reported here are essential steps toward ground-state quantum computing, few-body spectroscopy, spin squeezing, and quantum metrology.

    MHz Few-body Frequency Shift Detected in a Cold 85Rb Rydberg Gas

    Get PDF
    We have observed a density-dependent frequency shift of more than 4 MHz in a cold 85Rb Rydberg gas trapped in a magneto-optical trap. A one-dimensional, linearly aligned four-body model is proposed to explain the experimental data, and the calculation matches the measurements. The calculation also shows that if the energy detuning between the two coupled states, the ns ns ns (n+1)s and ns ns np np states in this case, is small, the lowest level of the ns ns np np manifold has the maximum mixing probability, causing a frequency shift rather than line broadening. The results reported may be used for few-body blockade, Rydberg single-atom imaging, studying few-body to many-body transitions and interactions, and few-body ionization, as well as quantum metrology.
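    The shift-versus-broadening argument above can be illustrated with a toy two-level mixing model: at small detuning, one eigenstate of the coupled pair acquires maximal admixture and its energy is displaced. The 2x2 Hamiltonian and the numbers below are illustrative assumptions, not the paper's four-body matrix elements.

```python
# Toy two-level mixing: diagonalize H = [[0, V], [V, delta]] and look at how
# much of the upper state is mixed into the lower eigenstate.
import numpy as np

def mixed_levels(V, delta):
    """Eigenvalues and mixing probability of a 2x2 coupled-state Hamiltonian."""
    H = np.array([[0.0, V], [V, delta]])
    vals, vecs = np.linalg.eigh(H)      # vals sorted ascending
    mixing = vecs[1, 0] ** 2            # upper-state admixture in the lowest level
    return vals, mixing

# Small detuning -> strong mixing (approaching 1/2), i.e. a shifted line;
# large detuning -> weak mixing, the states barely repel.
_, strong = mixed_levels(V=1.0, delta=0.1)
_, weak = mixed_levels(V=1.0, delta=10.0)
```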

    Evidence of quadrupole-quadrupole interactions in ultracold gases

    Full text link
    Van der Waals interactions are interactions between dipoles; similarly, quadrupole-quadrupole interactions are interactions between quadrupoles. In this article, we focus on the interactions between two dipoles or two quadrupoles. Classically, we treat one Rydberg atom as a dipole: the outer excited electron and the ion core are its two poles. Quantum mechanically, we consider Rydberg transition dipoles, so dipole-dipole interactions are the interactions between two Rydberg atoms. Rydberg atoms also have quadrupole components; consequently, the interactions between two Rydberg atoms contain quadrupole-quadrupole components. In this article, we examine the dipole-dipole and quadrupole-quadrupole contributions to the interactions between ultracold Rydberg atoms. Evidence of quadrupole blockade is shown to have been observed, which is essential for fabricating more compact quantum computers, quantum electronics, and quantum sensing.
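    The standard multipole distance scalings behind this comparison can be sketched in a few lines; the C3 and C5 coefficients below are placeholders, not values from the article.

```python
# Multipole scaling sketch: resonant dipole-dipole interactions fall off as
# 1/R^3, quadrupole-quadrupole interactions as 1/R^5, so the quadrupole
# contribution matters only at short range.
def dipole_dipole(C3, R):
    """Resonant dipole-dipole interaction, C3/R^3 (C3 is a placeholder)."""
    return C3 / R**3

def quadrupole_quadrupole(C5, R):
    """Quadrupole-quadrupole interaction, C5/R^5 (C5 is a placeholder)."""
    return C5 / R**5

# Doubling R cuts the quadrupole term 32x but the dipole term only 8x:
dd_ratio = dipole_dipole(1.0, 1.0) / dipole_dipole(1.0, 2.0)
qq_ratio = quadrupole_quadrupole(1.0, 1.0) / quadrupole_quadrupole(1.0, 2.0)
```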

    Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability

    Full text link
    Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications. Previous paradigms either explore better scoring functions or utilize the knowledge of outliers to equip models with the ability to detect OOD inputs. However, few of them pay attention to the intrinsic OOD detection capability of the given model. In this work, we discover that, across different settings, an intermediate stage of a model trained on in-distribution (ID) data generally has higher OOD detection performance than its final stage, and we further identify one critical data-level attribution: learning with atypical samples. Based on these insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capability of a well-trained model using ID data. Our method uses a mask to identify the memorized atypical samples, and then fine-tunes the model, or prunes it with the introduced mask, to forget them. Extensive experiments and analysis demonstrate the effectiveness of our method. The code is available at: https://github.com/tmlr-group/Unleashing-Mask. Comment: accepted by ICML 202
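    For context on the "scoring functions" this abstract contrasts itself with, here is a minimal sketch of the standard maximum-softmax-probability (MSP) baseline. This is not the Unleashing Mask method itself, just the common score such methods are measured against.

```python
# MSP baseline for OOD detection: score each input by the largest softmax
# probability of the classifier; low scores suggest out-of-distribution.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Higher score -> more likely in-distribution."""
    return softmax(logits).max(axis=-1)

# A confident (peaked) logit vector scores higher than a flat one:
confident = np.array([[8.0, 0.0, 0.0]])
uncertain = np.array([[1.0, 1.0, 1.0]])
```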

    Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation

    Full text link
    Out-of-distribution (OOD) detection is important for deploying reliable machine learning models in real-world applications. Recent advances in outlier exposure have shown promising results on OOD detection by fine-tuning the model with informatively sampled auxiliary outliers. However, previous methods assume that the collected outliers are sufficiently large in number and representative enough to cover the boundary between ID and OOD data, which can be impractical and challenging. In this work, we propose a novel framework, Diversified Outlier Exposure (DivOE), for effective OOD detection via informative extrapolation based on the given auxiliary outliers. Specifically, DivOE introduces a new learning objective, which diversifies the auxiliary distribution by explicitly synthesizing more informative outliers for extrapolation during training. It leverages a multi-step optimization method to generate novel outliers beyond the original ones, which is compatible with many variants of outlier exposure. Extensive experiments and analyses have been conducted to characterize and demonstrate the effectiveness of the proposed DivOE. The code is publicly available at: https://github.com/tmlr-group/DivOE. Comment: accepted by NeurIPS 202
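    The "multi-step optimization" idea can be caricatured as taking a few gradient steps from a given outlier to synthesize a new one. The loss, step size, and toy gradient below are illustrative assumptions, not DivOE's actual objective.

```python
# Hedged sketch of multi-step outlier extrapolation: perturb a given
# auxiliary outlier x along a supplied gradient direction for a few
# normalized steps, producing a novel outlier beyond the original one.
import numpy as np

def extrapolate_outlier(x, grad_fn, steps=5, lr=0.1):
    """Multi-step perturbation of an outlier x using a supplied gradient."""
    x = x.copy()
    for _ in range(steps):
        g = grad_fn(x)
        x += lr * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent step
    return x

# Toy gradient that pulls points toward the origin (a stand-in for a model
# objective); each step moves the point a fixed distance along it.
toy_grad = lambda x: -x
x0 = np.array([3.0, 4.0])
x5 = extrapolate_outlier(x0, toy_grad, steps=5, lr=0.1)
```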

    COSST: Multi-organ Segmentation with Partially Labeled Datasets Using Comprehensive Supervisions and Self-training

    Full text link
    Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated. However, medical image datasets are often small in sample size and only partially labeled, i.e., only a subset of organs is annotated. Therefore, it is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential. In this paper, we systematically investigate the partial-label segmentation problem with theoretical and empirical analyses of prior techniques. We revisit the problem from the perspective of partial-label supervision signals and identify two signals derived from ground truth and one from pseudo labels. We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training. Concretely, we first train an initial unified model using the two ground-truth-based signals and then iteratively incorporate the pseudo-label signal into the initial model using self-training. To mitigate performance degradation caused by unreliable pseudo labels, we assess the reliability of pseudo labels via outlier detection in latent space and exclude the most unreliable pseudo labels from each self-training iteration. Extensive experiments are conducted on one public and three private partial-label segmentation tasks over 12 CT datasets. Experimental results show that our proposed COSST achieves significant improvement over the baseline method, i.e., individual networks trained on each partially labeled dataset. Compared to state-of-the-art partial-label segmentation methods, COSST demonstrates consistently superior performance on various segmentation tasks and with different training data sizes.
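    The reliability-filtering step described above (latent-space outlier detection over pseudo labels) can be sketched as a rank-and-drop filter. The distance measure, feature dimensions, and keep fraction below are illustrative assumptions, not COSST's actual criterion.

```python
# Sketch of pseudo-label reliability filtering: score each pseudo-labeled
# sample by how close its latent feature lies to the class mean, then keep
# only the most reliable fraction for the next self-training iteration.
import numpy as np

def reliability_scores(features, class_mean):
    """Negative Euclidean distance in latent space: higher = more reliable."""
    return -np.linalg.norm(features - class_mean, axis=1)

def filter_pseudo_labels(features, class_mean, keep_fraction=0.8):
    """Return sorted indices of the most reliable pseudo-labeled samples."""
    scores = reliability_scores(features, class_mean)
    k = max(1, int(len(scores) * keep_fraction))
    keep = np.argsort(scores)[::-1][:k]  # indices of the top-k scores
    return np.sort(keep)

# Four samples cluster near the class mean; sample 2 is a far-away outlier
# and gets dropped when keeping 80% of the pseudo labels:
feats = np.array([[0.1, 0.0], [0.0, 0.2], [5.0, 5.0], [0.2, 0.1], [0.1, 0.1]])
kept = filter_pseudo_labels(feats, np.zeros(2), keep_fraction=0.8)
```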