
    Berry phase modification to the energy spectrum of excitons

    By quantizing the semiclassical motion of excitons, we show that the Berry curvature can cause an energy splitting between exciton states with opposite angular momentum. This splitting is determined by the Berry curvature flux through the $\bm k$-space area spanned by the relative motion of the electron-hole pair in the exciton wave function. Using the gapped two-dimensional Dirac equation as a model, we show that this splitting can be understood as an effective spin-orbit coupling effect. In addition, there is also an energy shift caused by other "relativistic" terms. Our result reveals the limitation of the venerable hydrogenic model of excitons and highlights the importance of the Berry curvature in the effective mass approximation. (Comment: 4.5 pages, 2 figures, reference updated and minor change.)
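    As background for the abstract above (a sketch, not a result quoted from the paper): the gapped two-dimensional Dirac model has the standard band Berry curvature given below, and the stated splitting between opposite-angular-momentum exciton states scales with the curvature flux through the relative-motion area in $\bm k$-space. The prefactor, written here as a generic exciton energy scale $\varepsilon_X$, is an assumption for dimensional bookkeeping only.
    \[
      H(\bm k) = \hbar v\,(k_x\sigma_x + k_y\sigma_y) + \Delta\,\sigma_z,
      \qquad
      \Omega_{\pm}(\bm k) = \mp\,\frac{\hbar^2 v^2 \Delta}{2\,(\Delta^2 + \hbar^2 v^2 k^2)^{3/2}},
    \]
    \[
      \delta E_{\ell,-\ell} \;\sim\; \varepsilon_X \int_{A_{\rm rel}} \Omega(\bm k)\,\mathrm{d}^2 k ,
    \]
    where $A_{\rm rel}$ is the $\bm k$-space area spanned by the electron-hole relative motion.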

    The Photometric Investigation of V921 Her using the Lunar-based Ultraviolet Telescope of Chang'e-3 mission

    The light curve of V921 Her in the ultraviolet band observed by the Lunar-based Ultraviolet Telescope (LUT) is analyzed with the Wilson-Devinney code. Our solutions indicate that V921 Her is an early-type marginal-contact binary system with an additional close-in component. The binary system is in poor thermal contact, with a temperature difference of nearly $700\,$K between the two components. The close-in component contributes about $19\,\%$ of the total luminosity of the triple system. Combining the radial-velocity study with our photometric solutions, the masses of the primary and secondary stars are calculated to be $M_1 = 1.784(\pm0.055)\,M_\odot$ and $M_2 = 0.403(\pm0.012)\,M_\odot$. The evolutionary scenario of V921 Her is discussed. All times of light minimum of V921 Her available in the literature are taken into account, and the $O-C$ curve is analyzed for the first time. The most probable fitting results are discussed in the paper, which also confirm the existence of a third component ($P_3 = 10.2$ yr) around the binary system. The period of V921 Her is also undergoing a continuous rapid increase at a rate of $dP/dt = +2.79\times10^{-7}\,\mathrm{day\cdot year^{-1}}$, which may be due to mass transfer from the less massive component to the more massive one.
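    A minimal sketch of the timing analysis described above, assuming the standard quadratic O-C ephemeris (not the authors' pipeline); the period value, cycle numbers, and noise level below are placeholders, not the paper's data.

    # Sketch only: quadratic O-C fit recovering a secular period change dP/dt.
    import numpy as np

    P = 0.877                                  # assumed orbital period in days (illustrative)
    E = np.arange(0.0, 20000.0, 500.0)         # cycle numbers of observed minima (synthetic)
    Q_true = 3.0e-10                           # injected quadratic term, day / cycle^2
    o_minus_c = Q_true * E**2 + 1e-3 * np.random.randn(E.size)  # synthetic O-C residuals (days)

    # O-C = c0 + c1*E + Q*E^2, so dP/dE = 2Q (day/cycle) and dP/dt = 2Q/P (converted to day/yr).
    Q, c1, c0 = np.polyfit(E, o_minus_c, 2)
    dP_dt = 2.0 * Q / P * 365.25
    print(f"recovered dP/dt = {dP_dt:+.2e} day/year")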

    Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously

    Availability attacks can prevent the unauthorized use of private data and commercial datasets by generating imperceptible noise that turns training examples into unlearnable examples before release. Ideally, the obtained unlearnability prevents algorithms from training usable models. When supervised learning (SL) algorithms fail, a malicious data collector may resort to contrastive learning (CL) algorithms to bypass the protection. Through evaluation, we find that most existing methods are unable to achieve both supervised and contrastive unlearnability, which poses risks to data protection. Unlike recent methods based on contrastive error minimization, we employ contrastive-like data augmentations in supervised error-minimization or error-maximization frameworks to obtain attacks that are effective for both SL and CL. Our proposed AUE and AAP attacks achieve state-of-the-art worst-case unlearnability across SL and CL algorithms at lower computational cost, showcasing prospects for real-world applications.
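    A minimal sketch of the idea, assuming an error-minimizing variant that optimizes per-sample noise under contrastive-style augmentations inside a supervised loss; the surrogate model, augmentation stack, data, and budget are illustrative assumptions, not the AUE/AAP reference implementation.

    # Sketch only: error-minimizing noise crafted under contrastive-like augmentations.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision.transforms as T

    eps, alpha, steps = 8 / 255, 2 / 255, 5          # assumed L_inf budget and step size

    images = torch.rand(64, 3, 32, 32)               # toy stand-in for the protected data
    labels = torch.randint(0, 10, (64,))
    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))

    augment = T.Compose([T.RandomResizedCrop(32, scale=(0.5, 1.0)),   # contrastive-style views
                         T.RandomHorizontalFlip(),
                         T.ColorJitter(0.4, 0.4, 0.4)])

    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(augment(images + delta)), labels)
        loss.backward()
        with torch.no_grad():                        # error-minimizing PGD-style update
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()

    unlearnable = (images + delta).detach().clamp(0, 1)   # released "unlearnable" data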

    Game-Theoretic Unlearnable Example Generator

    Unlearnable example attacks are data poisoning attacks that aim to degrade the clean test accuracy of deep learning by adding imperceptible perturbations to the training samples, which can be formulated as a bi-level optimization problem. However, directly solving this optimization problem is intractable for deep neural networks. In this paper, we investigate unlearnable example attacks from a game-theoretic perspective by formulating the attack as a non-zero-sum Stackelberg game. First, the existence of game equilibria is proved under both the normal setting and the adversarial training setting. It is shown that, when certain loss functions are used, the game equilibrium gives the most powerful poison attack in the sense that the victim has the lowest test accuracy among all networks within the same hypothesis space. Second, we propose a novel attack method, called the Game Unlearnable Example (GUE), which has three main ingredients. (1) The poisons are obtained by directly solving the equilibrium of the Stackelberg game with a first-order algorithm. (2) We employ an autoencoder-like generative network model as the poison attacker. (3) A novel payoff function is introduced to evaluate the performance of the poison. Comprehensive experiments demonstrate that GUE can effectively poison the model in various scenarios. Furthermore, GUE still works when only a relatively small percentage of the training data is used to train the generator, and the poison generator generalizes well to unseen data. Our implementation code can be found at https://github.com/hong-xian/gue.
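    A minimal sketch of a first-order alternating scheme for such an attacker/victim Stackelberg game, with a small encoder-decoder standing in for the autoencoder-like poison generator; the models, budget, and especially the payoff used below are generic assumptions, since the paper's payoff function is not specified in the abstract.

    # Sketch only: first-order alternating updates for the attacker/victim game.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    eps = 8 / 255                                     # assumed perturbation budget
    x = torch.rand(128, 3, 32, 32)                    # toy stand-ins for the training data
    y = torch.randint(0, 10, (128,))

    generator = nn.Sequential(                        # small encoder-decoder poison generator
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh())
    victim = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_v = torch.optim.SGD(victim.parameters(), lr=0.1)

    for step in range(100):
        poisoned = (x + eps * generator(x)).clamp(0, 1)
        # Follower: the victim trains on the poisoned data.
        opt_v.zero_grad()
        F.cross_entropy(victim(poisoned.detach()), y).backward()
        opt_v.step()
        # Leader: update the generator against the current victim. The payoff here
        # (minimizing the victim's loss on the poison, error-minimizing flavour) is a
        # generic stand-in for the paper's payoff function.
        opt_g.zero_grad()
        F.cross_entropy(victim((x + eps * generator(x)).clamp(0, 1)), y).backward()
        opt_g.step()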

    Data-Dependent Stability Analysis of Adversarial Training

    Stability analysis is an essential aspect of studying the generalization ability of deep learning, as it yields generalization bounds for training algorithms based on stochastic gradient descent. Adversarial training is the most widely used defense against adversarial example attacks. However, previous generalization bounds for adversarial training have not taken the data distribution into account. In this paper, we fill this gap by providing generalization bounds for stochastic gradient descent-based adversarial training that incorporate data distribution information. We utilize the concepts of on-average stability and high-order approximate Lipschitz conditions to examine how changes in the data distribution and the adversarial budget affect robust generalization gaps. Our derived generalization bounds for both convex and non-convex losses are at least as good as the uniform-stability-based counterparts, which do not include data distribution information. Furthermore, our findings demonstrate how distribution shifts induced by data poisoning attacks can impact robust generalization.
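    For context (background only, not the paper's theorem): the on-average stability notion that such analyses typically build on, with notation assumed here, together with the generalization-in-expectation bound it implies; for adversarial training, $\ell$ is replaced by the robust loss $\max_{\|\delta\|\le\rho}\ell(\,\cdot\,;\, z+\delta)$.
    \[
      \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{S,\,z_i'}\Big[\ell\big(A(S^{(i)});\,z_i\big)-\ell\big(A(S);\,z_i\big)\Big]\le\epsilon
      \quad\Longrightarrow\quad
      \mathbb{E}_{S}\Big[R\big(A(S)\big)-R_S\big(A(S)\big)\Big]\le\epsilon,
    \]
    where $S^{(i)}$ replaces the $i$-th sample $z_i$ of $S$ by an independent copy $z_i'$, $R$ is the population risk, and $R_S$ the empirical risk.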