83 research outputs found

    Integral invariants in flat superspace

    We solve, for the case of flat superspace, some homological problems that were formulated by Berkovits and Howe. (Our considerations can also be applied to the case of the supertorus.) These problems arise in the attempt to construct integrals invariant with respect to supersymmetry; they also appear in other situations, in particular in the pure spinor formalism in supergravity.
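    For orientation, a minimal sketch of the setup in standard flat-superspace conventions (the normalizations are ours, not the paper's): flat superspace has coordinates $(x^\mu, \theta^\alpha)$, on which supersymmetry acts by

    $$\delta_\epsilon \theta^\alpha = \epsilon^\alpha, \qquad \delta_\epsilon x^\mu = \Gamma^\mu_{\alpha\beta}\, \epsilon^\alpha \theta^\beta,$$

    and the problem is to classify integrals of the form $\int d^D x\, d^N\theta\, L(x,\theta)$ (and integrals over sub-supermanifolds) that are invariant under these transformations; the obstructions to constructing them organize into the homological problems mentioned above.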

    Homology of Lie algebra of supersymmetries

    We study the homology and cohomology groups of the super Lie algebra of supersymmetries and of the super Poincaré algebra. We discuss in detail the calculations in dimensions D = 10 and D = 6. Our methods can be applied to extended supersymmetry algebras and to other dimensions.
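    In the most common conventions (which may differ from the paper's by normalizations), the super Lie algebra of supersymmetries is spanned by odd spinorial charges $Q_\alpha$ and even translations $P_\mu$, with

    $$\{Q_\alpha, Q_\beta\} = \Gamma^\mu_{\alpha\beta} P_\mu, \qquad [P_\mu, Q_\alpha] = [P_\mu, P_\nu] = 0;$$

    the super Poincaré algebra is obtained by adjoining the Lorentz generators, acting on both $Q_\alpha$ and $P_\mu$.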

    Weierstrass cycles and tautological rings in various moduli spaces of algebraic curves

    We analyze Weierstrass cycles and tautological rings in the moduli space of smooth algebraic curves and in moduli spaces of integral algebraic curves with embedded disks, with special attention to moduli spaces of curves of genus ≤ 6. In particular, we show that our general formula gives a good estimate for the dimension of Weierstrass cycles for lower genera.

    Homology of Lie algebra of supersymmetries and of super Poincare Lie algebra

    We study the homology and cohomology groups of the super Lie algebra of supersymmetries and of the super Poincaré Lie algebra in various dimensions. We give complete answers for (non-extended) supersymmetry in all dimensions D ≤ 11. For dimensions D = 10, 11 we also describe the cohomology of the reduction of the supersymmetry Lie algebra to lower dimensions. Our methods can be applied to extended supersymmetry algebras.
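    Concretely, the relevant cohomology can be computed from the Chevalley–Eilenberg complex (standard definitions, with our sign conventions): cochains are polynomials in even ghosts $t^\alpha$, dual to the odd charges $Q_\alpha$, and odd ghosts $c^\mu$, dual to the translations $P_\mu$, with differential

    $$d = \Gamma^\mu_{\alpha\beta}\, t^\alpha t^\beta\, \frac{\partial}{\partial c^\mu}, \qquad d^2 = 0.$$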

    Cohomology ring of the BRST operator associated to the sum of two pure spinors

    In the study of the Type II superstring, it is useful to consider the BRST complex associated to the sum of two pure spinors. The cohomology of this complex is an infinite-dimensional vector space, but it is also a finite-dimensional algebra over the algebra of functions of a single pure spinor. In this paper we study this multiplicative structure.
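    For context, the standard D = 10 pure spinor constraint on a complex chiral spinor $\lambda^\alpha$ reads

    $$\Gamma^\mu_{\alpha\beta}\, \lambda^\alpha \lambda^\beta = 0, \qquad \mu = 0, \dots, 9,$$

    and a natural model for the complex in question (our sketch; the paper's precise definition may differ in details) acts on functions of two pure spinors $\lambda_L, \lambda_R$ and odd variables $\theta^\alpha$ by $Q = (\lambda_L + \lambda_R)^\alpha\, \partial/\partial\theta^\alpha$. Note that the sum $\lambda_L + \lambda_R$ is in general not itself pure.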

    Effects of Bentonite Activation Methods on Chitosan Loading Capacity

    The adsorption capacity of bentonite clay for heavy-metal removal from wastewater can be significantly enhanced by a high loading of chitosan on its surface. To enhance the chitosan loading, we activated bentonite clay by three methods prior to loading: sulfuric acid treatment, calcination, and microwave treatment. We also investigated several parameters of the loading step, namely the initial chitosan concentration, stirring speed, reaction time, temperature, and pH. X-ray Diffraction (XRD), Scanning Electron Microscopy (SEM), and Fourier Transform Infrared Spectroscopy (FTIR) analyses indicate that chitosan attaches to bentonite clay through intercalation and surface adsorption. The maximum chitosan loading on 200-mesh raw bentonite clay (126.30 mg/g) was achieved under the following conditions: an initial chitosan concentration of 1000 mg/L, a stirring speed of 200 rpm, pH 4.9, a reaction time of 60 min, and a temperature of 30 °C. The loading was further increased to 256.30, 233.70, and 208.83 mg/g when the bentonite clay was first activated by 6 min of microwave irradiation (800 W), 10 % sulfuric acid treatment, or calcination at 600 °C, respectively. When the chitosan loading was increased from 34.76 to 233.7 mg/g, the removal percentages of Cu(II), Cr(VI), and Pb(II) improved from 78.90 to 95.5 %, from 82.22 to 98.74 %, and from 60.09 to 86.18 %, respectively.
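    The loading and removal figures above follow standard batch-adsorption mass balances. The short Python sketch below shows how such numbers are computed from initial and equilibrium concentrations; the numeric inputs are illustrative placeholders, not the study's raw data:

```python
# Batch adsorption mass-balance calculations (standard formulas;
# the numeric inputs below are illustrative, not the study's raw data).

def loading_capacity(c0_mg_L, ce_mg_L, volume_L, mass_g):
    """Adsorbate loading q in mg of adsorbate per g of adsorbent."""
    return (c0_mg_L - ce_mg_L) * volume_L / mass_g

def removal_percent(c0_mg_L, ce_mg_L):
    """Percentage of adsorbate removed from solution."""
    return 100.0 * (c0_mg_L - ce_mg_L) / c0_mg_L

if __name__ == "__main__":
    # Example: 1000 mg/L initial chitosan, a hypothetical equilibrium value,
    # 0.1 L of solution contacted with 0.25 g of bentonite.
    q = loading_capacity(c0_mg_L=1000.0, ce_mg_L=359.0, volume_L=0.1, mass_g=0.25)
    print(f"chitosan loading: {q:.1f} mg/g")              # ~256 mg/g
    print(f"removal: {removal_percent(100.0, 4.5):.1f} %")  # 95.5 %
```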

    S2SNet: A Pretrained Neural Network for Superconductivity Discovery

    Superconductivity allows electrical current to flow without energy loss, so making solids superconducting is a grand goal of physics, materials science, and electrical engineering. More than 16 Nobel laureates have been recognized for their contributions to superconductivity research. Superconductors are also valuable for sustainable development goals (SDGs) such as climate change mitigation, affordable and clean energy, and industry, innovation and infrastructure. However, a unified physical theory explaining all superconductivity mechanisms is still unknown. Superconductivity is believed to arise microscopically not only from molecular composition but also from the geometric crystal structure. Hence a new dataset, S2S, containing both crystal structures and superconducting critical temperatures, is built upon SuperCon and the Materials Project. Based on this new dataset, we propose a novel model, S2SNet, which utilizes the attention mechanism for superconductivity prediction. To overcome the shortage of data, S2SNet is pre-trained on the whole Materials Project dataset with Masked-Language Modeling (MLM). S2SNet sets a new state of the art, with an out-of-sample accuracy of 92% and an Area Under the Curve (AUC) of 0.92. To the best of our knowledge, S2SNet is the first work to predict superconductivity from crystal structure information alone. This work is beneficial to superconductivity discovery and, further, to the SDGs. Code and datasets are available at https://github.com/zjuKeLiu/S2SNet
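    The MLM pretraining mentioned above follows the BERT-style recipe of masking input tokens and predicting them from context. Below is a minimal self-contained sketch of that recipe applied to atom-type sequences; the architecture, vocabulary size, and hyperparameters are illustrative guesses, not S2SNet's actual configuration, and the real model also encodes crystal geometry, which is omitted here:

```python
import torch
import torch.nn as nn

# Minimal BERT-style masked-language-model pretraining over sequences of
# atom-type tokens. All sizes and the random batch are illustrative only.

VOCAB = 120          # roughly the number of element types plus special tokens
MASK_ID = 0          # reserved mask token
D_MODEL, N_LAYERS, SEQ_LEN = 128, 4, 32

class AtomMLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=N_LAYERS)
        self.head = nn.Linear(D_MODEL, VOCAB)   # predicts the original token

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

model = AtomMLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(1, VOCAB, (8, SEQ_LEN))   # fake pretraining batch
mask = torch.rand(tokens.shape) < 0.15           # mask 15% of positions
corrupted = tokens.masked_fill(mask, MASK_ID)

logits = model(corrupted)
loss = loss_fn(logits[mask], tokens[mask])       # loss only on masked slots
loss.backward()
opt.step()
print(f"MLM loss: {loss.item():.3f}")
```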

    E(2)-Equivariant Vision Transformer

    The Vision Transformer (ViT) has achieved remarkable performance in computer vision. However, the positional encoding in ViT makes it substantially harder to learn the intrinsic equivariances in data. Initial attempts at designing an equivariant ViT have been made, but we show in this paper that they are defective in some cases. To address this issue, we design a Group-Equivariant Vision Transformer (GE-ViT) via a novel, effective positional encoding operator. We prove that GE-ViT meets all the theoretical requirements of an equivariant neural network. Comprehensive experiments on standard benchmark datasets demonstrate that GE-ViT significantly outperforms non-equivariant self-attention networks. The code is available at https://github.com/ZJUCDSYangKaifan/GEVit
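    The key idea behind an equivariant ViT is that positional information must enter self-attention in a group-invariant way; absolute positional encodings break the symmetry. The toy sketch below illustrates this principle for E(2) by using pairwise distances (rotation- and translation-invariant) as a positional bias; it demonstrates the general idea only and is not GE-ViT's actual positional encoding operator:

```python
import math
import torch

# Self-attention with a positional bias built from pairwise distances.
# Distances are invariant under rotations and translations of the
# coordinates, so the output is unchanged when they are transformed.
# Toy illustration of the principle only, not GE-ViT's actual operator.

def invariant_attention(x, coords, w_q, w_k, w_v, bias_scale=1.0):
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    dist = torch.cdist(coords, coords)          # E(2)-invariant pairwise distances
    scores = q @ k.T / math.sqrt(q.shape[-1]) - bias_scale * dist
    return torch.softmax(scores, dim=-1) @ v

torch.manual_seed(0)
n, d = 16, 8
x = torch.randn(n, d)                           # per-token features
coords = torch.randn(n, 2)                      # 2D token positions
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))

out = invariant_attention(x, coords, w_q, w_k, w_v)

# Invariance check: rotating all coordinates leaves the output unchanged.
c, s = math.cos(0.7), math.sin(0.7)
rot = torch.tensor([[c, -s], [s, c]])
out_rot = invariant_attention(x, coords @ rot.T, w_q, w_k, w_v)
print(torch.allclose(out, out_rot, atol=1e-5))  # True
```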