213 research outputs found

    Tamely ramified geometric Langlands correspondence in positive characteristic

    We prove a version of the tamely ramified geometric Langlands correspondence in positive characteristic for $GL_n(k)$. Let $k$ be an algebraically closed field of characteristic $p > n$. Let $X$ be a smooth projective curve over $k$ with marked points, and fix a parabolic subgroup of $GL_n(k)$ at each marked point. We denote by $\text{Bun}_{n,P}$ the moduli stack of (quasi-)parabolic vector bundles on $X$, and by $\mathcal{L}oc_{n,P}$ the moduli stack of parabolic flat connections whose residue is nilpotent with respect to the parabolic reduction at each marked point. We construct an equivalence between the bounded derived category $D^{b}(\text{Qcoh}(\mathcal{L}oc_{n,P}^{0}))$ of quasi-coherent sheaves on an open substack $\mathcal{L}oc_{n,P}^{0}\subset\mathcal{L}oc_{n,P}$ and the bounded derived category $D^{b}(\mathcal{D}^{0}_{\text{Bun}_{n,P}}\text{-mod})$ of $\mathcal{D}^{0}_{\text{Bun}_{n,P}}$-modules, where $\mathcal{D}^{0}_{\text{Bun}_{n,P}}$ is a localization of $\mathcal{D}_{\text{Bun}_{n,P}}$, the sheaf of crystalline differential operators on $\text{Bun}_{n,P}$. We thus extend the work of Bezrukavnikov-Braverman to the tamely ramified case. We also prove a correspondence between flat connections on $X$ with regular singularities and meromorphic Higgs bundles with first-order poles on the Frobenius twist $X^{(1)}$ of $X$. Comment: 34 pages. Minor corrections, more expository material added.

    On the Kirwan map for moduli of Higgs bundles

    Let $C$ be a smooth complex projective curve and $G$ a connected complex reductive group. We prove that if the center $Z(G)$ of $G$ is disconnected, then the Kirwan map $H^*\big(\operatorname{Bun}(G,C),\mathbb{Q}\big)\rightarrow H^*\big(\mathcal{M}_{\operatorname{Higgs}}^{\operatorname{ss}},\mathbb{Q}\big)$, from the cohomology of the moduli stack of $G$-bundles to that of the moduli stack of semistable $G$-Higgs bundles, fails to be surjective: more precisely, the "variant cohomology" (and variant intersection cohomology) of the stack $\mathcal{M}_{\operatorname{Higgs}}^{\operatorname{ss}}$ of semistable $G$-Higgs bundles is always nontrivial. We also show that the image of the pullback map $H^*\big(M_{\operatorname{Higgs}}^{\operatorname{ss}},\mathbb{Q}\big)\rightarrow H^*\big(\mathcal{M}_{\operatorname{Higgs}}^{\operatorname{ss}},\mathbb{Q}\big)$, from the cohomology of the moduli space of semistable $G$-Higgs bundles to that of the stack of semistable $G$-Higgs bundles, cannot be contained in the image of the Kirwan map. The proof uses a Borel-Quillen-style localization result for equivariant cohomology of stacks to reduce to an explicit construction and calculation.

    Social Inclusion of Smart Transportation: Case of Shanghai

    Master of Science in Global Management - Nord universitet 202

    Synthesis of Core-Shell Fe3O4@SiO2@TiO2 Microspheres and Their Application as Recyclable Photocatalysts

    We report the fabrication of core-shell Fe3O4@SiO2@TiO2 microspheres through a wet-chemical approach. The Fe3O4@SiO2@TiO2 microspheres possess both ferromagnetic and photocatalytic properties. The TiO2 nanoparticles on the surfaces of the microspheres can degrade organic dyes under UV illumination. Furthermore, the microspheres are easily separated from the solution after the photocatalytic process due to the ferromagnetic Fe3O4 core. The photocatalysts can be recycled for further use with slightly lower photocatalytic efficiency.

    High-Throughput GPU Implementation of Dilithium Post-Quantum Digital Signature

    In this work, we present a well-optimized GPU implementation of Dilithium, one of the NIST post-quantum standard digital signature algorithms. We focus on warp-level design and exploit several strategies to improve performance, including a memory pool, kernel fusing, batching, and streaming. All of these efforts lead to an efficient, high-throughput solution. We profile on both desktop and server-grade GPUs and achieve up to 57.7×, 93.0×, and 63.1× higher throughput on an RTX 3090Ti for key generation, signing, and verification, respectively, compared to a single-threaded CPU. Additionally, we study the performance in real-world applications to demonstrate the effectiveness and applicability of our solution.
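
    The strategies above are hardware-level CUDA optimizations; as a rough, language-agnostic illustration of just the memory-pool idea (reusing preallocated buffers across a batch of signing calls instead of allocating per signature), here is a minimal Python sketch. The class, buffer sizes, and the sign_one callable are hypothetical and not taken from the paper, whose pool manages GPU device memory.

        # Minimal sketch of a memory-pool strategy (hypothetical names; the real
        # pool in a GPU implementation would hand out device buffers instead).
        from collections import deque

        class BufferPool:
            def __init__(self, num_buffers: int, buffer_size: int):
                # Preallocate once; repeated per-call allocation is what the pool avoids.
                self._free = deque(bytearray(buffer_size) for _ in range(num_buffers))

            def acquire(self) -> bytearray:
                return self._free.popleft()   # reuse an existing buffer

            def release(self, buf: bytearray) -> None:
                self._free.append(buf)        # make it available for the next call

        def sign_batch(messages, pool, sign_one):
            """Sign a batch of messages, drawing scratch buffers from the pool."""
            signatures = []
            for msg in messages:
                buf = pool.acquire()
                try:
                    signatures.append(sign_one(msg, buf))
                finally:
                    pool.release(buf)
            return signatures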

    Bayesian Domain Invariant Learning via Posterior Generalization of Parameter Distributions

    Domain invariant learning aims to learn models that extract invariant features across various training domains, resulting in better generalization to unseen target domains. Recently, Bayesian Neural Networks have achieved promising results in domain invariant learning, but most works concentrate on aligning feature distributions rather than parameter distributions. Inspired by the principle of Bayesian Neural Networks, we attempt to directly learn the domain-invariant posterior distribution of the network parameters. We first propose a theorem showing that the invariant posterior of the parameters can be implicitly inferred by aggregating posteriors on different training domains. Our assumption is more relaxed and allows us to extract more domain-invariant information. We also propose a simple yet effective method, named PosTerior Generalization (PTG), to estimate the invariant parameter distribution. PTG fully exploits variational inference to approximate parameter distributions, including the invariant posterior and the posteriors on the training domains. Furthermore, we develop a lite version of PTG for widespread applications. PTG shows competitive performance on various domain generalization benchmarks in DomainBed. Additionally, PTG can use any existing domain generalization method as its prior, and when combined with the previous state-of-the-art method its performance can be further improved. Code will be made public.
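
    As a concrete but simplified illustration of "aggregating posteriors on different training domains", the sketch below fuses independent per-domain Gaussian variational posteriors over the parameters by a product-of-Gaussians (precision-weighted) rule. This particular aggregation formula is an assumption made here for illustration; it is not claimed to be the rule proved or used by PTG.

        # Illustrative only: fuse per-domain Gaussian posteriors q_d(w) = N(mu_d, var_d)
        # into a single aggregate Gaussian via a product of Gaussians, whose precision
        # is the sum of precisions and whose mean is the precision-weighted mean.
        import numpy as np

        def aggregate_gaussian_posteriors(mus, variances):
            """mus, variances: arrays of shape (num_domains, num_params)."""
            mus = np.asarray(mus, dtype=float)
            precisions = 1.0 / np.asarray(variances, dtype=float)
            agg_precision = precisions.sum(axis=0)
            agg_var = 1.0 / agg_precision
            agg_mu = agg_var * (precisions * mus).sum(axis=0)
            return agg_mu, agg_var

        # Toy usage: three training domains, two parameters.
        mu, var = aggregate_gaussian_posteriors(
            mus=[[0.1, -0.3], [0.2, -0.1], [0.0, -0.2]],
            variances=[[0.5, 0.4], [0.3, 0.6], [0.4, 0.5]],
        )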

    Transfer of spin to orbital angular momentum in the Bethe-Heitler process

    According to the conservation of angular momentum, when a plane-wave polarized photon splits into an electron-positron pair under the influence of the Coulomb field, the spin angular momentum (SAM) of the photon is converted into angular momentum of the leptons. We investigate this process (the Bethe-Heitler process) by describing the final electron and positron with twisted states and find that the SAM of the incident photon is converted not only into SAM of the produced pair but also into their orbital angular momentum (OAM), which has not been considered previously. The average OAM gained by the leptons exceeds the average SAM, while their orientations coincide. Both quantities depend on the energy and opening angle of the emitted leptons. The spin-orbit transfer demonstrated here for the Bethe-Heitler process may exist in a large class of QED scattering processes.
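
    A minimal bookkeeping identity behind this statement, assuming the quantization axis is the photon momentum through the Coulomb center and writing Lambda for the photon helicity (the notation is chosen here, not taken from the paper):

        % Conservation of the projection of total angular momentum along the photon axis:
        % the photon's SAM is shared between the spin and orbital parts of the pair.
        \Lambda \hbar
          = \langle S_z^{e^-} \rangle + \langle S_z^{e^+} \rangle
          + \langle L_z^{e^-} \rangle + \langle L_z^{e^+} \rangle,
        \qquad \Lambda = \pm 1.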

    cuML-DSA: Optimized Signing Procedure and Server-Oriented GPU Design for ML-DSA

    The threat posed by quantum computing has precipitated an urgent need for post-quantum cryptography. Recently, the post-quantum digital signature draft FIPS 204 has been published, delineating the details of ML-DSA, which is derived from CRYSTALS-Dilithium. Despite these advancements, server environments, especially those equipped with GPU devices that require high-throughput signing, remain entrenched in classical schemes. There is a conspicuous gap in GPU implementations and server-specific designs for ML-DSA. In this paper, we propose the first server-oriented GPU design tailored to the ML-DSA signing procedure in high-throughput servers. We introduce several theoretical optimizations to boost performance, including depth-prior sparse ternary polynomial multiplication, a branch-elimination method, and a rejection-prioritized checking order. Furthermore, exploiting server-oriented features, we propose a comprehensive GPU hardware design, augmented by a suite of GPU implementation optimizations that further amplify performance. Additionally, we present variants for sampling sparse polynomials, thereby streamlining our design. Deploying our implementation on both server-grade and commercial GPUs yields significant speedups, ranging from 170.7× to 294.2× over the CPU baseline, and an improvement of up to 60.9% over related work, affirming the effectiveness and efficiency of the proposed GPU architecture for the ML-DSA signing procedure.
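
    To make "sparse ternary polynomial multiplication" concrete: in Dilithium/ML-DSA the challenge polynomial c has only a small number of nonzero coefficients, all equal to +1 or -1, so the product c*s in Z_q[X]/(X^N + 1) can be computed by adding or subtracting shifted copies of s at those positions instead of using a full NTT or schoolbook multiplication. The Python sketch below shows this baseline idea only; the paper's depth-prior variant and its GPU mapping are not reproduced, and the function name is illustrative.

        # Schematic sparse ternary multiplication in R_q = Z_q[X]/(X^N + 1).
        # c is represented by the positions of its +1 and -1 coefficients.
        Q = 8380417   # ML-DSA / Dilithium modulus
        N = 256       # ring dimension

        def sparse_ternary_mul(plus_pos, minus_pos, s):
            """Compute c*s mod (X^N + 1, Q) using only c's nonzero +-1 positions."""
            out = [0] * N
            for sign, positions in ((1, plus_pos), (-1, minus_pos)):
                for i in positions:
                    for j in range(N):
                        k = i + j
                        if k < N:
                            out[k] = (out[k] + sign * s[j]) % Q
                        else:
                            # X^N = -1 in the quotient ring, so wrapped terms flip sign.
                            out[k - N] = (out[k - N] - sign * s[j]) % Q
            return out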