583 research outputs found

    Review of Internet for English Teaching


    Experimental and theoretical investigation of flow measurement by Doppler ultrasound


    Higgs boson production and decay at $e^{+}e^{-}$ colliders as a probe of the Left-Right twin Higgs model

    In the framework of the Left-Right twin Higgs (LRTH) model, we consider the constraints from the latest search for high-mass dilepton resonances at the LHC and find that the heavy neutral boson $Z_H$ is excluded for masses below 2.76 TeV. Under these constraints, we study the Higgs-gauge coupling production processes $e^{+}e^{-}\rightarrow ZH$, $e^{+}e^{-}\rightarrow \nu_{e}\bar{\nu}_{e}H$ and $e^{+}e^{-}\rightarrow e^{+}e^{-}H$, the top-quark Yukawa coupling production process $e^{+}e^{-}\rightarrow t\bar{t}H$, and the Higgs self-coupling production processes $e^{+}e^{-}\rightarrow ZHH$ and $e^{+}e^{-}\rightarrow \nu_{e}\bar{\nu}_{e}HH$ at $e^{+}e^{-}$ colliders. Besides, we study the major decay modes of the Higgs boson, namely $h\rightarrow f\bar{f}$ ($f=b,c,\tau$), $VV^{*}$ ($V=W,Z$), $gg$, and $\gamma\gamma$. We find that the LRTH effects are sizable, so that Higgs boson processes at $e^{+}e^{-}$ colliders can be a sensitive probe of the LRTH model. Comment: Final version to appear in Nucl. Phys.
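
    A common way to state how "sizable" such effects are (a standard convention, not a formula quoted from this paper) is the relative deviation of each observable from its Standard Model prediction:

    \[
    \delta_X \;\equiv\; \frac{X_{\mathrm{LRTH}} - X_{\mathrm{SM}}}{X_{\mathrm{SM}}}, \qquad X \in \bigl\{\sigma(e^{+}e^{-}\rightarrow ZH),\ \sigma(e^{+}e^{-}\rightarrow t\bar{t}H),\ \mathrm{Br}(h\rightarrow b\bar{b}),\ \ldots\bigr\},
    \]

    where a process is a sensitive probe when $|\delta_X|$ exceeds the expected measurement precision of the collider.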

    Learning Graph Neural Networks with Approximate Gradient Descent

    This paper provides the first provably efficient algorithm for learning graph neural networks (GNNs) with one hidden layer for node information convolution. Two types of GNNs are investigated, depending on whether labels are attached to nodes or to graphs. A comprehensive framework for designing and analyzing the convergence of GNN training algorithms is developed. The proposed algorithm is applicable to a wide range of activation functions, including ReLU, Leaky ReLU, Sigmoid, Softplus, and Swish. It is shown that the proposed algorithm guarantees a linear convergence rate to the underlying true parameters of the GNNs. For both types of GNNs, the sample complexity in terms of the number of nodes or the number of graphs is characterized. The impact of the feature dimension and the GNN structure on the convergence rate is also theoretically characterized. Numerical experiments are provided to validate the theoretical analysis. Comment: 23 pages, accepted at AAAI 202
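
    To make the model class concrete, here is a minimal numpy sketch of plain gradient descent on a one-hidden-layer GNN for node labels, where the hidden layer convolves node information through a row-normalized adjacency matrix. This illustrates the setting only, not the paper's approximate-gradient algorithm or its guarantees; the function name, initialization, and squared loss are assumptions.

        import numpy as np

        def train_gnn(A, X, y, hidden=16, lr=1e-2, steps=500, seed=0):
            """One-hidden-layer GNN for node labels: y_hat = ReLU(S @ X @ W1) @ w2,
            where S is the row-normalized adjacency (neighbor aggregation)."""
            rng = np.random.default_rng(seed)
            n, d = X.shape
            S = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # normalize rows
            SX = S @ X                             # node-information convolution
            W1 = rng.normal(scale=1 / np.sqrt(d), size=(d, hidden))
            w2 = rng.normal(scale=1 / np.sqrt(hidden), size=hidden)
            for _ in range(steps):
                Z = SX @ W1
                H = np.maximum(Z, 0.0)             # ReLU activation
                err = H @ w2 - y                   # residuals of the squared loss
                gW1 = SX.T @ ((err[:, None] * w2) * (Z > 0)) / n  # chain rule
                gw2 = H.T @ err / n
                W1 -= lr * gW1
                w2 -= lr * gw2
            return W1, w2

        # Toy usage: ring graph with self-loops, random features, linear targets.
        # n = 30; A = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
        # X = np.random.randn(n, 5); y = X.sum(axis=1)
        # W1, w2 = train_gnn(A, X, y)

    Swapping the ReLU for Leaky ReLU, Sigmoid, Softplus, or Swish changes only the activation line and its derivative in the gradient.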

    Valley Carrier Dynamics in Monolayer Molybdenum Disulphide from Helicity Resolved Ultrafast Pump-probe Spectroscopy

    We investigate valley-related carrier dynamics in monolayer MoS2 using helicity-resolved non-degenerate ultrafast pump-probe spectroscopy in the vicinity of the high-symmetry K point, at temperatures down to 78 K. Monolayer MoS2 shows remarkable transient reflection signals, in stark contrast to bilayer and bulk MoS2, due to the enhancement of many-body effects at reduced dimensionality. The helicity-resolved time-resolved results show that the valley polarization is preserved for only a few picoseconds before scattering processes render it indistinguishable. We suggest that this dynamical degradation of the valley polarization is attributable primarily to exciton trapping by defect states in the exfoliated MoS2 samples. Our experiment and a tight-binding model analysis also show that the perfect valley circular-dichroism (CD) selectivity is fairly robust against disorder at the K point, but decays quickly away from the high-symmetry point in momentum space in the presence of disorder. Comment: 15 pages, accepted by ACS Nano
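
    For orientation, a few-picosecond valley lifetime of this kind is typically quantified by forming the polarization from same- and opposite-helicity transient-reflection traces and fitting its decay. The sketch below is a generic analysis outline, not the authors' code; the trace names and the single-exponential form are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def valley_polarization(dR_same, dR_opp):
            """Degree of valley polarization from helicity-resolved traces."""
            return (dR_same - dR_opp) / (dR_same + dR_opp)

        def fit_valley_lifetime(t_ps, P):
            """Fit P(t) = P0 * exp(-t / tau); returns (P0, tau) with tau in ps."""
            model = lambda t, P0, tau: P0 * np.exp(-t / tau)
            (P0, tau), _ = curve_fit(model, t_ps, P, p0=(P[0], 2.0))
            return P0, tau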

    Split Federated Learning: Speed up Model Training in Resource-Limited Wireless Networks

    In this paper, we propose a novel distributed learning scheme, named group-based split federated learning (GSFL), to speed up artificial intelligence (AI) model training. Specifically, GSFL operates in a split-then-federated manner, which consists of three steps: 1) Model distribution, in which the access point (AP) splits the AI model and distributes the client-side models to the clients; 2) Model training, in which each client executes forward propagation and transmits the smashed data to the edge server, which executes forward and backward propagation and then returns the gradient to the clients for updating the local client-side models; and 3) Model aggregation, in which the edge servers aggregate the server-side and client-side models. Simulation results show that GSFL outperforms vanilla split learning and federated learning schemes in terms of overall training latency while achieving satisfactory accuracy.
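
    The three-step round can be sketched end-to-end in a few lines of numpy. This is a toy, full-batch illustration of the control flow, assuming a linear client-side layer with ReLU cut and a linear server-side head; the names, the squared loss, and the FedAvg-style averaging are assumptions, and the actual scheme adds client grouping and wireless scheduling.

        import numpy as np

        def gsfl_round(W_client_global, w_server, clients, lr=0.1):
            """One GSFL round: 1) distribute, 2) split-train, 3) aggregate.
            Each client is a dict with features "X" (n, d) and labels "y" (n,)."""
            # 1) Model distribution: AP sends the client-side layer to each client.
            for c in clients:
                c["W"] = W_client_global.copy()
            # 2) Model training: client forward -> smashed data uplink ->
            #    server forward/backward -> cut-layer gradient downlink.
            for c in clients:
                X, y = c["X"], c["y"]
                Z = X @ c["W"]                          # client-side forward
                H = np.maximum(Z, 0.0)                  # smashed data sent to server
                err = H @ w_server - y                  # server-side forward, residual
                gH = np.outer(err, w_server) / len(y)   # gradient at the cut layer
                w_server -= lr * (H.T @ err / len(y))   # server-side update
                c["W"] -= lr * (X.T @ (gH * (Z > 0)))   # client-side update
            # 3) Model aggregation: FedAvg-style average of client-side models.
            W_client_global = np.mean([c["W"] for c in clients], axis=0)
            return W_client_global, w_server

    Note that the cut-layer gradient gH is the only tensor returned to each client, which is the communication pattern that split learning exploits.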

    Coresets for Clustering with General Assignment Constraints

    Designing small-sized \emph{coresets}, which approximately preserve the costs of solutions on large datasets, has been an important research direction for the past decade. We consider coreset construction for a variety of general constrained clustering problems. We significantly extend and generalize the results of a very recent paper (Braverman et al., FOCS'22) by demonstrating that the idea of hierarchical uniform sampling (Chen, SICOMP'09; Braverman et al., FOCS'22) can be applied to efficiently construct coresets for a very general class of constrained clustering problems with general assignment constraints, including capacity constraints on cluster centers and assignment structure constraints for data points (modeled by a convex body $\mathcal{B}$). Our main theorem shows that a small-sized $\epsilon$-coreset exists as long as a complexity measure $\mathsf{Lip}(\mathcal{B})$ of the structure constraint and the \emph{covering exponent} $\Lambda_\epsilon(\mathcal{X})$ of the metric space $(\mathcal{X},d)$ are bounded. The complexity measure $\mathsf{Lip}(\mathcal{B})$ of a convex body $\mathcal{B}$ is the Lipschitz constant of a certain transportation problem constrained to $\mathcal{B}$, called the \emph{optimal assignment transportation problem}. We prove nontrivial upper bounds on $\mathsf{Lip}(\mathcal{B})$ for various polytopes, including general matroid basis polytopes and laminar matroid polytopes (with a better bound for the latter). As an application of our general theorem, we construct the first coresets for the fault-tolerant clustering problem (with or without capacity upper/lower bounds) for the above metric spaces, in which the fault-tolerance requirement is captured by a uniform matroid basis polytope.
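
    For reference, the guarantee such a coreset provides, stated here in its generic unconstrained form (the paper's version additionally quantifies over feasible assignments respecting $\mathcal{B}$): a weighted subset $S \subseteq \mathcal{X}$ with weights $w$ is an $\epsilon$-coreset if, for every candidate center set $C$,

    \[
    (1-\epsilon)\,\mathrm{cost}(\mathcal{X}, C) \;\le\; \sum_{x \in S} w(x)\,\mathrm{cost}(x, C) \;\le\; (1+\epsilon)\,\mathrm{cost}(\mathcal{X}, C),
    \]

    so any near-optimal clustering computed on $S$ remains near-optimal on the full dataset.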