1,425 research outputs found

    Searching for the signal of dark matter and photon associated production at the LHC beyond leading order

    We study the signal of dark matter and photon associated production induced by the vector and axial-vector operators at the LHC, including the QCD next-to-leading order (NLO) effects. We find that the QCD NLO corrections reduce the dependence of the total cross sections on the factorization and renormalization scales, and that the $K$ factors increase with the dark matter mass, reaching about 1.3 for both the vector and axial-vector operators. Using our QCD NLO results, we improve the constraints on the new physics scale from the recent CMS results. Moreover, we show Monte Carlo simulation results for detecting the $\gamma+\slashed{E}_{T}$ signal at the QCD NLO level, and present the integrated luminosity needed for a $5\sigma$ discovery at the 14 TeV LHC. If the signal is not observed, a lower limit on the new physics scale can be set. Comment: 19 pages, 18 figures, 2 tables, version published in Phys. Rev.
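For reference, the vector and axial-vector contact operators invoked in analyses of this kind are conventionally written as follows (assuming Dirac dark matter $\chi$ and a suppression scale $\Lambda$; the precise normalization used in the paper may differ):

```latex
\mathcal{O}_V = \frac{(\bar{\chi}\gamma^{\mu}\chi)\,(\bar{q}\gamma_{\mu}q)}{\Lambda^{2}},
\qquad
\mathcal{O}_A = \frac{(\bar{\chi}\gamma^{\mu}\gamma^{5}\chi)\,(\bar{q}\gamma_{\mu}\gamma^{5}q)}{\Lambda^{2}}
```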

    Phenomenology of an Extended Higgs Portal Inflation Model after Planck 2013

    We consider an extended inflation model in the framework of the Higgs portal model, assuming a nonminimal coupling of the scalar field to gravity. Using the new data from Planck 2013 and other relevant astrophysical data, we obtain the relation between the nonminimal coupling $\xi$ and the self-coupling $\lambda$ needed to drive inflation, and find that this inflationary model is favored by the astrophysical data. Furthermore, we discuss the constraints on the model parameters from particle physics experiments, especially the recent Higgs data at the LHC. Comment: 21 pages, 8 figures; version published in EPJ
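In such models the scalar couples to gravity through a term $\xi\phi^{2}R$, and for large field values the Einstein-frame potential flattens into a plateau, which is what allows slow-roll inflation for suitable $(\xi,\lambda)$. This is the standard result for a nonminimally coupled quartic potential (the paper's exact conventions may differ):

```latex
V_{E}(\phi) \simeq \frac{\lambda\,\phi^{4}/4}{\left(1+\xi\phi^{2}/M_{P}^{2}\right)^{2}}
\;\xrightarrow{\;\xi\phi^{2}\,\gg\, M_{P}^{2}\;}\;
\frac{\lambda\,M_{P}^{4}}{4\,\xi^{2}}
```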

    Constraints on flavor-changing neutral-current $Htq$ couplings from the signal of $tH$ associated production with QCD next-to-leading order accuracy at the LHC

    We study generic Higgs boson and top quark associated production via model-independent flavor-changing neutral-current couplings at the LHC, including complete QCD next-to-leading order (NLO) corrections to the production and decay of the top quark and the Higgs boson. We find that QCD NLO corrections can increase the total production cross sections by about 48.9% and 57.9% for the $Htu$ and $Htc$ coupling induced processes at the LHC, respectively. After kinematic cuts are imposed on the decay products of the top quark and the Higgs boson, the QCD NLO corrections are reduced to 11% for the $Htu$ coupling induced process and almost vanish for the $Htc$ coupling induced process. Moreover, QCD NLO corrections reduce the dependence of the total cross sections on the renormalization and factorization scales. We also discuss signals of $tH$ associated production with the decay mode $t \rightarrow b l^{+} \slashed{E}_{T}$, $H \rightarrow b\bar{b}$, and of $t\bar{t}$ production with the decay mode $\bar{t} \rightarrow H\bar{q}$, $t \rightarrow b l^{+} \slashed{E}_{T}$. Our results show that, in some parameter regions, the LHC may observe the above signals at the $5\sigma$ level. Otherwise, upper limits on the FCNC $Htq$ couplings can be set. Comment: 28 pages, 14 figures, 5 tables; version published in PR
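A common model-independent parametrization of such FCNC couplings is an effective Yukawa-type interaction of the form below (one frequently used convention; the normalization and chirality structure in the paper may differ):

```latex
\mathcal{L}_{\text{FCNC}} = \kappa_{tuH}\,\bar{t}\,u\,H \;+\; \kappa_{tcH}\,\bar{t}\,c\,H \;+\; \text{h.c.}
```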

    Architecture Decisions in AI-based Systems Development: An Empirical Study

    Artificial Intelligence (AI) technologies have developed rapidly, and AI-based systems have been widely adopted across application domains, bringing both opportunities and challenges. However, little is known about the architecture decisions made in AI-based systems development, which have a substantial impact on the success and sustainability of these systems. To this end, we conducted an empirical study by collecting and analyzing data from Stack Overflow (SO) and GitHub. More specifically, we searched SO with six sets of keywords and explored 32 AI-based projects on GitHub, ultimately collecting 174 posts and 128 GitHub issues related to architecture decisions. The results show that in AI-based systems development (1) architecture decisions are expressed in six linguistic patterns, among which Solution Proposal and Information Giving are most frequently used, (2) Technology Decision, Component Decision, and Data Decision are the main types of architecture decisions made, (3) Game is the most common application domain among the eighteen application domains identified, (4) the dominant quality attribute considered in architecture decision-making is Performance, and (5) the main limitations and challenges encountered by practitioners in making architecture decisions are Design Issues and Data Issues. Our results suggest that the limitations and challenges of making architecture decisions in AI-based systems development are highly specific to the characteristics of AI-based systems, are mainly of a technical nature, and need to be properly confronted. Comment: The 30th IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER

    The development and applications of ultrafast electron nanocrystallography

    We review the development of ultrafast electron nanocrystallography as a method for investigating structural dynamics in nanoscale materials and interfaces. Its sensitivity and resolution are demonstrated in studies of the surface melting of gold nanocrystals, the nonequilibrium transformation of graphite into reversible diamond-like intermediates, and molecular-scale charge dynamics, showing versatility for determining not only structures but also the charge and energy redistribution at interfaces. A quantitative scheme for three-dimensional retrieval of atomic structures is demonstrated with few-particle (< 1000) sensitivity, establishing this nanocrystallographic method as a tool for directly visualizing dynamics within isolated nanomaterials with atomic-scale spatio-temporal resolution. Comment: 33 pages, 17 figures (Review article, 2008 conference on ultrafast electron microscopy and ultrafast sciences

    Enhanced Sparsification via Stimulative Training

    Sparsification-based pruning has been an important category in model compression. Existing methods commonly add sparsity-inducing penalty terms to suppress the importance of weights to be dropped, which we regard as the suppressed sparsification paradigm. However, this paradigm inactivates the dropped parts of the network and damages its capacity before pruning, leading to performance degradation. To alleviate this issue, we first study and reveal the relative sparsity effect in emerging stimulative training, and then propose a structured pruning framework, named STP, based on an enhanced sparsification paradigm that maintains the magnitude of dropped weights and enhances the expressivity of kept weights through self-distillation. In addition, to find an optimal architecture for the pruned network, we propose a multi-dimension architecture space and a knowledge distillation-guided exploration strategy. To reduce the large capacity gap in distillation, we propose a subnet mutating expansion technique. Extensive experiments on various benchmarks demonstrate the effectiveness of STP. Specifically, without fine-tuning, our method consistently achieves superior performance at different budgets, especially under extremely aggressive pruning scenarios, e.g., retaining 95.11% of the Top-1 accuracy (72.43% vs. 76.15%) while reducing FLOPs by 85% for ResNet-50 on ImageNet. Codes will be released soon. Comment: 26 pages
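To make the structured-pruning setting concrete, the sketch below shows generic magnitude-based channel pruning with NumPy: rank output channels by their L2 norm and keep only the strongest fraction. This is a minimal illustration of structured pruning in general, not the STP algorithm from the abstract; the function name, shapes, and `keep_ratio` parameter are all illustrative assumptions.

```python
import numpy as np

def structured_prune_mask(weight, keep_ratio=0.15):
    """Generic magnitude-based structured pruning (illustrative sketch,
    not the STP method): rank output channels by L2 norm and keep only
    the strongest `keep_ratio` fraction of them."""
    # weight: (out_channels, in_features) -- one output channel per row
    norms = np.linalg.norm(weight, axis=1)
    n_keep = max(1, int(round(keep_ratio * weight.shape[0])))
    kept = np.argsort(norms)[-n_keep:]          # indices of the strongest channels
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[kept] = True
    return mask

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 64))                  # a toy 100-channel weight matrix
mask = structured_prune_mask(w, keep_ratio=0.15)
print(mask.sum())                               # 15 of 100 channels kept
```

Keeping 15% of the channels corresponds roughly to the aggressive compute reductions quoted above; real structured pruning would then rebuild the layer from the kept rows rather than merely masking them.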