3,153 research outputs found

    SKA sensitivity for possible radio emission from dark matter in Omega Centauri

    Omega Centauri, the largest known globular cluster in the Milky Way, is believed to be the remnant core of a dwarf galaxy. Given its potential abundance of dark matter (DM), it is an attractive target for investigating the nature of this elusive substance in our local environment. Our study demonstrates that by observing Omega Centauri with the SKA for 1000 hours, we can detect synchrotron radio or inverse Compton (IC) emission from DM annihilation products. This enables us to constrain the DM annihilation cross-section down to ∼10⁻³⁰ cm³ s⁻¹ for DM masses from several GeV to 100 GeV, which is much stronger than constraints from other observations. Additionally, we explore the axion, another well-motivated DM candidate, and provide stimulated-decay calculations. It turns out that the sensitivity can reach g_aγγ ∼ 10⁻¹⁰ GeV⁻¹ for axion masses 2×10⁻⁷ eV < m_a < 2×10⁻⁴ eV. Comment: 19 pages, 8 figures
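    The quoted axion mass window maps directly onto radio frequencies: in the decay a → 2γ, each photon carries half the axion's rest energy, so the observing frequency is ν = (m_a c²/2)/h. A minimal sketch of that conversion (standard physics, not code from the paper):

    ```python
    # Each photon from a -> 2 gamma carries energy m_a/2,
    # so the observing frequency is nu = (m_a / 2) / h.
    H_EV_S = 4.135667696e-15  # Planck constant in eV*s

    def axion_decay_frequency_hz(m_a_ev):
        """Photon frequency (Hz) for axion -> two-photon decay, mass in eV."""
        return (m_a_ev / 2.0) / H_EV_S

    lo = axion_decay_frequency_hz(2e-7)  # lower end of the quoted mass range
    hi = axion_decay_frequency_hz(2e-4)  # upper end
    print(f"{lo:.2e} Hz to {hi:.2e} Hz")  # roughly 24 MHz to 24 GHz
    ```

    That range lands squarely in the radio band the SKA is designed to observe, which is why this mass window is the one probed.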

    Design and simulation analysis of an improved lower limb exoskeleton

    The lower-extremity exoskeleton robot is a type of power-assisted robot that can enhance the human walking function. A fundamental problem in the development of the exoskeleton is the choice of lightweight actuators. Thus, in the mechanical structure design in this paper, a linear motor is selected, as it greatly reduces the complexity of the mechanical structure. Furthermore, the limit switch inside the motor improves the safety performance. Compared with the previous version of the exoskeleton, additional strap positions, length-adjustment holes, and mechanical limit structures have been added. In addition, a control system based on a DSP is designed. Furthermore, a kinematics analysis is carried out using the D-H parameter method, and a dynamic analysis is developed using the Newton-Euler method. The driving force of every joint is obtained during simulation using ADAMS software.
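    The D-H (Denavit-Hartenberg) parameter method mentioned above builds the forward kinematics by chaining one homogeneous transform per joint. A minimal sketch of the standard D-H transform, with illustrative (not the paper's) joint parameters:

    ```python
    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Standard Denavit-Hartenberg homogeneous transform between adjacent links."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])

    # Hypothetical 3-joint planar leg (hip, knee, ankle); link lengths in metres
    # are illustrative, not taken from the exoskeleton described above.
    links = [(np.pi / 6, 0.0, 0.40, 0.0),
             (-np.pi / 4, 0.0, 0.38, 0.0),
             (np.pi / 12, 0.0, 0.10, 0.0)]
    T = np.eye(4)
    for params in links:
        T = T @ dh_transform(*params)
    print(T[:3, 3])  # end-effector (foot) position in the hip frame
    ```

    Chaining these 4×4 matrices gives the pose of each link, which is the input the Newton-Euler dynamic analysis then uses.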

    Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol

    Distributed Machine Learning (DML) systems are utilized to enhance the speed of model training in data centers (DCs) and edge nodes. The Parameter Server (PS) communication architecture is commonly employed, but it faces severe long-tail latency caused by many-to-one "incast" traffic patterns, negatively impacting training throughput. To address this challenge, we design the Loss-tolerant Transmission Protocol (LTP), which permits partial loss of gradients during synchronization to avoid unneeded retransmission and contributes to faster synchronization per iteration. LTP implements loss-tolerant transmission through out-of-order transmission and out-of-order acknowledgments (ACKs). LTP employs Early Close to adjust the loss-tolerant threshold based on network conditions and Bubble Filling for data correction to maintain training accuracy. LTP is implemented in C++ and integrated into PyTorch. Evaluations on a testbed of 8 worker nodes and one PS node demonstrate that LTP can significantly improve DML training task throughput by up to 30x compared to traditional TCP congestion controls, with no sacrifice to final accuracy. Comment: This paper will be published at IWQoS 2023. Preview version only.
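    The core idea of tolerating partial gradient loss can be sketched in a few lines. This is a toy illustration, not LTP's actual implementation: the chunk layout, the 75% threshold, and zero-filling of missing chunks (standing in for Bubble Filling) are all assumptions for the example.

    ```python
    import numpy as np

    def tolerant_aggregate(chunks, n_chunks, chunk_len, min_fraction=0.75):
        """Assemble a gradient from possibly incomplete chunks.

        chunks: dict mapping chunk index -> np.ndarray of length chunk_len.
        Once min_fraction of chunks have arrived, synchronization closes
        early instead of waiting for retransmissions; missing chunks are
        zero-filled (a crude stand-in for the paper's Bubble Filling).
        """
        if len(chunks) < min_fraction * n_chunks:
            return None  # not enough data yet: keep waiting
        grad = np.zeros(n_chunks * chunk_len)
        for idx, data in chunks.items():
            grad[idx * chunk_len:(idx + 1) * chunk_len] = data
        return grad

    # Example: chunk 2 of 4 was lost in transit; training proceeds without it.
    received = {i: np.full(8, float(i)) for i in (0, 1, 3)}
    g = tolerant_aggregate(received, n_chunks=4, chunk_len=8)
    ```

    The payoff is that one straggling or lost packet no longer stalls the whole many-to-one synchronization round, which is exactly the incast tail-latency problem the abstract describes.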

    OSP: Boosting Distributed Model Training with 2-stage Synchronization

    Full text link
    Distributed deep learning (DDL) is a promising research area, which aims to increase the efficiency of training deep learning tasks with large datasets and models. As the computation capability of DDL nodes continues to increase, the network connection between nodes is becoming a major bottleneck. Various methods of gradient compression and improved model synchronization have been proposed to address this bottleneck in Parameter-Server-based DDL. However, these two types of methods can result in accuracy loss due to discarded gradients and have limited enhancement on the throughput of model synchronization, respectively. To address these challenges, we propose a new model synchronization method named Overlapped Synchronization Parallel (OSP), which achieves efficient communication with a 2-stage synchronization approach and uses Local-Gradient-based Parameter correction (LGP) to avoid accuracy loss caused by stale parameters. The prototype of OSP has been implemented using PyTorch and evaluated on commonly used deep learning models and datasets with a 9-node testbed. Evaluation results show that OSP can achieve up to 50% improvement in throughput without accuracy loss compared to popular synchronization models. Comment: Copyright Owner/Author | ACM 2023. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record will be published in ICPP 2023.
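    The key problem LGP addresses is that overlapping synchronization with computation means a worker may start a step from parameters that are one round stale. A toy sketch of the idea of correcting stale parameters with the worker's own latest gradient; the specific update rule and names here are illustrative assumptions, not OSP's actual algorithm:

    ```python
    import numpy as np

    def lgp_correct(stale_params, local_grad, lr):
        """Nudge stale server parameters with the worker's latest local
        gradient before the next step (toy stand-in for OSP's
        Local-Gradient-based Parameter correction)."""
        return stale_params - lr * local_grad

    # Stage 1: begin computing on parameters from the previous round.
    stale = np.array([1.0, -0.5, 2.0])
    local_grad = np.array([0.2, 0.1, -0.4])
    # Stage 2: fresh parameters have not yet arrived, so proceed with
    # locally corrected ones instead of stalling on communication.
    corrected = lgp_correct(stale, local_grad, lr=0.1)
    ```

    The design intuition is that the worker's own gradient approximates the update the server would have applied, so computation can start early without the accuracy loss that raw stale parameters would cause.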