273 research outputs found

    Intelligent Methods for Condition Diagnosis of Plant Machinery

    Nanofluids

    Functionalization Methods of Carbon Nanotubes and Its Applications

    Prox-DBRO-VR: A Unified Analysis on Decentralized Byzantine-Resilient Composite Stochastic Optimization with Variance Reduction and Non-Asymptotic Convergence Rates

    Decentralized Byzantine-resilient stochastic gradient algorithms efficiently solve large-scale optimization problems under adverse conditions, such as malfunctioning agents, software bugs, and cyber attacks. This paper addresses a class of generic composite optimization problems over multi-agent cyber-physical systems (CPSs) in the presence of an unknown number of Byzantine agents. Based on the proximal mapping method, two variance-reduced (VR) techniques, and a norm-penalized approximation strategy, we propose a decentralized Byzantine-resilient proximal-gradient algorithmic framework, dubbed Prox-DBRO-VR, which achieves its optimization and control goal using only local computations and communications. To asymptotically reduce the variance introduced by evaluating noisy stochastic gradients, we incorporate two localized variance-reduction techniques (SAGA and LSVRG) into Prox-DBRO-VR, yielding Prox-DBRO-SAGA and Prox-DBRO-LSVRG. By analyzing the contraction relationships among the gradient-learning error, the robust consensus condition, and the optimality gap, we show that both Prox-DBRO-SAGA and Prox-DBRO-LSVRG, with a well-designed constant (resp., decaying) step-size, converge linearly (resp., sub-linearly) to an error ball around the optimal solution under standard assumptions. The trade-offs between convergence accuracy and the number of Byzantine agents are characterized in both the linear and sub-linear cases. In simulation, the effectiveness and practicality of the proposed algorithms are demonstrated by solving a sparse machine-learning problem over multi-agent CPSs under various Byzantine attacks.
    Comment: 14 pages, 0 figures
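    The abstract describes the per-agent update only at a high level. As a rough illustration, the sketch below combines a SAGA variance-reduced gradient estimator, an l1 norm-penalized consensus term (whose subgradient is the sign of the disagreement with each neighbor), and a proximal step for an l1 regularizer. The local least-squares loss, the names prox_l1, AgentSAGA, prox_dbro_saga_round, and the step-size/penalty parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

class AgentSAGA:
    """One regular agent: local least-squares data, SAGA gradient table, local iterate.
    (Illustrative stand-in; the paper's agents handle generic composite losses.)"""
    def __init__(self, A, b, rng):
        self.A, self.b = A, b                    # local samples
        self.x = np.zeros(A.shape[1])            # local decision variable
        self.table = np.zeros_like(A)            # stored per-sample gradients
        self.table_avg = np.zeros(A.shape[1])    # running average of the table
        self.rng = rng                           # e.g. np.random.default_rng()

    def saga_grad(self):
        """Variance-reduced stochastic gradient estimate (SAGA)."""
        j = self.rng.integers(len(self.b))
        a_j, b_j = self.A[j], self.b[j]
        g_new = (a_j @ self.x - b_j) * a_j               # fresh per-sample gradient
        g_est = g_new - self.table[j] + self.table_avg   # SAGA estimator
        self.table_avg = self.table_avg + (g_new - self.table[j]) / len(self.b)
        self.table[j] = g_new
        return g_est

def prox_dbro_saga_round(agents, neighbors, step, penalty, reg):
    """One synchronous round: VR gradient + norm-penalized consensus + prox step.
    Byzantine neighbors may send arbitrary vectors; the sign() term bounds each
    neighbor's influence, which is the intuition behind the norm penalty."""
    received = {i: [agents[j].x for j in neighbors[i]] for i in agents}  # possibly corrupted
    new_x = {}
    for i, ag in agents.items():
        g = ag.saga_grad()
        consensus = sum(np.sign(ag.x - xj) for xj in received[i])
        new_x[i] = prox_l1(ag.x - step * (g + penalty * consensus), step * reg)
    for i, ag in agents.items():
        ag.x = new_x[i]
```

    A regular agent would be constructed with np.random.default_rng() and the round function run repeatedly; swapping the SAGA estimator for an LSVRG-style one would give a Prox-DBRO-LSVRG-like variant in the same template.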

    Large Language Models at Work in China's Labor Market

    This paper explores the potential impacts of large language models (LLMs) on the Chinese labor market. We analyze occupational exposure to LLM capabilities by combining human expertise with LLM classifications, following the methodology of Eloundou et al. (2023). We then aggregate occupation-level exposure to the industry level to obtain industry exposure scores. The results indicate a positive correlation between occupation exposure and wage levels and experience premiums, suggesting that higher-paying and experience-intensive jobs may face greater displacement risks from LLM-powered software. The industry exposure scores align with expert assessments and economic intuition. We also develop an economic growth model incorporating industry exposure to quantify the productivity-employment trade-off from AI adoption. Overall, this study provides an analytical basis for understanding the labor-market impacts of increasingly capable AI systems in China. Key innovations include the occupation-level exposure analysis, the industry aggregation approach, and economic modeling that incorporates AI adoption and labor-market effects. The findings will inform policymakers and businesses on strategies for maximizing the benefits of AI while mitigating adverse disruption risks.
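    The industry-aggregation step lends itself to a short worked example. The sketch below computes an employment-weighted average of occupation-level exposure within each industry; the occupations, employment counts, exposure scores, and the function name industry_exposure are all hypothetical, and the paper's actual weighting scheme may differ.

```python
import pandas as pd

# Hypothetical occupation-level records: occupation, its industry, employment
# count, and an occupation-level LLM exposure score in [0, 1] (made-up numbers).
records = pd.DataFrame({
    "industry":   ["finance", "finance", "construction", "construction"],
    "occupation": ["analyst", "teller", "site_manager", "laborer"],
    "employment": [120_000, 80_000, 50_000, 300_000],
    "exposure":   [0.78, 0.65, 0.30, 0.10],
})

def industry_exposure(df: pd.DataFrame) -> pd.Series:
    """Employment-weighted mean of occupation exposure within each industry."""
    weighted = (df["exposure"] * df["employment"]).groupby(df["industry"]).sum()
    totals = df.groupby("industry")["employment"].sum()
    return weighted / totals

print(industry_exposure(records))
# Illustrative output: construction ~ 0.129, finance ~ 0.728
```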