129 research outputs found

    PM2.5-Related Health Economic Benefits Evaluation Based on Air Improvement Action Plan in Wuhan City, Middle China

    Get PDF
    On the basis of PM2.5 data from the national air quality monitoring sites, local population data, and the baseline all-cause mortality rate, the PM2.5-related health economic benefits of the Air Improvement Action Plan implemented in Wuhan during 2013–2017 were investigated using health-impact and valuation functions. Annual avoided premature deaths attributable to the decrease in average PM2.5 concentration were estimated, and the economic benefits were computed using the value of statistical life (VSL) method. Results showed that the number of avoided premature deaths in Wuhan was 21,384 (95% confidence interval (CI): 15,004 to 27,255) during 2013–2017, owing to the implementation of the Air Improvement Action Plan. According to the VSL method, the economic benefits obtained for Huangpi, Wuchang, Hongshan, Xinzhou, Jiang’an, Hanyang, Jiangxia, Qiaokou, Jianghan, Qingshan, Caidian, Dongxihu, and Hannan District were 8.55, 8.19, 8.04, 7.39, 5.78, 4.84, 4.37, 4.04, 3.90, 3.30, 2.87, 2.42, and 0.66 billion RMB (1 RMB = 0.1417 USD on 14 October 2019), respectively. These economic benefits added up to 64.35 billion RMB (95% CI: 45.15 to 82.02 billion RMB), accounting for 4.80% (95% CI: 3.37% to 6.12%) of the total GDP of Wuhan in 2017. Therefore, in the process of formulating a regional air quality improvement scheme, apart from establishing hierarchical emission-reduction standards and policies, policy makers should give integrated consideration to the relationship between regional economic development, environmental protection, and residents’ health benefits. Furthermore, to improve air quality, air quality compensation mechanisms can be established on the basis of the status quo and trends of air quality, population distribution, and economic development.
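
    The paper's health-impact and VSL calculations follow a standard pipeline; below is a minimal Python sketch of that pipeline, assuming the widely used log-linear exposure-response (health-impact) function. Every numeric input (population, baseline mortality, the coefficient beta, the PM2.5 decrease, and the VSL itself) is an illustrative placeholder, not a value taken from the paper.

    ```python
    import math

    def avoided_deaths(population, baseline_mortality, beta, delta_c):
        """Log-linear health-impact function: premature deaths avoided by a
        PM2.5 decrease of delta_c (ug/m3). beta is the exposure-response
        coefficient per ug/m3; the value used below is hypothetical."""
        return population * baseline_mortality * (1.0 - math.exp(-beta * delta_c))

    # Illustrative inputs -- NOT the paper's district-level data.
    population = 1_000_000        # exposed population
    baseline_mortality = 0.006    # baseline all-cause mortality (deaths/person-year)
    beta = 0.0004                 # hypothetical exposure-response coefficient
    delta_pm25 = 15.0             # assumed PM2.5 decrease, ug/m3

    deaths = avoided_deaths(population, baseline_mortality, beta, delta_pm25)

    # Monetize with a hypothetical value of statistical life (VSL), in RMB.
    VSL_RMB = 3_000_000
    benefit = deaths * VSL_RMB
    print(f"avoided deaths: {deaths:.0f}, benefit: {benefit / 1e9:.2f} billion RMB")
    ```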

    Live in the Moment: Learning Dynamics Model Adapted to Evolving Policy

    Full text link
    Model-based reinforcement learning (RL) often achieves higher sample efficiency in practice than model-free RL by learning a dynamics model to generate samples for policy learning. Previous works learn a dynamics model that fits the empirical state-action visitation distribution of all historical policies, i.e., the sample replay buffer. However, in this paper, we observe that fitting the dynamics model under the distribution of all historical policies does not necessarily benefit model prediction for the current policy, since the policy in use is constantly evolving over time. The evolving policy during training causes state-action visitation distribution shifts. We theoretically analyze how this distribution shift over historical policies affects model learning and model rollouts. We then propose a novel dynamics model learning method, named Policy-adapted Dynamics Model Learning (PDML). PDML dynamically adjusts the historical policy mixture distribution to ensure that the learned model can continually adapt to the state-action visitation distribution of the evolving policy. Experiments on a range of continuous control environments in MuJoCo show that PDML achieves significant improvement in sample efficiency and higher asymptotic performance when combined with state-of-the-art model-based RL methods. Comment: 16 pages, 5 figures.
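
    To make the core idea concrete, here is a minimal sketch of a replay buffer that tags each transition with the policy that generated it and re-weights sampling toward recent policies when drawing batches for dynamics-model training. The exponential decay is an illustrative stand-in for PDML's actual mixture-adjustment rule, and all names are hypothetical.

    ```python
    import random

    class PolicyAdaptedBuffer:
        """Replay buffer whose sampling weights decay for transitions
        collected by older policies, so the dynamics model tracks the
        visitation distribution of the evolving policy. The exponential
        decay is an illustrative choice, not the paper's exact rule."""

        def __init__(self, decay=0.9):
            self.transitions = []      # (policy_idx, s, a, s_next, r)
            self.decay = decay
            self.current_policy = 0

        def add(self, s, a, s_next, r):
            self.transitions.append((self.current_policy, s, a, s_next, r))

        def new_policy(self):
            self.current_policy += 1   # call after each policy update

        def sample(self, batch_size):
            # Older policies get exponentially smaller weight.
            weights = [self.decay ** (self.current_policy - p)
                       for (p, *_rest) in self.transitions]
            return random.choices(self.transitions, weights=weights, k=batch_size)
    ```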

    MoCoSA: Momentum Contrast for Knowledge Graph Completion with Structure-Augmented Pre-trained Language Models

    Full text link
    Knowledge Graph Completion (KGC) aims to conduct reasoning on the facts within knowledge graphs and automatically infer missing links. Existing methods can mainly be categorized as structure-based or description-based. On the one hand, structure-based methods effectively represent relational facts in knowledge graphs using entity embeddings. However, they struggle with semantically rich real-world entities due to limited structural information and fail to generalize to unseen entities. On the other hand, description-based methods leverage pre-trained language models (PLMs) to understand textual information. They exhibit strong robustness towards unseen entities. However, they have difficulty with large-scale negative sampling and often lag behind structure-based methods. To address these issues, in this paper we propose Momentum Contrast for knowledge graph completion with Structure-Augmented pre-trained language models (MoCoSA), which allows the PLM to perceive structural information through an adaptable structure encoder. To improve learning efficiency, we propose momentum hard negative sampling and intra-relation negative sampling. Experimental results demonstrate that our approach achieves state-of-the-art performance in terms of mean reciprocal rank (MRR), with improvements of 2.5% on WN18RR and 21% on OpenBG500.
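
    As a rough illustration of the momentum-contrast machinery the method builds on, the sketch below shows a MoCo-style momentum update of a key encoder together with an InfoNCE loss over one positive key and a pool of negatives. This is the generic mechanism rather than MoCoSA's structure-augmented variant; names and hyperparameters are illustrative.

    ```python
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def momentum_update(encoder_q, encoder_k, m=0.999):
        """MoCo-style update: the key encoder tracks an exponential moving
        average of the query encoder instead of receiving gradients."""
        for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
            pk.data.mul_(m).add_(pq.data, alpha=1.0 - m)

    def info_nce(q, k_pos, k_neg, temperature=0.05):
        """Contrastive loss: q (B, D) queries, k_pos (B, D) positive keys,
        k_neg (N, D) negatives (e.g., a momentum queue of hard negatives)."""
        q, k_pos, k_neg = (F.normalize(x, dim=-1) for x in (q, k_pos, k_neg))
        l_pos = (q * k_pos).sum(dim=-1, keepdim=True)      # (B, 1)
        l_neg = q @ k_neg.t()                              # (B, N)
        logits = torch.cat([l_pos, l_neg], dim=1) / temperature
        labels = torch.zeros(q.size(0), dtype=torch.long)  # positive at index 0
        return F.cross_entropy(logits, labels)
    ```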

    Emojis Decoded: Leveraging ChatGPT for Enhanced Understanding in Social Media Communications

    Full text link
    Emojis, which encapsulate semantics beyond mere words or phrases, have become prevalent in social network communications. This has spurred increasing scholarly interest in exploring their attributes and functionalities. However, emoji-related research and application face two primary challenges. First, researchers typically rely on crowd-sourcing to annotate emojis in order to understand their sentiments, usage intentions, and semantic meanings. Second, subjective interpretations by users can often lead to misunderstandings of emojis and create communication barriers. Large Language Models (LLMs) have achieved significant success in various annotation tasks, with ChatGPT demonstrating expertise across multiple domains. In our study, we assess ChatGPT's effectiveness in handling previously annotated and downstream tasks. Our objective is to validate the hypothesis that ChatGPT can serve as a viable alternative to human annotators in emoji research and that its ability to explain emoji meanings can enhance clarity and transparency in online communications. Our findings indicate that ChatGPT has extensive knowledge of emojis. It is adept at elucidating the meaning of emojis across various application scenarios and demonstrates the potential to replace human annotators in a range of tasks. Comment: 12 pages, 2-page appendix.
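
    For a sense of what replacing human annotators with an LLM looks like in practice, here is a minimal sketch of an emoji-annotation call using the OpenAI Python client. The prompt wording, label set, and model name are illustrative assumptions, not the authors' protocol.

    ```python
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def annotate_emoji(emoji: str, context: str) -> str:
        """Ask the model for a sentiment label for an emoji in context.
        Prompt and label set are illustrative, not from the paper."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model; substitute as needed
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "You annotate emoji usage. Reply with exactly one "
                            "label: positive, negative, or neutral."},
                {"role": "user",
                 "content": f"Post: {context}\nEmoji: {emoji}\nSentiment label:"},
            ],
        )
        return resp.choices[0].message.content.strip()

    print(annotate_emoji("😂", "I missed my flight again 😂"))
    ```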

    Morphological and Physiological Changes in Sedum spectabile during Flower Formation Induced by Photoperiod

    Get PDF
    Sedum spectabile is an ornamental herbaceous perennial considered a long-day plant. Varying levels of hormones and sugars possibly affect flower bud formation. This study aimed to determine the changes in endogenous hormones, sugars, and respiration levels in leaves and in apical buds. In addition, morphological changes were observed during the induction, initiation, and development of flower buds. Results showed that, under a 20-hour long day, the periods of floral induction, initiation, and development of S. spectabile were 0 d to 1 d, 2 d to 10 d, and after 11 d, respectively. A high zeatin level in apical buds was conducive to floral induction; increasing levels of gibberellin and indole acetic acid favored floral initiation; and floral development was regulated by mutually synergistic and antagonistic hormone interactions. The total starch content in leaves decreased remarkably during floral induction. Moreover, soluble sugar content increased and reached its maximum at 20 d of the treatment period. Afterward, soluble sugar content declined rapidly, the sugars probably being transported to the apical buds for rapid floral development. Furthermore, the total respiration of leaves maintained an upward trend, and the cytochrome pathway also maintained an increasing trend after the plants were treated for 20 d. Such changes may favor the morphological differentiation of apical buds during floral development.

    COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based RL

    Full text link
    Dyna-style model-based reinforcement learning contains two phases: model rollouts to generate samples for policy learning, and real-environment exploration using the current policy for dynamics model learning. However, due to the complexity of real-world environments, the learned dynamics model is inevitably imperfect, and its prediction errors can further mislead policy learning and result in sub-optimal solutions. In this paper, we propose COPlanner, a planning-driven framework for model-based methods that addresses the inaccurately learned dynamics model problem with conservative model rollouts and optimistic environment exploration. COPlanner leverages an uncertainty-aware policy-guided model predictive control (UP-MPC) component to plan for multi-step uncertainty estimation. This estimated uncertainty then serves as a penalty during model rollouts and as a bonus during real-environment exploration when choosing actions. Consequently, COPlanner can avoid model-uncertain regions through conservative model rollouts, thereby alleviating the influence of model error. Simultaneously, it actively reduces model error by exploring high-reward model-uncertain regions through optimistic real-environment exploration. COPlanner is a plug-and-play framework that can be applied to any Dyna-style model-based method. Experimental results on a series of proprioceptive and visual continuous control tasks demonstrate that both the sample efficiency and the asymptotic performance of strong model-based methods are significantly improved when combined with COPlanner. Comment: 22 pages, 17 figures.
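
    The conservative/optimistic sign flip at the heart of the framework can be sketched in a few lines: ensemble disagreement stands in for the multi-step uncertainty estimate, the one-step scoring is a simplification of UP-MPC, and all function names are hypothetical.

    ```python
    import numpy as np

    def ensemble_disagreement(models, state, action):
        """Uncertainty proxy: std of next-state predictions across an
        ensemble of learned dynamics models (a common choice; the paper
        plans multi-step estimates, which this one-step sketch simplifies)."""
        preds = np.stack([m.predict(state, action) for m in models])
        return preds.std(axis=0).mean()

    def select_action(models, reward_fn, state, candidates, lam=1.0, mode="rollout"):
        """Score actions with reward -/+ lam * uncertainty: a penalty
        (conservative) for model rollouts, a bonus (optimistic) for
        real-environment exploration."""
        sign = -1.0 if mode == "rollout" else 1.0
        scores = [reward_fn(state, a)
                  + sign * lam * ensemble_disagreement(models, state, a)
                  for a in candidates]
        return candidates[int(np.argmax(scores))]
    ```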
    • …