2,746 research outputs found
How Does the Artemisia capillaris Population Respond to Grazing Management and Rain Reduction?
Climate and human activities, such as drought events and livestock grazing, are generally coupled in shaping the structure and function of grassland ecosystems. However, most previous studies have focused on the isolated effects of grazing or drought on grassland ecosystems, with little attention paid to their combined effects. Further, we know little about how plants respond to grazing and drought at the population level. We conducted a grazing-regime (enclosure, stopped grazing, and heavy grazing) and drought-manipulation experiment in a typical steppe to explore how grassland plants respond to ongoing drought and grazing regimes at the population level. We selected a dominant species of this typical steppe, Artemisia capillaris, as the research object and conducted a three-year observation of its population traits (plant height, crown diameter, growth-point density and reproductive-branch density). Both grazing and drought reduced the biomass of Artemisia capillaris, but the effect of grazing regime was greater than that of drought. Different grazing regimes had a strong effect on the growth traits of the Artemisia capillaris population in the early stages, while drought had a strong effect on them in the later stages. Reproduction of Artemisia capillaris was more responsive to grazing regime. Our results suggest that grazing and drought could alter the reproductive and growth strategies of Artemisia capillaris.
Linking business ecosystem and natural ecosystem together: a sustainable pathway for future industrialization
China has emerged as the second-largest economy in the world during the globalization of the last forty years. However, in the last decade, Chinese manufacturing has also shown its dark side, causing a wide range of concerns globally and directly jeopardizing people’s health through serious pollution. How can the world continue its industrialization without damaging the natural environment? This paper proposes a new framework, ‘IE3’, that integrates three domains of knowledge: Industrial Entrepreneurship, Industrial Engineering and Industrial Ecology. The IE3 model offers a potential answer for the future development pathway of industrialization, shifting the pursuit from quantity to quality by considering resource efficiency and ecological efficiency. The novelty of the research lies in incorporating three originally separate theories into one comprehensive system.
CASOG: Conservative Actor-critic with SmOoth Gradient for Skill Learning in Robot-Assisted Intervention
Robot-assisted intervention has shown reduced radiation exposure to
physicians and improved precision in clinical trials. However, existing
vascular robotic systems follow a master-slave control mode and rely entirely on
manual commands. This paper proposes a novel offline reinforcement learning
algorithm, Conservative Actor-critic with SmOoth Gradient (CASOG), to learn
manipulation skills from human demonstrations on vascular robotic systems. The
proposed algorithm conservatively estimates Q-function and smooths gradients of
convolution layers to deal with distribution shift and overfitting issues.
Furthermore, to focus on complex manipulations, transitions with larger
temporal-difference error are sampled with higher probability. Comparative
experiments in a pre-clinical environment demonstrate that CASOG can deliver
the guidewire to the target with a success rate of 94.00% and a mean of 14.07
backward steps, performing closer to humans and better than prior offline reinforcement
learning methods. These results indicate that the proposed algorithm is
promising for improving the autonomy of vascular robotic systems.
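The prioritized sampling the CASOG abstract describes, drawing transitions with larger temporal-difference error more often, can be sketched as follows. This is a minimal illustration only; the priority exponent `alpha` and the constant `eps` are illustrative assumptions, not values from the paper.

```python
import random

def td_error_sampling_probs(td_errors, alpha=0.6, eps=1e-6):
    """Turn absolute TD errors into sampling probabilities.

    Transitions with larger |TD error| receive higher probability,
    as the abstract describes. `alpha` and `eps` are illustrative
    defaults, not values taken from the paper.
    """
    priorities = [(abs(e) + eps) ** alpha for e in td_errors]
    total = sum(priorities)
    return [p / total for p in priorities]

# Usage: draw a mini-batch index set biased toward hard transitions.
td = [0.1, 2.0, 0.5, 0.05]
probs = td_error_sampling_probs(td)
batch = random.choices(range(len(td)), weights=probs, k=2)
```

In a full replay buffer the probabilities would be recomputed as TD errors are updated after each training step.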
Reduced expression of SMAD4 in gliomas correlates with progression and survival of patients
Background: To examine the expression of SMAD4 at the gene and protein levels in glioma samples of different WHO grades and its association with survival. Methods: Two hundred fifty-two glioma specimens and 42 normal control tissues were collected. Immunohistochemistry, quantitative real-time PCR and Western blot analysis were carried out to investigate the expression of SMAD4. The Kaplan-Meier method and Cox's proportional hazards model were used in the survival analysis. Results: Immunohistochemistry showed that SMAD4 expression was decreased in glioma. SMAD4 mRNA and protein levels were both lower in glioma than in controls on real-time PCR and Western blot analysis (both P < 0.001). In addition, its expression levels decreased from grade I to grade IV glioma according to the results of real-time PCR, immunohistochemistry and Western blot analysis. Moreover, the survival rate of SMAD4-positive patients was higher than that of SMAD4-negative patients. Multivariate analysis further confirmed that loss of SMAD4 is a significant and independent prognostic indicator in glioma. Conclusions: Our data provide convincing evidence, for the first time, that reduced expression of SMAD4 at the gene and protein levels correlates with poor outcome in patients with glioma. SMAD4 may play an inhibitory role in the development of glioma and may be a potential predictor of glioma prognosis.
DOMAIN: MilDly COnservative Model-BAsed OfflINe Reinforcement Learning
Model-based reinforcement learning (RL), which learns an environment model from
an offline dataset and generates additional out-of-distribution model data, has
become an effective approach to the distribution-shift problem in offline RL.
Because of the gap between the learned and the actual environment, conservatism
should be incorporated into the algorithm to balance accurate offline data
against imprecise model data. The conservatism of current algorithms mostly
relies on model uncertainty estimation. However, uncertainty estimation is
unreliable and leads to poor performance in certain scenarios, and previous
methods ignore differences among the model data, which results in excessive
conservatism. This paper therefore proposes a milDly cOnservative Model-bAsed
offlINe RL algorithm (DOMAIN) that addresses these issues without estimating
model uncertainty.
DOMAIN introduces an adaptive sampling distribution over model samples, which
can adaptively adjust the penalty on model data. We theoretically demonstrate
that the Q value learned by DOMAIN outside the data-supported region is a lower
bound of the true Q value, that DOMAIN is less conservative than previous
model-based offline RL algorithms, and that it carries a safe-policy-improvement
guarantee. Extensive experiments show that DOMAIN outperforms prior RL
algorithms on the D4RL benchmark and achieves better performance than other RL
algorithms on tasks that require generalization.
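The per-sample penalty idea in the DOMAIN abstract, leaving real offline data unpenalized while pushing down targets for model-generated data, might be sketched like this. Here `penalty_weight` is a hypothetical per-sample coefficient standing in for the paper's adaptive sampling distribution, not its actual formulation.

```python
def penalized_target(reward, next_q, is_model_sample, penalty_weight, gamma=0.99):
    """Bellman target with a per-sample penalty on model-generated data.

    Real offline transitions keep the standard target r + gamma * Q';
    model rollout transitions are penalized by `penalty_weight`, a
    hypothetical adaptive coefficient illustrating DOMAIN's idea of
    treating model samples differently rather than uniformly.
    """
    penalty = penalty_weight if is_model_sample else 0.0
    return reward - penalty + gamma * next_q

# A real-data target versus a penalized model-data target:
real_target = penalized_target(1.0, 2.0, is_model_sample=False, penalty_weight=0.5)
model_target = penalized_target(1.0, 2.0, is_model_sample=True, penalty_weight=0.5)
```

Making the penalty vary per sample, rather than applying one uniform pessimism term, is what the abstract means by exploiting "differences among the model data".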
CROP: Conservative Reward for Model-based Offline Policy Optimization
Offline reinforcement learning (RL) aims to optimize policy using collected
data without online interactions. Model-based approaches are particularly
appealing for addressing offline RL challenges due to their capability to
mitigate the limitations of offline data through data generation using models.
Prior research has demonstrated that introducing conservatism into the model or
Q-function during policy optimization can effectively alleviate the prevalent
distribution drift problem in offline RL. However, the investigation into the
impacts of conservatism in reward estimation is still lacking. This paper
proposes a novel model-based offline RL algorithm, Conservative Reward for
model-based Offline Policy optimization (CROP), which conservatively estimates
the reward in model training. To achieve a conservative reward estimation, CROP
simultaneously minimizes the estimation error and the reward of random actions.
Theoretical analysis shows that this conservative reward mechanism leads to a
conservative policy evaluation and helps mitigate distribution drift.
Experiments on D4RL benchmarks showcase that the performance of CROP is
comparable to the state-of-the-art baselines. Notably, CROP establishes an
innovative connection between offline and online RL, highlighting that offline
RL problems can be tackled by applying online RL techniques to the empirical
Markov decision process trained with a conservative reward. The source code is
available at https://github.com/G0K0URURI/CROP.git.
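The conservative reward objective the CROP abstract describes, minimizing estimation error on dataset actions while simultaneously minimizing the predicted reward of random actions, can be sketched as a single scalar loss. The trade-off coefficient `beta` is an illustrative assumption, not a value from the paper.

```python
def crop_reward_loss(pred_data, true_data, pred_random, beta=0.5):
    """Conservative reward loss in the spirit of CROP (a sketch).

    Combines (i) mean squared estimation error on dataset transitions
    with (ii) the mean predicted reward of random actions; minimizing
    the sum fits the reward on in-dataset actions while driving the
    predicted reward of out-of-dataset actions downward. `beta` is an
    illustrative trade-off weight, not taken from the paper.
    """
    mse = sum((p - t) ** 2 for p, t in zip(pred_data, true_data)) / len(pred_data)
    random_mean = sum(pred_random) / len(pred_random)
    return mse + beta * random_mean
```

A perfect fit with zero predicted reward on random actions gives zero loss; inflating the predicted reward of random actions raises the loss, which is what makes the learned reward conservative off-dataset.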
Basaltic and Solution Reference Materials for Iron, Copper and Zinc Isotope Measurements
Iron, Cu and Zn stable isotope systems are applied to constrain a variety of geochemical and environmental processes. Secondary reference materials have been developed by the Institute of Geology, Chinese Academy of Geological Sciences (CAGS), in collaboration with other participating laboratories, comprising three solutions (CAGS-Fe, CAGS-Cu and CAGS-Zn) and one basalt (CAGS-Basalt). These materials exhibit sufficient homogeneity and stability for use in Fe, Cu and Zn isotope-ratio determinations. Reference values were determined by inter-laboratory analytical comparisons involving up to eight participating laboratories employing MC-ICP-MS techniques, based on the unweighted means of submitted results. Isotopic compositions are reported in per mil notation relative to the reference materials IRMM-014 for Fe, NIST SRM 976 for Cu and IRMM-3702 for Zn. The reference values of the CAGS-Fe, CAGS-Cu and CAGS-Zn solutions are δ56Fe = 0.83 ± 0.06 and δ57Fe = 1.20 ± 0.12, δ65Cu = 0.57 ± 0.05, and δ66Zn = -0.79 ± 0.12 and δ68Zn = -1.65 ± 0.24, respectively. Those of CAGS-Basalt are δ56Fe = 0.15 ± 0.05, δ57Fe = 0.22 ± 0.05, δ65Cu = 0.12 ± 0.07, δ66Zn = 0.17 ± 0.11, and δ68Zn = 0.34 ± 0.21 (2s).