2,746 research outputs found

    How do Artemisia capillaris Populations Respond to Grazing Management and Rain Reduction?

    Climate and human activities, such as drought events and livestock grazing, generally act in concert to influence the structure and function of grassland ecosystems. However, most previous studies have focused on the isolated effects of grazing or drought on grassland ecosystems, with little attention paid to their combined effects. Furthermore, we know little about how plants respond to grazing and drought at the population level. We conducted a grazing-regime (enclosure, stop grazing, and heavy grazing) and drought manipulation experiment in a typical steppe to explore how grassland plants respond to ongoing drought and grazing regimes at the population level. We selected a dominant species in this typical steppe, Artemisia capillaris, as the research object and conducted a three-year observation of its population traits (plant height, crown diameter, density of growth points, and density of reproductive branches). Both grazing and drought reduced the biomass of Artemisia capillaris, but the effect of grazing regime was greater than that of drought. Different grazing regimes had a strong effect on the growth traits of the Artemisia capillaris population in the early stages, while drought had a strong effect on them in the later stages. Reproduction of Artemisia capillaris was more responsive to grazing regime. Our results suggest that grazing and drought can alter the reproductive and growth strategies of Artemisia capillaris.

    CASOG: Conservative Actor-critic with SmOoth Gradient for Skill Learning in Robot-Assisted Intervention

    Robot-assisted intervention has shown reduced radiation exposure to physicians and improved precision in clinical trials. However, existing vascular robotic systems follow a master-slave control mode and rely entirely on manual commands. This paper proposes a novel offline reinforcement learning algorithm, Conservative Actor-critic with SmOoth Gradient (CASOG), to learn manipulation skills from human demonstrations on vascular robotic systems. The proposed algorithm conservatively estimates the Q-function and smooths the gradients of convolution layers to deal with distribution shift and overfitting issues. Furthermore, to focus on complex manipulations, transitions with larger temporal-difference error are sampled with higher probability. Comparative experiments in a pre-clinical environment demonstrate that CASOG can deliver the guidewire to the target with a success rate of 94.00% and a mean of 14.07 backward steps, performing closer to humans and better than prior offline reinforcement learning methods. These results indicate that the proposed algorithm is promising for improving the autonomy of vascular robotic systems. Comment: 13 pages, 5 figures, preprint
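    The sketch below illustrates two of the ingredients the abstract names: a conservative (push-down) penalty on the critic for out-of-distribution actions and replay sampling that favours transitions with large temporal-difference error. It is a minimal sketch, not the authors' implementation; the function names, the CQL-style form of the penalty, the [-1, 1] action range, and the coefficients alpha and beta are assumptions, and the convolution-gradient smoothing step is omitted.

```python
import torch
import torch.nn.functional as F


def conservative_critic_loss(q_net, target_q_net, policy, batch, gamma=0.99, alpha=1.0):
    """TD regression plus a penalty that keeps Q low for actions outside the data.

    Returns the loss and the per-sample |TD error|, which can serve as replay
    priorities (illustrative sketch, not the paper's exact objective).
    """
    s, a, r, s_next, done = batch
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_q_net(s_next, policy(s_next))
    q_data = q_net(s, a)
    td_loss = F.mse_loss(q_data, target)

    # Conservative term: push Q down on policy/random actions, up on dataset actions.
    a_pi = policy(s)
    a_rand = torch.rand_like(a_pi) * 2.0 - 1.0            # assumes actions in [-1, 1]
    penalty = 0.5 * (q_net(s, a_pi).mean() + q_net(s, a_rand).mean()) - q_data.mean()
    return td_loss + alpha * penalty, (q_data - target).abs().detach()


def sample_prioritized(td_errors, batch_size, beta=0.6):
    """Sample replay indices with probability proportional to |TD error| ** beta."""
    probs = (td_errors + 1e-6) ** beta
    probs = probs / probs.sum()
    return torch.multinomial(probs, batch_size, replacement=True)
```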

    New steroids from Adenophora stenanthina subsp. xifengensis


    Reduced expression of SMAD4 in gliomas correlates with progression and survival of patients

    Background: To examine the expression of SMAD4 at the gene and protein levels in glioma samples of different WHO grades and its association with survival.
    Methods: Two hundred and fifty-two glioma specimens and 42 normal control tissues were collected. Immunohistochemistry, quantitative real-time PCR and Western blot analysis were carried out to investigate the expression of SMAD4. The Kaplan-Meier method and Cox's proportional hazards model were used in the survival analysis.
    Results: Immunohistochemistry showed that SMAD4 expression was decreased in glioma. SMAD4 mRNA and protein levels were both lower in glioma than in controls on real-time PCR and Western blot analysis (both P < 0.001). In addition, expression levels decreased from grade I to grade IV glioma according to the results of real-time PCR, immunohistochemistry and Western blot. Moreover, the survival rate of SMAD4-positive patients was higher than that of SMAD4-negative patients. Multivariate analysis further confirmed that loss of SMAD4 was a significant and independent prognostic indicator in glioma.
    Conclusions: Our data provide convincing evidence, for the first time, that reduced expression of SMAD4 at the gene and protein levels correlates with poor outcome in patients with glioma. SMAD4 may play an inhibitory role during the development of glioma and may be a potential predictor of glioma prognosis.
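    The survival analysis described above pairs Kaplan-Meier curves with a multivariate Cox proportional-hazards model. Below is a minimal sketch of that workflow using the lifelines package; the tiny cohort table and its column names (months, died, smad4_positive, who_grade, age) are hypothetical stand-ins for the study's real data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical cohort table: one row per patient (illustrative values only).
df = pd.DataFrame({
    "months":         [12, 30, 8, 44, 25, 16, 20, 36],  # follow-up time
    "died":           [1, 0, 1, 0, 1, 1, 0, 1],         # event indicator
    "smad4_positive": [0, 1, 0, 1, 1, 0, 0, 1],
    "who_grade":      [4, 2, 3, 1, 3, 4, 4, 2],
    "age":            [61, 45, 58, 39, 52, 66, 49, 57],
})

# Kaplan-Meier survival curves stratified by SMAD4 status.
km = KaplanMeierFitter()
for status, group in df.groupby("smad4_positive"):
    km.fit(group["months"], group["died"], label=f"SMAD4={status}")
    print(km.median_survival_time_)

# Multivariate Cox model: is SMAD4 loss an independent prognostic factor?
cox = CoxPHFitter()
cox.fit(df, duration_col="months", event_col="died")
cox.print_summary()
```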

    DOMAIN: MilDly COnservative Model-BAsed OfflINe Reinforcement Learning

    Model-based reinforcement learning (RL), which learns an environment model from an offline dataset and uses it to generate additional, possibly out-of-distribution data, has become an effective approach to the distribution-shift problem in offline RL. Because of the gap between the learned and the actual environment, conservatism should be incorporated into the algorithm to balance accurate offline data against imprecise model data. The conservatism of current algorithms mostly relies on model uncertainty estimation. However, uncertainty estimation is unreliable and leads to poor performance in certain scenarios, and previous methods ignore differences among model-generated samples, which introduces excessive conservatism. Therefore, this paper proposes a milDly cOnservative Model-bAsed offlINe RL algorithm (DOMAIN) that addresses these issues without estimating model uncertainty. DOMAIN introduces an adaptive sampling distribution over model samples, which can adaptively adjust the penalty on model data. We theoretically demonstrate that the Q-value learned by DOMAIN outside the offline-data region is a lower bound of the true Q-value, that DOMAIN is less conservative than previous model-based offline RL algorithms, and that it enjoys a safe policy improvement guarantee. Extensive experiments show that DOMAIN outperforms prior RL algorithms on the D4RL benchmark and achieves better performance than other RL algorithms on tasks that require generalization. Comment: 13 pages, 6 figures
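    As a rough illustration of the idea, the sketch below penalizes model-generated transitions through an adaptive weighting over the model batch instead of an uncertainty estimate. The specific weighting rule (a softmax over the absolute TD error), the coefficients lam and tau, and the function names are assumptions made for the example, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def mildly_conservative_critic_loss(q_net, target_q_net, policy,
                                    real_batch, model_batch,
                                    gamma=0.99, lam=1.0, tau=1.0):
    """Bellman regression on real data plus an adaptively weighted
    push-down penalty on model-generated data (illustrative sketch)."""
    def q_and_target(batch):
        s, a, r, s_next, done = batch
        with torch.no_grad():
            target = r + gamma * (1.0 - done) * target_q_net(s_next, policy(s_next))
        return q_net(s, a), target

    # Real offline transitions: plain TD regression.
    q_real, t_real = q_and_target(real_batch)
    loss_real = F.mse_loss(q_real, t_real)

    # Model transitions: TD regression plus an adaptive penalty. Samples that are
    # less consistent with the Bellman target receive more of the penalty mass.
    q_model, t_model = q_and_target(model_batch)
    with torch.no_grad():
        weights = F.softmax((q_model - t_model).abs() / tau, dim=0)
    loss_model = F.mse_loss(q_model, t_model) + lam * (weights * q_model).sum()
    return loss_real + loss_model
```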

    CROP: Conservative Reward for Model-based Offline Policy Optimization

    Offline reinforcement learning (RL) aims to optimize a policy using previously collected data without online interactions. Model-based approaches are particularly appealing for addressing offline RL challenges because they can mitigate the limitations of offline data by generating data with a learned model. Prior research has demonstrated that introducing conservatism into the model or the Q-function during policy optimization can effectively alleviate the prevalent distribution-drift problem in offline RL. However, the impact of conservatism in reward estimation has not yet been investigated. This paper proposes a novel model-based offline RL algorithm, Conservative Reward for model-based Offline Policy optimization (CROP), which conservatively estimates the reward during model training. To achieve a conservative reward estimation, CROP simultaneously minimizes the estimation error and the reward of random actions. Theoretical analysis shows that this conservative reward mechanism leads to a conservative policy evaluation and helps mitigate distribution drift. Experiments on the D4RL benchmark show that the performance of CROP is comparable to state-of-the-art baselines. Notably, CROP establishes an innovative connection between offline and online RL, highlighting that offline RL problems can be tackled by applying online RL techniques to the empirical Markov decision process trained with a conservative reward. The source code is available at https://github.com/G0K0URURI/CROP.git
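    A minimal sketch of the conservative reward objective described above: fit the reward model to logged transitions while simultaneously pushing down the reward it predicts for random actions, so that out-of-distribution actions look unattractive during planning. The coefficient beta, the uniform random actions in [-1, 1], and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def conservative_reward_loss(reward_net, states, actions, rewards, beta=0.5):
    """Estimation error on logged (s, a, r) tuples plus a push-down term
    on the predicted reward of random actions (illustrative sketch)."""
    # Supervised fit to the offline data: minimize the estimation error.
    pred = reward_net(states, actions)
    fit_loss = F.mse_loss(pred, rewards)

    # Conservative term: minimize the reward predicted for random actions.
    random_actions = torch.rand_like(actions) * 2.0 - 1.0   # assumes actions in [-1, 1]
    conservative_term = reward_net(states, random_actions).mean()
    return fit_loss + beta * conservative_term
```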