
    Association between ERCC1 and TS mRNA levels and disease free survival in colorectal cancer patients receiving oxaliplatin and fluorouracil (5-FU) adjuvant chemotherapy

    BACKGROUND: The aim was to explore the association between ERCC1 and TS mRNA levels and disease-free survival (DFS) in Chinese colorectal cancer (CRC) patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy. METHODS: A total of 112 Chinese stage II-III CRC patients were treated with one of four different chemotherapy regimens after curative surgery. TS and ERCC1 mRNA levels in the primary tumor were measured by real-time RT-PCR. Kaplan-Meier curves and log-rank tests were used for DFS analysis, and the Cox proportional hazards model was used for prognostic analysis. RESULTS: In univariate analysis, the hazard ratios (HR) for TS and ERCC1 mRNA expression levels (logTS: HR = 0.820, 95% CI = 0.600-1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852-1.304, P = 0.638) indicated no significant association between DFS and TS or ERCC1 mRNA levels. In multivariate analysis, tumor stage (IIIc: reference, P = 0.083; IIb: HR = 0.240, 95% CI = 0.080-0.724, P = 0.011; IIc: HR < 0.0001, P = 0.977; IIIa: HR = 0.179, 95% CI = 0.012-2.593, P = 0.207) was confirmed as an independent prognostic factor for DFS. Moreover, the Kaplan-Meier DFS curves showed that TS and ERCC1 mRNA levels were not significantly associated with DFS (TS: P = 0.264; ERCC1: P = 0.484). CONCLUSION: ERCC1 and TS mRNA expression levels are not suitable for predicting the DFS of Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy.
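
    As a rough illustration of the kind of survival analysis described in this abstract (a sketch, not the authors' code), the snippet below runs Kaplan-Meier curves, a log-rank test, and a Cox proportional hazards model with the Python lifelines package. The column names, input file, and the median split of TS expression are assumptions made for the example.

    ```python
    # Illustrative sketch only: Kaplan-Meier, log-rank, and Cox PH analysis of DFS
    # using lifelines. Column names, file name, and the median split are hypothetical.
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    df = pd.read_csv("crc_cohort.csv")  # hypothetical: one row per patient

    # Dichotomize TS mRNA expression at the median (assumed grouping, not from the paper)
    high = df["log_ts_mrna"] >= df["log_ts_mrna"].median()

    # Kaplan-Meier DFS curves for the two expression groups
    kmf = KaplanMeierFitter()
    for label, grp in [("TS high", df[high]), ("TS low", df[~high])]:
        kmf.fit(grp["dfs_months"], event_observed=grp["relapse"], label=label)
        print(label, "median DFS:", kmf.median_survival_time_)

    # Log-rank test between the two groups
    res = logrank_test(
        df.loc[high, "dfs_months"], df.loc[~high, "dfs_months"],
        event_observed_A=df.loc[high, "relapse"],
        event_observed_B=df.loc[~high, "relapse"],
    )
    print("log-rank p =", res.p_value)

    # Multivariate Cox proportional hazards model (covariates are placeholders)
    cph = CoxPHFitter()
    cph.fit(df[["dfs_months", "relapse", "log_ts_mrna", "log_ercc1_mrna", "stage_code"]],
            duration_col="dfs_months", event_col="relapse")
    cph.print_summary()
    ```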

    GOATS: Goal Sampling Adaptation for Scooping with Curriculum Reinforcement Learning

    In this work, we first formulate the problem of robotic water scooping using goal-conditioned reinforcement learning. This task is particularly challenging due to the complex dynamics of fluids and the need to achieve multi-modal goals: the policy is required to reach both position goals and water-amount goals, which leads to a large, convoluted goal state space. To overcome these challenges, we introduce Goal Sampling Adaptation for Scooping (GOATS), a curriculum reinforcement learning method that can learn an effective and generalizable policy for robot scooping tasks. Specifically, we use a goal-factorized reward formulation and interpolate position goal distributions and amount goal distributions to create a curriculum throughout the learning process. As a result, our proposed method outperforms the baselines in simulation and achieves 5.46% and 8.71% amount errors on bowl scooping and bucket scooping tasks, respectively, under 1000 variations of initial water states in the tank and a large goal state space. Beyond being effective in simulation, our method efficiently adapts to noisy real-robot water-scooping scenarios with diverse physical configurations and unseen settings, demonstrating superior efficacy and generalizability. Videos of this work are available on our project page: https://sites.google.com/view/goatscooping
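
    One way to read the goal-sampling curriculum sketched in this abstract (a hedged illustration, not the released GOATS implementation) is to start training on a narrow, easy goal distribution and interpolate toward the full distribution of position and water-amount goals as training progresses. All names, ranges, and the linear schedule below are assumptions for illustration.

    ```python
    # Hedged sketch of curriculum goal sampling by interpolating between an easy
    # initial goal distribution and the full target distribution. Ranges, names,
    # and the linear schedule are illustrative assumptions, not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    # Easy vs. full sampling ranges for a position goal (m) and a water-amount goal (ml)
    EASY_POS, FULL_POS = (0.40, 0.45), (0.20, 0.80)
    EASY_AMT, FULL_AMT = (45.0, 55.0), (10.0, 150.0)

    def interp_range(easy, full, alpha):
        """Linearly widen a sampling range from `easy` toward `full` as alpha -> 1."""
        lo = (1 - alpha) * easy[0] + alpha * full[0]
        hi = (1 - alpha) * easy[1] + alpha * full[1]
        return lo, hi

    def sample_goal(progress):
        """Sample a (position, amount) goal; `progress` in [0, 1] is the curriculum stage."""
        alpha = float(np.clip(progress, 0.0, 1.0))
        pos = rng.uniform(*interp_range(EASY_POS, FULL_POS, alpha))
        amt = rng.uniform(*interp_range(EASY_AMT, FULL_AMT, alpha))
        return pos, amt

    # Example: goals drawn early, midway, and late in training
    for p in (0.0, 0.5, 1.0):
        print(p, sample_goal(p))
    ```

    In practice, the progress signal driving the interpolation could also be adapted from the agent's recent success rate rather than a fixed schedule; the specifics of how GOATS adapts its goal distributions are described in the paper itself.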