4 research outputs found

    Integration of Imitation Learning using GAIL and Reinforcement Learning using Task-achievement Rewards via Probabilistic Graphical Model

    Integration of reinforcement learning and imitation learning is an important problem that has long been studied in the field of intelligent robotics. Reinforcement learning optimizes policies to maximize the cumulative reward, whereas imitation learning attempts to extract general knowledge from trajectories demonstrated by experts, i.e., demonstrators. Because each has its own drawbacks, methods that combine the two to compensate for each other's drawbacks have been explored. However, many of these methods are heuristic and lack a solid theoretical basis. In this paper, we present a new theory for integrating reinforcement and imitation learning by extending the probabilistic generative model framework for reinforcement learning, "plan by inference". We develop a new probabilistic graphical model for reinforcement learning with multiple types of rewards: a probabilistic graphical model for Markov decision processes with multiple optimality emissions (pMDP-MO). Furthermore, we demonstrate that the integrated learning of reinforcement and imitation can be formulated as probabilistic inference of policies on the pMDP-MO by treating the output of the discriminator in generative adversarial imitation learning as an additional optimality emission observation. We adapt generative adversarial imitation learning and task-achievement rewards to our proposed framework, achieving significantly better performance than agents trained with reinforcement learning or imitation learning alone. Experiments demonstrate that our framework successfully integrates imitation and reinforcement learning even when the number of demonstrators is small. (Comment: submitted to Advanced Robotics.)
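The key move in this abstract is treating the GAIL discriminator output as an extra optimality emission, so that under the plan-by-inference view its log-likelihood simply adds to the task-achievement reward. A minimal sketch of that reward combination (function name, signature, and the weight `beta` are illustrative assumptions, not the paper's API):

```python
import numpy as np

def combined_reward(task_reward, disc_prob, beta=1.0):
    """Combine a task-achievement reward with a GAIL discriminator score.

    Under planning-as-inference, each optimality variable contributes a
    log-likelihood term to the objective; treating the discriminator
    output D(s, a) in [0, 1] as an additional optimality emission means
    its log-probability adds to the task reward. `beta` is a hypothetical
    weight trading off the two terms.
    """
    imitation_term = np.log(disc_prob + 1e-8)  # avoid log(0)
    return task_reward + beta * imitation_term
```

A state-action pair the discriminator judges expert-like (`disc_prob` near 1) contributes almost no penalty, while non-expert-like behavior is penalized, so the policy is pulled toward both task success and the demonstrated trajectories.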

    Stretching Actin Filaments within Cells Enhances their Affinity for the Myosin II Motor Domain

    To test the hypothesis that the myosin II motor domain (S1) preferentially binds to specific subsets of actin filaments in vivo, we expressed GFP-fused S1 with mutations that enhanced its affinity for actin in Dictyostelium cells. Consistent with the hypothesis, the GFP-S1 mutants were localized along specific portions of the cell cortex. Comparison with rhodamine-phalloidin staining in fixed cells demonstrated that the GFP-S1 probes preferentially bound to actin filaments in the rear cortex and cleavage furrows, where actin filaments are stretched by interaction with endogenous myosin II filaments. The GFP-S1 probes were similarly enriched in cortex stretched passively by traction forces in the absence of myosin II, or by external forces applied with a microcapillary. The preferential binding of GFP-S1 mutants to stretched actin filaments did not depend on cortexillin I or PTEN, two proteins previously implicated in the recruitment of myosin II filaments to the stretched cortex. These results suggest that it is the stretching of the actin filaments itself that increases their affinity for the myosin II motor domain. In contrast, the GFP-fused myosin I motor domain did not localize to stretched actin filaments, suggesting that the motor domains' different preferences for different actin filament structures contribute to the distinct intracellular localizations of myosin I and II. We propose a scheme in which the stretching of actin filaments, the preferential binding of myosin II filaments to stretched actin filaments, and myosin II-dependent contraction form a positive feedback loop that contributes to the stabilization of cell polarity and to the responsiveness of cells to external mechanical stimuli.

    Multi-view dreaming: multi-view world model with contrastive learning

    In this paper, we propose Multi-View Dreaming, a novel reinforcement learning agent for integrated recognition and control from multi-view observations, extending Dreaming. Most current reinforcement learning methods assume a single-view observation space, which imposes limitations on the observed data, such as a lack of spatial information and occlusions. This makes it difficult to obtain ideal observational information from the environment and is a bottleneck for real-world robotics applications. In this paper, we use contrastive learning to train a shared latent space between different viewpoints and show how the Product of Experts approach can be used to integrate and control the probability distributions of the latent states for multiple viewpoints. We also propose Multi-View DreamingV2, a variant of Multi-View Dreaming that models the latent state with a categorical distribution instead of a Gaussian. Experiments show that the proposed method outperforms simple extensions of existing methods in a realistic robot control task.
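The Product of Experts fusion the abstract refers to has a closed form when each viewpoint's latent is a diagonal Gaussian: the product of Gaussians is again Gaussian, with precision equal to the sum of the per-view precisions and mean equal to the precision-weighted average of the per-view means. A minimal sketch (function and array shapes are illustrative assumptions, not the paper's code):

```python
import numpy as np

def product_of_gaussians(mus, sigmas):
    """Fuse per-viewpoint Gaussian latents with a Product of Experts.

    mus, sigmas: arrays of shape (n_views, latent_dim) holding each
    view's posterior mean and standard deviation. The product of the
    Gaussians has precision = sum of precisions and mean = the
    precision-weighted average of the means, so confident (low-variance)
    views dominate the fused latent.
    """
    precisions = 1.0 / np.square(sigmas)          # (n_views, latent_dim)
    total_precision = precisions.sum(axis=0)       # (latent_dim,)
    mu = (precisions * mus).sum(axis=0) / total_precision
    sigma = np.sqrt(1.0 / total_precision)
    return mu, sigma
```

With two equal-confidence views the fused mean is simply their average, and the fused standard deviation shrinks, reflecting that agreeing viewpoints jointly constrain the latent state more tightly than either alone.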

    Gastric-Type Adenocarcinoma of the Uterine Cervix Associated with Poor Response to Definitive Radiotherapy

    We aimed to evaluate the response to definitive radiotherapy (RT) for cervical cancer based on histological subtype and to investigate prognostic factors in adenocarcinoma (AC). Of the 396 patients treated with definitive RT between January 2010 and July 2020, 327 met the inclusion criteria, including 275 with squamous cell carcinoma (SCC) and 52 with AC, restaged according to the 2018 International Federation of Gynecology and Obstetrics staging system. Patient characteristics, response to RT, and prognoses of SCC and AC were evaluated. The complete response (CR) rates were 92.4% and 53.8% for SCC and AC, respectively (p < 0.05). Definitive RT for cervical cancer was significantly less effective for AC than for SCC. Gastric-type adenocarcinoma (GAS) was the only independent prognostic factor associated with non-CR in AC.