
    Interfacial Interaction Enhanced Rheological Behavior in PAM/CTAC/Salt Aqueous Solution—A Coarse-Grained Molecular Dynamics Study

    Interfacial interactions within a multiphase polymer solution play critical roles in processing control and mass transport in chemical engineering. However, these roles remain poorly understood due to the complexity of such systems. In this study, we used an efficient computational method, nonequilibrium molecular dynamics (NEMD) simulation, to unveil the molecular interactions and rheology of a multiphase solution containing cetyltrimethyl ammonium chloride (CTAC), polyacrylamide (PAM), and sodium salicylate (NaSal). The macroscopic rheological characteristics and shear viscosity of the polymer/surfactant solution were investigated, and the computational results agreed well with the experimental data. The relation between the characteristic time and the shear rate was consistent with a power law. By simulating the shear viscosity of the polymer/surfactant solution, we found that the phase transition of micelles within the mixture led to a non-monotonic change in the viscosity of the mixed solution with increasing CTAC or PAM concentration. We expect this optimized molecular dynamics approach to advance the current understanding of chemical-physical interactions within polymer/surfactant mixtures at the molecular level and to enable emerging engineering solutions.
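    As a hedged illustration of the power-law relation between viscosity and shear rate mentioned in the abstract, the sketch below fits an Ostwald-de Waele power-law model to synthetic shear-viscosity data; the parameter values and the data are assumptions for illustration only, not results from the study.

```python
# A minimal sketch: fitting a power-law model eta = K * gamma_dot**(n - 1)
# to hypothetical shear-viscosity data, as one might do with NEMD output.
# K, n, and the synthetic data below are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def power_law(gamma_dot, K, n):
    """Ostwald-de Waele power-law viscosity model."""
    return K * gamma_dot ** (n - 1.0)

# Synthetic shear-thinning data (placeholder for NEMD measurements).
gamma_dot = np.logspace(0, 4, 20)             # shear rates, 1/s
eta_true = 0.5 * gamma_dot ** (0.4 - 1.0)     # "true" viscosity, Pa.s
rng = np.random.default_rng(0)
eta_obs = eta_true * (1.0 + 0.05 * rng.standard_normal(gamma_dot.size))

(K_fit, n_fit), _ = curve_fit(power_law, gamma_dot, eta_obs, p0=(1.0, 0.5))
print(f"K = {K_fit:.3f}, n = {n_fit:.3f}  (n < 1 => shear thinning)")
```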

    Wasserstein Distance guided Adversarial Imitation Learning with Reward Shape Exploration

    Generative adversarial imitation learning (GAIL) provides an adversarial learning framework for imitating expert policies from demonstrations in high-dimensional continuous tasks. However, GAIL and almost all of its extensions rely on a single logarithmic form of reward function in the adversarial training strategy, together with the Jensen-Shannon (JS) divergence, for all complex environments. A fixed logarithmic reward function may not suit every complex task, and the vanishing-gradient problem caused by the JS divergence can harm the adversarial learning process. In this paper, we propose a new algorithm named Wasserstein Distance guided Adversarial Imitation Learning (WDAIL) to improve the performance of imitation learning (IL). Our method makes three improvements: (a) introducing the Wasserstein distance to obtain a more appropriate measure in the adversarial training process, (b) using proximal policy optimization (PPO) in the reinforcement learning stage, which is much simpler to implement and makes the algorithm more efficient, and (c) exploring different reward function shapes to suit different tasks and improve performance. The experimental results show that the learning procedure remains remarkably stable and achieves strong performance on the complex continuous control tasks of MuJoCo.
    Comment: M. Zhang and Y. Wang contributed equally to this work.
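    A hedged sketch of improvements (a) and (c): a Wasserstein critic trained with a gradient penalty, plus a few candidate reward shapes built from the critic score. The network sizes, the gradient-penalty coefficient, and the specific set of shapes are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch (not WDAIL's exact design) of a Wasserstein critic for
# adversarial imitation learning, plus candidate reward shapes. Network
# sizes, the gradient-penalty weight, and the shape set are assumptions.
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def critic_loss(critic, expert_sa, policy_sa, gp_coef: float = 10.0):
    """WGAN-style critic loss with gradient penalty (equal batch sizes assumed)."""
    exp_score = critic(*expert_sa).mean()
    pol_score = critic(*policy_sa).mean()
    # Gradient penalty on interpolated samples keeps the critic ~1-Lipschitz.
    eps = torch.rand(expert_sa[0].size(0), 1)
    inter_obs = (eps * expert_sa[0] + (1 - eps) * policy_sa[0]).requires_grad_(True)
    inter_act = (eps * expert_sa[1] + (1 - eps) * policy_sa[1]).requires_grad_(True)
    score = critic(inter_obs, inter_act).sum()
    grads = torch.autograd.grad(score, [inter_obs, inter_act], create_graph=True)
    grad_norm = torch.cat(grads, dim=-1).norm(2, dim=-1)
    gp = ((grad_norm - 1.0) ** 2).mean()
    return pol_score - exp_score + gp_coef * gp

def shaped_reward(d: torch.Tensor, shape: str = "linear") -> torch.Tensor:
    """Candidate reward shapes built from the critic output d (assumed set)."""
    if shape == "linear":
        return d                                  # raw critic score
    if shape == "sigmoid":
        return torch.sigmoid(d)                   # bounded in (0, 1)
    if shape == "softplus":
        return torch.nn.functional.softplus(d)    # non-negative
    raise ValueError(shape)
```

    In a full WDAIL-style loop, PPO would then optimize the policy against `shaped_reward(critic(obs, act), shape)` as the surrogate reward signal.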

    SEABO: A Simple Search-Based Method for Offline Imitation Learning

    Offline reinforcement learning (RL) has attracted much attention due to its ability to learn from static offline datasets, eliminating the need to interact with the environment. Nevertheless, the success of offline RL relies heavily on offline transitions annotated with reward labels. In practice, the reward function often must be hand-crafted, which can be difficult, labor-intensive, or inefficient. To tackle this challenge, we focus on the offline imitation learning (IL) setting and aim to derive a reward function from expert data and unlabeled data. To that end, we propose a simple yet effective search-based offline IL method, tagged SEABO. SEABO assigns a larger reward to a transition that is close to its nearest neighbor in the expert demonstration, and a smaller reward otherwise, all in an unsupervised manner. Experimental results on a variety of D4RL datasets indicate that SEABO can achieve performance competitive with offline RL algorithms given ground-truth rewards, using only a single expert trajectory, and can outperform prior reward-learning and offline IL methods across many tasks. Moreover, we demonstrate that SEABO also works well when the expert demonstrations contain only observations. Our code is publicly available at https://github.com/dmksjfl/SEABO.
    Comment: To appear in ICLR2024.
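    The core idea lends itself to a compact sketch: index the expert transitions with a KD-tree and squash each unlabeled transition's nearest-neighbor distance into a reward. The exponential squashing and its coefficient below are assumptions for illustration; consult the linked repository for the exact formulation.

```python
# A minimal sketch of the search-based reward idea behind SEABO: reward an
# unlabeled transition by its distance to the nearest expert transition.
# The exp(-beta * d) squashing and beta are assumptions here; see the
# linked repository for the exact formulation.
import numpy as np
from scipy.spatial import cKDTree

def seabo_style_rewards(expert_sa: np.ndarray,
                        batch_sa: np.ndarray,
                        beta: float = 1.0) -> np.ndarray:
    """expert_sa, batch_sa: arrays of concatenated (state, action) vectors."""
    tree = cKDTree(expert_sa)              # one-time index over expert data
    dist, _ = tree.query(batch_sa, k=1)    # nearest-neighbor distances
    return np.exp(-beta * dist)            # close to expert -> reward near 1

# Toy usage with random vectors standing in for D4RL transitions.
rng = np.random.default_rng(0)
expert = rng.normal(size=(100, 6))
batch = rng.normal(size=(5, 6))
print(seabo_style_rewards(expert, batch))
```

    Since the reward depends only on nearest-neighbor distance, the same query can be run over states alone when the demonstrations contain only observations.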