
    Study of phase transition of Potts model with DANN

    A transfer learning method, the domain adversarial neural network (DANN), is introduced to study the phase transition of the two-dimensional q-state Potts model. With the DANN, only a few automatically chosen labeled configurations are needed as input data; the critical points are then obtained by training the algorithm. With an additional iterative process, the critical points can be captured to an accuracy comparable to Monte Carlo simulations, as we demonstrate for q = 3, 5, 7, and 10. The type of phase transition (first- or second-order) is determined at the same time. For the second-order phase transition at q = 3, we can also calculate the critical exponent ν by data collapse. Furthermore, compared with traditional supervised learning, the DANN achieves higher accuracy at lower cost.
    Comment: 25 pages, 23 figures
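    The Monte Carlo baseline the abstract compares against can be sketched minimally. The following is an illustrative Metropolis simulation of the 2D q-state Potts model (not the paper's DANN method; all parameter names and defaults here are assumptions for demonstration):

    ```python
    import random
    import math

    def potts_metropolis(q=3, L=8, T=1.0, sweeps=200, seed=0):
        """Metropolis Monte Carlo for the 2D q-state Potts model on an L x L
        lattice with periodic boundaries. Energy E = -sum over bonds of
        delta(s_i, s_j); returns the spin configuration and energy per site."""
        rng = random.Random(seed)
        spins = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

        def site_energy(i, j):
            # Energy of site (i, j) with its four nearest neighbours.
            s = spins[i][j]
            e = 0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if spins[(i + di) % L][(j + dj) % L] == s:
                    e -= 1
            return e

        for _ in range(sweeps):
            for _ in range(L * L):
                i, j = rng.randrange(L), rng.randrange(L)
                old = spins[i][j]
                e_old = site_energy(i, j)
                spins[i][j] = rng.randrange(q)
                e_new = site_energy(i, j)
                # Metropolis acceptance: accept downhill moves always,
                # uphill moves with probability exp(-(e_new - e_old) / T).
                if e_new > e_old and rng.random() >= math.exp(-(e_new - e_old) / T):
                    spins[i][j] = old  # reject the proposed flip

        # Each bond is counted twice when summing per-site energies.
        total = sum(site_energy(i, j) for i in range(L) for j in range(L)) / 2
        return spins, total / (L * L)
    ```

    Sweeping T across the expected transition region and measuring observables such as the energy or magnetization is how Monte Carlo estimates of the critical point are typically obtained.
    
    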

    Hierarchical reinforcement learning for real-time strategy games

    Real-Time Strategy (RTS) games can be abstracted as resource-allocation problems applicable to many fields and industries. We consider a simplified custom RTS game focused on mid-level combat, using reinforcement learning (RL) algorithms. This paper makes several contributions to game playing with RL. First, we combine hierarchical RL with a multi-layer perceptron (MLP) that receives higher-order inputs for increased learning speed and performance. Second, we compare Q-learning against Monte Carlo learning as the reinforcement learning algorithm. Third, because the teams in the RTS game are multi-agent systems, we examine two different methods for assigning rewards to agents. Experiments are performed against two different fixed opponents. The results show that the combination of Q-learning and individual rewards yields the highest win rate against both opponents and is able to defeat the opponent within 26 training games.
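    The Q-learning update the abstract refers to can be sketched on a toy environment. The chain world below is an assumption standing in for the RTS combat task, which the abstract does not specify; only the tabular Q-learning update itself is the point:

    ```python
    import random

    def q_learning(n_states=6, n_actions=2, episodes=500, alpha=0.5,
                   gamma=0.9, epsilon=0.1, seed=0):
        """Tabular Q-learning on a toy chain: action 1 moves right, action 0
        moves left; reaching the last state gives reward +1 and ends the
        episode. Returns the learned Q-table."""
        rng = random.Random(seed)
        Q = [[0.0] * n_actions for _ in range(n_states)]

        for _ in range(episodes):
            s = 0
            while s != n_states - 1:
                # Epsilon-greedy action selection.
                if rng.random() < epsilon:
                    a = rng.randrange(n_actions)
                else:
                    a = max(range(n_actions), key=lambda x: Q[s][x])
                s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
                r = 1.0 if s2 == n_states - 1 else 0.0
                # One-step temporal-difference (Q-learning) update.
                Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                s = s2
        return Q
    ```

    Unlike Monte Carlo learning, which waits for the end of an episode (here, the end of a game) before updating value estimates from the full return, Q-learning bootstraps from the current estimate of the next state after every step, which is one practical reason the two can differ in learning speed.
    
    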

    Encouraging children to think counterfactually enhances blocking in a causal learning task

    According to a higher-order reasoning account, inferential reasoning processes underpin the widely observed cue-competition effect of blocking in causal learning. The inference required for blocking has been described as modus tollens (if p then q; not q; therefore not p). Young children are known to have difficulties with this type of inference, but research with adults suggests that the inference is easier if participants think counterfactually. In this study, 100 children (51 five-year-olds and 49 six- to seven-year-olds) were assigned to two types of pretraining groups. The counterfactual group observed demonstrations of cues paired with outcomes and answered questions about what the outcome would have been if the causal status of the cues had been different, whereas the factual group answered factual questions about the same demonstrations. Children then completed a causal learning task. Counterfactual pretraining enhanced levels of blocking as well as modus tollens reasoning, but only for the younger children. These findings provide new evidence for an important role of inferential reasoning in causal learning.
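    The blocking effect itself (as distinct from the inferential account this paper argues for) is classically illustrated with the associative Rescorla–Wagner model. The sketch below shows only the cue-competition phenomenon under that standard model; parameter values are illustrative assumptions, not from the study:

    ```python
    def rescorla_wagner(trials, alpha=0.3, lam=1.0):
        """Minimal Rescorla-Wagner update: each trial is (set_of_cues, outcome).
        Associative strength for all present cues moves toward the shared
        prediction error (lam - total prediction)."""
        V = {}  # associative strength per cue
        for cues, outcome in trials:
            total = sum(V.get(c, 0.0) for c in cues)
            error = (lam if outcome else 0.0) - total
            for c in cues:
                V[c] = V.get(c, 0.0) + alpha * error
        return V

    # Blocking design: Phase 1 pairs cue A alone with the outcome;
    # Phase 2 pairs the compound A+B with the same outcome.
    trials = [({"A"}, True)] * 20 + [({"A", "B"}, True)] * 20
    V = rescorla_wagner(trials)
    # B is "blocked": A already predicts the outcome, so the prediction
    # error in Phase 2 is near zero and B gains almost no strength.
    ```

    The inferential-reasoning account tested in the paper explains the same behavioral pattern differently, via the modus tollens inference rather than error-driven association.
    
    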