
    The Nature of Decision-Making: Human Behavior vs. Machine Learning

    Artificial agents have often been compared to humans in their ability to categorize images or play strategic games. However, comparisons between human and artificial agents are frequently based on overall performance on a particular task rather than on the specifics of how each agent behaves. In this study, we directly compared human behaviour with a reinforcement learning (RL) model. Human participants and an RL agent navigated through different grid world environments with high- and low-value targets. The artificial agent consisted of a deep neural network trained with RL to map pixel input of a 27x27 grid world onto cardinal directions; an epsilon-greedy policy was used to maximize reward. The behaviour of both agents was evaluated under four different conditions. Results showed that both humans and RL agents consistently chose the higher reward over the lower reward, demonstrating an understanding of the task. Although both humans and RL agents weigh movement cost against reward, the machine agent weighs movement costs more heavily, trading off effort and reward differently than humans. We found that humans and RL agents both consider long-term rewards as they navigate through the world, yet unlike humans, the RL model completely disregards limitations on movement (e.g., the total number of moves available). Finally, we rotated pseudorandom grid arrangements to study how decisions change with visual differences. We unexpectedly found that the RL agent changed its behaviour due to visual rotations, yet remained less variable than humans. Overall, the similarities between humans and the RL agent show the potential of RL agents to serve as adequate models of human behaviour. Additionally, the differences between human and RL agents suggest improvements to RL methods that may improve their performance. This research compares the human mind with artificial intelligence, creating opportunities for future innovation.
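
    The epsilon-greedy action selection named in this abstract can be sketched as follows. This is a minimal illustration of the general technique, not the study's actual implementation; the action names, Q-value dictionary, and epsilon value are assumptions for the example.

    ```python
    import random

    # Cardinal-direction actions, as in a grid world navigation task.
    ACTIONS = ["north", "south", "east", "west"]

    def epsilon_greedy(q_values, epsilon=0.1):
        """Pick a uniformly random action with probability epsilon,
        otherwise the action with the highest estimated value."""
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_values[a])
    ```

    With epsilon = 0 the policy is purely greedy; raising epsilon trades exploitation for exploration, which is how such an agent continues to sample lower-valued moves while still maximizing reward on average.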

    Adaptive patch foraging in deep reinforcement learning agents

    Patch foraging is one of the most heavily studied behavioral optimization challenges in biology. However, despite its importance to biological intelligence, this behavioral optimization problem is understudied in artificial intelligence research. Patch foraging is especially amenable to study because it has a known optimal solution, which may be difficult to discover with current techniques in deep reinforcement learning. Here, we investigate deep reinforcement learning agents in an ecological patch foraging task. For the first time, we show that machine learning agents can learn to patch forage adaptively in patterns similar to biological foragers, and approach optimal patch foraging behavior when accounting for temporal discounting. Finally, we show emergent internal dynamics in these agents that resemble single-cell recordings from foraging non-human primates, complementing experimental and theoretical work on the neural mechanisms of biological foraging. This work indicates that agents interacting in complex environments with ecologically valid pressures arrive at common solutions, suggesting the emergence of foundational computations behind adaptive, intelligent behavior in both biological and artificial agents.

    Comment: Published in Transactions on Machine Learning Research (TMLR). See: https://openreview.net/pdf?id=a0T3nOP9s
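
    The temporal discounting invoked in this abstract can be illustrated with a short sketch. This is a hypothetical example of standard exponential discounting, not the paper's model: a reward received t steps in the future is weighted by gamma**t, so a discounting forager values a depleting patch less than an undiscounted one would.

    ```python
    def discounted_return(rewards, gamma=0.95):
        """Value a sequence of future rewards, down-weighting the
        reward at step t by gamma ** t (exponential discounting)."""
        return sum((gamma ** t) * r for t, r in enumerate(rewards))
    ```

    For example, a patch yielding rewards [1.0, 1.0] is worth 2.0 without discounting but only 1.5 with gamma = 0.5, which is why an agent's apparent patch-leaving optimum shifts once discounting is accounted for.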

    Motivated cognition: effects of reward, emotion, and other motivational factors across a variety of cognitive domains

    A growing body of literature has demonstrated that motivation influences cognitive processing. The breadth of these effects is extensive and spans influences of reward, emotion, and other motivational processes across all cognitive domains. As examples, this scope includes studies of emotional memory, value-based attentional capture, emotion effects on semantic processing, reward-related biases in decision making, and the role of approach/avoidance motivation in cognitive scope. Additionally, other less common forms of motivation–cognition interaction, such as self-referential and motoric processing, can also be considered instances of motivated cognition. Here I outline some of the evidence indicating the generality and pervasiveness of these motivational influences on cognition, and introduce the associated ‘research nexus’ at Collabra: Psychology.

    Primate-like perceptual decision making through deep recurrent reinforcement learning
