    Modelling Adaptation through Social Allostasis: Modulating the Effects of Social Touch with Oxytocin in Embodied Agents

    Social allostasis is a mechanism of adaptation that permits individuals to dynamically adjust their physiology to changing physical and social conditions. Oxytocin (OT) is widely considered to be one of the hormones that drives and adapts social behaviours. While its precise effects remain unclear, two areas where OT may promote adaptation are by modulating social salience and by modulating the internal responses to performing social behaviours. Working towards a model of dynamic adaptation through social allostasis in simulated embodied agents, and extending our previous work on OT-inspired modulation of social salience, we present a model and experiments that investigate the effects and adaptive value of allostatic processes based on hormonal (OT) modulation of affective elements of a social behaviour. In particular, we test the effects and adaptive value of modulating the degree of satisfaction derived from tactile contact in a social motivation context, in a small simulated agent society, across different environmental challenges (related to the availability of food) and under OT modulation of social salience as a motivational incentive. Our results show that these modulatory mechanisms have different (positive or negative) adaptive value across different groups and under different environmental circumstances, supporting the context-dependent nature of OT put forward by the interactionist approach to OT modulation in biological agents. For simulation models, this means that OT modulation of the mechanisms we have described should be context-dependent in order to maximise the viability of our socially adaptive agents, illustrating the relevance of social allostasis mechanisms.
    Peer reviewed. Final published version.
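    The abstract describes OT acting on two levers of a motivational architecture: the salience of social stimuli and the satisfaction derived from tactile contact. As a rough illustration of how such hormone-modulated motivation could be wired in a simulated agent, here is a minimal Python sketch; every class, constant, and update rule is an illustrative assumption, not the authors' actual model.

    ```python
    # Hypothetical sketch of OT-modulated social motivation; all names and
    # update rules are illustrative assumptions, not the paper's model.
    class Agent:
        def __init__(self):
            self.energy = 1.0       # physiological variable (food-related)
            self.social_need = 0.5  # deficit driving contact seeking
            self.oxytocin = 0.0     # hormone level, raised by social touch

        def social_salience(self):
            # OT increases the incentive value of social stimuli, so
            # social cues compete more strongly with foraging.
            return self.social_need * (1.0 + self.oxytocin)

        def receive_touch(self):
            # OT modulates the degree of satisfaction of tactile contact:
            # higher OT makes each contact reduce the social deficit more.
            satisfaction = 0.2 * (1.0 + self.oxytocin)
            self.social_need = max(0.0, self.social_need - satisfaction)
            self.oxytocin = min(1.0, self.oxytocin + 0.1)

        def step(self, food_available: bool):
            # Motivation selection: act on the strongest current drive.
            hunger = 1.0 - self.energy
            if hunger > self.social_salience() and food_available:
                self.energy = min(1.0, self.energy + 0.3)  # eat
            else:
                self.receive_touch()                       # seek contact
            self.energy -= 0.05    # metabolic cost per step
            self.oxytocin *= 0.95  # hormonal decay
    ```

    Under this kind of coupling, the adaptive value of OT modulation is naturally context-dependent: when food is scarce, a strong OT boost to social salience can pull agents away from foraging and hurt viability, matching the abstract's observation that the same mechanism can be beneficial or detrimental across environments.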

    Mutual Alignment Transfer Learning

    Training robots for operation in the real world is a complex, time-consuming, and potentially expensive task. Despite the significant success of reinforcement learning in games and simulations, research on real robot applications has not matched this progress. While sample complexity can be reduced by training policies in simulation, such policies can perform sub-optimally on the real platform given imperfect calibration of the model dynamics. We present an approach, supplemental to fine-tuning on the real robot, that further benefits from parallel access to a simulator during training and reduces sample requirements on the real robot. The approach harnesses auxiliary rewards to guide the exploration of the real-world agent based on the proficiency of the agent in simulation, and vice versa. In this context, we demonstrate empirically that this reciprocal alignment benefits both agents, as the agent in simulation can adjust its behaviour to optimise for states commonly visited by the real-world agent.
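    The abstract does not spell out how the auxiliary rewards are computed. One common way to realise such mutual alignment is with a discriminator over state visitation, rewarding each agent for reaching states that resemble the other domain's. The sketch below illustrates that idea under this assumption; the network size, reward scaling, and function names are all hypothetical.

    ```python
    # Hypothetical sketch of mutual-alignment auxiliary rewards: a
    # discriminator tries to tell simulation states from real-robot states,
    # and each agent is rewarded for visiting states that look like the
    # other domain's. Details here are assumptions, not the paper's spec.
    import torch
    import torch.nn as nn

    STATE_DIM = 8  # assumed state dimensionality

    discriminator = nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 1),  # logit: positive => "looks like a real state"
    )
    opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def update_discriminator(sim_states, real_states):
        """One gradient step separating sim (label 0) from real (label 1)."""
        logits = discriminator(torch.cat([sim_states, real_states]))
        labels = torch.cat([torch.zeros(len(sim_states), 1),
                            torch.ones(len(real_states), 1)])
        loss = bce(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    def auxiliary_rewards(states, target_real: bool):
        """Reward states for resembling the other domain's visitation."""
        with torch.no_grad():
            p_real = torch.sigmoid(discriminator(states)).squeeze(-1)
        if target_real:
            return torch.log(p_real + 1e-8)       # sim agent -> real-like
        return torch.log(1.0 - p_real + 1e-8)     # real agent -> sim-like
    ```

    In this sketch the real robot's auxiliary reward pulls it toward states the proficient simulated agent visits, while the simulated agent is pulled toward states the real robot commonly visits, which is the reciprocal alignment the abstract describes.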