The value of novelty in schizophrenia
Influential models of schizophrenia suggest that patients experience incoming stimuli as excessively novel and motivating, with important consequences for hallucinatory experience and delusional belief. However, whether schizophrenia patients exhibit excessive novelty value and whether this interferes with adaptive behaviour has not yet been formally tested. Here, we employed a three-armed bandit task to investigate this hypothesis. Schizophrenia patients and healthy controls were first familiarised with a group of images and then asked to repeatedly choose between familiar and unfamiliar images associated with different monetary reward probabilities. By fitting a reinforcement-learning model we were able to estimate the values attributed to familiar and unfamiliar images when first presented in the context of the decision-making task. In line with our hypothesis, we found increased preference for newly introduced images (irrespective of whether these were familiar or unfamiliar) in patients compared to healthy controls, and found that this preference correlated with the severity of hallucinatory experience. In addition, we found a correlation between value assigned to novel images and task performance, suggesting that excessive novelty value may interfere with optimal learning in patients, putatively through the disruption of the mechanisms regulating exploration versus exploitation. Our results suggest excessive novelty value in patients, whereby even previously seen stimuli acquire higher value as the result of their exposure in a novel context – a form of ‘hyper novelty’ which may explain why patients are often attracted by familiar stimuli experienced as new.
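The modelling idea described above can be illustrated with a minimal sketch: a Q-learning agent on a three-armed bandit whose free parameter is the initial value assigned to newly introduced arms, which plays the role of a "novelty value". This is a hypothetical parameterisation for illustration; the paper's actual reinforcement-learning model, learning rates, and reward schedule are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bandit(q_init_novel, alpha=0.2, beta=3.0, n_trials=200,
                    reward_probs=(0.7, 0.5, 0.3)):
    """Three-armed bandit with a free initial value for newly introduced arms.

    `q_init_novel` acts as the novelty-value parameter: larger values bias
    early choices toward arms the agent has not yet sampled.
    (Hypothetical model -- not the authors' fitted model.)
    """
    q = np.full(3, q_init_novel, dtype=float)  # initial value = novelty value
    choices, rewards = [], []
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax choice rule
        a = rng.choice(3, p=p)
        r = float(rng.random() < reward_probs[a])      # Bernoulli reward
        q[a] += alpha * (r - q[a])                     # delta-rule update
        choices.append(a)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

# A higher novelty value spreads early choices across arms, which can delay
# convergence on the best arm -- the "interference with optimal learning"
# effect the abstract describes.
_, r_low = simulate_bandit(q_init_novel=0.0)
_, r_high = simulate_bandit(q_init_novel=1.0)
```

In a fitting context, `q_init_novel` would be estimated per participant from observed choices (e.g. by maximum likelihood over the softmax probabilities) and then compared between groups.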
What motivates academic dishonesty in students? A reinforcement sensitivity theory explanation
BACKGROUND: Academic dishonesty (AD) is an increasing challenge for universities worldwide. The rise of the Internet has further increased opportunities for students to cheat.
AIMS: In this study, we investigate the role of personality traits defined within Reinforcement Sensitivity Theory (RST) as potential determinants of AD. RST defines behaviour as resulting from approach (Reward Interest/reactivity, goal-drive, and Impulsivity) and avoidance (behavioural inhibition and Fight-Flight-Freeze) motivations. We further consider the role of deep, surface, or achieving study motivations in mediating/moderating the relationship between personality and AD.
SAMPLE: A sample of UK undergraduates (N = 240).
METHOD: All participants completed the RST Personality Questionnaire, a short-form version of the study process questionnaire and a measure of engagement in AD, its perceived prevalence, and seriousness.
RESULTS: Results showed that RST traits account for additional variance in AD. Mediation analysis suggested that goal-drive persistence (GDP) predicted dishonesty indirectly via a surface study approach, while the indirect effect via a deep study approach was associated with a lower likelihood of dishonesty. Likelihood of engagement in AD was positively associated with personality traits reflecting Impulsivity and Fight-Flight-Freeze (FFFS) behaviours. Surface study motivation moderated the Impulsivity effect, and achieving motivation moderated the FFFS effect, such that cheating was even more likely at high levels of these study processes.
CONCLUSIONS: The findings suggest that motivational personality traits defined within RST can explain variance in the likelihood of engaging in dishonest academic behaviours.
Learning action-oriented models through active inference
Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn online in the presence of goal-directed behaviours. This constrains an agent to behaviourally relevant trajectories, reducing the diversity of the data a model needs to account for. Unfortunately, this approach can cause models to prematurely converge to sub-optimal solutions, through a process we refer to as a bad-bootstrap. Here, we exploit the normative framework of active inference to show that efficient action-oriented models can be learned by balancing goal-oriented and epistemic (information-seeking) behaviours in a principled manner. We illustrate our approach using a simple agent-based model of bacterial chemotaxis. We first demonstrate that learning via goal-directed behaviour indeed constrains models to behaviourally relevant aspects of the environment, but that this approach is prone to sub-optimal convergence. We then demonstrate that epistemic behaviours facilitate the construction of accurate and comprehensive models, but that these models are not tailored to any specific behavioural niche and are therefore less efficient in their use of data. Finally, we show that active inference agents learn models that are parsimonious, tailored to action, and which avoid bad bootstraps and sub-optimal convergence. Critically, our results indicate that models learned through active inference can support adaptive behaviour in spite of, and indeed because of, their departure from veridical representations of the environment.
Our approach provides a principled method for learning adaptive models from limited interactions with an environment, highlighting a route to sample-efficient learning algorithms.
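The balance between goal-oriented and epistemic value described above can be sketched with a toy expected-free-energy-style score: an agent holds Beta beliefs about each option's reward probability and scores options by a weighted sum of expected reward (pragmatic value) and expected reduction in predictive uncertainty (a simple epistemic-value proxy). This is an illustrative simplification under assumed Bernoulli observations, not the paper's chemotaxis model or its full active-inference formulation.

```python
import math

def beta_mean(a, b):
    """Mean of a Beta(a, b) belief over a Bernoulli reward probability."""
    return a / (a + b)

def binary_entropy(x):
    """Entropy (nats) of a Bernoulli predictive distribution."""
    if x in (0.0, 1.0):
        return 0.0
    return -(x * math.log(x) + (1 - x) * math.log(1 - x))

def expected_info_gain(a, b):
    """Expected reduction in predictive entropy of a Beta(a, b) belief
    after one Bernoulli observation -- a simple epistemic-value proxy."""
    p = beta_mean(a, b)
    post = p * binary_entropy(beta_mean(a + 1, b)) \
         + (1 - p) * binary_entropy(beta_mean(a, b + 1))
    return binary_entropy(p) - post

def efe_score(a, b, goal_weight=1.0, epistemic_weight=1.0):
    """Toy negative-expected-free-energy score: pragmatic + epistemic value."""
    return goal_weight * beta_mean(a, b) + epistemic_weight * expected_info_gain(a, b)

# A well-sampled option vs an unfamiliar one (hypothetical belief states):
familiar = (8.0, 2.0)   # mean 0.8, low uncertainty
novel = (1.0, 1.0)      # mean 0.5, high uncertainty
```

With equal weights the familiar option scores higher (exploitation); raising `epistemic_weight` flips the preference toward the uncertain option, which is the principled exploration/exploitation trade-off the abstract appeals to.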
Towards Question-based Recommender Systems
Conversational and question-based recommender systems have gained increasing
attention in recent years, with users enabled to converse with the system and
better control recommendations. Nevertheless, research in the field is still
limited, compared to traditional recommender systems. In this work, we propose
a novel Question-based recommendation method, Qrec, to assist users to find
items interactively, by answering automatically constructed and algorithmically
chosen questions. Previous conversational recommender systems ask users to
express their preferences over items or item facets. Our model, instead, asks
users to express their preferences over descriptive item features. The model is
first trained offline by a novel matrix factorization algorithm, and then
iteratively updates the user and item latent factors online by a closed-form
solution based on the user answers. Meanwhile, our model infers the underlying
user belief and preferences over items to learn an optimal question-asking
strategy by using Generalized Binary Search, so as to ask a sequence of
questions to the user. Our experimental results demonstrate that our proposed
matrix factorization model outperforms the traditional Probabilistic Matrix
Factorization model. Further, our proposed Qrec model can greatly improve the
performance of state-of-the-art baselines, and it is also effective in the case
of cold-start user and item recommendations. Comment: accepted by SIGIR 202
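The Generalized Binary Search strategy mentioned above can be sketched as follows: maintain a belief over candidate items, ask the descriptive feature whose expected yes/no answer splits the belief mass most evenly, then update the belief with Bayes' rule. The feature matrix and noise handling here are hypothetical toy choices, not Qrec's actual training data or update equations.

```python
import numpy as np

def choose_question(belief, features):
    """Generalized Binary Search: pick the binary feature whose expected
    answer splits the current belief over items most evenly (closest to 0.5)."""
    p_yes = features.T @ belief          # P(answer = yes) for each feature
    return int(np.argmin(np.abs(p_yes - 0.5)))

def update_belief(belief, features, q, answer, eps=1e-6):
    """Bayesian update of the item belief given a yes/no answer to feature q.
    `eps` keeps items with contradicting features at tiny nonzero mass."""
    like = features[:, q] if answer else 1.0 - features[:, q]
    post = belief * (like + eps)
    return post / post.sum()

# Toy catalogue: 4 items x 3 binary descriptive features (hypothetical data).
features = np.array([[1, 0, 1],
                     [1, 1, 0],
                     [0, 1, 1],
                     [0, 0, 0]], dtype=float)
belief = np.full(4, 0.25)               # uniform prior over items

q = choose_question(belief, features)   # every feature splits 50/50; ties -> 0
belief = update_belief(belief, features, q, answer=True)
```

After a "yes" to feature 0, mass concentrates on the two items that have that feature, halving the candidate set per question in the best case. In Qrec itself the belief is over latent-factor representations updated in closed form, rather than this explicit discrete posterior.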