Automated online and app-based cognitive assessment tasks are becoming
increasingly popular in large-scale cohorts and biobanks due to advantages in
affordability, scalability and repeatability. However, the summary scores that
such tasks generate typically conflate the cognitive processes that are the
intended focus of assessment with basic visuomotor speeds, testing device
latencies and speed-accuracy tradeoffs. This lack of precision presents a
fundamental limitation when studying brain-behaviour associations. Previously,
we developed a novel modelling approach that leverages continuous performance
recordings from large-cohort studies to achieve an iterative decomposition of
cognitive tasks (IDoCT), which outputs data-driven estimates of cognitive
abilities, and device and visuomotor latencies, whilst recalibrating
trial-difficulty scales. Here, we further validate the IDoCT approach with UK
Biobank imaging data. First, we examine whether IDoCT can improve ability
distributions and trial-difficulty scales from an adaptive picture-vocabulary
task (PVT). Then, we confirm that the resultant visuomotor and cognitive
estimates associate more robustly with age and education than the original PVT
scores. Finally, we conduct a multimodal brain-wide association study with
free-text analysis to test whether the brain regions that predict the IDoCT
estimates have the expected differential relationships with visuomotor vs.
language and memory labels within the broader imaging literature. Our results
support the view that the rich performance timecourses recorded during
computerised cognitive assessments can be leveraged with modelling frameworks
like IDoCT to provide estimates of human cognitive abilities that have superior
distributions, re-test reliabilities and brain-wide associations than the
original task scores.