Visual Statistical Learning (VSL) is classically investigated in a restricted format, as either temporal or spatial VSL, and without regard to effects or biases arising from context. However, in real-world environments, spatial patterns unfold over time, leading to a fundamental intertwining of spatial and temporal regularities. In addition, their interpretation is heavily influenced by contextual information through internal biases encoded at different scales. Using a novel spatio-temporal VSL setup, we explored this interdependence between time, space, and biases by moving spatially defined patterns in and out of participants' view over time, in the presence or absence of occluders. First, we replicated the classical VSL results in this mixed setup. Next, we obtained evidence that purely temporal statistics can be used for learning spatial patterns through internal inference. Finally, we found that motion-defined and occlusion-related context jointly and strongly modulated which temporal and spatial regularities were automatically learned from the same visual input. Overall, our findings expand the conceptualization of VSL from a mechanistic recorder of low-level spatial and temporal co-occurrence statistics of single visual elements to a complex interpretive process that integrates low-level spatio-temporal information with higher-level internal biases to infer the general underlying structure of the environment.