
    Statistical Learning: The role of implicit and explicit processes in sequential regularities

    Within cognitive psychology, Statistical Learning (SL) refers to our use of the statistical information available in our sensory environment to extract relationships between stimuli which unfold over time. SL enables us to use previous and current events to make predictions about upcoming ones, and it is at the basis of a number of cognitive functions (Bertels, Boursain, Destrebecqz, & Gaillard, 2015a). Aiming to address the lack of systematic investigations in the area, this thesis is concerned with the type of knowledge which results from auditory and visual SL, and whether it is implicit (unconscious) or explicit (conscious). We aimed to address existing methodological challenges around the measurement of conscious knowledge through a novel use of the Process Dissociation Procedure in the context of a forced-choice task, in combination with the guessing and zero-correlation criteria. Chapter 2 established measures to assess the conscious status of knowledge of auditory stimuli generated through a transition matrix. We found successful learning of these stimuli in both adults and children, and that both age groups developed an awareness of the knowledge they had acquired. In Chapter 3 we studied knowledge status in a triplet learning paradigm in both the auditory and visual modalities. Our measures indicated that participants were fully aware of the visual and auditory stimuli learned. Chapter 4 aimed to validate and consolidate our findings of explicit knowledge by using a combination of direct and indirect measures in a visual triplet learning paradigm, and additionally compared adults and children. The knowledge acquired was predominantly explicit in both age groups. We also found that participants' awareness of the acquired knowledge did not coincide with the ability to reproduce the training material in a generation task.
Chapter 5 investigated the hypothesis that implicit and explicit knowledge depend on the speed of stimulus presentation. We found that, although statistical learning can take place at different presentation speeds, participants' knowledge is weaker at faster speeds; knowledge appeared more implicit at faster presentation speeds and more explicit at slower ones. In Chapter 6 we investigated the electrophysiological correlates of implicit and explicit knowledge in visual statistical learning. There were no differences in ERPs between implicitly- and explicitly-learned stimuli in either learners or non-learners within our sample. However, we found suggestions that a learning effect may be present and detectable through the ERPs in the absence of above-chance behavioural performance. This PhD builds on, and extends, the existing literature, and sheds light on the theoretical and methodological challenges inherent to the behavioural approach in statistical learning. We put forward the hypothesis that knowledge measured behaviourally tends to become more explicit the greater the learning effect, and that contradictions between measures of conscious knowledge arise in the presence of low learning. We explore promising approaches for future research to advance knowledge about statistical learning and the type of knowledge acquired, and we make a case for the use of combined electrophysiological and behavioural methods.
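The thesis describes stimuli "generated through a transition matrix". The abstract gives no implementation details, so everything below (stimulus labels, probabilities, sequence length) is an illustrative assumption; a minimal sketch of how a stream with first-order sequential regularities could be sampled:

```python
import random

# Illustrative first-order transition matrix: each stimulus maps to the
# probabilities of the stimulus that follows it (values are made up).
transitions = {
    "A": {"B": 0.8, "C": 0.2},
    "B": {"C": 0.8, "A": 0.2},
    "C": {"A": 0.8, "B": 0.2},
}

def generate_stream(length, start="A"):
    """Sample a stimulus stream whose sequential regularities
    follow the transition matrix above."""
    stream = [start]
    for _ in range(length - 1):
        probs = transitions[stream[-1]]
        nxt = random.choices(list(probs), weights=probs.values())[0]
        stream.append(nxt)
    return stream

stream = generate_stream(20)
print(stream)
```

A learner exposed to such a stream can, in principle, pick up that (for example) "A" is usually followed by "B", which is the kind of regularity the thesis probes with its measures of conscious knowledge.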

    Commentary: Musicians' online performance during auditory and visual statistical learning tasks

    A commentary on: Musicians' Online Performance during Auditory and Visual Statistical Learning Tasks by Mandikal Vasuki, P. R., Sharma, M., Ibrahim, R. K., and Arciuli, J. (2017). Front. Hum. Neurosci. 11:114. doi: 10.3389/fnhum.2017.00114 Statistical learning (SL) is the extraction of the underlying statistical structure from sensory input (Frost et al., 2015). The extent to which this ability is domain-general (with a single central mechanism underpinning SL in any modality) or domain-specific (where the SL mechanism differs by modality) remains a central question in statistical learning (Frost et al., 2015), and two approaches have been adopted to tackle this. The first is to examine the extent to which predominantly domain-specific skills such as language proficiency (Arciuli and von Koss Torkildsen, 2012) and musical expertise (Schön and François, 2011), and domain-general skills such as working memory and general IQ (Siegelman and Frost, 2015), correlate with SL ability. The second is to compare SL performance across modalities, or even to examine cross-modal transfer (Durrant et al., 2016). Mandikal Vasuki et al. (2017) (and the sister paper: Mandikal Vasuki et al., 2016) make an important contribution by adopting both of these approaches. They compare auditory and visual SL using the Saffran triplet learning paradigm (Saffran et al., 1999) in musicians and non-musicians. The three key findings are that musicians are better than non-musicians at segmentation of auditory stimuli only, that there is no correlation between auditory and visual performance, and that auditory performance is better overall. This last result could be due to privileged auditory processing of sequential stimuli (Conway et al., 2009), or it could simply reflect differences in perceptual or memory capabilities across modalities. However, the fact that SL performance in one modality does not predict performance in another is hard to explain if a single mechanism underlying both is posited.
Combined with the fact that musicians outperformed non-musicians only in the auditory modality, a domain-specific SL mechanism seems to offer the most parsimonious explanation of these data.

    What has been missed for predicting human attention in viewing driving clips?

    Recent research progress on the topic of human visual attention allocation in scene perception and its simulation is based mainly on studies with static images. However, natural vision requires us to extract visual information that constantly changes due to egocentric movements or dynamics of the world. It is unclear to what extent spatio-temporal regularity, an inherent regularity in dynamic vision, affects human gaze distribution and saliency computation in visual attention models. In this free-viewing eye-tracking study we manipulated the spatio-temporal regularity of traffic videos by presenting them in normal video sequence, reversed video sequence, normal frame sequence, and randomised frame sequence. The recorded human gaze allocation was then used as the 'ground truth' to examine the predictive ability of a number of state-of-the-art visual attention models. The analysis revealed high inter-observer agreement across individual human observers, but all the tested attention models performed significantly worse than humans. The models' inferior predictive power was evident in gaze predictions that were indistinguishable across stimulus presentation sequences, and in a weak central fixation bias. Our findings suggest that a realistic visual attention model for the processing of dynamic scenes should incorporate human visual sensitivity to spatio-temporal regularity and central fixation bias.
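The study above scores attention models against recorded human gaze used as 'ground truth'. One common metric for this kind of comparison (not necessarily the one used in the study) is normalized scanpath saliency (NSS): the mean z-scored model saliency at the human fixation locations. A minimal NumPy sketch, with the map size and fixation format invented for illustration:

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized scanpath saliency: mean z-scored saliency value at the
    recorded human fixation locations. Higher = better gaze prediction."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return z[list(rows), list(cols)].mean()

# Toy example: a model whose saliency peaks where the observer looked
sal = np.zeros((10, 10))
sal[4, 4] = 1.0
print(nss(sal, [(4, 4)]))  # large positive score: fixation on the peak
```

A model whose predictions are insensitive to presentation sequence, as reported above, would yield similar NSS scores for normal, reversed, and randomised sequences, which is one way such indistinguishability can be quantified.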

    Eye Fixation Location Recommendation in Advanced Driver Assistance System

    Recent research progress on visual attention modeling for mediated perception in advanced driver assistance systems (ADAS) has drawn the attention of computer and human vision researchers. However, it is still debatable whether the actual driver's eye fixation locations (EFLs) or the EFLs predicted by computational visual attention models (CVAMs) are more reliable for safe driving under real-life driving conditions. We analyzed the suitability of both EFL types, the EFLs of human drivers and the EFLs predicted by CVAMs, using ten typical categories of natural driving video clips. In this analysis, we used the EFLs confirmed by two experienced drivers as the reference EFLs. We found that neither approach alone is suitable for safe driving, and that the EFL suitable for safe driving depends on the driving conditions. Based on this finding, we propose a novel strategy for recommending one of the EFLs to the driver in ADAS under 10 predefined real-life driving conditions. We propose to recommend one of the following 3 EFL modes, depending on driving conditions: driver's EFL only, CVAM's EFL only, and interchangeable EFL. In interchangeable EFL mode, the driver's EFL and the CVAM's EFL are interchangeable. The selection between the two EFLs is a typical binary classification problem, so we apply support vector machines (SVMs) to solve it. We also provide a quantitative evaluation of the classifiers. The performance evaluation of the proposed recommendation method indicates that it is potentially useful in ADAS for future safe driving.
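The abstract frames EFL-mode selection as a binary classification problem solved with SVMs. The actual features and labels used in the paper are not given here, so the data below are synthetic placeholders; a minimal sketch of the general approach using scikit-learn's SVC:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Invented placeholder features describing a driving clip (e.g. speed,
# traffic density, illumination); the label is the recommended EFL
# source: 0 = driver's EFL, 1 = CVAM's EFL.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic decision rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # train the binary SVM
acc = clf.score(X_te, y_te)              # held-out accuracy
print(f"held-out accuracy: {acc:.2f}")
```

In a real system the classifier's prediction for the current driving condition would drive which EFL mode is recommended to the driver; extending the scheme to the interchangeable mode would require a third class or a second-stage decision.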

    What has been missed for real life driving? An inspirational thinking from human innate biases

    Nature is striking in her imbalances. The innate biases shared by non-humans and humans rest on a wonderful yet mysterious biological foundation that can inspire real-world applications. Vision researchers have evidenced a left gaze bias in humans and non-humans, while acousticians have observed a right-ear advantage in both. Unlike the vision and acoustics researchers who investigate the underlying mechanisms of these innate biases, we are more interested in mimicking their characteristics. In this paper, we propose two simple yet effective methods to generate the left gaze bias and the right-ear advantage. We further discuss the potential applications of these inherent phenomena, e.g., in real-life driving. We believe that this paper could have an inspirational impact on future cognitive transportation by implementing these human innate biases properly.