3,815 research outputs found

    Artificially created stimuli produced by a genetic algorithm using a saliency model as its fitness function show that Inattentional Blindness modulates performance in a pop-out visual search paradigm

    Salient stimuli are more readily detected than less salient stimuli, and individual differences in such detection may be relevant to why some people fail to notice an unexpected stimulus that appears in their visual field whereas others do notice it. This failure to notice unexpected stimuli is termed 'Inattentional Blindness' and is more likely to occur when we are engaged in a resource-consuming task. A genetic algorithm is described that creates artificial stimuli using a saliency model as its fitness function. These generated stimuli, which vary in their saliency level, are used in two studies that implement a pop-out visual search task to evaluate the power of the model to discriminate the performance of people who were and were not Inattentionally Blind (IB). In one study the number of orientational filters in the model was increased to check whether discriminatory power and the saliency estimation for low-level images could be improved. Results show that the performance of the model does improve when additional filters are included, leading to the conclusion that low-level images may require a higher number of orientational filters for the model to better predict participants' performance. In both studies we found that, given the same target patch image (i.e. the same saliency value), IB individuals take longer to identify a target than non-IB individuals. This suggests that IB individuals require a higher level of saliency for low-level visual features in order to identify target patches.
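
    To make the approach concrete, here is a minimal sketch of a genetic algorithm of this kind, with a stand-in contrast-based saliency score as the fitness function. The studies above use a full saliency model; every name and parameter below is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def saliency(patch):
    """Placeholder fitness: a stand-in for a full saliency model.
    Here a patch is scored by its intensity contrast; the studies above
    use a model-derived saliency value instead."""
    return float(np.std(patch))

def evolve_stimuli(pop_size=50, patch_shape=(32, 32), generations=100,
                   mutation_rate=0.05, rng=np.random.default_rng(0)):
    """Evolve image patches whose fitness is their saliency score."""
    population = rng.random((pop_size,) + patch_shape)
    for _ in range(generations):
        fitness = np.array([saliency(p) for p in population])
        # Rank-based selection: keep the most salient half as parents.
        parents = population[np.argsort(fitness)[-pop_size // 2:]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(patch_shape) < 0.5           # uniform crossover
            child = np.where(mask, a, b)
            noise = rng.random(patch_shape) < mutation_rate  # point mutation
            child = np.where(noise, rng.random(patch_shape), child)
            children.append(child)
        population = np.concatenate([parents, np.array(children)])
    return population

stimuli = evolve_stimuli()  # patches spanning a range of saliency values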

    Eye movements in time: auditory influences on oculomotor timing

    The dominant models of eye movement timing consider only visual factors as modulators of when gaze orients (e.g. EZ-Reader, SWIFT, CRISP, LATEST). Yet real-world perception is multimodal, and temporal information from audition can both aid the predictive orienting of gaze (to relevant audiovisual onsets in time) and inform visual orienting decisions known to modulate saccade timing, e.g. where to orient. The aim of this thesis was to further the current understanding of eye movement timing to incorporate auditory information; specifically, it investigated the implicit and explicit capacity for musical beats to influence (and entrain) eye movements, and quantified the capacity and limitations of direct control when volitionally matching eye movements to auditory onsets. To achieve this, a highly simplified gaze-contingent visual search paradigm was refined that minimised visual and task factors in order to measure auditory influence. The findings of this thesis present evidence that self-paced eye movements are impervious to implicit auditory influences. The explicit control of eye movements, as small corrections in time to align with similarly timed music, was very limited. In contrast, when visual transitions were externally timed, audiovisual correspondence systematically delayed fixation durations. The thesis also measured the extent of direct control that can be exerted on eye movements, including the role of auditory feedback, as well as modulating visual complexity to further increase inhibition and temporal precision. These studies show a predictive relationship between the level of direct volitional control an individual can exert and how synchronised they are. Additionally, these studies quantify a large subpopulation of quick eye movements that are impervious to direct control. These findings are discussed as provocation for revised oculomotor models, future work that considers the temporal relationship between shifts of attention and gaze, and implications for wider psychological research that employs timed eye movement measures.
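
    The abstract does not specify the synchronisation measures used; as a hedged illustration, one conventional way to quantify how well fixation onsets align with an isochronous beat is the circular mean resultant length. All function names below are hypothetical, not the thesis's actual analysis pipeline.

```python
import numpy as np

def beat_relative_phase(onsets, beat_period):
    """Phase of each event within the beat cycle, in radians [0, 2*pi)."""
    return 2 * np.pi * ((np.asarray(onsets) % beat_period) / beat_period)

def synchronisation_strength(onsets, beat_period):
    """Mean resultant length R: 1 = perfectly phase-locked, 0 = uniform."""
    phases = beat_relative_phase(onsets, beat_period)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Example: fixation onsets (s) against a 120 BPM beat (0.5 s period).
fixation_onsets = [0.02, 0.51, 1.04, 1.49, 2.03]
print(synchronisation_strength(fixation_onsets, beat_period=0.5))  # near 1
```

    A measure like R would support the kind of predictive relationship described above, correlating each participant's degree of volitional control with their phase-locking to the auditory onsets.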

    Credit assignment in multiple goal embodied visuomotor behavior

    The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect, the brain's abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is reinforcement given during exploration of the environment. However, this formulation raises a substantial issue when sensorimotor modules are used in combination: the credit for their overall performance must be divided amongst them. We show that this problem can be solved, and that diverse task combinations are beneficial to learning rather than a complication, as is usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.
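
    The abstract does not state the allocation rule itself; the sketch below shows one common scheme from the modular reinforcement-learning literature, splitting a single observed reward among concurrently active tabular Q-learning modules in proportion to their own value estimates. Names and parameters are assumptions, not the paper's algorithm.

```python
import numpy as np

class Module:
    """A tabular Q-learning module for one subtask."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95):
        self.q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, credited_reward, s_next):
        # Standard Q-learning step, driven by this module's share of reward.
        target = credited_reward + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.alpha * (target - self.q[s, a])

def assign_credit(modules, states, action, global_reward):
    """Split one observed global reward among active modules, in proportion
    to each module's own estimate of the chosen action's value.
    (One scheme from the modular-RL literature; the paper's exact
    allocation rule may differ.)"""
    estimates = np.array([max(m.q[s, action], 0.0)
                          for m, s in zip(modules, states)])
    total = estimates.sum()
    if total == 0:                      # no information yet: split evenly
        return [global_reward / len(modules)] * len(modules)
    return list(global_reward * estimates / total)
```

    Under a scheme like this, each module only ever sees its credited share, so learning with diverse task combinations exposes each module to many reward contexts rather than confounding its updates.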

    Machine Analysis of Facial Expressions

    No abstract