557 research outputs found

    Searching in CCTV : effects of organisation in the multiplex

    Get PDF
    Acknowledgements The author wishes to thank Dr Kenneth Scott-Brown for his comments on an earlier version of this manuscript. Data were collected with the assistance of three groups of third-year undergraduate psychology students. Funding This study is not associated with any external funding. Peer reviewed. Publisher PDF

    On the factors causing processing difficulty of multiple-scene displays

    Get PDF
    Multiplex viewing of static or dynamic scenes is an increasing feature of screen media. Most existing multiplex experiments have examined detection across increasing scene numbers, but currently no systematic evaluation of the factors that might produce difficulty in processing multiplexes exists. Across five experiments we provide such an evaluation. Experiment 1 characterises difficulty in change detection when the number of scenes is increased. Experiment 2 reveals that the total amount of visual information accounts for differences in change detection times, regardless of whether this information is presented across multiple scenes or contained in one scene. Experiment 3 shows that whether quadrants of a display are drawn from the same or different scenes does not affect change detection performance. Experiment 4 demonstrates that knowing which scene the change will occur in means participants can perform at monoplex level. Finally, Experiment 5 finds that changes of central interest in multiplexed scenes are detected far more easily than marginal interest changes, to such an extent that a centrally interesting object removal in nine screens is detected more rapidly than a marginally interesting object removal in four screens. Processing multiple-screen displays therefore seems dependent on the amount of information, and the importance of that information to the task, rather than simply the number of scenes in the display. We discuss the theoretical and applied implications of these findings.

    Editorial

    Get PDF
    Editorial to the Special Issue on Perception of Natural Scenes.

    Priorities for representation : Task settings and object interaction both influence object memory

    Get PDF
    Portions of this research were presented at the Experimental Psychological Society conference at the University of Kent (May, 2014). The first author is supported by a studentship provided by the University of Dundee. This study was conducted as part of the requirements for the degree of Doctor of Philosophy by the first author. Peer reviewed. Postprint

    Perception of the visual environment

    Get PDF
    The eyes are the front end to the vast majority of the human behavioural repertoire. The manner in which our eyes sample the environment places fundamental constraints upon the information that is available for subsequent processing in the brain: the small window of clear vision at the centre of gaze can only be directed at an average of about three locations in the environment every second. We are largely unaware of these continual movements, making eye movements a valuable objective measure that can provide a window into the cognitive processes underlying many of our behaviours. The valuable resource of high quality vision must be allocated with care in order to provide the right information at the right time for the behaviours we engage in. However, the mechanisms that underlie the decisions about where and when to move the eyes remain to be fully understood. In this chapter I consider what has been learnt about targeting the eyes in a range of different experimental paradigms, from simple stimulus arrays of only a few isolated targets, to complex arrays and photographs of real environments, and finally to natural task settings. Much has been learnt about how we view photographs, and current models incorporate low-level image salience, motor biases that favour certain ways of moving the eyes, higher-level expectations of what objects look like, and expectations about where we will find objects in a scene. Finally in this chapter I consider the fate of information that has received overt visual attention. While much of the detailed information from what we look at is lost, some remains; yet what we retain, and the factors that govern what is remembered and what is forgotten, are not well understood.
It appears that our expectations about what we will need to know later in the task are important in determining what we represent and retain in visual memory, and that our representations are shaped by the interactions that we engage in with objects.

    Did Javal measure eye movements during reading?

    Get PDF
    Louis-Émile Javal is widely credited as the first person to record eye movements in reading. This is so despite the fact that Javal himself never made that claim, but it is perpetuated in contemporary textbooks, scientific articles and on the internet. Javal did coin the term ‘saccades’ in the context of eye movements during reading, but he did not measure them. In this article we suggest that a misreading of Huey’s (1908) book on reading led to the misattribution, and we attempt to dispel this myth by explaining Javal’s contribution and also clarifying who did initially describe discontinuous eye movements during reading.

    Language and gaze cues: findings from the real world and the lab

    Get PDF

    Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics

    Full text link
    Dozens of new models for fixation prediction are published every year and compared on open benchmarks such as MIT300 and LSUN. However, progress in the field can be difficult to judge because models are compared using a variety of inconsistent metrics. Here we show that no single saliency map can perform well under all metrics. Instead, we propose a principled approach to solve the benchmarking problem by separating the notions of saliency models, maps and metrics. Inspired by Bayesian decision theory, we define a saliency model to be a probabilistic model of fixation density prediction, and a saliency map to be a metric-specific prediction derived from the model density which maximizes the expected performance on that metric given the model density. We derive these optimal saliency maps for the most commonly used saliency metrics (AUC, sAUC, NSS, CC, SIM, KL-Div) and show that they can be computed analytically or approximated with high precision. We show that this leads to consistent rankings in all metrics and avoids the penalties of using one saliency map for all metrics. Our method allows researchers to have their model compete on many different metrics with the state of the art in those metrics: "good" models will perform well in all metrics. Comment: published at ECCV 2018
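    As a concrete illustration of the abstract's central idea, here is a minimal NumPy sketch (function and variable names are hypothetical, not from the paper's code) of the NSS metric, together with the observation that the NSS-optimal saliency map derived from a predicted fixation density is simply an affine transform of that density, because NSS is invariant to affine rescaling of the map.

    ```python
    import numpy as np

    def nss(saliency_map, fixations):
        """Normalized Scanpath Saliency: z-score the map over all pixels,
        then average the z-scored values at the fixated pixel locations."""
        s = (saliency_map - saliency_map.mean()) / saliency_map.std()
        ys, xs = fixations  # row and column indices of fixated pixels
        return s[ys, xs].mean()

    # A toy stand-in for a model's predicted fixation density (sums to 1).
    rng = np.random.default_rng(0)
    density = rng.random((48, 64))
    density /= density.sum()

    # Hypothetical observed fixation coordinates.
    fixations = (np.array([10, 20, 30]), np.array([5, 15, 25]))

    # Since NSS is invariant to affine transforms of the map, evaluating the
    # density itself and any rescaled copy of it gives the same NSS score,
    # which is why the density serves directly as the NSS-optimal map.
    assert np.isclose(nss(density, fixations), nss(5.0 * density + 2.0, fixations))
    ```

    The same separation applies to the other metrics: each gets its own map derived from the one underlying density, rather than one map being scored under every metric.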