
    On the factors causing processing difficulty of multiple-scene displays

    Multiplex viewing of static or dynamic scenes is an increasing feature of screen media. Most existing multiplex experiments have examined detection across increasing scene numbers, but there has been no systematic evaluation of the factors that make multiplexes difficult to process. Across five experiments we provide such an evaluation. Experiment 1 characterises the difficulty in change detection as the number of scenes increases. Experiment 2 reveals that the total amount of visual information accounts for differences in change detection times, regardless of whether this information is presented across multiple scenes or contained in one scene. Experiment 3 shows that change detection performance is unaffected by whether the quadrants of a display are drawn from the same scene or from different scenes. Experiment 4 demonstrates that when participants know which scene the change will occur in, they can perform at monoplex level. Finally, Experiment 5 finds that changes of central interest in multiplexed scenes are detected far more easily than changes of marginal interest, to such an extent that a centrally interesting object removal in nine screens is detected more rapidly than a marginally interesting object removal in four screens. Processing multiple-screen displays therefore seems to depend on the amount of information in the display and the importance of that information to the task, rather than simply the number of scenes. We discuss the theoretical and applied implications of these findings.

    Training methods for facial image comparison: a literature review

    This literature review was commissioned to explore the psychological literature relating to facial image comparison, with a particular emphasis on whether individuals can be trained to improve performance on this task. Surprisingly few studies have addressed this question directly. As a consequence, this review has been extended to cover training of face recognition and training of different kinds of perceptual comparisons where we are of the opinion that the methodologies or findings of such studies are informative. The majority of studies of face processing have examined face recognition, which relies heavily on memory. This may be memory for a face that was learned recently (e.g. minutes or hours previously) or for a face learned longer ago, perhaps after many exposures (e.g. friends, family members, celebrities). Successful face recognition, irrespective of the type of face, relies on the ability to retrieve the to-be-recognised face from long-term memory. This memory is then compared to the physically present image to reach a recognition decision. In contrast, in a face matching task two physical representations of a face (live, photographs, movies) are compared, and so long-term memory is not involved. Because the comparison is between two present stimuli rather than between a present stimulus and a memory, one might expect that face matching, even if not an easy task, would be easier to do and easier to learn than face recognition. In support of this, there is evidence that judgment tasks in which a presented stimulus must be judged against a remembered standard are generally more cognitively demanding than judgments that require comparing two presented stimuli (Davies & Parasuraman, 1982; Parasuraman & Davies, 1977; Warm & Dember, 1998). Is there enough overlap between face recognition and matching that it is useful to look at the recognition literature? No study has directly compared face recognition and face matching, so we turn to research in which people decided whether two non-face stimuli were the same or different. In these studies, accuracy of comparison is not always better when the comparator is present than when it is remembered. Further, all perceptual factors that were found to affect comparisons of simultaneously presented objects also affected comparisons of successively presented objects in qualitatively the same way. Those studies involved judgments about colour (Newhall, Burnham & Clark, 1957; Romero, Hita & Del Barco, 1986) and shape (Larsen, McIlhagga & Bundesen, 1999; Lawson, Bülthoff & Dumbell, 2003; Quinlan, 1995). Although one must be cautious in generalising from studies of object processing to studies of face processing (see, e.g., the section comparing face processing to object processing), these kinds of studies provide no evidence of qualitative differences in the perceptual aspects of how recognition and matching are done. As a result, this review will include studies of face recognition skill as well as face matching skill. The distinction between face recognition involving memory and face matching not involving memory is clouded in many recognition studies that require observers to decide which of many presented faces matches a remembered face (e.g., eyewitness studies). And of course there are other forensic face-matching tasks that will require comparison to both presented and remembered comparators (e.g., deciding whether any person in a video showing a crowd is the target person). For this reason, too, we choose to include studies of face recognition as well as face matching in our review.

    Probing the time course of facilitation and inhibition in gaze cueing of attention in an upper-limb reaching task

    Previous work has revealed that social cues, such as gaze and pointed fingers, can lead to a shift in the focus of another person’s attention. Research investigating the mechanisms of these shifts of attention has typically employed detection or localization button-pressing tasks. Because in-depth analyses of the spatiotemporal characteristics of aiming movements can provide additional insights into the dynamics of stimulus processing, in the present study we used a reaching paradigm to further explore the processing of social cues. In Experiments 1 and 2, participants made aiming movements to a left or right location after a nonpredictive eye gaze cue toward one of these target locations. Seven stimulus onset asynchronies (SOAs), from 100 to 2,400 ms, were used. Both the temporal (reaction time, RT) and spatial (initial movement angle, IMA) characteristics of the movements were analyzed. RTs were shorter for cued (gazed-at) than for uncued targets across most SOAs. There were, however, no statistical differences in IMAs between movements to cued and uncued targets, suggesting that action planning was not affected by the gaze cue. In Experiment 3, the social cue was a finger pointing to one of the two target locations. Finger-pointing cues generated significant cueing effects in both RTs and IMAs. Overall, these results indicate that eye gaze and finger-pointing social cues are processed differently. Perception–action coupling (i.e., a tight link between the response and the social cue that is presented) might play a role in both the generation of action and the deviation of trajectories toward cued and uncued targets.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) the finding gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Mapping dynamic interactions among cognitive biases in depression

    Depression is theorized to be caused in part by biased cognitive processing of emotional information. Yet prior research has adopted a reductionist approach that does not characterize how biases in cognitive processes such as attention and memory work together to confer risk for this complex multifactorial disorder. Grounded in affective and cognitive science, we highlight four mechanisms to understand how attention biases, working memory difficulties, and long-term memory biases interact and contribute to depression. We review evidence for each mechanism and highlight time- and context-dependent dynamics. We outline methodological considerations and recommendations for research in this area. We conclude with directions to advance the understanding of depression risk, cognitive training interventions, and transdiagnostic properties of cognitive biases and their interactions.

    Learning from where ‘eye’ remotely look or point: impact on number line estimation error in adults.

    In this paper we present an investigation into the use of visual cues during number line estimation and their influence on cognitive processes for reducing number line estimation error. Participants completed a 0-1000 number line estimation task before and after a brief intervention in which they observed static or dynamic visual cues (control, anchor, gaze cursor, mouse cursor) and also made estimation marks to test effective number-target estimation. Results indicated a significant pre-test to post-test reduction in estimation error for the dynamic visual cues of modelled eye gaze and mouse cursor. However, there was no significant difference between pre- and post-test performance for the control or static anchor conditions. Findings are discussed in relation to the extent to which anchor points alone are meaningful in promoting successful segmentation of the number line, and whether dynamic cues promote the utility of these locations in reducing error through attentional guidance.

    Exploring the nature of joint attention impairments in young children with autism spectrum disorder: associated social and cognitive skills

    It is generally accepted that joint attention skills are impaired in children with autism spectrum disorder (ASD). In this study, social preference, attention disengagement, and intention understanding, assumed to be associated with the development of joint attention, are explored in relation to joint attention skills in children with ASD at the age of 36 months. Response to joint attention was related to intention understanding, whereas the number of joint attention initiations was associated with attention disengagement and, somewhat less strongly, with social preference. The level at which children initiated joint attention was related to social preference. Possible interpretations of these findings are discussed.

    Comparing the E-Z Reader Model to Other Models of Eye Movement Control in Reading

    The E-Z Reader model provides a theoretical framework for understanding how word identification, visual processing, attention, and oculomotor control jointly determine when and where the eyes move during reading. Thus, in contrast to other reading models reviewed in this article, E-Z Reader can simultaneously account for many of the known effects of linguistic, visual, and oculomotor factors on eye movement control during reading. Furthermore, the core principles of the model have been generalized to other task domains (e.g., equation solving, visual search), and are broadly consistent with what is known about the architecture of the neural systems that support reading.