12 research outputs found

    Multimodal integration of spatial information: The influence of object-related factors and self-reported strategies

    Spatial representations result from the integration of multisensory information. Recent findings suggest that multisensory processing of a scene can be facilitated when it is paired with a semantically congruent auditory signal. This congruency effect was taken as evidence that audio-visual integration occurs for complex scenes. Since navigation through our environment involves the seamless integration of complex scenery, a fundamental question arises: how is human landmark-based wayfinding affected by multimodality? To address this question, two experiments were conducted in a virtual environment. The first experiment compared wayfinding and landmark recognition performance for unimodal visual versus unimodal acoustic landmarks. The second experiment focused on the congruency of multimodal landmark combinations and additionally assessed subjects' self-reported strategies (i.e., whether they focused on direction sequences or on landmarks). We demonstrate (1) the equivalence of acoustic and visual landmarks and (2) a congruency effect for the recognition of landmarks. Additionally, the results indicate that self-reported strategies play a role and remain an under-investigated topic in human landmark-based wayfinding.

    Mean response accuracy of the landmark recognition task in Experiment 2.

    Please note that pooled data points are depicted for the experimental group, who conducted a role adoption task. The error bars represent the SEM.
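
    For reference, the standard error of the mean shown by the error bars is conventionally computed from the sample standard deviation $s$ and the number of subjects $n$ as $\mathrm{SEM} = s/\sqrt{n}$; the paper's exact computation is not quoted here.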

    A rat in the sewer: How mental imagery interacts with object recognition

    The role of mental imagery has puzzled researchers for more than two millennia, and both positive and negative effects of mental imagery on information processing have been discussed. The aim of this work was to examine how mental imagery affects object recognition and associative learning. Based on different perceptual and cognitive accounts, we tested our imagery-induced interaction hypothesis in a series of two experiments. According to this hypothesis, mental imagery leads to (1) superior performance in object recognition and associative learning when the objects are semantically imagery-congruent and (2) inferior performance when the objects are imagery-incongruent. In the first experiment, we used a static environment and tested associative learning. In the second experiment, subjects encoded object information in a dynamic environment, a virtual sewer system. Our results demonstrate that subjects who received a role adoption task (by means of guided mental imagery) performed better when imagery-congruent objects were used and worse when imagery-incongruent objects were used. Finally, we discuss our findings with respect to alternative accounts and argue for a multi-methodological approach in future research to settle this issue.

    The perceptual cycle model.

    Adapted from Neisser, 1976.

    How spatial coding is affected by mid-level visual object properties within and outside of peripersonal space.


    Learning (top) and testing phase (bottom) of Experiment 1.

    Landmarks (LM) were presented in combination with an arrow pointing left, straight, or right. During testing, landmarks were presented as a cue and subjects indicated the correct direction by pressing the corresponding arrow key.

    Learning and testing procedures of Experiment 2 with an exemplary landmark (rooster; user nessmoon at Morguefile.com).

    Top: subjects were guided through a virtual environment and were instructed to learn a route. Landmarks were placed at each intersection. Bottom: subjects had to indicate whether or not they had seen the landmark in the environment.

    How many factors are there in vision?

    Common factors are ubiquitous in cognition, audition, and somatosensation. Surprisingly, there seems to be no common factor in vision, i.e., visual tasks correlate only weakly with each other. Here, we show that there are likely many highly specific factors in vision. First, we have previously shown that the Ebbinghaus illusion correlates very little with other spatial illusions (except the Ponzo illusion). Here, we measured illusion strength in the classic Ebbinghaus illusion with disks and compared it with the illusion magnitude of 18 versions of the Ebbinghaus illusion, for example, versions with squares rather than disks or with moving rather than static disks. In addition, we asked observers to compare the sizes of two single disks. All versions of the Ebbinghaus illusion correlated strongly with each other but not with the single-disk comparison task, even though the tasks are the same. Second, we previously presented 10 illusions, each with 4 different luminance conditions, including 2 iso-luminant ones. For all 10 illusions, the luminance conditions correlated highly with each other. Here, we re-analyzed the data and found almost no correlations between the 10 illusions, except for the vertical and horizontal bisection illusions, which correlated strongly with each other under all 4 luminance conditions. The two bisection illusions likely involve different neurons, sensitive to vertical and horizontal orientations, respectively. Hence, variations of an illusion correlate highly with each other; however, each illusion seems to constitute a factor of its own.
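
    As a rough illustration of the correlational logic described above (not the authors' actual analysis pipeline), the following Python sketch simulates per-observer illusion-magnitude scores in which three hypothetical Ebbinghaus variants share one latent factor while a bisection score does not, and then computes the pairwise Pearson correlation matrix. All variable names and data are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_observers = 50

        # One latent "Ebbinghaus factor" drives three illusion variants;
        # the bisection score is independent of it (illustrative data only).
        latent = rng.normal(size=n_observers)
        ebbinghaus_disks   = latent + 0.3 * rng.normal(size=n_observers)
        ebbinghaus_squares = latent + 0.3 * rng.normal(size=n_observers)
        ebbinghaus_moving  = latent + 0.3 * rng.normal(size=n_observers)
        bisection_vertical = rng.normal(size=n_observers)

        # Rows = observers, columns = measures.
        scores = np.column_stack([ebbinghaus_disks, ebbinghaus_squares,
                                  ebbinghaus_moving, bisection_vertical])

        # Pairwise Pearson correlations; rowvar=False treats columns as variables.
        print(np.round(np.corrcoef(scores, rowvar=False), 2))

    Under these assumptions, the three Ebbinghaus columns correlate strongly with one another while the bisection column does not, mirroring the pattern the abstract reports: variants of one illusion form a factor, but different illusions do not share one.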