50 research outputs found

    Attention allocation in complementary joint action: How joint goals affect spatial orienting

    When acting jointly, individuals often attend and respond to the same object or spatial location in complementary ways (e.g., when passing a mug, one person grasps its handle with a precision grip; the other receives it with a whole-hand grip). At the same time, the spatial relation between individuals’ actions affects attentional orienting: one is slower to attend and respond to locations another person previously acted upon than to alternate locations (“social inhibition of return”, social IOR). Achieving joint goals (e.g., passing a mug), however, often requires complementary return responses to a co-actor’s previous location. This raises the question of whether attentional orienting, and hence the social IOR, is affected by the (joint) goal our actions are directed at. The present study addresses this question. Participants responded to cued locations on a computer screen, taking turns with a virtual co-actor. They either pursued an individual goal or performed complementary actions with the co-actor in pursuit of a joint goal. Four experiments showed that the social IOR was significantly modulated when the participant and the co-actor pursued a joint goal. This suggests that attentional orienting is affected not only by the spatial but also by the social relation between two agents’ actions. Our findings thus extend research on interpersonal perception-action effects, showing that the way another agent’s perceived action shapes our own depends on whether we share a joint goal with that agent.
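
    To make the measure concrete, the social IOR can be scored as the reaction-time cost of responding to the co-actor's previous location relative to a different location, separately per goal condition. The following is a minimal sketch, not the authors' analysis code; the file and column names (goal, location, rt) are assumptions:

        import pandas as pd

        # Hypothetical trial log: one row per response; "location" codes
        # whether the target appeared at the co-actor's previous location
        # ("same") or elsewhere ("different").
        trials = pd.read_csv("ior_trials.csv")

        mean_rt = trials.groupby(["goal", "location"])["rt"].mean().unstack()
        mean_rt["social_ior"] = mean_rt["same"] - mean_rt["different"]
        print(mean_rt)  # positive scores indicate social inhibition of return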

    The impact of joint attention on the sound-induced flash illusions

    Humans coordinate their focus of attention with others, either by gaze following or by prior agreement. Though the effects of joint attention on perceptual and cognitive processing tend to be examined in purely visual environments, they should also show up in multisensory settings. According to a prevalent hypothesis, joint attention enhances visual information encoding and processing, over and above individual attention. If two individuals jointly attend to the visual components of an audiovisual event, this should affect the weighting of visual information during multisensory integration. We tested this prediction in this preregistered study using the well-documented sound-induced flash illusions, where the integration of an incongruent number of visual flashes and auditory beeps results in a single flash being seen as two (fission illusion) and two flashes as one (fusion illusion). Participants were asked to count flashes either alone or together, and were expected to be less prone to both fission and fusion illusions when they jointly attended to the visual targets. However, illusions were as frequent when people attended to the flashes alone as when they attended with someone else, even though they responded faster during joint attention. Our results reveal the limitations of the theory that joint attention enhances visual processing, as it does not affect temporal audiovisual integration.
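
    As an illustration of how fission and fusion trials can be scored from trial-level data, here is a minimal sketch; the column names and file are hypothetical, not the preregistered analysis:

        import pandas as pd

        # Hypothetical trial log with the number of flashes and beeps
        # presented, the reported flash count, and the attention
        # condition ("alone"/"joint").
        trials = pd.read_csv("sifi_trials.csv")

        fission = (trials.flashes == 1) & (trials.beeps == 2)  # 1 flash seen as 2
        fusion = (trials.flashes == 2) & (trials.beeps == 1)   # 2 flashes seen as 1
        trials["illusion"] = ((fission & (trials.reported == 2)) |
                              (fusion & (trials.reported == 1)))

        # Illusion frequency per condition; grouping by "flashes"
        # separates fission (1) from fusion (2) trials.
        print(trials[fission | fusion]
              .groupby(["condition", "flashes"])["illusion"].mean())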

    Coordination effort in joint action is reflected in pupil size

    Humans often perform visual tasks together, and when doing so, they tend to devise division-of-labor strategies to share the load. Implementing such strategies, however, is effortful, as co-actors need to coordinate their actions. We tested whether pupil size, a physiological correlate of mental effort, can detect such coordination effort in a multiple object tracking (MOT) task. Participants performed the MOT task jointly with a computer partner and either devised a division-of-labor strategy (main experiment) or the division of labor was pre-determined (control experiment). We observed that pupil size increased relative to performing the MOT task alone in the main experiment, but not in the control experiment. These findings suggest that pupil size can detect a rise in coordination effort, extending the view that pupil size indexes mental effort across a wide range of cognitively demanding tasks.
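
    To make the pupillometric logic concrete, here is a minimal sketch, under assumed data shapes and sampling details, of baseline-correcting pupil traces and contrasting joint against solo tracking:

        import numpy as np

        def baseline_correct(trace, n_baseline=100):
            """Subtract the mean of the pre-trial baseline samples
            from a single 1-D pupil trace."""
            return trace - np.nanmean(trace[:n_baseline])

        def mean_dilation(traces):
            """Grand-mean pupil size change over an array of trials
            (shape: n_trials x n_samples)."""
            corrected = np.apply_along_axis(baseline_correct, 1, traces)
            return np.nanmean(corrected)

        # Coordination effort would show as larger dilation in the joint
        # than in the solo condition: mean_dilation(joint) > mean_dilation(solo)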

    The Social Situation Affects How We Process Feedback About Our Actions

    Humans achieve their goals in joint action tasks either by cooperation or by competition. In the present study, we investigated the neural processes underpinning the processing of errors and monetary rewards in such cooperative and competitive situations. We used electroencephalography (EEG) and analyzed event-related potentials (ERPs) triggered by feedback in both social situations. Twenty-six dyads performed a joint four-alternative forced choice (4AFC) visual task either cooperatively or competitively. At the end of each trial, participants received performance feedback about their individual and joint errors and the accompanying monetary rewards. The outcome, i.e., the resulting positive, negative, or neutral reward, depended on the pay-off matrix defining the social situation as either cooperative or competitive. We used linear mixed-effects models to analyze the feedback-related negativity (FRN) and the threshold-free cluster enhancement (TFCE) method to explore activations across all electrodes and time points. We found main effects of outcome and social situation, but no interaction, at midline frontal electrodes. The FRN was more negative for losses than for wins in both social situations. However, the FRN amplitudes differed between social situations. Moreover, we compared monetary with neutral outcomes in both social situations. Our exploratory TFCE analysis revealed that feedback processing differs between cooperative and competitive situations at right temporo-parietal electrodes, where the cooperative situation elicited more positive amplitudes. Further, the differences induced by the social situations were stronger in participants with higher scores on a perspective-taking test. In sum, our results replicate previous findings on the FRN and extend them by comparing neurophysiological responses to positive and negative outcomes in a task that simultaneously engages two participants in competitive and cooperative situations.
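
    For concreteness, a linear mixed-effects model of the kind described can be fit with statsmodels. This is a sketch under assumed column names and a simple random-intercept structure, not the authors' pipeline:

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format table: one row per trial with the FRN
        # amplitude and factors outcome (win/loss/neutral) and situation
        # (cooperative/competitive), plus a participant identifier.
        data = pd.read_csv("frn_amplitudes.csv")

        # Fixed effects: outcome x situation; random intercept per participant.
        model = smf.mixedlm("frn ~ outcome * situation", data,
                            groups=data["participant"])
        print(model.fit().summary())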

    When eyes beat lips: speaker gaze affects audiovisual integration in the McGurk illusion

    Eye contact is a dynamic social signal that captures attention and plays a critical role in human communication. In particular, direct gaze often accompanies communicative acts in an ostensive function: a speaker directs her gaze towards the addressee to highlight the fact that this message is being intentionally communicated to her. The addressee, in turn, integrates the speaker’s auditory and visual speech signals (i.e., her vocal sounds and lip movements) into a unitary percept. It is an open question whether the speaker’s gaze affects how the addressee integrates the speaker’s multisensory speech signals. We investigated this question using the classic McGurk illusion, an illusory percept created by presenting mismatching auditory (vocal sounds) and visual (lip movements) information. Specifically, we manipulated whether the speaker (a) moved his eyelids up or down (i.e., opened or closed his eyes) prior to speaking or showed no eye motion, and (b) spoke with open or closed eyes. When the speaker’s eyes moved (i.e., opened or closed) before an utterance, and when the speaker spoke with closed eyes, the McGurk illusion was weakened (i.e., addressees reported significantly fewer illusory percepts). In line with previous research, this suggests that the motion (opening or closing) as well as the closed state of the speaker’s eyes captured addressees’ attention, thereby reducing the influence of the speaker’s lip movements on the addressees’ audiovisual integration process. Our findings reaffirm the power of speaker gaze to guide attention, showing that its dynamics can modulate low-level processes such as the integration of multisensory speech signals.

    Let's Move It Together: A Review of Group Benefits in Joint Object Control

    In daily life, humans frequently engage in object-directed joint actions, be it carrying a table together or jointly pulling a rope. When two or more individuals control an object together, they may distribute control by performing complementary actions, e.g., when two people hold a table at opposite ends. Alternatively, several individuals may exercise control in a redundant manner by performing the same actions, e.g., when jointly pulling a rope in the same direction. Previous research has investigated whether dyads can outperform individuals in tasks where control is either distributed or redundant. The aim of the present review is to integrate findings for these two types of joint control in order to identify common principles and explain differing results. In sum, we find that when control is distributed, individuals tend to outperform dyads or attain similar performance levels. For redundant control, conversely, dyads have been shown to outperform individuals. We suggest that these differences can be explained by the possibility of freely dividing control: having the option to exercise control redundantly allows co-actors to coordinate individual contributions in line with their individual capabilities, enabling them to maximize the benefit of the skills available in the group. In contrast, this freedom to adopt and adapt customized coordination strategies is not available when the distribution of control is determined from the outset.

    Learning new sensorimotor contingencies: Effects of long-term use of sensory augmentation on the brain and conscious perception

    Theories of embodied cognition propose that perception is shaped both by sensory stimuli and by the actions of the organism. According to sensorimotor contingency theory, the mastery of lawful relations between one’s own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies through sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist. In a longitudinal study, participants trained with this belt for seven weeks in their natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning, as well as to increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and in brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.