48 research outputs found

    “Two Minds Don’t Blink Alike”: The Attentional Blink Does Not Occur in a Joint Context

    Typically, when two individuals perform a task together, each partner monitors the other partner's responses and goals to ensure that the task is completed efficiently. This monitoring is thought to involve a co-representation of the joint goals and task, as well as a simulation of the partner's performance. Evidence for such "co-representation" of goals and task, and "simulation" of responses, has come from numerous visual attention studies in which two participants complete different components of the same task. In the present research, an adaptation of the attentional blink task was used to determine whether co-representation could exert an influence over the associated attentional mechanisms. Participants completed a rapid serial visual presentation task in which they first identified a target letter (T1) and then detected the presence of the letter X (T2) presented one to seven letters after T1. In the individual condition, the same participant identified T1 and then detected T2. In the joint condition, one participant identified T1 and the other participant detected T2. Across two experiments, an attentional blink (decreased accuracy in detecting T2 when presented three letters after T1) was observed in the individual condition, but not in the joint condition. A joint attentional blink may not emerge because the co-representation mechanisms that enable joint action exert a stronger influence at information-processing stages that do not overlap with those that lead to the attentional blink.

    Probing the time course of facilitation and inhibition in gaze cueing of attention in an upper-limb reaching task

    Previous work has revealed that social cues, such as gaze and pointed fingers, can lead to a shift in the focus of another person’s attention. Research investigating the mechanisms of these shifts of attention has typically employed detection or localization button-pressing tasks. Because in-depth analyses of the spatiotemporal characteristics of aiming movements can provide additional insights into the dynamics of the processing of stimuli, in the present study we used a reaching paradigm to further explore the processing of social cues. In Experiments 1 and 2, participants aimed to a left or right location after a nonpredictive eye gaze cue toward one of these target locations. Seven stimulus onset asynchronies (SOAs), from 100 to 2,400 ms, were used. Both the temporal (reaction time, RT) and spatial (initial movement angle, IMA) characteristics of the movements were analyzed. RTs were shorter for cued (gazed-at) than for uncued targets across most SOAs. There were, however, no statistical differences in IMAs between movements to cued and uncued targets, suggesting that action planning was not affected by the gaze cue. In Experiment 3, the social cue was a finger pointing to one of the two target locations. Finger-pointing cues generated significant cueing effects in both RTs and IMAs. Overall, these results indicate that eye gaze and finger-pointing social cues are processed differently. Perception–action coupling (i.e., a tight link between the response and the social cue that is presented) might play roles in both the generation of action and the deviation of trajectories toward cued and uncued targets.

    Grasping the concept of personal property.

    The concept of property is integral to personal and societal development, yet understanding of the cognitive basis of ownership is limited. Objects are the most basic form of property, so our physical interactions with owned objects may elucidate nuanced aspects of ownership. We gave participants a coffee mug to decorate, use, and keep. The experimenter also designed a mug of her own. In Experiment 1, participants performed natural lifting actions with each mug. Participants lifted the experimenter's mug with greater care and moved it slightly more towards the experimenter, while they lifted their own mug more forcefully and drew it closer to their own body. In Experiment 2, participants responded to stimuli presented on the mug handles in a computer-based stimulus-response compatibility task. Overall, participants were faster to respond in trials in which the handles were facing in the same direction as the response location compared to when the handles were facing away. The compatibility effect was abolished, however, for the experimenter's mug, as if the action system is blind to the potential for action towards another person's property. These findings demonstrate that knowledge of the ownership status of objects influences visuomotor processing in subtle and revealing ways.

    It goes with the territory: Ownership across spatial boundaries.

    Previous studies have shown that people are faster to process objects that they own as compared with objects that other people own. Yet object ownership is embedded within a social environment that has distinct and sometimes competing rules for interaction. Here we ask whether ownership of space can act as a filter through which we process what belongs to us. Can a sense of territory modulate the well-established benefits in information processing that owned objects enjoy? In 4 experiments, participants categorized their own or another person’s objects that appeared in territories assigned either to themselves or to another. We consistently found that faster processing of self-owned than other-owned objects emerged only for objects appearing in the self-territory, with no such advantage in other territories. We propose that knowing whom spaces belong to may serve to define the space in which affordances resulting from ownership lead to facilitated processing.

    Relevant for us? We-prioritization in cognitive processing

    Humans are social by nature. We ask whether this social nature operates as a lens through which individuals process the world even in the absence of immediate interactions or explicit goals to collaborate. Is information that is potentially relevant to a group one belongs to (“We”) processed with priority over information potentially relevant to a group one does not belong to (“They”)? We conducted three experiments using a modified version of Sui, He, and Humphreys’ (2012) shape–label matching task. Participants were assigned to groups either via a common preference between assigned team members (Experiment 1) or arbitrarily (Experiment 2). In a third experiment, only personal pronouns were used. Overall, a processing benefit for we-related information (we-prioritization) occurred regardless of the type of group induction. A final experiment demonstrated that we-prioritization did not extend to other individual members of a short-term transitory group. We suggest that the results reflect an intrinsic predisposition to process information “relevant for us” with priority, which might feed into optimizing collaborative processes.

    Do you see what I see? Co-actor posture modulates visual processing in joint tasks

    Interacting with other people is a ubiquitous part of daily life. A complex set of processes enables our successful interactions with others. The present research was conducted to investigate how the processing of visual stimuli may be affected by the presence and the hand posture of a co-actor. Experiments conducted with participants acting alone have revealed that the distance from the stimulus to the hand of a participant can alter visual processing. In the main experiment of the present paper, we asked whether this posture-related source of visual bias persists when participants share the task with another person. The effect of personal and co-actor hand proximity on visual processing was assessed through object-specific benefits to visual recognition in a task performed by two co-actors. Pairs of participants completed a joint visual recognition task and, across different blocks of trials, the position of their own hands and of their partner's hands varied relative to the stimuli. In contrast to control studies conducted with participants acting alone, an object-specific recognition benefit was found across all hand location conditions. These data suggest that visual processing is, in some cases, sensitive to the posture of a co-actor.

    Eye movements may cause motor contagion effects

    When a person executes a movement, that movement is more errorful if the person simultaneously observes another person's actions that are incongruent, rather than congruent, with the executed action. This effect is known as “motor contagion”. Accounts of this effect are often grounded in simulation mechanisms: increased movement error emerges because the motor codes associated with observed actions compete with the motor codes of the goal action. It is also possible, however, that the increased movement error is linked to eye movements that are executed simultaneously with the hand movement, because the oculomotor and manual-motor systems are highly interconnected. In the present study, participants performed a motor contagion task in which they executed horizontal arm movements while observing a model making either vertical (incongruent) or horizontal (congruent) movements under three conditions: no instruction, maintain central fixation, or track the model’s hand with the eyes. A significant motor contagion-like effect was found only in the ‘track’ condition. Thus, ‘motor contagion’ in the present task may be an artifact of simultaneously executed incongruent eye movements. These data are discussed in the context of simulation and associative learning theories, and raise eye movements as a critical methodological consideration for future work on motor contagion.

    Enhancing surgical performance in cardiothoracic surgery with innovations from computer vision and artificial intelligence: a narrative review

    When technical requirements are high, and patient outcomes are critical, opportunities for monitoring and improving surgical skills via objective motion analysis feedback may be particularly beneficial. This narrative review synthesises work on technical and non-technical surgical skills, collaborative task performance, and pose estimation to illustrate new opportunities to advance cardiothoracic surgical performance with innovations from computer vision and artificial intelligence. These technological innovations are critically evaluated in terms of the benefits they could offer the cardiothoracic surgical community, and any barriers to the uptake of the technology are elaborated upon. Like some other specialities, cardiothoracic surgery has relatively few opportunities to benefit from tools with data capture technology embedded within them (as is possible with robotic-assisted laparoscopic surgery, for example). In such cases, pose estimation techniques that allow for movement tracking across a conventional operating field without using specialist equipment or markers offer considerable potential. With video data from either simulated or real surgical procedures, these tools can (1) provide insight into the development of expertise and surgical performance over a surgeon’s career, (2) provide feedback to trainee surgeons regarding areas for improvement, and (3) provide the opportunity to investigate what aspects of skill may be linked to patient outcomes, which can in turn (4) inform which aspects of surgical skill should be focused on within training or mentoring programmes. Classifier or assessment algorithms that use artificial intelligence to ‘learn’ what expertise is from expert surgical evaluators could further assist educators in determining if trainees meet competency thresholds. With collaborative efforts between surgical teams, medical institutions, computer scientists and researchers to ensure this technology is developed with usability and ethics in mind, the resulting feedback tools could improve cardiothoracic surgical practice in a data-driven way.

    It is not in the details: Self-related shapes are rapidly classified but their features are not better remembered

    Self-prioritization is a robust phenomenon whereby judgments concerning self-representational stimuli are faster than judgments toward other stimuli. The present paper examines whether and how self-prioritization produces more vivid short-term memories for self-related objects, by giving geometric shapes arbitrary identities (self, mother, stranger). In Experiment 1, participants were presented with an array of the three shapes and required to retain the location and color of each in memory. Participants were then probed regarding the identity of one of the shapes and subsequently asked to indicate the color of the probed shape, or of an unprobed shape, on a color wheel. Results indicated no benefit for self-stimuli in either response time for the identification probe or color fidelity in memory. Yet a cuing benefit was observed, such that the stimulus cued in the identity probe did have higher fidelity within memory. Experiments 2 and 3 reduced the cognitive load by requiring participants to process the identity and color of only one shape at a time. For Experiment 2, the identity probe was memory-based, whereas for Experiment 3 the stimulus was presented alongside the identity probe. Results demonstrated a robust self-prioritization effect: self-related shapes were classified faster than non-self shapes, but this self-advantage did not lead to an increase in the fidelity of memory for the colors of self-related shapes. Overall, these results suggest that self-prioritization effects may be restricted to an improvement in the ability to recognize that a self-representational stimulus is present, without devoting more perceptual and short-term memory resources to such stimuli.

    Self-bias effect: movement initiation to self-owned property is speeded for both approach and avoidance actions

    Recall of, and physical interaction with, self-owned items are privileged over items owned by other people (Constable et al. in Cognition 119(3):430–437, 2011; Cunningham et al. in Conscious Cognit 17(1):312–318, 2008). Here, we investigate approach movements (towards the item), compared with avoidance movements (away from the item), to images of self- and experimenter-owned items. We asked if the initiation time and movement duration of button-press approach responses to self-owned items are associated with a systematic self-bias (overall faster responses) compared with avoidance movements, similar to findings of paradigms investigating affective evaluation of (unowned) items. Participants were gifted mugs to use, and after a few days they completed an approach–avoidance task (Chen and Bargh in Pers Soc Psychol Bull 25(2):215–224, 1999; Seibt et al. in J Exp Soc Psychol 44:713–720, 2008; Truong et al. in J Exp Psychol Hum Percept Perform 42(3):375–385, 2016) to images of their own or the experimenter’s mug, using either congruent or incongruent movement-direction mappings. There was a self-bias effect for initiation time to the self-owned mug, for both congruent and incongruent mappings, and for movement duration in the congruent mapping. The effect was abolished in Experiment 2, when participants responded based on a shape on the handle rather than mug ownership. We speculate that ownership status requires conscious processing to modulate responses. Moreover, ownership status judgements and affective evaluation may employ different mechanisms.