21 research outputs found
Social Binding: Processing of Social Interactions in Visual Search, Working Memory and Longer-Term Memory.
The binding of features into perceptual wholes is a well-established phenomenon, which has previously only been studied in the context of early vision and low-level features, such as color or proximity. This thesis investigates the hypothesis that a similar binding process, based on higher level information, could bind people into interacting groups, facilitating faster processing and enhanced memory of social situations. To investigate this possibility, a series of different experimental approaches explores grouping effects in displays involving interacting people. Experiments 1 & 2 use a visual search task and demonstrate more rapid processing for interacting (versus non-interacting) pairs in an odd-quadrant paradigm. Experiments 3 & 4, using a spatial judgment task, show that interacting individuals are remembered as physically closer than non-interacting individuals while retrieval times are decreased for interacting pairs. Experiments 5, 6 & 7 show that memory retention of group-relevant and irrelevant features is enhanced when recalling interacting partners in a surprise memory task. But such retrieval is disrupted when features are misattributed between interacting partners. Finally, Experiments 8, 9 & 10 further investigate the involvement of higher level cognitive processes in these effects. The observed results are consistent with the social binding hypothesis, and alternative explanations based on low level perceptual features and attentional cueing effects are ruled out. This thesis concludes that automatic mid-level grouping processes bind individuals into groups on the basis of their perceived interaction. Such Social Binding could provide the basis for more sophisticated social processing. 
Identifying the automatic encoding of social interactions in visual search, distortions of spatial working memory, and facilitated retrieval of object properties from longer-term memory opens new approaches to studying social cognition, with possible practical applications.
Why are social interactions found quickly in visual search tasks?
When asked to find a target dyad amongst non-interacting individuals, participants respond faster when the individuals in the target dyad are shown face-to-face (suggestive of a social interaction) than when they are presented back-to-back. Face-to-face dyads may be found faster because social interactions recruit specialized processing. However, human faces and bodies are salient directional cues that exert a strong influence on how observers distribute their attention. Here we report that a similar search advantage exists for "point-to-point" and "point-to-face" target arrangements constructed using arrows, a non-social directional cue. These findings indicate that the search advantage seen for face-to-face dyads is a product of the directional cues present within the arrangements, not the fact that they are processed as social interactions per se. One possibility is that, when arranged in the face-to-face or point-to-point configuration, pairs of directional cues (faces, bodies, arrows) create an attentional "hot-spot": a region of space between the elements to which attention is directed by multiple cues. Due to the presence of this hot-spot, observers' attention may be drawn to the target location earlier in a serial visual search.
Visual search for facing and non-facing people: the effect of actor inversion
In recent years, there has been growing interest in how human observers perceive, attend to, and recall social interactions viewed from third-person perspectives. One of the interesting findings to emerge from this new literature is the search advantage for facing dyads. When hidden amongst pairs of individuals facing in the same direction, pairs of individuals arranged front-to-front are found faster in visual search tasks than pairs of individuals arranged back-to-back. Interestingly, the search advantage for facing dyads appears to be sensitive to the orientation of the people depicted. While front-to-front target pairs are found faster than back-to-back targets when target and distractor pairings are shown upright, front-to-front and back-to-back targets are found equally quickly when pairings are shown upside-down. In the present study, we sought to better understand why the search advantage for facing dyads is sensitive to the orientation of the people depicted. To begin, we show that the orientation sensitivity of the search advantage is seen with dyads constructed from faces only, and from bodies with the head and face occluded. We replicate these effects using two different visual search paradigms. We go on to show that individual faces and bodies, viewed in profile, produce strong attentional cueing effects when shown upright, but not when presented upside-down. Together with recent evidence that arrows arranged front-to-front also produce the search advantage for facing dyads, these findings support the view that the search advantage is a by-product of the ability of constituent elements to direct observers' visuo-spatial attention.
Sensitivity to orientation is not unique to social attention cueing
Abstract: It is well-established that faces and bodies cue observers' visuospatial attention; for example, target items are found faster when their location is cued by the directionality of a task-irrelevant face or body. Previous results suggest that these cueing effects are greatly reduced when the orientation of the task-irrelevant stimulus is inverted. It remains unclear, however, whether sensitivity to orientation is a unique hallmark of "social" attention cueing or a more general phenomenon. In the present study, we sought to determine whether the cueing effects produced by common objects (power drills, desk lamps, desk fans, cameras, bicycles, and cars) are also attenuated by inversion. When cueing stimuli were shown upright, all six object classes produced highly significant cueing effects. When shown upside-down, however, the results were mixed. Some of the cueing effects (e.g., those induced by bicycles and cameras) behaved like faces and bodies: they were greatly reduced by orientation inversion. However, other cueing effects (e.g., those induced by cars and power drills) were insensitive to orientation: upright and inverted exemplars produced significant cueing effects of comparable strength. We speculate that (i) cueing effects depend on the rapid identification of stimulus directionality, and (ii) some cueing effects are sensitive to orientation because upright exemplars of those categories afford faster processing of directionality than inverted exemplars. Contrary to the view that attenuation-by-inversion is a unique hallmark of social attention, our findings indicate that some non-social cueing effects also exhibit sensitivity to orientation.
Objects that direct visuospatial attention produce the search advantage for facing dyads
When hidden amongst pairs of individuals facing in the same direction, pairs of individuals arranged front-to-front are found faster in visual search tasks than pairs of individuals arranged back-to-back. Two rival explanations have been advanced to explain this search advantage for facing dyads. According to one account, the search advantage reflects the fact that front-to-front targets engage domain-specific social interaction processing that helps stimuli compete more effectively for limited attentional resources. Another view is that the effect is a by-product of the ability of individual heads and bodies to direct observers' visuospatial attention. Here, we describe a two-part investigation that sought to test these accounts. First, we found that it is possible to replicate the search advantage with non-social objects. Next, we employed a cueing paradigm to investigate whether it is the ability of individual items to direct observers' visuospatial attention that determines if an object category produces the search advantage for facing dyads. We found that the strength of the cueing effect produced by an object category correlated closely with the strength of the search advantage produced by that object category. Taken together, these results provide strong support for the directional cueing account.
Searching for people: non-facing distractor pairs hinder the visual search of social scenes more than facing distractor pairs
There is growing interest in the visual and attentional processes recruited when human observers view social scenes containing multiple people. Findings from visual search paradigms have helped shape this emerging literature. Previous research has established that, when hidden amongst pairs of individuals facing in the same direction (leftwards or rightwards), pairs of individuals arranged front-to-front are found faster than pairs of individuals arranged back-to-back. Here, we describe a second, closely related effect with important theoretical implications. When searching for a pair of individuals facing in the same direction (leftwards or rightwards), target dyads are found faster when hidden amongst distractor pairs arranged front-to-front than when hidden amongst distractor pairs arranged back-to-back. This distractor arrangement effect was also obtained with target and distractor pairs constructed from arrows and types of common objects that cue visuospatial attention. These findings argue against the view that pairs of people arranged front-to-front capture exogenous attention due to a domain-specific orienting mechanism. Rather, it appears that salient direction cues (e.g., gaze direction, body orientation, arrows) hamper systematic search and impede efficient interpretation when distractor pairs are arranged back-to-back.
Contextual modulation of appearance-trait learning
When we encounter a stranger for the first time, we spontaneously attribute to them a wide variety of character traits based on their facial appearance. There is increasing consensus that learning plays a key role in these first impressions. According to the Trait Inference Mapping (TIM) model, first impressions are the products of mappings between "face space" and "trait space" acquired through domain-general associative processes. Drawing on the associative learning literature, TIM predicts that first-learned associations between facial appearance and character will be particularly influential: they will be difficult to unlearn and will be more likely to generalise to novel contexts than appearance-trait associations acquired subsequently. The study of face-trait learning de novo is complicated by the fact that participants, even young children, already have extensive experience with faces before they enter the lab. This renders the study of first-learned associations from faces intractable. Here, we overcome this problem by using Greebles, a class of novel synthetic objects about which participants had no previous knowledge or preconceptions, as a proxy for faces. In four experiments (total N = 640) with adult participants we adapt classic AB-A and AB-C renewal paradigms to study appearance-trait learning. Our results indicate that appearance-trait associations are subject to contextual control, and are resistant to counter-stereotypical experience.
Rapid detection of social interactions is the result of domain general attentional processes
Using visual search displays of interacting and non-interacting pairs, it has been demonstrated that detection of social interactions is facilitated. For example, two people facing each other are found faster than two people with their backs turned: an effect that may reflect social binding. However, recent work has shown the same effects with non-social arrow stimuli, where arrows facing towards each other are detected faster than arrows facing away from each other. This latter work suggests that a primary mechanism is an attention-orienting process driven by basic low-level direction cues. However, evidence for lower-level attentional processes does not preclude a potential additional role of higher-level social processes. Therefore, in this series of experiments we test this idea further by directly comparing basic visual features that orient attention with representations of socially interacting individuals. Results confirm the potency of orienting of attention via low-level visual features in the detection of interacting objects. In contrast, there is little evidence for the representation of social interactions influencing initial search performance.
Bound Together: Social binding leads to faster processing, spatial distortion and enhanced memory of interacting partners.
The binding of features into perceptual wholes is a well-established phenomenon, which has previously only been studied in the context of early vision and low-level features, such as colour or proximity. We hypothesised that a similar binding process, based on higher level information, could bind people into interacting groups, facilitating faster processing and enhanced memory of social situations. To investigate this possibility, we used three experimental approaches to explore grouping effects in displays involving interacting people. First, using a visual search task, we demonstrate more rapid processing for interacting (versus non-interacting) pairs in an odd-quadrant paradigm (Experiments 1a & 1b). Second, using a spatial judgment task, we show that interacting individuals are remembered as physically closer than are non-interacting individuals (Experiments 2a & 2b). Finally, we show that memory retention of group-relevant and irrelevant features is enhanced when recalling interacting partners in a surprise memory task (Experiments 3a & 3b). Each of these results is consistent with the social binding hypothesis, and alternative explanations based on low-level perceptual features and attentional effects are ruled out. We conclude that automatic mid-level grouping processes bind individuals into groups on the basis of their perceived interaction. Such social binding could provide the basis for more sophisticated social processing. Identifying the automatic encoding of social interactions in visual search, distortions of spatial working memory, and facilitated retrieval of object properties from longer-term memory opens new approaches to studying social cognition, with possible practical applications.