Collaborating in spatial tasks: Partners adapt the perspective of their descriptions, coordination strategies, and memory representations.
The partner's viewpoint influences spatial descriptions and, when
strongly emphasized, spatial memories as well. We examined whether partner-specific
information affects the representations people spontaneously construct,
the description strategies they spontaneously select, and the representations
their collaborating partner constructs based on these descriptions. Directors
described to a misaligned Matcher arrays learned while either knowing the
Matcher's viewpoint or not. Knowing the Matcher's viewpoint led to distinctive
processing in spatial judgments and a rotational bias in array drawings.
Directors' descriptions reflected strategic choices, suggesting that partners
considered each other's computational demands. Such strategies were effective
as reflected by the number of conversational turns partners took to coordinate.
Matchers represented both partners' viewpoints in memory, with the Directors'
descriptions predicting the facilitated perspective. Thus, partners behave
contingently in spatial tasks to optimize their coordination: the availability of
the partner's viewpoint influences one's memory and description strategies,
which in turn influence the partner's memory.
Attenuating information in spoken communication: For the speaker, or for the addressee?
Speakers tend to attenuate information that is predictable or repeated. To what extent is
this done automatically and egocentrically, because it is easiest for speakers themselves,
and to what extent is it driven by the informational needs of addressees? In 20 triads of
naive subjects, speakers told the same Road Runner cartoon story twice to one addressee
and once to another addressee, counterbalanced for order (Addressee1/Addressee1/
Addressee2 or Addressee1/Addressee2/Addressee1). Stories retold to the same (old)
addressees were attenuated compared to those retold to new addressees; this was true
for events mentioned, number of words, and amount of detail. Moreover, lexically identical
expressions by the same speaker were more intelligible to another group of listeners when
the expressions had been addressed to new addressees than when they had been
addressed to old addressees. We conclude that speakers' attenuating of information in
spontaneous discourse is driven at least in part by addressees. Such audience design is computationally
feasible when it can be guided by a "one-bit" model (my audience has heard
this before, or not).
Social and representational cues jointly influence spatial perspective-taking.
We examined how social cues (the conversational partner's viewpoint) and representational
ones (the intrinsic structure of a spatial layout) jointly shape people's spatial memory representations
and their subsequent descriptions. In 24 pairs, Directors studied an array with a symmetrical
structure while either knowing their Matcher's subsequent viewpoint or not. During the subsequent
description of the array, the array's intrinsic structure was aligned with the Director, the Matcher,
or neither partner. According to memory tests preceding descriptions, Directors who had studied
the array while aligned with its structure were more likely to use its orientation as an organizing
direction. Directors who had studied the array while misaligned with its structure used its orientation
more frequently as an organizing orientation when knowing that the Matcher would be
aligned with it, but used their own viewpoint more frequently as an organizing direction when not
knowing the Matcher's viewpoint. Directors also adapted their descriptions strategically, using
more egocentric expressions when aligned with the intrinsic structure and more partner-centered
expressions when their Matchers were the ones aligned with the structure, even when this information
wasn't available in advance. These findings suggest that speakers are guided by converging
social and representational cues to flexibly adapt the organization of their memories and the
perspectives of their descriptions.
Speakers adapt gestures to addressees' knowledge: Implications for models of co-speech gesture.
Are gesturing and speaking shaped by similar communicative constraints? In an experiment, we teased apart
communicative from cognitive constraints upon multiple dimensions of speech-accompanying gestures in spontaneous
dialogue. Typically, speakers attenuate old, repeated or predictable information but not new information. Our
study distinguished what was new or old for speakers from what was new or old for (and shared with) addressees. In
20 groups of 3 naive participants, speakers retold the same Road Runner cartoon story twice to one addressee and
once to another. We compared the distribution of gesture types, and the gestures' size and iconic precision across
retellings. Speakers gestured less frequently in stories retold to Old Addressees than New Addressees. Moreover, the
gestures they produced in stories retold to Old Addressees were smaller and less precise than those retold to New
Addressees, although these were attenuated over time as well. Consistent with our previous findings about speaking,
gesturing is guided by both speaker-based (cognitive) and addressee-based (communicative) constraints that affect
both planning and motoric execution. We discuss the implications for models of co-speech gesture production.
What's so difficult with adopting imagined perspectives?
Research on spatial cognition suggests that
transformation processes and/or spatial conflicts may
influence performance on mental perspective-taking tasks.
However, conflicting findings have complicated our
understanding of the processes involved in perspective-taking,
particularly those giving rise to angular disparity
effects, whereby performance worsens as the imagined
perspective adopted deviates from one's actual perspective.
Based on data from experiments involving mental perspective-taking
in immediate and remote spatial layouts,
we propose here a novel account for the difficulty with
perspective-taking. According to this account, the main
difficulty lies in maintaining an imagined perspective in
working memory, especially in the presence of salient
sensorimotor information.
The protagonist's first perspective influences the encoding of spatial information in narratives
Three experiments examined the first-perspective alignment effect that is observed when retrieving spatial
information from memory about described environments. Participants read narratives that described the
viewpoint of a protagonist in fictitious environments and then pointed to the memorized locations of
described objects from imagined perspectives. Results from Experiments 1 and 2 showed that performance
was best when participants responded from the protagonist's first perspective even though object
locations were described from a different perspective. In Experiment 3, in which participants were physically
oriented with the perspective used to describe object locations, performance from that description
perspective was better than that from the protagonist's first perspective, which was, in turn, better than
performance from other perspectives. These findings suggest that when reading narratives, people
default to using a reference frame that is aligned with their own facing direction, although physical
movement may facilitate retrieval from other perspectives.
Integration of spatial relations across perceptual experiences.
People often carry out tasks that entail coordinating spatial
information encoded in temporally and/or spatially distinct perceptual
experiences. Much research has been conducted to determine whether such
spatial information is integrated into a single spatial representation or whether it
is kept in separate representations that can be related at the time of retrieval.
Here, we review the existing literature on the integration of spatial information
and present results from a new experiment aimed at examining whether
locations encoded from different perspectives in the same physical
environments are integrated into a single spatial representation. Overall, our
findings, coupled with those from other studies, suggest that separate spatial
representations are maintained in memory.
When gestures show us the way: Co-speech gestures selectively facilitate navigation and spatial memory.
How does gesturing during route learning relate to subsequent
spatial performance? We examined the relationship
between gestures produced spontaneously while studying
route directions and spatial representations of the navigated
environment. Participants studied route directions, then navigated
those routes from memory in a virtual environment, and
finally had their memory of the environment assessed. We
found that, for navigators with low spatial perspective-taking
performance on the Spatial Orientation Test, more gesturing
from a survey perspective predicted more accurate memory
following navigation. Thus, co-thought gestures accompanying
route learning relate to performance selectively, depending on
the gesturers' spatial ability and the perspective of their gestures.
Survey gestures may help some individuals visualize an
overall route that they can retain in memory.
The conversational partner's perspective affects spatial memory and descriptions.
We examined whether people spontaneously represent the partner's viewpoint in spatial
memory when it is available in advance and whether they adapt their spontaneous
descriptions accordingly. In 18 pairs, Directors studied arrays of objects while: (1) not
knowing about having to describe the array to a Matcher, (2) knowing about the subsequent
description, and (3) knowing the Matcher's subsequent viewpoint, which was offset
by 90°, 135°, or 180°. In memory tests preceding descriptions, Directors represented the
Matcher's viewpoint when it was known during study, taking longer to imagine orienting
to perspectives aligned with it and rotating their drawings of arrays toward it. Conversely,
when Directors didn't know their Matcher's viewpoint, they encoded arrays egocentrically,
being faster to imagine orienting to and to respond from perspectives aligned with their
own. Directors adapted their descriptions flexibly, using partner-centered spatial expressions
more frequently when misaligned by 90° and egocentric ones when misaligned by
135°. Knowing their misalignment in advance helped partners recognize when descriptions
would be most difficult for Directors (at 135°) and to mutually agree on using their
perspective. Thus, in collaborative tasks, people don't rely exclusively on their spatial
memory but also use other pertinent perceptual information (e.g., their misalignment from
their partner) to assess the computational demands on each partner and select strategies
that maximize the efficiency of communication.
An Evaluation of Current Methodological Approaches for Game-Based Health Interventions
In this repository, we include the supplementary materials from a review in which we evaluate the extent to which recent health-based game intervention studies have improved in response to past criticisms around the game development process, its theoretical grounding, and its implementation in terms of research design. Twenty-six published articles were reviewed for their theoretical grounding, their use of game mechanics, and their methodologies for the development and implementation of game-based interventions. Supplementary materials contain our coding documents, reliability coding, and the data collected from the articles for our analysis. All studies in this sample included more than one game mechanic, most studies grounded their interventions in psychological theory, and studies frequently used quantitative methods to determine intervention impact. In line with recommendations, the majority of studies used large sample sizes and applied their interventions in real-world settings. Despite these improvements, we identified areas for growth: future studies still need to utilize interdisciplinary teams and user-centered, iterative approaches, and to standardize their reporting of intervention design components. We hope this review helps to inform the future of applied game design in health contexts.