8 research outputs found

    Stepping Into a Map: Initial Heading Direction Influences Spatial Memory Flexibility

    Learning a novel environment involves integrating first-person perceptual and motoric experiences with developing knowledge about the overall structure of the surroundings. The present experiments provide insights into the parallel development of these egocentric and allocentric memories by intentionally placing body- and world-centered frames of reference in conflict during learning and measuring outcomes via online and offline measures. Results of two experiments demonstrate faster learning and increased memory flexibility following route-perspective reading (Experiment 1) and virtual navigation (Experiment 2) when participants begin exploring the environment on a northward (vs. any other) allocentric heading. We suggest that learning advantages due to aligning body-centered (left/right/forward/back) with world-centered (NSEW) reference frames are indicative of three features of spatial memory development and representation. First, memories for egocentric and allocentric information develop in parallel during novel environment learning. Second, cognitive maps have a preferred orientation relative to world-centered coordinates. Finally, this preferred orientation corresponds to the traditional orientation of physical maps (i.e., north is upward), suggesting strong associations between daily perceptual and motor experiences and the manner in which we preferentially represent spatial knowledge.
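    To make the reference-frame alignment concrete, the following minimal Python sketch (not from the paper; the route and heading encoding are illustrative assumptions) maps the same egocentric movement sequence onto compass headings for different initial allocentric headings, showing why a northward start keeps body-centered "forward" aligned with the conventional north-up map orientation.

```python
# Illustrative sketch only: egocentric moves expressed as quarter-turns,
# traced in allocentric (compass) coordinates from a given starting heading.
COMPASS = ["N", "E", "S", "W"]                              # clockwise compass order
TURN = {"forward": 0, "right": 1, "back": 2, "left": -1}    # quarter-turns per move

def allocentric_trace(initial_heading, egocentric_moves):
    """Return the compass heading faced after each egocentric move."""
    idx = COMPASS.index(initial_heading)
    trace = []
    for move in egocentric_moves:
        idx = (idx + TURN[move]) % 4
        trace.append(COMPASS[idx])
    return trace

route = ["forward", "right", "forward", "left"]
print(allocentric_trace("N", route))  # ['N', 'E', 'E', 'N'] -- forward stays aligned with north-up
print(allocentric_trace("E", route))  # ['E', 'S', 'S', 'E'] -- same route, rotated on the map
```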

    Get in my belly: food preferences trigger approach and avoidant postural asymmetries.

    Appetitive motivational states are fundamental neural and behavioral mechanisms underlying healthy and abnormal eating behavior, though their dynamic influence on food-related behavior is unknown. The present study examined whether personal food-related preferences would activate approach and avoidance systems, modulating spontaneous postural sway toward and away from food items. Participants stood on a balance board that collected real-time data regarding postural sway along two axes (x, y) while they viewed a series of images depicting food items varying in nutritional value and individual preferences. Overall, participants showed reliable postural sway toward highly preferred and away from highly non-preferred items. This effect became more pronounced over time; sway along the mediolateral axis showed no reliable variation by preference. Results carry implications for two-factor (homeostatic versus hedonic) neurobehavioral theories of hunger and appetitive motivation, as well as applied clinical implications for the measurement and management of abnormal eating behavior.

    Anterior (forward) versus posterior (backward) postural sway as a function of food item preferences and time relative to image onset. Fixation (referenced to the 500 ms post-onset baseline) is provided for visual comparison.
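    As an illustration of the baseline-referencing described in the caption, the sketch below is an assumption-laden example rather than the authors' analysis code: the sampling rate, trial length, and array shapes are hypothetical. It references anterior-posterior center-of-pressure sway to the mean of a post-onset baseline window before averaging across trials.

```python
import numpy as np

FS_HZ = 100            # assumed balance-board sampling rate
BASELINE_MS = 500      # post-onset baseline window, per the figure caption

def baseline_referenced_sway(cop_y, fs_hz=FS_HZ, baseline_ms=BASELINE_MS):
    """cop_y: 1-D array of anterior-posterior COP samples for one trial,
    starting at image onset. Returns sway relative to the baseline mean,
    so positive values indicate forward (approach) lean."""
    n_baseline = int(fs_hz * baseline_ms / 1000)
    return cop_y - cop_y[:n_baseline].mean()

# Average the baseline-referenced time course across trials of one preference category.
trials = [np.random.randn(600) for _ in range(20)]   # placeholder data: 20 trials, 6 s each
mean_sway = np.mean([baseline_referenced_sway(t) for t in trials], axis=0)
```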


    Characterizing information access needs in gaze-adaptive augmented reality interfaces: implications for fast-paced and dynamic usage contexts

    Gaze-adaptive interfaces can enable intuitive hands-free augmented reality (AR) interaction but unintentional selection (i.e. “Midas Touch”) can have serious consequences during high-stakes real-world AR use. In the present study, we assessed how simulated gaze-adaptive AR interfaces, implementing single and dual gaze inputs, influence Soldiers’ human performance and user experience (UX) in a fast-paced virtual reality marksmanship task. In Experiment 1, we investigated 1- and 2-stage dwell-based interfaces, finding confirmatory dual gaze dwell input effectively reduced Midas Touch but also reduced task performance and UX compared to an always-on (AO) interface. In Experiment 2, we investigated gaze depth-based interfaces, finding similar negative impacts of confirmatory dwell on Midas Touch, task performance, and UX. Overall, compared to the AO interface, single gaze input interfaces (e.g. single dwell or gaze depth threshold) reduced viewing of task-irrelevant information and yielded similar task performance and UX despite being prone to Midas Touch. Broadly, our findings demonstrate that AR users performing fast-paced dynamic tasks can tolerate some unintentional activation of AR displays if reliable and rapid information access is maintained, and point to the need to develop and refine gaze depth estimation algorithms and novel gaze depth-based interfaces that provide rapid access to AR display content.
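    As an illustration of the dwell-based selection logic compared across the experiments, the sketch below is hypothetical and not the study's implementation: the class name DwellGate, the threshold value, and the frame-loop usage are assumptions. It implements a per-frame gaze dwell gate that can fire after a single dwell or require a confirmatory second dwell, the trade-off underlying the Midas Touch versus information-access results.

```python
from dataclasses import dataclass

@dataclass
class DwellGate:
    dwell_s: float = 0.5          # assumed dwell threshold in seconds
    two_stage: bool = False       # require a confirmatory second dwell
    _timer: float = 0.0
    _armed: bool = False          # True once the first dwell has completed

    def update(self, gaze_on_target: bool, dt: float) -> bool:
        """Call once per frame; returns True on the frame a selection fires."""
        if not gaze_on_target:
            self._timer = 0.0     # leaving the target resets the dwell timer
            return False          # (the armed state persists until confirmed)
        self._timer += dt
        if self._timer < self.dwell_s:
            return False
        self._timer = 0.0         # dwell threshold reached
        if self.two_stage and not self._armed:
            self._armed = True    # first dwell complete; wait for a second dwell
            return False
        self._armed = False
        return True

# Example per-frame loop (placeholder gaze samples at ~60 Hz):
gate = DwellGate(two_stage=True)
samples = [True] * 120                       # gaze held on the target for 2 s
fired = [gate.update(s, dt=1 / 60) for s in samples]
print(fired.index(True))                     # frame at which the selection fires
```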