    From cognitive maps to spatial schemas

    A schema refers to a structured body of prior knowledge that captures common patterns across related experiences. Schemas have been studied separately in the realms of episodic memory and spatial navigation across different species and have been grounded in theories of memory consolidation, but there has been little attempt to integrate our understanding across domains, particularly in humans. We propose that experience navigating many similarly structured environments gives rise to the formation of spatial schemas (for example, the expected layout of modern cities) that share properties with, but are distinct from, cognitive maps (for example, the memory of a modern city) and event schemas (such as expected events in a modern city) at both cognitive and neural levels. We describe earlier theoretical frameworks and empirical findings relevant to spatial schemas, along with more targeted investigations of spatial schemas in human and non-human animals. Considering how architecture and urban analytics, including the influence of scale and regionalization, shape different properties of spatial schemas may provide a powerful approach to advancing our understanding of spatial schemas.

    Digital sketch maps and eye tracking statistics as instruments to obtain insights into spatial cognition

    This paper explores map users' cognitive processes in learning, acquiring, and remembering information presented via screen maps. In this context, we conducted a mixed-methods user experiment employing digital sketch maps and eye tracking. On the one hand, the performance of the participants was assessed based on the order in which objects were drawn and on the influence of visual variables (e.g. presence & location, size, shape, color). On the other hand, trial durations and eye tracking statistics, such as average fixation duration and number of fixations per second, were compared. Moreover, selected AOIs (areas of interest) were explored to gain deeper insight into the visual behavior of map users. Depending on the normality of the data, we used either a two-way ANOVA or a Mann-Whitney U test to inspect the significance of the results. Based on the evaluation of drawing order, we observed that experts and males drew roads first, whereas novices and females focused more on hydrographic objects. According to the assessment of drawn elements, no significant differences emerged between experts and novices, or between females and males, in the retrieval of spatial information presented on 2D maps with a simple design and content. The differences in trial durations between novices and experts were not statistically significant for either studying or drawing. Similarly, no significant difference occurred between female and male participants for either studying or drawing. Eye tracking metrics also supported these findings. For average fixation duration, no significant difference was found between experts and novices, or between females and males. Similarly, no significant differences were found for the mean number of fixations.
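    The normality-dependent choice between a parametric and a non-parametric test described in this abstract can be illustrated with a short sketch. This is not the authors' analysis code: the data, group sizes, and the use of a simple t-test in place of the full two-way ANOVA are assumptions made for brevity.

```python
# Hedged sketch of choosing a test based on normality (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experts = rng.normal(250, 40, size=30)  # hypothetical fixation durations (ms)
novices = rng.normal(260, 45, size=30)

# Shapiro-Wilk normality check on each group
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (experts, novices))

if normal:
    # Parametric comparison; the paper used a two-way ANOVA across
    # expertise x gender, shown here as a two-sample t-test for brevity.
    stat, p = stats.ttest_ind(experts, novices)
else:
    # Non-parametric alternative when normality is rejected.
    stat, p = stats.mannwhitneyu(experts, novices)

print(f"statistic={stat:.3f}, p={p:.3f}")
```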

    Dynamics of collaborative navigation and applying data-driven methods to improve pedestrian navigation instructions at decision points for people of varying spatial aptitudes

    Cognitive Geography seeks to understand individual decision-making variations based on fundamental cognitive differences between people of varying spatial aptitudes. Understanding fundamental behavioral discrepancies among individuals is an important step toward improving navigation algorithms and the overall travel experience. Contemporary navigation aids, although helpful in providing turn-by-turn directions, lack the capability to distinguish decision points by their features and importance. Existing systems cannot generate landmark- or decision-point-based instructions using real-time or crowd-sourced data, nor can they customize instructions for individuals based on inherent spatial ability, travel history, or situation. This dissertation presents a novel experimental setup to examine simultaneous wayfinding behavior for people of varying spatial abilities. This study reveals discrepancies in information processing, landmark preference, and spatial information communication among groups possessing differing abilities. Empirical data are used to validate computational salience techniques that endeavor to predict the difficulty of decision point use from the structure of the routes. Outlink score and outflux score, two meta-algorithms that derive secondary scores from existing metrics of network analysis, are explored. These two algorithms approximate human cognitive variation in navigation by analyzing neighboring and directional-effect properties of decision point nodes within a routing network. The results are validated by a human wayfinding experiment, which shows that these metrics generally improve the prediction of errors. In addition, a model of personalized weighting for users' characteristics is derived using the SVMrank machine learning method. Such a system can effectively rank decision point difficulty based on user behavior and derive weighted models for navigators that reflect their individual tendencies. The weights reflect certain characteristics of groups. Such models can serve as personal travel profiles and could potentially complement sense-of-direction surveys in classifying wayfinders. A prototype with augmented instructions for pedestrian navigation is created and tested, with particular focus on investigating how augmented instructions at particular decision points affect spatial learning. The results demonstrate that survey knowledge acquisition is improved for people with low spatial ability but decreased for people with high spatial ability. Finally, contributions are summarized, conclusions are provided, and future implications are discussed.
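    The general pattern of deriving a secondary, neighbourhood-based score for decision points from an existing network metric can be sketched as follows. The exact outlink and outflux formulas belong to the dissertation; this toy example, with a made-up graph and betweenness centrality as the base metric, only illustrates the idea of scoring a node from the metrics of its neighbours.

```python
# Hedged sketch of a neighbour-based decision-point score on a routing network.
# The graph, the base metric, and the aggregation rule are illustrative only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("B", "D"), ("D", "E"), ("C", "E")])

base = nx.betweenness_centrality(G)  # any existing node-level network metric


def neighbour_score(graph, node, metric):
    """Secondary score: average the base metric over a node's neighbours."""
    nbrs = list(graph.neighbors(node))
    return sum(metric[n] for n in nbrs) / len(nbrs) if nbrs else 0.0


scores = {n: neighbour_score(G, n, base) for n in G.nodes}
print(scores)
```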

    Investigating Spatial Memory and Navigation in Developmental Amnesia: Evidence from a Google Street View Paradigm, Mental Navigation Tasks, and Route Descriptions

    This dissertation examined the integrity of spatial representations of extensively travelled environments in developmental amnesia, thereby elucidating the role of the hippocampus in forming and retrieving spatial memories that enable flexible navigation. Previous research using mental navigation tasks found that developmental amnesic case H.C., an individual with atypical hippocampal development, could accurately estimate distance and direction between landmarks, but her representation of her environment was fragmented, inflexible, and lacked detail (Rosenbaum, Cassidy, & Herdman, 2015). Study 1 of this dissertation examined H.C.'s spatial memory of her home environment using an ecologically valid virtual reality paradigm based on Google Street View. H.C. and control participants virtually navigated routes of varying familiarity within their home environment. To examine whether flexible navigation requires the hippocampus, participants also navigated familiar routes that had been mirror-reversed. H.C. performed similarly to control participants on all route conditions, suggesting that spatial learning of frequently travelled environments can occur despite compromised hippocampal system function. H.C.'s unexpected ability to successfully navigate mirror-reversed routes might reflect the accumulation of spatial knowledge of her environment over the 6 years since she was first tested with mental navigation tasks. As such, Study 2 investigated how spatial representations of extensively travelled environments change over time in developmental amnesia by re-testing H.C. on mental navigation tasks 8 years later. H.C. continued to draw sketch maps that lacked cohesiveness and detail and had difficulty sequencing landmarks and generating detours on a blocked-route task, suggesting that her overall representation of the environment did not improve over the 8 years. Study 3 thoroughly examined the integrity of H.C.'s detailed representation of the environment using a route description task. H.C. accurately described perceptual features of landmarks along a known route, but provided inaccurate information regarding the spatial relations of landmarks, resulting in a fragmented mental representation of the route. Taken together, these results contribute meaningfully to our current understanding of the integrity of spatial representations of extensively travelled environments in developmental amnesia. Non-spatial factors that could influence performance on navigation and spatial memory tasks are discussed, as is the impact of these results on theories of hippocampal function.

    A simplified and novel technique to retrieve color images from hand-drawn sketch by human

    With the increasing adoption of human-computer interaction, there is a growing trend of retrieving images from hand-drawn human sketches to find correlated objects in a storage unit. A review of existing systems shows the dominant use of sophisticated and complex mechanisms where the focus is more on accuracy and less on system efficiency. Hence, the proposed system introduces a simplified extraction of the related image using an attribution clustering process and a cost-effective training scheme. The proposed method uses K-means clustering and a bag-of-attributes representation to extract essential information from the sketch. The proposed system also introduces a unique indexing scheme that makes the retrieval process faster and returns the highest-ranked images. Implemented in MATLAB, the study outcome shows that the proposed system offers better accuracy and processing time than existing feature extraction techniques.
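    The bag-of-attributes pipeline this abstract describes (cluster local descriptors with K-means, represent each image as a histogram over the resulting vocabulary, then rank database images by histogram similarity) can be sketched in a few lines. The paper's implementation is in MATLAB with its own descriptors and indexing scheme; everything below, including the synthetic descriptors and the `describe` helper, is illustrative only.

```python
# Hedged sketch of a K-means bag-of-attributes retrieval pipeline (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)


def describe(image_like, n_patches=50, dim=8):
    """Stand-in for a local descriptor extractor (hypothetical)."""
    return rng.normal(size=(n_patches, dim))


database = [describe(None) for _ in range(20)]  # per-image local descriptors
vocab = KMeans(n_clusters=16, n_init=5, random_state=0).fit(np.vstack(database))


def bag_of_attributes(descriptors):
    """Histogram of visual-word assignments, normalised to sum to one."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=16).astype(float)
    return hist / hist.sum()


index = np.array([bag_of_attributes(d) for d in database])
query = bag_of_attributes(describe(None))  # hand-drawn sketch query
ranking = np.argsort(np.linalg.norm(index - query, axis=1))
print("top-5 image ids:", ranking[:5])
```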

    SceneSketcher-v2: Fine-grained scene-level sketch-based image retrieval using adaptive GCNs

    Sketch-based image retrieval (SBIR) is a long-standing research topic in computer vision. Existing methods mainly focus on category-level or instance-level image retrieval. This paper investigates the fine-grained scene-level SBIR problem, where a free-hand sketch depicting a scene is used to retrieve desired images. This problem is useful yet challenging mainly because of two entangled facts: 1) achieving an effective representation of the input query data and scene-level images is difficult, as it requires modeling information across multiple modalities such as object layout, relative size, and visual appearance, and 2) there is a large domain gap between the query sketch input and the target images. We present SceneSketcher-v2, a Graph Convolutional Network (GCN) based architecture that addresses these challenges. SceneSketcher-v2 employs a carefully designed graph convolutional network to fuse the multi-modality information in the query sketch and target images, and uses a triplet-based, end-to-end training process to alleviate the domain gap. Extensive experiments demonstrate that SceneSketcher-v2 outperforms state-of-the-art scene-level SBIR models by a significant margin.
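    The triplet training idea mentioned in this abstract (pull a sketch embedding towards its matching image and push it away from a non-matching one) can be sketched with placeholder encoders. The small MLPs below stand in for the paper's GCN-based sketch and image branches; the feature dimensions and the synthetic batch are assumptions for illustration.

```python
# Hedged sketch of one triplet-loss training step for sketch-image embeddings.
import torch
import torch.nn as nn

# Placeholder encoders; the paper uses graph convolutional networks instead.
sketch_enc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
image_enc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

loss_fn = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(
    list(sketch_enc.parameters()) + list(image_enc.parameters()), lr=1e-3
)

# Synthetic batch: sketch features, matching image features, non-matching ones.
sketch = torch.randn(8, 128)
pos_img = torch.randn(8, 128)
neg_img = torch.randn(8, 128)

anchor = sketch_enc(sketch)
positive = image_enc(pos_img)
negative = image_enc(neg_img)

loss = loss_fn(anchor, positive, negative)
opt.zero_grad()
loss.backward()
opt.step()
print(f"triplet loss: {loss.item():.4f}")
```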