54 research outputs found

    Differences between Spatial and Visual Mental Representations

    This article investigates the relationship between visual mental representations and spatial mental representations in human visuo-spatial processing. By comparing two common theories of visuo-spatial processing – mental model theory and the theory of mental imagery – we identified two open questions: (1) which representations are modality-specific, and (2) what role the two representations play in reasoning. Two experiments examining eye movements and preferences for under-specified problems were conducted to investigate these questions. We found that significant spontaneous eye movements along the processed spatial relations occurred only when a visual mental representation was employed, not when a spatial mental representation was used. Furthermore, the preferences for the answers to the under-specified problems differed between the two mental representations. The results challenge assumptions made by mental model theory and the theory of mental imagery.

    Towards Highly Scalable Runtime Models with History

    Advanced systems such as IoT systems comprise many heterogeneous, interconnected, and autonomous entities operating in often highly dynamic environments. Due to their large scale and complexity, large volumes of monitoring data are generated and need to be stored, retrieved, and mined in a time- and resource-efficient manner. Architectural self-adaptation automates the control, orchestration, and operation of such systems. This can only be achieved via sophisticated decision-making schemes supported by monitoring data that fully captures the system behavior and its history. Employing model-driven engineering techniques, we propose a highly scalable, history-aware approach to store and retrieve monitoring data in the form of enriched runtime models. We take advantage of rule-based adaptation, where change events in the system trigger adaptation rules. We first present a scheme to incrementally check model queries, expressed as temporal logic formulas representing the conditions of adaptation rules, against a runtime model with history. Then we enhance the model to retain only information that is temporally relevant to the queries, thereby reducing the accumulation of information to a required minimum. Finally, we demonstrate the feasibility and scalability of our approach via experiments on a simulated smart healthcare system employing a real-world medical guideline.

    Comment: 8 pages, 4 figures, 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2020).
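    The core idea of the abstract — keep only the slice of history that active temporal queries need, and evaluate rule conditions against that bounded window — can be illustrated with a minimal sketch. All names here (`BoundedHistoryModel`, `holds_since`, the heart-rate rule) are hypothetical and not from the paper; this is not the authors' implementation, only an informal illustration of the pruning idea.

    ```python
    from collections import deque

    class BoundedHistoryModel:
        """Hypothetical sketch: a runtime model that retains only events
        inside the temporal window its active queries refer to."""

        def __init__(self, window_seconds):
            self.window = window_seconds
            self.events = deque()  # (timestamp, attribute, value), oldest first

        def record(self, attribute, value, timestamp):
            # A change event arrives; store it and prune stale history.
            self.events.append((timestamp, attribute, value))
            self._prune(timestamp)

        def _prune(self, now):
            # Discard everything older than the query-relevant window,
            # so stored history stays bounded regardless of system lifetime.
            while self.events and now - self.events[0][0] > self.window:
                self.events.popleft()

        def holds_since(self, attribute, predicate, now):
            # Crude evaluation of a "predicate held throughout the
            # retained window" condition of an adaptation rule.
            relevant = [v for (t, a, v) in self.events
                        if a == attribute and now - t <= self.window]
            return bool(relevant) and all(predicate(v) for v in relevant)

    # Usage: an adaptation rule fires when a (simulated) patient's
    # heart rate stays above a threshold for the whole retained window.
    model = BoundedHistoryModel(window_seconds=10)
    for t, hr in [(0, 95), (4, 102), (9, 110)]:
        model.record("heart_rate", hr, timestamp=t)
    if model.holds_since("heart_rate", lambda v: v > 90, now=9):
        print("trigger adaptation rule")  # prints: trigger adaptation rule
    ```

    A real system would check such formulas incrementally on each change event rather than rescanning the window, but the pruning step above conveys why bounding history to what the queries reference keeps storage and evaluation cost from growing with system lifetime.
    
    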

    Attentional distribution and spatial language

    Kluth T, Schultheis H. Attentional distribution and spatial language. In: Freksa C, Nebel B, Hegarty M, Barkowsky T, eds. Spatial Cognition IX. Lecture Notes in Computer Science. Vol 8684. Springer International Publishing; 2014: 76-91.

    Whether visual spatial attention can be split to several discontinuous locations concurrently is still an open and intensely debated question. We address this question in the domain of spatial language use by comparing two existing and three newly proposed computational models. All models are assessed regarding their ability to account for human acceptability ratings for how well a given spatial term describes the spatial arrangement of two functionally related objects. One of the existing models assumes that taking the functional relations into account involves split attention. All new models incorporate functional relations without assuming split attention. Our simulations suggest that not assuming split attention is more appropriate for taking the functional relations into account than assuming split attention. At the same time, the simulations raise doubt as to whether any of the models appropriately captures the impact of functional relations on spatial language use.

    Spatial references with gaze and pointing in shared space of humans and robots

    Renner P, Pfeiffer T, Wachsmuth I. Spatial references with gaze and pointing in shared space of humans and robots. In: Freksa C, Nebel B, Hegarty M, Barkowsky T, eds. Spatial Cognition IX. Lecture Notes in Computer Science. Vol 8684. Berlin [u.a.]: Springer; 2014: 121-136.

    For solving tasks cooperatively in close interaction with humans, robots need timely updated spatial representations. However, perceptual information about the current position of interaction partners often arrives late. If robots could anticipate the targets of upcoming manual actions, such as pointing gestures, they would have more time to physically react to human movements and could consider prospective space allocations in their planning. Many findings support a close eye-hand coordination in humans, which could be used to predict gestures by observing eye gaze. However, effects vary strongly with the context of the interaction. We collect evidence of eye-hand coordination in a natural route planning scenario in which two agents interact over a map on a table. In particular, we are interested in whether fixations can predict pointing targets and how target distances affect the interlocutor's pointing behavior. We present an automatic method combining marker tracking and 3D modeling that provides eye and gesture measurements in real-time.

    The Role of the Center-of-Mass in Evaluating Spatial Language

    Kluth T, Burigo M, Schultheis H, Knoeferle P. The Role of the Center-of-Mass in Evaluating Spatial Language. In: Barkowsky T, Falomir Llansola Z, Schultheis H, van de Ven J, eds. KogWis 2016. Space for Cognition. 13th Biannual Conference of the German Cognitive Science Society: Proceedings. Bremen: Universität Bremen; 2016: 11-14.

    Motor expertise facilitates the cognitive evaluation of body postures: An ERP study

    Koester D, Schack T, Güldenpenning I. Motor expertise facilitates the cognitive evaluation of body postures: An ERP study. In: Barkowsky T, Llansola ZF, Schultheis H, van de Ven J, eds. Proceedings of the 13th Biannual Conference of the German Cognitive Science Society. Bremen: Universität Bremen; 2016: 59-62.