
    Exploring the impact of visual complexity levels in 3D city models on the accuracy of individuals' orientation and cognitive maps

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of visual complexity (mainly based on level of detail, i.e., LoD) in 3D city models on participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory in which participants, among other things, completed orientation tasks and in one case sketched a path they 'travelled' in a virtual environment. The experiments were conducted in groups, with individuals providing responses on an answer sheet. Preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low-LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed from a virtual experience. Building an accurate cognitive map is an inherently desired outcome of visualizations in planning tasks, so the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development (urban) planning, one of the domains where 3D city models are most relevant.

    Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology

    This research was supported by the Royal Society International Exchange Programme (grant no. IE120643). Eye movements provide insight into what people pay attention to and are therefore commonly recorded in human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze locations on the screen. Despite recent technological developments that have made the hardware more affordable, gaze data remain costly and time consuming to collect, so some researchers propose using mouse movements instead, as these are easy to collect automatically and on a large scale. Whether and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses a volumetric representation of movement, the space-time density, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology, and we evaluate sensitivity to method parameters on simulated trajectories where interaction levels can be controlled. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, in order to apply and test the new methodology on a real case. Because the experimental task mimics route-tracing on a map, it is more than a data collection exercise: it simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when participants are instructed to move them intentionally. Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could provide similar information to eye tracking and could therefore be used as a proxy for attention. However, more research is needed to confirm this.
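    To illustrate the general idea of scoring gaze-mouse coupling, the sketch below computes a simple proximity index on two synchronised screen trajectories: the fraction of time steps at which gaze and pointer lie within a pixel threshold of each other. This is a minimal stand-in assuming only NumPy, not the space-time density method the abstract describes; the function name and parameters are hypothetical.

        import numpy as np

        def proximity_interaction(gaze, mouse, threshold=50.0):
            # gaze, mouse: (n, 2) arrays of (x, y) screen coordinates
            # sampled at the same time stamps
            gaze = np.asarray(gaze, dtype=float)
            mouse = np.asarray(mouse, dtype=float)
            if gaze.shape != mouse.shape:
                raise ValueError("trajectories must share time stamps")
            # Euclidean distance between gaze and pointer at each step
            dist = np.linalg.norm(gaze - mouse, axis=1)
            # fraction of steps the two are within `threshold` pixels
            return float(np.mean(dist <= threshold))

        # simulated check: a pointer that trails the gaze with noise
        # should score high, an unrelated random pointer low
        rng = np.random.default_rng(0)
        t = np.linspace(0, 2 * np.pi, 500)
        gaze = np.column_stack([960 + 400 * np.cos(t), 540 + 300 * np.sin(t)])
        coupled = gaze + rng.normal(0, 15, gaze.shape)
        independent = rng.uniform([0, 0], [1920, 1080], gaze.shape)
        print(proximity_interaction(gaze, coupled))      # close to 1.0
        print(proximity_interaction(gaze, independent))  # close to 0.0

    The simulated pair mirrors the kind of controlled-interaction check mentioned in the abstract: the coupled trajectory scores near 1.0, the independent one near 0.0.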

    Space-variant image coding for stereoscopic media

    This paper presents a brief overview of space-variant image coding for stereoscopic media and reports findings from a study using foveation for stereoscopic imaging. Foveation is a perceptually motivated approach to image coding based on the structure of the human fovea, and it is well studied in the image and video processing domains. However, it is less exploited for three-dimensional (3D) media, even though it is potentially well suited to 3D as well, e.g. for level-of-detail management in gaze-contingent stereoscopic displays. In this paper, results from a stereoscopic foveation implementation are presented in support of this argument. A brief discussion of computational as well as human factors for the successful management and presentation of stereoscopic media is also provided, based on the current literature.
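    The principle behind foveation can be sketched in a few lines: blur strength grows with eccentricity from the gaze point, mimicking the sharp fovea and coarse periphery. The toy below (assuming NumPy and SciPy; the function foveate and its parameters are hypothetical) quantises eccentricity into concentric rings and blurs each ring more strongly. Real space-variant coders instead allocate bits in a transform domain, and for stereoscopic media the mapping would be applied per view.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def foveate(image, gaze_xy, fovea_radius=60.0, max_sigma=8.0, levels=4):
            # image: 2D greyscale array; gaze_xy: (x, y) gaze position
            h, w = image.shape
            ys, xs = np.mgrid[0:h, 0:w]
            # eccentricity of every pixel from the gaze point, in pixels
            ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
            # quantise eccentricity into rings; ring 0 is the fovea
            ring = np.clip((ecc / fovea_radius).astype(int), 0, levels - 1)
            out = np.empty(image.shape, dtype=float)
            for k in range(levels):
                # blur grows with eccentricity: sharp centre, coarse edge
                sigma = max_sigma * k / (levels - 1)
                out[ring == k] = gaussian_filter(image.astype(float), sigma)[ring == k]
            return out

        # usage: foveate a synthetic image around a central gaze point
        img = np.random.rand(480, 640)
        fov = foveate(img, gaze_xy=(320, 240))

    Ring-wise blurring is a deliberate simplification: a per-pixel blur radius would be closer to the retinal falloff but far more expensive, so discrete eccentricity bands are a common compromise in gaze-contingent rendering.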