
    Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology

    This research was supported by the Royal Society International Exchange Programme (grant no. IE120643).
    Eye movements provide insights into what people pay attention to, and are therefore commonly recorded in a variety of human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze locations on the screen. Despite recent technological developments that have made hardware more affordable, gaze data are still costly and time consuming to collect, so some propose using mouse movements instead, as these are easy to collect automatically and on a large scale. Whether and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses a volumetric representation of movement, the space-time density, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology. The sensitivity to method parameters is evaluated on simulated trajectories where we can control interaction levels. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, to apply and test our new methodology on a real case. Further, as our experiment task mimics route-tracing when using a map, it is more than a data collection exercise: it simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when participants are instructed to move their eyes intentionally.
Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could provide similar information to eye tracking and therefore be used as a proxy for attention. However, more research is needed to confirm this.
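The core idea of comparing two physically different movement types via space-time densities can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation: each trajectory is smoothed into a normalized density over the screen with a Gaussian kernel (the grid, the kernel width `sigma`, and the Bhattacharyya overlap score are all assumptions made for the sketch).

```python
import numpy as np

def spacetime_density(traj, grid, sigma=20.0):
    """Accumulate a Gaussian kernel over a 2D screen grid for each
    (x, y) fix in a trajectory, then normalize to a probability
    density; summing over fixes stands in for the time dimension."""
    xs, ys = grid
    density = np.zeros((len(ys), len(xs)))
    for x, y in traj:
        gx = np.exp(-((xs - x) ** 2) / (2 * sigma ** 2))
        gy = np.exp(-((ys - y) ** 2) / (2 * sigma ** 2))
        density += np.outer(gy, gx)
    total = density.sum()
    return density / total if total > 0 else density

def interaction_level(gaze, mouse, grid, sigma=20.0):
    """Overlap of the two normalized densities (Bhattacharyya
    coefficient): 1.0 = identical distributions, 0.0 = disjoint."""
    dg = spacetime_density(gaze, grid, sigma)
    dm = spacetime_density(mouse, grid, sigma)
    return float(np.sqrt(dg * dm).sum())

# Toy screen grid and two synthetic, strongly coupled trajectories.
grid = (np.linspace(0, 800, 81), np.linspace(0, 600, 61))
t = np.linspace(0, 1, 50)
gaze = np.column_stack([100 + 600 * t, 300 + 50 * np.sin(6 * t)])
mouse = gaze + np.random.default_rng(0).normal(0, 15, gaze.shape)
print(interaction_level(gaze, mouse, grid))  # overlap score in [0, 1]
```

Because the densities are normalized, the score is comparable across gaze and mouse despite their different sampling rates and dynamics, which is the point of the volumetric representation.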

    Using Gabmap

    Gabmap is a freely available, open-source web application that analyzes data on language variation, e.g. varying words for the same concepts, varying pronunciations of the same words, or varying frequencies of syntactic constructions in transcribed conversations. Gabmap is an integrated part of CLARIN (see e.g. http://portal.clarin.nl). This article summarizes Gabmap's basic functionality, adds material on some new features, and reports on the range of uses to which Gabmap has been put. Gabmap is modestly successful, and its popularity underscores the fact that the study of language variation has crossed a watershed concerning the acceptability of automated language analysis. Automated analysis not only improves researchers' efficiency, it also improves the replicability of their analyses and allows them to focus on the inferences to be drawn from analyses and on other, more abstract aspects of the study.

    GPGPU computation and visualization of three-dimensional cellular automata

    This paper presents a general-purpose simulation approach integrating a set of technological developments and algorithmic methods in the cellular automata (CA) domain. The approach provides a general-purpose computing on graphics processing units (GPGPU) implementation for computing and multiple rendering of any direct-neighbor three-dimensional (3D) CA. The major contributions of this paper are: the CA processing and visualization of large 3D matrices computed in real time; an original method to encode and transmit large CA functions to the graphics processing units in real time; and clarification of the notions of top-down and bottom-up approaches to CA, which non-CA experts often confuse. Additionally, a practical technique to simplify the finding of CA functions is implemented using a 3D symmetric configuration on an interactive user interface with simultaneous inside and surface visualizations. The interactive user interface allows for testing the system with different project ideas and serves as a test bed for performance evaluation. To illustrate the flexibility of the proposed method, visual outputs from diverse areas are demonstrated. Computational performance data are also provided to demonstrate the method's efficiency. Results indicate that when large matrices are processed, computations using the GPU are two to three hundred times faster than identical algorithms using the CPU.
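A direct-neighbor 3D CA update of the kind described can be illustrated with a minimal CPU sketch in NumPy. The per-cell rule below is exactly the data-parallel operation a GPU compute shader would evaluate for every cell simultaneously; the birth/survive sets are illustrative, not taken from the paper.

```python
import numpy as np

def step(grid, birth={5}, survive={4, 5}):
    """One update of a 3D outer-totalistic CA on a periodic lattice.
    Neighbor counts over the 26-cell Moore neighborhood are built by
    summing shifted copies of the grid; on a GPU the same arithmetic
    runs per cell in parallel."""
    neighbors = np.zeros_like(grid, dtype=np.int32)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if (dx, dy, dz) != (0, 0, 0):
                    neighbors += np.roll(grid, (dx, dy, dz), axis=(0, 1, 2))
    born = (grid == 0) & np.isin(neighbors, list(birth))
    stays = (grid == 1) & np.isin(neighbors, list(survive))
    return (born | stays).astype(grid.dtype)

# Random 32^3 world at 10% density, advanced five generations.
rng = np.random.default_rng(1)
world = (rng.random((32, 32, 32)) < 0.1).astype(np.int8)
for _ in range(5):
    world = step(world)
print(world.sum())  # live-cell count after five generations
```

The nested shift loop makes the "direct-neighbor" constraint explicit: only the 26 adjacent cells influence a cell's next state, which is what allows the rule function to be encoded compactly and shipped to the GPU.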

    Finding Characteristic Features in Stylometric Analysis

    The usual focus in authorship studies is on authorship attribution, i.e. determining which author (of a given set) wrote a piece of unknown provenance. The usual setting involves a small number of candidate authors, which means that the focus quickly revolves around a search for features that discriminate among the candidates. Whether the features that serve to discriminate among the authors are characteristic is then not of primary importance. We respectfully suggest an alternative in this article, namely a focus on seeking features that are characteristic for an author with respect to others. To determine an author's characteristic features, we first seek elements that he or she uses consistently, which we therefore regard as 'representative', but we likewise seek elements which the author uses 'distinctively' in comparison to an opposing author. We test the idea on a recently proposed task that compares Charles Dickens to both Wilkie Collins and a larger reference set comprising several authors' works from the 18th and 19th centuries. We then compare the use of representative and distinctive features to Burrows' 'Delta' and Hoover's 'CoV Tuning'; we find that our method bears little similarity to either in terms of characteristic feature selection. We show that our method achieves reliable and consistent results in the two-author comparison and fair results in the multi-author one, as measured by separation ability in clustering.
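The combination of representativeness (consistent use within an author's own texts) and distinctiveness (contrast with an opposing author) can be sketched as a feature-scoring routine. The scoring formula below is a hypothetical illustration of the idea, not the authors' actual measure.

```python
import numpy as np

def characteristic_features(author_counts, other_counts, top_k=5):
    """author_counts / other_counts: (documents x features) matrices
    of relative word frequencies. 'Representative' = used consistently
    by the author (low coefficient of variation across documents);
    'distinctive' = mean use differs strongly from the opposing
    author. Here distinctiveness is scaled down by inconsistency --
    an illustrative combination, not the paper's formula."""
    a_mean = author_counts.mean(axis=0)
    cv = author_counts.std(axis=0) / (a_mean + 1e-9)   # lower = more consistent
    diff = np.abs(a_mean - other_counts.mean(axis=0))  # distinctiveness
    score = diff / (1.0 + cv)
    return np.argsort(score)[::-1][:top_k]

# Synthetic corpora: feature 3 is used heavily and consistently by
# the first author only, so it should rank as most characteristic.
rng = np.random.default_rng(2)
dickens = rng.random((10, 20)) * 0.01
collins = rng.random((12, 20)) * 0.01
dickens[:, 3] += 0.05
print(characteristic_features(dickens, collins))
```

Unlike a pure discrimination search, a feature scores highly here only if the author both uses it steadily and uses it differently from the opponent, which matches the article's notion of "characteristic".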

    Foveation for 3D visualization and stereo imaging

    Even though computer vision and digital photogrammetry share a number of goals, techniques, and methods, the potential for cooperation between these fields is not fully exploited. In an attempt to help bridge the two, this work takes a well-known computer vision and image processing technique called foveation and introduces it to photogrammetry, creating a hybrid application. The results may be beneficial for both fields, as well as for the general stereo imaging community and virtual reality applications. Foveation is a biologically motivated image compression method that is often used for transmitting videos and images over networks. It is possible to view foveation as an area-of-interest management method as well as a compression technique. While the most common foveation applications are in 2D, there are a number of binocular approaches as well. For this research, the current state of the art in the literature on level of detail, the human visual system, stereoscopic perception, stereoscopic displays, 2D and 3D foveation, and digital photogrammetry was reviewed. After the review, a stereo-foveation model was constructed and an implementation was realized to demonstrate a proof of concept. The conceptual approach is treated as generic, while the implementation was conducted under certain limitations, which are documented in the relevant context. A stand-alone program called Foveaglyph was created in the implementation process. Foveaglyph takes a stereo pair as input and uses an image matching algorithm to find the parallax values. It then calculates the 3D coordinates for each pixel from the geometric relationships between the object and the camera configuration, or via a parallax function. Once 3D coordinates are obtained, a 3D image pyramid is created. Then, using a distance-dependent level of detail function, spherical volume rings with varying resolutions are created throughout the 3D space. The user determines the area of interest.
The result of the application is a user-controlled, highly compressed, non-uniform 3D anaglyph image. 2D foveation is also provided as an option. This type of development in a photogrammetric visualization unit is beneficial for system performance. The research is particularly relevant for large displays and head-mounted displays, although the implementation, because it is done for a single user, would probably be best suited to a head-mounted display (HMD) application. The resulting stereo-foveated image can be loaded moderately faster than the uniform original. Therefore, the program can potentially be adapted to an active vision system and manage the scene as the user glances around, given that an eye tracker determines where exactly the eyes accommodate. This exploration may also be extended to robotics and other robot vision applications. Additionally, it can be used for attention management, directing the viewer to the object(s) of interest the demonstrator would like to present (e.g. in 3D cinema). Based on the literature, we also believe this approach should help resolve several problems associated with stereoscopic displays, such as the accommodation-convergence problem and diplopia. While the available literature provides some empirical evidence to support the usability and benefits of stereo foveation, further tests are needed. User surveys related to the human factors of using stereo-foveated images, such as their possible contribution to preventing user discomfort and virtual simulator sickness (VSS) in virtual environments, are left as future work.
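A distance-dependent level-of-detail function of the kind the abstract describes can be sketched as a mapping from distance-to-fixation to an image pyramid level. The fovea radius and the doubling-ring scheme below are illustrative assumptions, not Foveaglyph's actual parameters.

```python
import math

def lod_level(distance, fovea_radius=50.0, levels=4):
    """Map the distance from the fixation point (pixels or scene
    units) to a pyramid level: level 0 (full resolution) inside the
    fovea, then progressively coarser levels in concentric rings
    whose radii double outward -- a stand-in for the distance-
    dependent LOD function applied to the 3D image pyramid."""
    if distance <= fovea_radius:
        return 0
    level = int(math.log2(distance / fovea_radius)) + 1
    return min(level, levels - 1)

# Pixels near the point of interest keep full detail; far ones coarsen.
for d in (10, 60, 150, 400, 2000):
    print(d, lod_level(d))
```

In the 3D case the same function is evaluated per voxel against the fixation point, producing the spherical volume rings of varying resolution mentioned above.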

    Virtual geographic environments in socio-environmental modeling: a fancy distraction or a key to communication?

    Modeling and simulation are recognized as effective tools for management and decision support across various disciplines; however, poor communication of results to end users is a major obstacle to properly using and understanding model output. Visualizations can play an essential role in making modeling results accessible for management and decision-making. Virtual reality (VR) and virtual geographic environments (VGEs) are popular and potentially very rewarding ways to visualize socio-environmental models. However, there is a fundamental conflict between abstraction and realism: models are goal-driven, created to simplify reality and to focus on certain crucial aspects of the system; VR, meanwhile, by definition attempts to replicate reality as closely as possible. This elevated realism may add to the complexity curse in modeling, and the message might be diluted by too many (background) details. This is also connected to information overload and cognitive load. Moreover, modeling is always associated with the treatment of uncertainty, something that is difficult to present in VR. In this paper, we examine the use of VR and, specifically, VGEs in socio-environmental modeling, and discuss how VGEs and simulation modeling can be married in a mutually beneficial way that makes VGEs more effective for users while enhancing simulation models.

    What do complexity measures measure? Correlating and validating corpus-based measures of morphological complexity

    The authors present an analysis of eight measures for quantifying the morphological complexity of natural languages. The measures studied are corpus-based measures of morphological complexity with varying requirements for corpus annotation.

    EVALUATING ROUTE LEARNING PERFORMANCE OF OLDER AND YOUNGER ADULTS IN DIFFERENTLY-DESIGNED VIRTUAL ENVIRONMENTS: A TASK-DIFFERENTIAL ANALYSIS

    Navigating in unfamiliar environments is a complex task that requires considerable cognitive resources to memorize (and eventually learn) a route. In general, virtual environments (VEs) can be useful tools in training for route learning and improving route recall. However, the visual information presented in VEs, that is, what we choose to present in a virtual scene, can strongly affect the ability to recall a route. This is especially relevant when we consider individual differences and people's varying abilities to navigate effectively. Taking the various cognitive processes involved in route learning into account, we designed a multi-level experiment that examines route recall effectiveness in a navigation context. We conceptualized that participants would have to recall information related to the route that is demanding primarily on visual, spatial, or visuospatial memory systems. Furthermore, because there is a clear link between memory capacity and ageing, we conducted our experiment with two different age groups (81 participants in total: 42 younger adults aged 20–30 and 39 older adults aged 65–76). We also measured participants' spatial abilities and visuospatial memory capacity for control purposes. After experiencing a pre-determined route in three different VEs (which we varied in their level of visual realism, and named AbstractVE, MixedVE, and RealisticVE), each participant solved a list of tasks designed to measure visual, spatial, and visuospatial recall of the scene elements and of information about the route. Participants solved these tasks immediately after experiencing the route in each VE, as well as after a week; thus we could measure 'learning' (delayed recall). Results from our study confirm the well-known decline in recall with age (younger vs. older), provide new information regarding the memorability of routes and VE scene elements over time (immediate vs. delayed), and most importantly demonstrate the crucial role that visual design decisions play in route learning and the memorability of visuospatial displays.