
    Interactive Visualization of Graph Pyramids

    Hierarchies of plane graphs, called graph pyramids, can be used for collecting, storing, and analyzing geographical information based on satellite images or other input data. Visualizing graph pyramids facilitates studies of their structure, such as their vertex distribution or their height in relation to a specific input image, so that a researcher can debug algorithms and query statistical information. It also improves the understanding of geographical data, such as landscape properties or thematic maps. In this paper, we present an interactive 3D visualization tool that supports several coordinated views on graph pyramids, subpyramids, level graphs, thematic maps, etc. Additionally, some implementation details and application results are discussed.
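    A minimal sketch of the data structure described above, assuming a simple adjacency-set representation; class and function names are illustrative, not taken from the paper. It shows a graph pyramid as a stack of level graphs, where each higher level is obtained by contracting vertices into their parents:

```python
# A graph pyramid sketch: each level is a plane graph; higher levels are
# built by merging vertices into parent ("receiver") vertices.

class LevelGraph:
    """One pyramid level, stored as adjacency sets."""
    def __init__(self, edges):
        self.adj = {}
        for u, v in edges:
            self.adj.setdefault(u, set()).add(v)
            self.adj.setdefault(v, set()).add(u)

    def vertex_count(self):
        return len(self.adj)


def contract(graph, parent_of):
    """Build the next level by merging each vertex into its parent."""
    edges = set()
    for u, neighbours in graph.adj.items():
        for v in neighbours:
            pu, pv = parent_of[u], parent_of[v]
            if pu != pv:  # drop self-loops created by the contraction
                edges.add((min(pu, pv), max(pu, pv)))
    return LevelGraph(edges)


def build_pyramid(base, parents_per_level):
    """Stack of level graphs, from the base image graph up to the apex."""
    levels = [base]
    for parent_of in parents_per_level:
        levels.append(contract(levels[-1], parent_of))
    return levels


# Example: a 4-vertex base graph reduced to 2 vertices on the next level.
base = LevelGraph([(0, 1), (1, 2), (2, 3), (3, 0)])
pyramid = build_pyramid(base, [{0: 0, 1: 0, 2: 2, 3: 2}])
print([g.vertex_count() for g in pyramid])  # [4, 2]
```

    Statistics such as the vertex distribution per level, which the visualization tool exposes, can be read directly off such a stack of level graphs.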

    Text visualization techniques: Taxonomy, visual survey, and community insights

    Figure 1: The web-based user interface of our visual survey, the Text Visualization Browser. Using the interaction panel on the left-hand side, researchers can look for specific visualization techniques and filter entries with respect to a set of categories (cf. the taxonomy given in Sect. 3). Details for a selected entry are shown by clicking on a thumbnail image in the main view. As of January 19, 2015, the survey contains 141 categorized visualization techniques.
    Text visualization has become a growing and increasingly important subfield of information visualization. Thus, it is getting harder for researchers to find related work with specific tasks or visual metaphors in mind. In this paper, we present an interactive visual survey of text visualization techniques that can be used to search for related work, as an introduction to the subfield, and to gain insight into research trends. We describe the taxonomy used to categorize text visualization techniques and compare it to the approaches employed in several other surveys. Finally, we present the results of analyses performed on the entries' data.
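    A minimal sketch of the kind of category-based filtering the browser's interaction panel offers; the entries and category names below are illustrative, not taken from the survey:

```python
# Filter survey entries by taxonomy categories (illustrative data).
entries = [
    {"title": "Word Cloud Variant A", "categories": {"word-level", "overview"}},
    {"title": "Document Map B",       "categories": {"document-level", "spatial"}},
    {"title": "Topic Stream C",       "categories": {"document-level", "temporal"}},
]

def filter_entries(entries, required):
    """Keep entries whose category set contains every required category."""
    return [e for e in entries if required <= e["categories"]]

for e in filter_entries(entries, {"document-level"}):
    print(e["title"])  # Document Map B, Topic Stream C
```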

    Learning by generation in computer science education

    The use of generic and generative methods for the development and application of interactive educational software is a relatively unexplored area in industry and education. Advantages of generic and generative techniques include, among other things, the high degree of reusability of system parts and reduced development costs. Furthermore, generative methods can be used to develop and realize novel learning models. In this paper, we discuss such a learning model, which propagates a new way of explorative learning in computer science education with the help of generators. The educational software GANIFA, which covers the theory of generating finite automata from regular expressions, represents a realization of this model. In addition to describing the educational system, we present an evaluation of this system.
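    A minimal sketch of the underlying generation step, turning a regular expression into a finite automaton via Thompson's construction; this is an illustration of the technique, not GANIFA's own code. To keep the example short, the regex is given in postfix form with '.' as an explicit concatenation operator:

```python
# Thompson's construction: build an epsilon-NFA from a postfix regex.
from itertools import count

_ids = count()

def new_state():
    return next(_ids)

def literal(ch):
    s, t = new_state(), new_state()
    return {"start": s, "accept": t, "edges": [(s, ch, t)]}

def concat(a, b):
    return {"start": a["start"], "accept": b["accept"],
            "edges": a["edges"] + b["edges"] + [(a["accept"], "", b["start"])]}

def union(a, b):
    s, t = new_state(), new_state()
    eps = [(s, "", a["start"]), (s, "", b["start"]),
           (a["accept"], "", t), (b["accept"], "", t)]
    return {"start": s, "accept": t, "edges": a["edges"] + b["edges"] + eps}

def star(a):
    s, t = new_state(), new_state()
    eps = [(s, "", a["start"]), (s, "", t),
           (a["accept"], "", a["start"]), (a["accept"], "", t)]
    return {"start": s, "accept": t, "edges": a["edges"] + eps}

def regex_to_nfa(postfix):
    """Build an epsilon-NFA from a postfix regex over single characters."""
    stack = []
    for ch in postfix:
        if ch == "*":
            stack.append(star(stack.pop()))
        elif ch == "|":
            b, a = stack.pop(), stack.pop()
            stack.append(union(a, b))
        elif ch == ".":
            b, a = stack.pop(), stack.pop()
            stack.append(concat(a, b))
        else:
            stack.append(literal(ch))
    return stack.pop()

# (a|b)*c in postfix notation is "ab|*c."
nfa = regex_to_nfa("ab|*c.")
print(len(nfa["edges"]), "transitions, start state", nfa["start"])
```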

    User Preferences of Spatio-Temporal Referencing Approaches For Immersive 3D Radar Charts

    The use of head-mounted display technologies for virtual reality experiences is inherently single-user-centred, allowing for the visual immersion of the user in the computer-generated environment. This isolates them from their physical surroundings, effectively preventing external visual information cues, such as another user pointing to and referring to an artifact. However, such input is important and desired in collaborative scenarios when exploring and analyzing data in virtual environments together with a peer. In this article, we investigate different designs for making spatio-temporal references, i.e., visually highlighting virtual data artifacts, within the context of Collaborative Immersive Analytics. The ability to make references to data is foundational for collaboration, affecting aspects such as awareness, attention, and common ground. Based on three design options, we implemented a variety of approaches for making spatial and temporal references in an immersive virtual reality environment that featured abstract visualizations of spatio-temporal data as 3D Radar Charts. We conducted a user study (n=12) to empirically evaluate aspects such as aesthetic appeal, legibility, and general user preference. The results indicate a unanimous preference for the presented location approach as a spatial reference, while revealing trends towards a preference for mixed temporal reference approaches depending on the task configuration: pointer for elementary references, and outline for synoptic references. Based on immersive data visualization complexity as well as task reference configuration, we argue that it can be beneficial to explore multiple reference approaches as collaborative information cues, as opposed to following a rather uniform user interface design.

    Designing a 3D Gestural Interface to Support User Interaction with Time-Oriented Data as Immersive 3D Radar Charts

    The design of intuitive three-dimensional user interfaces is vital for interaction in virtual reality, as it allows users to effectively close the loop between themselves and the virtual environment. 3D gestural input enables useful hand interaction with virtual content by directly grasping visible objects, or through invisible gestural commands that are associated with corresponding features in the immersive 3D space. The design of such interfaces remains complex and challenging. In this article, we present a design approach for a three-dimensional user interface using 3D gestural input, with the aim of facilitating user interaction within the context of Immersive Analytics. Based on a scenario of exploring time-oriented data in immersive virtual reality using 3D Radar Charts, we implemented a rich set of features that is closely aligned with relevant 3D interaction techniques, data analysis tasks, and aspects of hand posture comfort. We conducted an empirical evaluation (n=12), featuring a series of representative tasks, to evaluate the developed user interface design prototype. The results, based on questionnaires, observations, and interviews, indicate good usability and an engaging user experience. We reflect on the implemented hand-based grasping and gestural command techniques, identifying aspects for improvement with regard to hand detection and precision, and emphasizing the prototype's ability to infer user intent for better prevention of unintentional gestures.
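    A minimal sketch of how hand-based grasping of a virtual object can be detected from tracked fingertip positions; the function names and thresholds are illustrative assumptions, not the paper's implementation:

```python
# Detect a pinch-grasp on a virtual object from thumb/index fingertip positions.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def detect_grasp(thumb_tip, index_tip, object_center,
                 pinch_threshold=0.03, reach_threshold=0.10):
    """True if the fingertips pinch together and the pinch point lies near
    the object's center; all positions and distances in metres."""
    pinch_point = tuple((t + i) / 2 for t, i in zip(thumb_tip, index_tip))
    is_pinching = dist(thumb_tip, index_tip) < pinch_threshold
    is_near = dist(pinch_point, object_center) < reach_threshold
    return is_pinching and is_near

# Example frame: fingertips 2 cm apart, 5 cm from a time slice of a 3D Radar Chart.
print(detect_grasp((0.10, 1.20, 0.30), (0.12, 1.20, 0.30), (0.11, 1.25, 0.30)))
```

    Tightening thresholds like these is one way to trade off precision against the unintentional gestures mentioned in the abstract.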

    10241 Abstracts Collection -- Information Visualization

    From 13.06.10 to 18.06.10, the Dagstuhl Seminar 10241 "Information Visualization" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    10241 Executive Summary -- Information Visualization

    Information Visualization (InfoVis) focuses on the use of visualization techniques to help people understand and analyze data. While related fields such as Scientific Visualization involve the presentation of data that has some physical or geometric correspondence, Information Visualization centers on abstract information without such correspondences. The aim of this seminar was to bring together theoreticians and practitioners from the field, with a special focus on the intersection of InfoVis and Human-Computer Interaction. To support discussions related to the visualization of real-world data, researchers from selected application areas also attended and contributed. During the seminar, working groups on eight different topics were formed and enabled a critical reflection on ongoing research efforts, the state of the field, and key research challenges today.

    VisRuler: Visual Analytics for Extracting Decision Rules from Bagged and Boosted Decision Trees

    Bagging and boosting are two popular ensemble methods in machine learning (ML) that produce many individual decision trees. Due to their inherent ensemble character, these methods typically outperform single decision trees or other ML models in predictive performance. However, numerous decision paths are generated for each decision tree, increasing the overall complexity of the model and hindering its use in domains that require trustworthy and explainable decisions, such as finance, social care, and health care. Thus, the interpretability of bagging and boosting algorithms, such as random forests and adaptive boosting, decreases as the number of decisions rises. In this paper, we propose VisRuler, a visual analytics tool that aims to assist users in extracting decisions from such ML models via a thorough visual inspection workflow that includes selecting a set of robust and diverse models (originating from different ensemble learning algorithms), choosing important features according to their global contribution, and deciding which decisions are essential for global explanation (or locally, for specific cases). The outcome is a final decision based on the class agreement of several models and the manually explored decisions exported by users. We evaluated the applicability and effectiveness of VisRuler via a use case, a usage scenario, and a user study. The evaluation revealed that most users managed to successfully use our system to explore decision rules visually, performing the proposed tasks and answering the given questions in a satisfying way.
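    A minimal sketch of the raw material such a tool works with: extracting decision rules (root-to-leaf paths) from a bagged ensemble using scikit-learn. This illustrates the kind of rules a user would then filter and compare visually; it is not VisRuler's implementation, and the dataset and feature names are only for demonstration:

```python
# Extract per-tree decision rules from a random forest (scikit-learn).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=3, max_depth=3,
                                random_state=0).fit(X, y)

def extract_rules(tree, feature_names):
    """Collect one rule (list of conditions plus predicted class) per leaf."""
    rules = []

    def walk(node, conditions):
        if tree.children_left[node] == -1:          # leaf node
            pred = int(tree.value[node][0].argmax())
            rules.append((list(conditions), pred))
            return
        name = feature_names[tree.feature[node]]
        thr = tree.threshold[node]
        walk(tree.children_left[node], conditions + [f"{name} <= {thr:.2f}"])
        walk(tree.children_right[node], conditions + [f"{name} > {thr:.2f}"])

    walk(0, [])
    return rules

feature_names = ["sepal_len", "sepal_wid", "petal_len", "petal_wid"]
for estimator in forest.estimators_:
    for conditions, pred in extract_rules(estimator.tree_, feature_names)[:2]:
        print(" AND ".join(conditions), "->", pred)
```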

    Controlling In-Vehicle Systems with a Commercial EEG Headset: Performance and Cognitive Load

    Humans have dreamed for centuries of controlling their surroundings solely by the power of their minds. These aspirations have been captured by multiple science fiction creations, such as William Gibson's novel Neuromancer or the movie Brainstorm, to name just a few. Nowadays, these dreams are slowly becoming reality thanks to a variety of brain-computer interfaces (BCIs) that detect neural activation patterns and support the control of devices by brain signals. An important field in which BCIs are being successfully integrated is interaction with vehicular systems. In this paper, we evaluate the performance of BCIs, more specifically a commercial electroencephalographic (EEG) headset, in combination with vehicle dashboard systems, and highlight the advantages and limitations of this approach. Further, we investigate the cognitive load that drivers experience when interacting with secondary in-vehicle devices via touch controls or a BCI headset. As in-vehicle systems become increasingly versatile and complex, it is vital to capture the level of distraction and errors that controlling these secondary systems might introduce into the primary driving task. Our results suggest that control with the EEG headset introduces less distraction for the driver, probably because it allows the driver's eyes to remain focused on the road. Still, controlling the vehicle dashboard by EEG is efficient only for a limited number of functions, beyond which increasing the number of in-vehicle controls amplifies the detection of false commands.
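    A minimal sketch of how classified EEG "mental commands" could be mapped to a small set of dashboard functions, with a confidence threshold to suppress false commands; the command names, scores, and threshold are illustrative assumptions, not taken from the study:

```python
# Map (command, confidence) detections from an EEG headset to dashboard actions.
DASHBOARD_ACTIONS = {
    "push": "radio_next_station",
    "pull": "radio_previous_station",
    "lift": "temperature_up",
    "drop": "temperature_down",
}

def dispatch(detection, threshold=0.75):
    """Trigger an action only if the detection score is confident enough;
    otherwise ignore the sample to avoid false commands."""
    command, score = detection
    if score < threshold or command not in DASHBOARD_ACTIONS:
        return None
    return DASHBOARD_ACTIONS[command]

# Simulated stream of detections from the headset SDK.
stream = [("push", 0.82), ("lift", 0.55), ("drop", 0.91), ("blink", 0.88)]
print([dispatch(d) for d in stream])
# ['radio_next_station', None, 'temperature_down', None]
```

    Keeping the action set small, as in this sketch, reflects the abstract's observation that EEG control stays efficient only for a limited number of functions.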