3,007 research outputs found

    Immersive Insights: A Hybrid Analytics System for Collaborative Exploratory Data Analysis

    Full text link
    In the past few years, augmented reality (AR) and virtual reality (VR) technologies have seen remarkable improvements in both accessibility and hardware capabilities, encouraging the application of these devices across various domains. While researchers have demonstrated the possible advantages of AR and VR for certain data science tasks, it is still unclear how these technologies would perform in the context of exploratory data analysis (EDA) at large. In particular, we believe it is important to better understand which level of immersion would concretely benefit EDA, and to quantify the contribution of AR and VR with respect to standard analysis workflows. In this work, we leverage Dataspace, a reconfigurable hybrid reality environment, to study how data scientists might perform EDA in a co-located, collaborative context. Specifically, we propose the design and implementation of Immersive Insights, a hybrid analytics system combining high-resolution displays, table projections, and AR visualizations of the data. We conducted a two-part user study with twelve data scientists, in which we evaluated how different levels of data immersion affect the EDA process and compared the performance of Immersive Insights with a state-of-the-art, non-immersive data analysis system. Comment: VRST 201
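
    A system like this hinges on keeping its heterogeneous surfaces consistent. As a purely illustrative sketch (not the authors' implementation; all names are assumptions), the Python snippet below shows a minimal shared-state broker in which a selection made on one surface, such as the AR view, is immediately reflected on the display wall and the table projection.

    ```python
    # Minimal sketch of shared state across the surfaces of a hybrid
    # analytics system. Surface names and the selection event are
    # illustrative assumptions, not taken from the paper.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class SharedAnalysisState:
        """Single source of truth observed by every display surface."""
        selected_points: List[int] = field(default_factory=list)
        subscribers: List[Callable[["SharedAnalysisState"], None]] = field(default_factory=list)

        def subscribe(self, callback):
            self.subscribers.append(callback)

        def select(self, point_ids):
            """Update the selection and notify all surfaces."""
            self.selected_points = list(point_ids)
            for notify in self.subscribers:
                notify(self)

    def wall_display(state):      # high-resolution display wall
        print(f"[wall]  highlight points {state.selected_points}")

    def table_projection(state):  # projected tabletop view
        print(f"[table] outline rows {state.selected_points}")

    def ar_headset(state):        # per-user augmented reality overlay
        print(f"[AR]    draw 3D glyphs for {state.selected_points}")

    state = SharedAnalysisState()
    for surface in (wall_display, table_projection, ar_headset):
        state.subscribe(surface)

    state.select([3, 7, 42])  # e.g., a lasso selection made in the AR view
    ```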

    Interdisciplinary Translation of Comparative Visualization

    Get PDF
    Spatial visualization skills and interpretations are critical in the design professions but are traditionally difficult to teach effectively. Studies of visualization and multimedia presentation show positive improvements in learner outcomes for specific learning domains, but the development and translation of a comparative visualization pedagogy between disciplines is poorly understood. This research seeks to identify an approach for curriculum designers to develop comparable multimodal, interactive visualizations and attendant student reflections in courses that can make use of visualizations and manipulations. Results from previous use of comparative multimodal visualization pedagogy in a multimedia 3D modelling class serve as a guide for translating the pedagogy to architectural design. The focus is on how to guide the use of comparative multimodal visualizations through media properties, lesson sequencing, and reflection to inform effective instruction and learning.

    The design-by-adaptation approach to universal access: learning from videogame technology

    Get PDF
    This paper proposes an alternative approach to the design of universally accessible interfaces to that provided by formal design frameworks applied ab initio to the development of new software. This approach, design-by-adaptation, involves the transfer of interface technology and/or design principles from one application domain to another, in situations where the recipient domain is similar to the host domain in terms of modelled systems, tasks and users. Using the example of interaction in 3D virtual environments, the paper explores how principles underlying the design of videogame interfaces may be applied to a broad family of visualization and analysis software that handles geographical data (virtual geographic environments, or VGEs). One of the motivations behind the current study is that VGE technology lags some way behind videogame technology in the modelling of 3D environments, and has a less-developed track record in providing the variety of interaction methods needed by users with varied levels of experience to undertake varied tasks in 3D virtual worlds. The current analysis extracted a set of interaction principles from videogames, which were then used to devise a set of 3D task interfaces that have been implemented in a prototype VGE for formal evaluation.
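
    One concrete instance of such a transfer, sketched below in Python under our own assumptions (the paper does not publish code), is the videogame-style first-person camera, WASD movement plus mouse-look, applied to navigating a virtual geographic environment.

    ```python
    # Hypothetical sketch of one interaction principle transferred from
    # videogames to a VGE: first-person camera control (WASD + mouse-look).
    import math

    class FirstPersonCamera:
        def __init__(self, x=0.0, y=0.0, z=1.7):   # z: eye height in metres
            self.x, self.y, self.z = x, y, z
            self.yaw, self.pitch = 0.0, 0.0        # radians

        def look(self, dx, dy, sensitivity=0.002):
            """Mouse-look: map pointer deltas to yaw/pitch, clamping pitch."""
            self.yaw = (self.yaw + dx * sensitivity) % (2 * math.pi)
            self.pitch = max(-math.pi / 2,
                             min(math.pi / 2, self.pitch + dy * sensitivity))

        def move(self, forward, strafe, speed=1.5, dt=0.016):
            """WASD-style planar movement relative to the current heading."""
            self.x += (forward * math.cos(self.yaw) - strafe * math.sin(self.yaw)) * speed * dt
            self.y += (forward * math.sin(self.yaw) + strafe * math.cos(self.yaw)) * speed * dt

    cam = FirstPersonCamera()
    cam.look(dx=120, dy=-40)        # glance right and slightly up
    cam.move(forward=1, strafe=0)   # one frame of forward movement
    print(round(cam.x, 3), round(cam.y, 3), round(cam.yaw, 3))
    ```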

    Collaborative behavior, performance and engagement with visual analytics tasks using mobile devices

    Get PDF
    Interactive visualizations are external tools that can support users' exploratory activities, and collaboration can bring benefits to the exploration of visual representations or visualizations. This research investigates the use of co-located collaborative visualizations on mobile devices: how working with two different modes of interaction and view (Shared or Non-Shared) and being placed in various position arrangements (Corner-to-Corner, Face-to-Face, and Side-by-Side) affect users' knowledge acquisition, engagement level, and learning efficiency. A user study was conducted with 60 participants divided into 6 groups (2 modes × 3 positions) using a tool that we developed to support the collaborative exploration of 3D visual structures. Our results show that the shared control and view version in the Side-by-Side position is the most favorable and can improve task efficiency. In this paper, we present the results and a set of recommendations derived from them.
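
    The paper's tool is not specified here, but the Shared versus Non-Shared distinction can be illustrated with a small, assumption-laden Python sketch: in Shared mode a rotation gesture on one device is propagated to collaborators' devices, while in Non-Shared mode each device keeps an independent camera.

    ```python
    # Illustrative sketch (names and structure are our assumptions) of the
    # Shared vs. Non-Shared view modes compared in the study.
    class Device:
        def __init__(self, name, shared_mode):
            self.name, self.shared_mode = name, shared_mode
            self.peers = []                 # co-located collaborators
            self.rotation = (0.0, 0.0)      # (azimuth, elevation) in degrees

        def rotate(self, d_az, d_el, from_peer=False):
            az, el = self.rotation
            self.rotation = (az + d_az, el + d_el)
            # In Shared mode, locally initiated gestures are broadcast once.
            if self.shared_mode and not from_peer:
                for peer in self.peers:
                    peer.rotate(d_az, d_el, from_peer=True)

    a = Device("tablet-A", shared_mode=True)
    b = Device("tablet-B", shared_mode=True)
    a.peers.append(b)
    b.peers.append(a)

    a.rotate(30.0, 10.0)    # user A rotates the 3D structure
    print(b.rotation)       # (30.0, 10.0): B's view followed A's gesture
    ```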

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Get PDF
    Growing amounts of data, in both personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems.

    In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations, and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations, such as 3D node-link diagrams and volume visualizations, can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality, looking into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.
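
    As one concrete example of the spatial device interaction the thesis studies, the hedged Python sketch below (our construction, not code from the dissertation) derives a cutting plane for a volume visualization from the tracked pose of a handheld device.

    ```python
    # Hypothetical sketch: use a tracked handheld device as a cutting plane
    # through a volume visualization.
    import numpy as np

    def clip_plane_from_pose(device_position, device_normal):
        """Derive plane coefficients (a, b, c, d) from a tracked device pose."""
        n = np.asarray(device_normal, dtype=float)
        n /= np.linalg.norm(n)
        d = -float(n @ np.asarray(device_position, dtype=float))
        return (*n, d)

    def visible_voxels(voxel_centers, plane):
        """Keep only voxels on the positive side of the device-defined plane."""
        a, b, c, d = plane
        centers = np.asarray(voxel_centers, dtype=float)
        return centers[centers @ np.array([a, b, c]) + d >= 0.0]

    # Device held flat at z = 0.5 m, screen facing up: hide everything below.
    plane = clip_plane_from_pose(device_position=[0, 0, 0.5],
                                 device_normal=[0, 0, 1])
    voxels = np.array([[0, 0, 0.2], [0, 0, 0.6], [0.1, 0.2, 0.9]])
    print(visible_voxels(voxels, plane))   # only the voxels at z >= 0.5 remain
    ```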

    Design Patterns for Situated Visualization in Augmented Reality

    Full text link
    Situated visualization has become an increasingly popular research area in the visualization community, fueled by advancements in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns that summarize common approaches to visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines which explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows. Comment: To appear in IEEE VIS 202
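
    To make the idea of a design pattern concrete, here is a small hedged Python sketch, our own example rather than one of the paper's patterns verbatim, of a common situated-visualization behavior: a label anchored at its physical referent whose detail degrades with viewer distance (all thresholds and units are arbitrary assumptions).

    ```python
    # Hypothetical situated-visualization pattern: distance-dependent level
    # of detail for a label anchored at a physical referent.
    import math

    def situated_label(referent_pos, viewer_pos, name, value):
        """Return the label to render at a referent, given viewer distance."""
        distance = math.dist(referent_pos, viewer_pos)
        if distance < 2.0:       # near: full detail
            return f"{name}: {value:.1f} kWh"
        if distance < 10.0:      # mid-range: name only
            return name
        return "·"               # far: minimal proxy glyph

    print(situated_label((4.0, 0.0, 1.5), (3.0, 0.5, 1.6), "Fridge", 1.8))
    print(situated_label((40.0, 0.0, 1.5), (3.0, 0.5, 1.6), "Fridge", 1.8))
    ```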

    Primary interoceptive cortex activity during simulated experiences of the body

    Get PDF
    Studies of the classic exteroceptive sensory systems (e.g., vision, touch) consistently demonstrate that vividly imagining a sensory experience of the world – simulating it – is associated with increased activity in the corresponding primary sensory cortex. We hypothesized, analogously, that simulating internal bodily sensations would be associated with increased neural activity in primary interoceptive cortex. An immersive, language-based mental imagery paradigm was used to test this hypothesis (e.g., imagine your heart pounding during a roller coaster ride, or your face drenched in sweat during a workout). During two neuroimaging experiments, participants listened to vividly described situations and imagined “being there” in each scenario. In Study 1, we observed significantly heightened activity in primary interoceptive cortex (in the dorsal posterior insula) during imagined experiences involving vivid internal sensations. This effect was specific to interoceptive simulation: it was not observed during a separate affect-focus condition in Study 1, nor in an independent Study 2 that did not involve detailed simulation of internal sensations (instead involving simulation of other sensory experiences). These findings underscore the large-scale predictive architecture of the brain and reveal that words can be powerful drivers of bodily experiences.

    Collaborative geographic visualization

    Get PDF
    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering, profile in Environmental Management and Systems. The present document is a revision of essential references to take into account when developing ubiquitous Geographical Information Systems (GIS) for collaborative visualization purposes. Its chapters focus, respectively, on: general principles of GIS, their multimedia components, and ubiquitous practices; geo-referenced information visualization and its graphical components of virtual and augmented reality; collaborative environments, their technological requirements, architectural specificities, and models for collective information management; and some final considerations about the future and challenges of collaborative visualization of GIS in ubiquitous environments.

    Stereoscopic bimanual interaction for 3D visualization

    Get PDF
    Virtual environments (VEs) have been widely used in various research fields for several decades, including 3D visualization, education, training, and games. VEs have the potential to enhance visualization and to act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and the monocular and binocular depth cues they provide, for human depth perception of volumetric (non-polygonal) datasets. In addition, a lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation first evaluates the effects of stereoscopic and head-coupled displays on depth judgment of volumetric datasets. It then evaluates two-handed view manipulation techniques that support simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE), as well as techniques for auto-adjusting stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, the dissertation presents a bimanual, hybrid user interface that combines traditional tracking devices with computer-vision-based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, it provides guidelines for research design when evaluating UIs and interaction techniques.
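
    The 7-DOF navigation the dissertation evaluates composes translation, orientation, and uniform scale into one view transform. A minimal worked sketch in Python follows; the ZYX Euler convention and all numbers are our assumptions, not the dissertation's implementation.

    ```python
    # Sketch: compose a 7-DOF transform (x, y, z + yaw, pitch, roll + scale)
    # into a single 4x4 matrix, as used for multi-scale navigation.
    import numpy as np

    def seven_dof_matrix(x, y, z, yaw, pitch, roll, scale):
        """Compose translation, ZYX Euler rotation, and uniform scale."""
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        M = np.eye(4)
        M[:3, :3] = (Rz @ Ry @ Rx) * scale   # rotate, then scale uniformly
        M[:3, 3] = (x, y, z)                 # translate
        return M

    # Zoom out 2x while stepping back one unit and turning 45 degrees.
    M = seven_dof_matrix(0.0, 0.0, -1.0, np.pi / 4, 0.0, 0.0, scale=0.5)
    print(np.round(M, 3))
    ```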