
    Viewing a Graph in a Virtual Reality Display is Three Times as Good as a 2D Diagram

    An experiment is reported which tests whether network information is more effectively displayed in a three dimensional space than in a two dimensional space. The experimental task is to trace a path in a network, and the experiment is carried out in 2D, in a 3D stereo view, in a 2D view with head coupled perspective, and in a 3D stereo view with head coupled perspective; this last condition creates a localized virtual reality display. The results show that the motion parallax obtained from the head coupling of perspective is more important than stereopsis in revealing structural information. Overall, the results show that three times as much information can be perceived in the head coupled stereo view as in the 2D view.

    Is Embodied Interaction Beneficial? A Study on Navigating Network Visualizations

    Network visualizations are commonly used to analyze relationships in various contexts. To efficiently explore a network visualization, the user needs to quickly navigate to different parts of the network and analyze local details. Recent advancements in display and interaction technologies inspire new visions for improved visualization and interaction design. Past research into network design has identified some key benefits to visualizing networks in 3D versus 2D. However, little work has been done to study the impact of varying levels of embodied interaction on network analysis. We present a controlled user study that compared four environments featuring conditions and hardware that leveraged different amounts of embodiment and visual perception, ranging from a 2D visualization desktop environment with a standard mouse to a 3D visualization virtual reality environment. We measured the accuracy, speed, perceived workload, and preferences of 20 participants as they completed three network analytic tasks, each of which required unique navigation and substantial effort. For the task that required participants to iterate over the entire visualization rather than focus on a specific area, we found that participants were more accurate using VR and a trackball mouse than with conventional desktop settings. From a workload perspective, VR was generally considered the least mentally demanding and least frustrating in two of our three tasks. It was also preferred and ranked as the most effective and visually appealing condition overall. However, using VR to compare two side-by-side networks was difficult, and it was similar to or slower than other conditions in two of the three tasks. Overall, the accuracy and workload advantages of conditions with greater embodiment in specific tasks suggest promising opportunities to create more effective environments in which to analyze network visualizations. Comment: Accepted by the Information Visualization journal.

    Visualizing Object Oriented Software in Three Dimensions

    There is increasing evidence that it is possible to perceive and understand increasingly complex information systems if they are displayed as graphical objects in a three dimensional space. Object-oriented software provides an interesting test case - there is a natural mapping from software objects to visual objects. In this paper we explore two areas. 1) Information perception: we are running controlled experiments to determine empirically if our initial premise is valid; how much more (or less) can be understood in 3D than in 2D? 2) Layout: our strategy is to combine partially automatic layout with manual layout. This paper presents a brief overview of the project, the software architecture and some preliminary empirical results.

    Animating the evolution of software

    The use and development of open source software have increased significantly in the last decade. The high frequency of changes and releases across a distributed environment requires good project management tools in order to control the process adequately. However, even with these tools in place, the nature of the development and the fact that developers will often work on many other projects simultaneously mean that the developers are unlikely to have a clear picture of the current state of the project at any time. Furthermore, the poor documentation associated with many projects has a detrimental effect when it comes to encouraging new developers to contribute to the software. A typical version control repository contains a mine of information that is not always obvious and not easy to comprehend in its raw form. However, presenting this historical data in a suitable format by using software visualisation techniques allows the evolution of the software over a number of releases to be shown. This allows the changes that have been made to the software to be identified clearly, thus ensuring that the effect of those changes will also be emphasised. This then enables both managers and developers to gain a more detailed view of the current state of the project. The visualisation of evolving software introduces a number of new issues. This thesis investigates some of these issues in detail, and recommends a number of solutions in order to alleviate the problems that may otherwise arise. The solutions are then demonstrated in the definition of two new visualisations. These use historical data contained within version control repositories to show the evolution of the software at a number of levels of granularity. Additionally, animation is used as an integral part of both visualisations - not only to show the evolution by representing the progression of time, but also to highlight the changes that have occurred. Previously, the use of animation within software visualisation has been primarily restricted to small-scale, hand-generated visualisations. However, this thesis shows the viability of using animation within software visualisation with automated visualisations on a large scale. In addition, evaluation of the visualisations has shown that they are suitable for showing the changes that have occurred in the software over a period of time, and subsequently how the software has evolved. These visualisations are therefore suitable for use by developers and managers involved with open source software. In addition, they also provide a basis for future research in evolutionary visualisations, software evolution and open source development.

    Stereoscopic Sketchpad: 3D Digital Ink

    --Context-- This project looked at the development of a stereoscopic 3D environment in which a user is able to draw freely in all three dimensions. The main focus was on the storage and manipulation of the ‘digital ink’ with which the user draws. For a drawing and sketching package to be effective it must not only have an easy-to-use user interface, it must also be able to handle all input data quickly and efficiently so that the user is able to focus fully on their drawing. --Background-- When it comes to sketching in three dimensions the majority of applications currently available rely on vector-based drawing methods. This is primarily because the applications are designed to take a user's two dimensional input and transform this into a three dimensional model. Having the sketch represented as vectors makes it simpler for the program to act upon its geometry and thus convert it to a model. There are a number of methods to achieve this aim, including Gesture Based Modelling, Reconstruction and Blobby Inflation. Other vector-based applications focus on the creation of curves allowing the user to draw within or on existing 3D models. They also allow the user to create wire-frame type models. These stroke-based applications bring the user closer to traditional sketching rather than the more structured modelling methods detailed. While at present the field is inundated with vector-based applications mainly focused upon sketch-based modelling, there are significantly fewer voxel-based applications. The majority of these applications focus on the deformation and sculpting of voxmaps, almost the opposite of drawing and sketching, and the creation of three dimensional voxmaps from standard two dimensional pixmaps. How to actually sketch freely within a scene represented by a voxmap has rarely been explored. This comes as a surprise when so many of the standard 2D drawing programs in use today are pixel-based. --Method-- As part of this project a simple three dimensional drawing program was designed and implemented using C and C++. This tool is known as Sketch3D and was created using a Model View Controller (MVC) architecture. Due to the modular nature of Sketch3D's system architecture it is possible to plug a range of different data structures into the program to represent the ink in a variety of ways. A series of data structures have been implemented and were tested for efficiency. These structures were a simple list, a 3D array, and an octree. They have been tested for: the time it takes to insert or remove points from the structure; how easy it is to manipulate points once they are stored; and also how the number of points stored affects the draw and rendering times. One of the key issues brought up by this project was devising a means by which a user is able to draw in three dimensions while using only two dimensional input devices. The method settled upon and implemented involves using the mouse or a digital pen to sketch as one would in a standard 2D drawing package, but also linking the up and down keyboard keys to the current depth. This allows the user to move in and out of the scene as they draw. A couple of user interface tools were also developed to assist the user. A 3D cursor was implemented and also a toggle which, when on, highlights all of the points intersecting the depth plane on which the cursor currently resides. These tools allow the user to see exactly where they are drawing in relation to previously drawn lines.
    --Results-- The tests conducted on the data structures clearly revealed that the octree was the most effective data structure. While not the most efficient in every area, it manages to avoid the major pitfalls of the other structures. The list was extremely quick to render and draw to the screen but suffered severely when it came to finding and manipulating points already stored. In contrast, the three dimensional array was able to erase or manipulate points effectively, while its draw time rendered the structure effectively useless, taking huge amounts of time to draw each frame. The focus of this research was on how a 3D sketching package would go about storing and accessing the digital ink. This is just a basis for further research in this area, and many issues touched upon in this paper will require a more in-depth analysis. The primary area of this future research would be the creation of an effective user interface and the introduction of regular sketching package features such as the saving and loading of images.
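    The abstract's comparison of a flat list, a dense 3D array, and an octree is easy to make concrete. The sketch below (C++, matching the project's stated implementation language, but not taken from Sketch3D itself) shows one plausible shape for such a structure: the names InkOctree and Point3, the leaf capacity, and the depth-plane query are illustrative assumptions rather than the paper's code.

```cpp
// Minimal sketch (not the Sketch3D implementation): an octree storing 3D ink
// points, illustrating why spatial subdivision keeps insertion and lookup
// cheap compared with a flat list or a dense 3D array.
#include <array>
#include <memory>
#include <vector>

struct Point3 { float x, y, z; };

class InkOctree {
public:
    InkOctree(Point3 centre, float halfSize, std::size_t capacity = 16)
        : centre_(centre), halfSize_(halfSize), capacity_(capacity) {}

    void insert(const Point3& p) {
        if (!children_[0]) {                  // still a leaf node
            points_.push_back(p);
            if (points_.size() > capacity_ && halfSize_ > 1e-3f)
                subdivide();                  // split once the leaf fills up
            return;
        }
        child(p).insert(p);                   // route to the matching octant
    }

    // Collect points lying near a given depth plane, e.g. for the abstract's
    // toggle that highlights points intersecting the cursor's depth.
    void queryDepth(float z, float tol, std::vector<Point3>& out) const {
        if (z < centre_.z - halfSize_ - tol || z > centre_.z + halfSize_ + tol)
            return;                           // this octant cannot contain z
        for (const auto& p : points_)
            if (p.z >= z - tol && p.z <= z + tol) out.push_back(p);
        if (children_[0])
            for (const auto& c : children_) c->queryDepth(z, tol, out);
    }

private:
    void subdivide() {
        float h = halfSize_ / 2.0f;
        for (int i = 0; i < 8; ++i) {
            Point3 c{centre_.x + ((i & 1) ? h : -h),
                     centre_.y + ((i & 2) ? h : -h),
                     centre_.z + ((i & 4) ? h : -h)};
            children_[i] = std::make_unique<InkOctree>(c, h, capacity_);
        }
        for (const auto& p : points_) child(p).insert(p);
        points_.clear();                      // points now live in the children
    }

    InkOctree& child(const Point3& p) const {
        int i = (p.x > centre_.x) | ((p.y > centre_.y) << 1) | ((p.z > centre_.z) << 2);
        return *children_[i];
    }

    Point3 centre_;
    float halfSize_;
    std::size_t capacity_;
    std::vector<Point3> points_;
    std::array<std::unique_ptr<InkOctree>, 8> children_;
};
```

    Spatial subdivision is what lets such a structure avoid the pitfalls the abstract describes: lookups and erasures only touch octants overlapping the query, unlike a flat list, while memory grows with the number of points drawn rather than with the volume of the scene, unlike a dense 3D array.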

    Intuitive Robot Teleoperation through Multi-Sensor Informed Mixed Reality Visual Aids

    © 2021 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/. Mobile robotic systems have evolved to include sensors capable of describing the robot's status and operating environment more accurately and reliably than ever before. Effective exploitation of this sensor data remains a challenge, however, because of the cognitive load it places on the operator, given the large amount of data and its time-dependency constraints. This paper addresses this challenge in remote-vehicle teleoperation by proposing an intuitive way to present sensor data to users by means of mixed reality and visual aids within the user interface. We propose a method for organizing information presentation and a set of visual aids to facilitate visual communication of data in teleoperation control panels. The resulting sensor-information presentation appears coherent and intuitive, making it easier for an operator to grasp and comprehend the meaning of the information. This increases situational awareness and speeds up decision-making. Our method is implemented on a real mobile robotic system operating outdoors, equipped with on-board internal and external sensors, GPS, and a reconstructed 3D graphical model provided by an assistant drone. Experimentation verified feasibility, while intuitive and comprehensive visual communication was confirmed through a qualitative assessment, which encourages further developments. Peer reviewed.

    3D APIs in Interactive Real-Time Systems: Comparison of OpenGL, Direct3D and Java3D.

    Since the first display of a few computer-generated lines on a cathode-ray tube (CRT) over 40 years ago, graphics has progressed rapidly towards the computer generation of detailed images and interactive environments in real time (Angel, 1997). In the last twenty years a number of Application Programmer's Interfaces (APIs) have been developed to provide access to three-dimensional graphics systems. Currently, there are numerous APIs used for many different types of applications. This paper will look at three of these: OpenGL, Direct3D, and one of the newest entrants, Java3D. They will be discussed in relation to their level of versatility, programmability, and how innovative they are in introducing new features and furthering the development of 3D-interactive programming.
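    For readers unfamiliar with the APIs being compared, the fragment below (illustrative only, not taken from the paper) shows the kind of fixed-function, immediate-mode OpenGL code typical of the period: the application drives the pipeline vertex by vertex. The GLUT window setup is an assumption made only to keep the example self-contained.

```cpp
// Illustrative fixed-function OpenGL: draw one shaded triangle in a GLUT window.
#include <GL/glut.h>

static void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_TRIANGLES);                        // geometry specified vertex by vertex
    glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("Immediate-mode triangle");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```

    Direct3D of the same era exposed a similarly low-level, device-oriented interface, whereas Java3D offered a higher-level retained scene graph; differences of this kind are the sort of versatility and programmability trade-offs the paper sets out to compare.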

    Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    © 2016 IEEE. Latency, the delay between a user's action and the response to this action, is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay, constant in time and space, but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so is dependent on the rendering approach used. In this study, we present an ultra-low latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of 1 ms from tracker to pixel. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of 1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system - one with a zero-latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures, and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence, the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
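    The abstract's central point, that latency is not constant across the display, can be illustrated with a back-of-the-envelope model. The sketch below is constructed for this summary and is not the paper's analysis: the 1 ms tracker-to-pixel figure comes from the abstract, while the 75 Hz refresh rate and the one-frame-old pose assumed for the GPU path are illustrative assumptions.

```cpp
// Rough model of per-pixel latency as a function of scan-out position for a
// frame-based renderer versus a frameless one. All numbers except the 1 ms
// tracker-to-pixel figure are assumptions made for illustration.
#include <cstdio>

int main() {
    const double refresh_hz   = 75.0;                 // assumed nominal refresh rate
    const double frame_ms     = 1000.0 / refresh_hz;  // ~13.3 ms of scan-out per frame
    const double gpu_base_ms  = frame_ms;             // assumed: pose sampled one frame before scan-out
    const double fpga_base_ms = 1.0;                  // ~1 ms tracker-to-pixel, as the abstract reports

    std::printf("%-12s %-18s %-18s\n", "scan pos", "frame-based (ms)", "frameless (ms)");
    for (double pos = 0.0; pos <= 1.0; pos += 0.25) {
        // Frame-based: every pixel was rendered from the same pose, so pixels
        // later in scan-out are older by the time they reach the display.
        double frame_based = gpu_base_ms + pos * frame_ms;
        // Frameless: each region is ray-cast just ahead of the beam, so the
        // scan-out term largely disappears.
        double frameless = fpga_base_ms;
        std::printf("%-12.2f %-18.1f %-18.1f\n", pos, frame_based, frameless);
    }
    return 0;
}
```

    Under these assumptions the frame-based path starts around 13 ms at the top of the display and grows to roughly 27 ms by the bottom, whereas the frameless path stays near 1 ms everywhere, which is the qualitative contrast the abstract describes.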