Parallel Rendering and Large Data Visualization
We are living in the big data age: An ever increasing amount of data is being
produced through data acquisition and computer simulations. While large scale
analysis and simulations have received significant attention for cloud and
high-performance computing, software to efficiently visualise large data sets
is struggling to keep up.
Visualization has proven to be an efficient tool for understanding data; in
particular, visual analysis is a powerful tool for gaining intuitive insight
into the spatial structure and relations of 3D data sets. Large-scale visualization
setups are becoming ever more affordable, and high-resolution tiled display
walls are in reach even for small institutions. Virtual reality has arrived in
the consumer space, making it accessible to a large audience.
This thesis addresses these developments by advancing the field of parallel
rendering. We formalise the design of system software for large data
visualization through parallel rendering, provide a reference implementation of
a parallel rendering framework, introduce novel algorithms to accelerate the
rendering of large amounts of data, and validate this research and development
with new applications for large data visualization. Applications built using
our framework enable domain scientists and large data engineers to better
extract meaning from their data, making it feasible to explore more data and
enabling the use of high-fidelity visualization installations to see more
detail of the data.
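The abstract above does not give implementation details, but the core idea behind one common parallel rendering strategy (sort-first decomposition) can be sketched briefly: the screen is split into tiles, and each tile is assigned to a render node. The tile size, node count, and function names below are illustrative assumptions, not taken from the thesis's framework.

```python
# Hypothetical sketch of sort-first parallel rendering: the framebuffer is
# split into screen-space tiles, and tiles are assigned round-robin to
# render nodes. All names and parameters are illustrative.
from typing import List, Tuple

Tile = Tuple[int, int, int, int]  # (x, y, width, height)

def decompose_screen(width: int, height: int, tile_size: int) -> List[Tile]:
    """Split the framebuffer into tiles no larger than tile_size x tile_size."""
    tiles = []
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tiles.append((x, y,
                          min(tile_size, width - x),
                          min(tile_size, height - y)))
    return tiles

def assign_tiles(tiles: List[Tile], num_nodes: int) -> List[List[Tile]]:
    """Round-robin assignment of tiles to render nodes for load spreading."""
    buckets: List[List[Tile]] = [[] for _ in range(num_nodes)]
    for i, tile in enumerate(tiles):
        buckets[i % num_nodes].append(tile)
    return buckets

tiles = decompose_screen(1920, 1080, 512)
work = assign_tiles(tiles, 4)
```

In a real framework each node would render only its assigned tiles and the results would be composited into the final image; round-robin assignment is the simplest of several possible load-balancing policies.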
Digital twin-enabled human-robot collaborative teaming towards sustainable and healthy built environments
Development of sustainable and healthy built environments (SHBE) is highly advocated to achieve collective societal good. Part of the pathway to SHBE is the engagement of robots to manage ever more complex facilities for tasks such as inspection and disinfection. However, despite advances in robot intelligence, it remains a "mission impossible" for robots to independently undertake open-ended problems such as facility management, calling for a need to "team up" robots with humans. Leveraging a digital twin's ability to capture real-time data and inform decision-making via dynamic simulation, this study aims to develop a human-robot teaming framework for facility management that achieves sustainability and healthiness in built environments. A digital twin-enabled prototype system is developed based on the framework. Case studies showed that the framework can safely and efficiently incorporate robotics into facility management tasks (e.g., patrolling, inspection, and cleaning) by allowing humans to plan, oversee, manage, and cooperate with the robot via the digital twin's bi-directional mechanism. The study lays out a high-level framework under which purposeful efforts can be made to unlock the digital twin's full potential in enabling human-robot collaboration in facility management towards SHBE.
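The abstract describes a bi-directional mechanism without implementation detail; as a minimal sketch under that description, a twin can mirror robot state upward (robot → twin → human) and relay commands downward (human → twin → robot). The class and field names below are illustrative, not from the study's prototype.

```python
# Minimal sketch of a digital twin's bi-directional mechanism. The twin
# mirrors real-time robot state for human oversight and queues human
# commands for dispatch to robots. All names are hypothetical.
class DigitalTwin:
    def __init__(self):
        self.state = {}          # latest mirrored state, keyed by robot id
        self.command_queue = []  # commands awaiting dispatch to robots

    def report(self, robot_id, sensor_data):
        """Robot -> twin: mirror real-time sensor data."""
        self.state[robot_id] = sensor_data

    def command(self, robot_id, task):
        """Human -> twin: queue a facility-management task for a robot."""
        self.command_queue.append((robot_id, task))

    def dispatch(self):
        """Twin -> robot: hand out and clear all queued commands."""
        queue, self.command_queue = self.command_queue, []
        return queue

twin = DigitalTwin()
twin.report("robot-1", {"battery": 0.82, "location": "corridor-B"})
twin.command("robot-1", "inspect HVAC unit 3")
```

A production system would add dynamic simulation on the mirrored state before commands are dispatched, which is where the twin's decision-support value lies.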
Making intelligent systems team players: Case studies and design issues. Volume 1: Human-computer interaction design
Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real time fault management capabilities. Intelligent fault management systems within NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real time fault management in aerospace domains; (2) recommendations and examples for improving intelligent systems design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.
Graphics Technology in Space Applications (GTSA 1989)
This document represents the proceedings of the Graphics Technology in Space Applications conference, held at NASA Lyndon B. Johnson Space Center on April 12 to 14, 1989, in Houston, Texas. The papers included in these proceedings were, in general, published as received from the authors, with minimal modification and editing. Information contained in the individual papers is not to be construed as being officially endorsed by NASA.
Contributions to the cornerstones of interaction in visualization: strengthening the interaction of visualization
Visualization has become an accepted means for data exploration and analysis. Although interaction is an important component of visualization approaches, current visualization research pays less attention to interaction than to aspects of the graphical representation. Therefore, the goal of this work is to strengthen the interaction side of visualization. To this end, we establish a unified view on interaction in visualization. This unified view covers four cornerstones: the data, the tasks, the technology, and the human.
Multimodal metaphors for generic interaction tasks in virtual environments
Virtual Reality (VR) systems provide additional input and output channels for human-computer interaction in virtual environments. Such VR technologies give users better insight into highly complex data sets, but also place high demands on users' ability to interact with virtual objects. This thesis presents and discusses the development and evaluation of new multimodal interaction metaphors for generic interaction tasks in virtual environments. Using a VR system, the application of these concepts is demonstrated in two case studies from the domains of 3D city visualization and seismic volume rendering.
Endoscopic Targeting Tasks Simulator: An Approach Using Game Engines
The pervasiveness of simulators in professions requiring the skilled control of expensive machinery, as is the case in the aviation, mining, construction, and naval industries, raises an intriguing question about their relatively poor adoption within the field of medicine. Certain surgical procedures, such as neuro-endoscopy and laparoscopy, lend themselves well to the application of virtual reality based simulators. This is due to the innate ability to decompose these complex macro-level procedures into a hierarchy of subtasks that can be modelled in a software simulator to augment existing teaching and training techniques.
The research in this thesis is focused on the design and implementation of a targeting-based simulator with applications in the evaluation of clinically relevant procedures within the neuro-endoscopic and potentially laparoscopic domains. Existing commercially available surgical simulators within these domains are often expensive, narrowly focused in the skills they train, and fail to show statistically significant results on the efficacy of improving user performance through repeated use.
A targeting tasks simulator is developed to evaluate which methods can be applied to provide a robust, objective measure of human performance as it relates to targeting tasks. In addition to performance evaluation, further research is conducted to help understand the impact of different input modalities, focusing primarily on input from a gamepad-style device as well as a newer, more natural user interface provided by the Leap Motion Controller.
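The abstract does not specify which performance measure the simulator uses; a widely used objective measure for targeting tasks is Fitts's index of difficulty and the derived throughput, sketched here as one plausible candidate rather than the thesis's actual metric.

```python
# Sketch of a standard objective measure for targeting tasks: Fitts's
# index of difficulty (Shannon formulation) and per-movement throughput.
# This is a generic metric, not the measure defined in the thesis.
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance: float, width: float, movement_time: float) -> float:
    """Bits per second achieved on a single targeting movement."""
    return index_of_difficulty(distance, width) / movement_time

# Example: a 300 mm reach to a 20 mm target completed in 0.8 s.
ID = index_of_difficulty(300, 20)  # 4.0 bits, since log2(16) = 4
tp = throughput(300, 20, 0.8)      # 5.0 bits/s
```

Averaging throughput across movements of varying difficulty gives a single score that can be compared across input devices such as a gamepad and the Leap Motion Controller.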
A web-based approach to engineering adaptive collaborative applications
Current methods employed to develop collaborative applications have to make
decisions and speculate about the environment in which the application will
operate, the network infrastructure that will be used, and the device type the
application will run on. These decisions and assumptions about the environment
in which collaborative applications were designed to work are not ideal. These
methods produce collaborative applications that are characterised as being
inflexible, working on homogeneous networks and single platforms, requiring
pre-existing knowledge of the data and information types they need to use, and
having a rigid choice of architecture.
On the other hand, future collaborative applications are required to be flexible; to work
in highly heterogeneous environments; be adaptable to work on different networks and
on a range of device types. This research investigates the role that the Web and its
various pervasive technologies along with a component-based Grid middleware can
play to address these concerns. The aim is to develop an approach to building adaptive
collaborative applications that can operate on heterogeneous and changing
environments. This work proposes a four-layer model that developers can use to build
adaptive collaborative applications. The four-layer model is populated with Web
technologies such as Scalable Vector Graphics (SVG), the Resource Description
Framework (RDF), the SPARQL Protocol and RDF Query Language (SPARQL) and Gridkit, a
middleware infrastructure, based on the Open Overlays concept. The Middleware layer
(the first layer of the four-layer model) addresses network and operating system
heterogeneity, the Group Communication layer enables collaboration and data sharing,
while the Knowledge Representation layer proposes an interoperable RDF data
modelling language and a flexible storage facility with an adaptive architecture for
heterogeneous data storage. And finally there is the Presentation and Interaction layer
which proposes a framework (Oea) for scalable and adaptive user interfaces. The four layer
model has been successfully used to build a collaborative application, called
Wildfurt that overcomes challenges facing collaborative applications. This research has
demonstrated new applications for cutting-edge Web technologies in the area of
building collaborative applications. SVG has been used for developing superior
adaptive and scalable user interfaces that can operate on different device types. RDF
and RDFS have also been used to design and model collaborative applications
providing a mechanism to define classes and properties and the relationships between
them. A flexible and adaptable storage facility that is able to change its architecture
based on the surrounding environments and requirements has also been achieved by
combining RDF technology with the Open Overlays middleware, Gridkit.
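The Knowledge Representation layer described above rests on RDF's triple model (subject, predicate, object) queried by SPARQL triple patterns. As a minimal illustration of that model, not of the thesis's Gridkit-based implementation, a toy in-memory store with wildcard pattern matching can be sketched; the vocabulary terms below are invented for the example.

```python
# Toy in-memory RDF-style triple store with a SPARQL-like triple-pattern
# query. Vocabulary terms (app:, rdf:) are illustrative only.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        """Assert one (subject, predicate, object) triple."""
        self.triples.add((subject, predicate, obj))

    def query(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard,
        like a variable in a SPARQL triple pattern."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("app:Wildfurt", "rdf:type", "app:CollaborativeApplication")
store.add("app:Wildfurt", "app:usesLayer", "app:GroupCommunication")
store.add("app:Wildfurt", "app:usesLayer", "app:KnowledgeRepresentation")

# Analogue of: SELECT ?layer WHERE { app:Wildfurt app:usesLayer ?layer }
layers = store.query(s="app:Wildfurt", p="app:usesLayer")
```

A real deployment would use an RDF library and a SPARQL endpoint; the point here is only the schema-free, relationship-centric modelling style that makes RDF attractive for heterogeneous collaborative data.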
Visualization and Modelling of Molecules and Crystals
Applications for visualization and modelling of molecules are not using the full potential of modern graphics hardware developed for the demanding needs of computer games. The goal of this project is to design an intuitive interface, including new widgets specialized for atomic structure editing, with visualization that uses the hardware available in modern graphics cards. It is particularly focused on high precision of modelling, usually available only in professional CAD programs.
Interactive Video Game Content Authoring using Procedural Methods
This thesis explores avenues for improving the quality and detail of game graphics, in the context of constraints that are common to most game development studios. The research begins by identifying two dominant constraints: limitations in the capacity of target gaming hardware/platforms, and processes that hinder the productivity of game art/content creation. From these constraints, themes were derived which directed the research's focus. These include the use of algorithmic or "procedural" methods in the creation of graphics content for games, and the use of an "interactive" content creation strategy, to better facilitate artist production workflow. Interactive workflow represents an emerging paradigm shift in content creation processes used by the industry, which directly integrates game rendering technology into the content authoring process. The primary motivation for this is to provide "high frequency" visual feedback that enables artists to see games content in context, during the authoring process. By merging these themes, this research develops a production strategy that takes advantage of "high frequency" feedback in an interactive workflow, to directly expose procedural methods to artists, for use in the content creation process. Procedural methods have a characteristically small "memory footprint" and are capable of generating massive volumes of data. Their small "size to data volume" ratio makes them particularly well suited for use in game rendering situations, where capacity constraints are an issue. In addition, an interactive authoring environment is well suited to the task of setting parameters for procedural methods, reducing a major barrier to their acceptance by artists. An interactive content authoring environment was developed during this research. Two algorithms were designed and implemented.
These algorithms provide artists with abstract mechanisms which accelerate common game content development processes; namely, object placement in game environments, and the delivery of variation between similar game objects. In keeping with the theme of this research, the core functionality of these algorithms is delivered via procedural methods. Through this, production overhead that is associated with these content development processes is essentially offloaded from artists onto the processing capability of modern gaming hardware. This research shows how procedurally based content authoring algorithms not only harmonize with the issues of hardware capacity constraints, but also make the authoring of larger and more detailed volumes of games content more feasible in the game production process. Algorithms and ideas developed during this research demonstrate the use of procedurally based, interactive content creation towards improving detail and complexity in the graphics of games.
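The small "size to data volume" ratio and the object-placement use case described above can be illustrated with a generic seeded-scatter sketch: a seed plus a few parameters stand in for thousands of hand-placed objects. The function and its parameters are illustrative assumptions, not the thesis's actual algorithms.

```python
# Sketch of procedural object placement: a few parameters deterministically
# generate many (x, y, rotation, scale) placements, so only the parameters
# need to be stored or shipped. Names and defaults are illustrative.
import random

def scatter_objects(seed: int, count: int, area: float,
                    min_scale: float = 0.8, max_scale: float = 1.2):
    """Deterministically generate (x, y, rotation, scale) placements."""
    rng = random.Random(seed)  # same seed -> identical placements every run
    placements = []
    for _ in range(count):
        placements.append((
            rng.uniform(0.0, area),            # x position
            rng.uniform(0.0, area),            # y position
            rng.uniform(0.0, 360.0),           # rotation in degrees
            rng.uniform(min_scale, max_scale)  # per-instance variation
        ))
    return placements

# Thousands of placements from a handful of stored parameters.
trees = scatter_objects(seed=42, count=10_000, area=1000.0)
```

In an interactive authoring workflow the artist would tweak `seed`, `count`, and `area` while watching the placements regenerate in the game renderer, which is the "high frequency" feedback loop the thesis describes.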