Web-Based Visualization of Very Large Scientific Astronomy Imagery
Visualizing and navigating through large astronomy images from a remote
location with current astronomy display tools can be a frustrating experience
in terms of speed and ergonomics, especially on mobile devices. In this paper,
we present a high performance, versatile and robust client-server system for
remote visualization and analysis of extremely large scientific images.
Applications of this work include survey image quality control, interactive
data query and exploration, citizen science, as well as public outreach. The
proposed software is entirely open source and is designed to be generic and
applicable to a variety of datasets. It provides access to floating point data
at terabyte scales, with the ability to precisely adjust image settings in
real-time. The proposed clients are light-weight, platform-independent web
applications built on standard HTML5 web technologies and compatible with both
touch and mouse-based devices. We put the system to the test and assess the
performance of the system and show that a single server can comfortably handle
more than a hundred simultaneous users accessing full precision 32 bit
astronomy data.
Comment: Published in Astronomy & Computing. IIPImage server available from
http://iipimage.sourceforge.net . Visiomatic code and demos available from
http://www.visiomatic.org
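The core trick behind serving terabyte-scale imagery to light-weight clients is that the browser never requests the whole image, only the fixed-size tiles intersecting the current viewport. A minimal sketch of that tile arithmetic follows; the helper name and the 256-pixel tile size are illustrative assumptions, not the actual VisiOmatic/IIPImage API.

```python
# Hypothetical helper showing the tile bookkeeping a tiled-image client
# performs: given full image dimensions and a viewport rectangle, compute
# which (col, row) tiles must be fetched from the server.

def tiles_for_viewport(image_w, image_h, view_x, view_y, view_w, view_h,
                       tile=256):
    """Return (col, row) indices of the tiles covering the viewport."""
    # Clamp the viewport to the image bounds.
    x0 = max(0, view_x)
    y0 = max(0, view_y)
    x1 = min(image_w, view_x + view_w)
    y1 = min(image_h, view_y + view_h)
    # Integer ranges of tile indices intersecting the clamped viewport.
    cols = range(x0 // tile, (x1 - 1) // tile + 1)
    rows = range(y0 // tile, (y1 - 1) // tile + 1)
    return [(c, r) for r in rows for c in cols]

# A 1000x1000 viewport into a huge survey image needs only a 4x4 block of
# 256-pixel tiles, regardless of the total image size.
print(len(tiles_for_viewport(100000, 100000, 0, 0, 1000, 1000)))  # 16
```

Because the number of fetched tiles depends only on the viewport, not the dataset, server load per user stays bounded even at terabyte scales.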
Mobile graphics: SIGGRAPH Asia 2017 course
Peer reviewed. Postprint (published version).
Web-based Stereoscopic Collaboration for Medical Visualization
Medical volume visualization is a valuable tool for examining volume data in medical practice and teaching. An interactive, stereoscopic, and collaborative real-time presentation is necessary to understand the data completely and in detail. However, such visualization of high-resolution data is, because of its high hardware requirements, feasible almost exclusively on specialized visualization systems. Remote visualization is used to make such visualization available peripherally, but this almost always requires complex software deployments, which impedes universal ad-hoc usability. From this situation follows the hypothesis: a high-performance remote visualization system, specialized for stereoscopy and ease of use, can serve interactive, stereoscopic, and collaborative medical volume visualization.
The most recent literature on remote visualization describes applications that require only a plain web browser. However, none of these place particular emphasis on performant usability for every participant, nor do they provide the functionality needed to serve multiple stereoscopic presentation systems. Given the familiarity of web browsers, their ease of use, and their wide availability, the following specific question arose: can we develop a system that supports all of these aspects while requiring only a plain web browser, without additional software, as the client?
A proof of concept was carried out to verify the hypothesis. It comprised the development of a prototype, its practical application, and the measurement and comparison of its performance.
The resulting prototype (CoWebViz) is one of the first browser-based systems to provide fluid, interactive remote visualization in real time without additional software. Tests and comparisons show that the approach performs better than other, similar systems tested. The simultaneous use of different stereoscopic presentation systems with such a simple remote visualization system is currently unique. Its use for normally very resource-intensive stereoscopic and collaborative anatomy education, together with intercontinental participants, demonstrates the feasibility and the simplifying character of the approach. The feasibility of the approach was also shown by its successful use in other application scenarios, such as grid computing and surgery.
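A common way to reach a plain browser without any client software is to pack rendered frames into a multipart/x-mixed-replace ("motion JPEG") HTTP response, which a bare `<img>` tag can display as live video. The sketch below shows only that framing step; the boundary name and byte layout are illustrative, not CoWebViz's actual wire format.

```python
# Illustrative framing of one rendered frame as a part of a
# multipart/x-mixed-replace stream. A server writes such parts to an open
# HTTP response, one per frame, and never closes the connection.

BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes):
    """Wrap one JPEG-encoded frame as a multipart stream part."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n"
            b"\r\n" + jpeg_bytes + b"\r\n")

part = mjpeg_part(b"\xff\xd8...fake jpeg payload...\xff\xd9")
print(part.startswith(b"--frame"))  # True
```

The appeal of this design is exactly what the thesis targets: the client needs no plug-in, no codec negotiation, and no deployment step, only a URL.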
Remote rendering for virtual reality on mobile devices
Nowadays it is possible to run complicated VR applications on mobile devices using simple VR goggles, e.g. Google Cardboard. Nevertheless, this opportunity has not yet seen wide adoption. One of the reasons is the limited processing power of even high-end devices, which is a massive obstacle for mobile VR technologies. One solution is to render the high-quality 3D world on a remote server and stream the video to the mobile device.
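A quick back-of-the-envelope calculation shows why streaming shifts the burden off the phone: the device only decodes video, and the required bandwidth is modest. The function and the ~0.1 bits-per-pixel compression ballpark below are illustrative assumptions, not measurements from the paper.

```python
# Rough compressed-video bandwidth estimate for a stereo (two-eye) stream,
# using a common H.264 quality ballpark of ~0.1 bits per pixel.

def stream_bandwidth_mbps(width, height, fps, bits_per_pixel=0.1):
    """Approximate bandwidth in Mbps for a two-eye video stream."""
    return width * height * fps * bits_per_pixel * 2 / 1e6  # 2 eyes

# 1280x720 per eye at 60 fps lands around 11 Mbps: feasible over Wi-Fi,
# whereas rendering the same scene locally would saturate a mobile GPU.
print(round(stream_bandwidth_mbps(1280, 720, 60), 1))  # 11.1
```

The real constraint for remote VR is therefore not bandwidth but motion-to-photon latency, since the head pose must round-trip to the server before the matching frame arrives.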
Software-Enhanced Teaching and Visualization Capabilities of an Ultra-High-Resolution Video Wall
This paper presents a modular approach to enhance the capabilities and
features of a visualization and teaching room using software. This approach was
applied to a room with a large, high-resolution (7680 × 4320 pixels),
tiled screen of 13 × 7.5 feet as its main display, and with a variety of
audio and video inputs, connected over a network. Many of the techniques
described are possible because of a software-enhanced setup, utilizing existing
hardware and a collection of mostly open-source tools, making it possible to perform
collaborative, high-resolution visualizations as well as broadcasting and
recording workshops and lectures. The software approach is flexible and allows
one to add functionality without changing the hardware.
Comment: PEARC'19: "Practice and Experience in Advanced Research Computing",
July 28-August 1, 2019 - Chicago, IL, US
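Driving a tiled wall in software comes down to pixel bookkeeping: each panel shows a crop of one large logical canvas. The sketch below assumes a hypothetical 4×3 panel grid; the paper does not state its panel layout, so the numbers are illustrative.

```python
# Split a large logical canvas across a grid of display panels and compute
# the crop rectangle (x, y, w, h) each panel should show. Assumes the
# canvas divides evenly across the grid.

def panel_crops(canvas_w, canvas_h, cols, rows):
    """Return {(col, row): (x, y, w, h)} crop regions, one per panel."""
    w, h = canvas_w // cols, canvas_h // rows
    return {(c, r): (c * w, r * h, w, h)
            for r in range(rows) for c in range(cols)}

# The paper's 7680x4320 canvas under an assumed 4x3 grid: each panel shows
# a 1920x1440 region; the bottom-right panel starts at (5760, 2880).
crops = panel_crops(7680, 4320, 4, 3)
print(crops[(3, 2)])  # (5760, 2880, 1920, 1440)
```

Tools in this space typically distribute these crops over the network to per-panel render clients, which is why the functionality can grow purely in software.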
Doctor of Philosophy dissertation.
Dataflow pipeline models are widely used in visualization systems. Despite recent advancements in parallel architecture, most systems still support only a single CPU or a small collection of CPUs, such as an SMP workstation. Even systems that are specifically tuned towards parallel visualization provide execution models that support only data-parallelism, ignoring task-parallelism and pipeline-parallelism. With the recent popularization of machines equipped with multicore CPUs and multi-GPU units, these visualization systems are undoubtedly falling further behind in reaching maximum efficiency. On the other hand, several libraries exist that can schedule program executions on multiple CPUs and/or multiple GPUs. However, owing to the differences between executing a task graph and executing a pipeline, and because these libraries' APIs are considerably low-level, it remains a challenge to integrate them into current visualization systems. Thus, there is a need for a redesigned dataflow architecture that fully supports and exploits the power of highly parallel machines in large-scale visualization. The new design must be able to schedule executions on heterogeneous platforms while supporting arbitrarily large datasets through the use of streaming data structures.
The primary goal of this dissertation is to develop a parallel dataflow architecture for streaming large-scale visualizations. The framework supports platforms ranging from multicore processors to clusters consisting of thousands of CPUs and GPUs. We achieve this by introducing the notion of Virtual Processing Elements and Task-Oriented Modules, along with a highly customizable scheduler that controls the assignment of tasks to elements dynamically. This creates an intuitive way to maintain multiple CPU/GPU kernels while still providing coherency and synchronization across module executions.
We have implemented these techniques in HyperFlow, which comprises an API with all the basic dataflow constructs described in the dissertation, and a distributed run-time library that can deploy those pipelines on multicore, multi-GPU, and cluster-based platforms.
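The scheduling idea can be sketched in miniature: tasks emitted by dataflow modules are assigned dynamically to processing elements rather than being bound statically. The toy below models the elements as named slots and uses a greedy least-loaded policy; the names and policy are ours for illustration, not HyperFlow's actual API or scheduler.

```python
# Toy dynamic scheduler: assign each (task, cost) pair to whichever
# processing element currently has the least accumulated load, mimicking
# the idea of a customizable scheduler mapping tasks onto heterogeneous
# Virtual Processing Elements.

def schedule(tasks, elements):
    """Return (assignment, load) after greedy least-loaded placement."""
    load = {e: 0 for e in elements}
    assignment = {}
    for task, cost in tasks:
        target = min(load, key=load.get)   # dynamic, load-aware choice
        assignment[task] = target
        load[target] += cost
    return assignment, load

tasks = [("isosurface", 4), ("slice", 1), ("render", 3), ("stream", 2)]
assignment, load = schedule(tasks, ["cpu0", "cpu1", "gpu0"])
print(load)  # total work 10 spread across the three elements
```

A real heterogeneous scheduler would additionally weight costs per device type (a GPU kernel is not a CPU kernel), but the dynamic-assignment structure is the same.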
Literature Review of Mixed Reality Research
In the global context, while mixed reality has been an emerging concept for
years, recent technological and scientific advancements have now made it poised
to revolutionize industries and daily life by offering enhanced functionalities
and improved services. Besides reviewing the most highly cited papers of the last
20 years among over a thousand research papers on mixed reality, this systematic
review surveys the state-of-the-art applications and utilities of mixed
reality, primarily by scrutinizing the associated papers from 2022 and 2023.
It focuses on the potential this technology has for providing digitally
supported simulations and other utilities in the era of large language models,
highlights the potential and limitations of innovative solutions, and
draws attention to emerging research directions such as telemedicine, remote
control, and the optimization of direct volume rendering. The paper's associated
repository is publicly accessible at https://aizierjiang.github.io/mr
Methods and design issues for next generation network-aware applications
Networks are becoming an essential component of modern cyberinfrastructure, and this work describes methods of designing distributed applications for high-speed networks to improve application scalability, performance, and capabilities. As the amount of data generated by scientific applications continues to grow, applications should be designed to use parallel, distributed resources and high-speed networks in order to handle and process it. For scalable application design, developers should move away from the current component-based approach and instead implement an integrated, non-layered architecture in which applications can use specialized low-level interfaces.
The main focus of this research is interactive, collaborative visualization of large datasets. This work describes how a visualization application can be improved by using distributed resources and high-speed network links to interactively visualize tens of gigabytes of data and handle terabyte datasets while maintaining high quality. The application supports interactive frame rates, high resolution, and collaborative visualization, and sustains remote I/O bandwidths of several Gbps (up to 30 times faster than local I/O).
Motivated by the distributed visualization application, this work also investigates remote data access systems. Because wide-area networks may have high latency, the remote I/O system uses an architecture that effectively hides latency. Five remote data access architectures are analyzed, and the results show that an architecture combining bulk and pipeline processing is the best solution for high-throughput remote data access. The resulting system, which also supports high-speed transport protocols and configurable remote operations, is up to 400 times faster than a comparable existing remote data access system.
Transport protocols are compared to understand which protocol can best utilize high-speed network connections, concluding that a rate-based protocol is the best solution, being 8 times faster than standard TCP. An HD-based remote teaching application experiment is conducted, illustrating the potential of network-aware applications in a production environment. Future research areas are presented, with emphasis on network-aware optimization, execution, and deployment scenarios.
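Why pipelining hides wide-area latency can be shown with a simple two-stage timing model: fetch block N+1 while processing block N, and the per-block cost drops from the sum of the stages to the maximum of the stages. This is a generic illustrative model, not the dissertation's implementation.

```python
# Two-stage timing model for remote data access: each block must be
# fetched over the network, then processed locally.

def total_time(blocks, fetch, process, pipelined):
    """Total time in arbitrary units for `blocks` blocks."""
    if not pipelined:
        # Strictly sequential: fetch, process, fetch, process, ...
        return blocks * (fetch + process)
    # Pipelined: fill the pipeline once, then the slower stage dominates.
    return fetch + process + (blocks - 1) * max(fetch, process)

# 100 blocks, 5 units of network fetch and 4 of processing per block:
print(total_time(100, 5, 4, pipelined=False))  # 900
print(total_time(100, 5, 4, pipelined=True))   # 504
```

The pipelined total approaches `blocks * max(fetch, process)`, which is why combining bulk transfers (to keep `fetch` small per block) with pipelining (to overlap it) wins in the architecture comparison.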
Efficient 3D Reconstruction, Streaming and Visualization of Static and Dynamic Scene Parts for Multi-client Live-telepresence in Large-scale Environments
Despite the impressive progress of telepresence systems for room-scale scenes
with static and dynamic scene entities, expanding their capabilities to
scenarios with larger dynamic environments beyond a fixed size of a few
square meters remains challenging.
In this paper, we aim at sharing 3D live-telepresence experiences in
large-scale environments beyond room scale with both static and dynamic scene
entities at practical bandwidth requirements only based on light-weight scene
capture with a single moving consumer-grade RGB-D camera. To this end, we
present a system built upon a novel hybrid volumetric scene representation.
It combines a voxel-based representation for the static contents, which not
only stores the reconstructed surface geometry but also contains information
about the object semantics and their accumulated dynamic movement over time,
with a point-cloud-based representation for the dynamic scene parts, where
the separation from static parts is achieved based on semantic and instance
information extracted from the input frames. Both static and dynamic content
are streamed independently yet simultaneously: potentially moving but
currently static scene entities are seamlessly integrated into the static
model until they become dynamic again, and static and dynamic data are fused
at the remote client. As a result, our system achieves VR-based
live-telepresence at close to real-time rates. Our evaluation demonstrates
the potential of our novel approach in terms of visual quality, performance,
and ablation studies regarding the involved design choices.
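The static/dynamic split can be sketched as a per-frame partition: points whose semantic class is known to move go to the dynamic point cloud, and everything else is quantized into the persistent voxel grid. The class list, voxel size, and function below are illustrative assumptions, not the paper's exact pipeline.

```python
# Partition semantically labeled points from one RGB-D frame into a
# static voxel set and a dynamic point list.

DYNAMIC_CLASSES = {"person", "chair"}      # assumed movable categories

def split_frame(points, voxel=0.05):
    """Return (static_voxels, dynamic_points) for labeled (x, y, z, label)."""
    static_voxels, dynamic_points = set(), []
    for x, y, z, label in points:
        if label in DYNAMIC_CLASSES:
            # Dynamic geometry stays as points, re-streamed every frame.
            dynamic_points.append((x, y, z, label))
        else:
            # Static geometry is fused into the persistent voxel grid,
            # so repeated observations collapse into the same cell.
            static_voxels.add((int(x / voxel), int(y / voxel), int(z / voxel)))
    return static_voxels, dynamic_points

frame = [(0.10, 0.0, 0.0, "wall"), (0.21, 0.0, 0.0, "wall"),
         (1.0, 0.5, 0.0, "person")]
s, d = split_frame(frame)
print(len(s), len(d))  # 2 1
```

Streaming the two partitions separately is what keeps bandwidth practical: the voxel set only sends deltas as the static model grows, while the small dynamic point set is refreshed every frame.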