489 research outputs found
Remote rendering for virtual reality on mobile devices
Nowadays it is possible to run complex VR applications on mobile devices using simple VR goggles such as Google Cardboard. Nevertheless, this opportunity has not yet come into wide use. One reason is the limited processing power of even high-end devices, which is a major obstacle for mobile VR technologies. One solution is to render the high-quality 3D world on a remote server and stream the resulting video to the mobile device.
Scalable Remote Rendering using Synthesized Image Quality Assessment
Depth-image-based rendering (DIBR) is widely used to support 3D interactive graphics on low-end mobile devices. Although it reduces the rendering cost on a mobile device, it essentially turns that cost into depth image transmission cost, or bandwidth consumption, creating a performance bottleneck in a remote rendering system. To address this problem, we design a scalable remote rendering framework based on synthesized image quality assessment. Specifically, we design an efficient synthesized image quality metric based on Just Noticeable Distortion (JND), properly measuring human-perceived geometric distortions in synthesized images. Based on this, we predict quality-aware reference viewpoints, with viewpoint intervals optimized by the JND-based metric. An adaptive transmission scheme is also developed to control depth image transmission based on perceived quality and network bandwidth availability. Experimental results show that our approach effectively reduces transmission frequency and network bandwidth consumption while maintaining perceived quality on mobile devices. A prototype system is implemented to demonstrate the scalability of our proposed framework to multiple clients.
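The adaptive transmission scheme described above can be pictured as a per-frame decision: keep warping the last reference depth image locally while the predicted distortion stays below the JND threshold, and request a new reference only when the distortion would become noticeable and the link can carry it. The following is a minimal sketch of that decision logic; the function names and the simple linear distortion model are illustrative assumptions, not the paper's actual metric or code.

```python
def predicted_distortion(viewpoint_delta, depth_complexity):
    """Toy stand-in for the JND-based synthesized-image quality metric:
    distortion grows with how far the client has moved from the last
    reference viewpoint and with the scene's depth complexity."""
    return viewpoint_delta * depth_complexity

def should_transmit(viewpoint_delta, depth_complexity,
                    jnd_threshold, frame_bits, bandwidth_bits):
    """Send a new reference depth image only if the geometric distortion
    would be noticeable (above JND) and the link can carry the frame."""
    noticeable = predicted_distortion(viewpoint_delta, depth_complexity) > jnd_threshold
    affordable = frame_bits <= bandwidth_bits
    return noticeable and affordable

# Small camera move in a simple scene: below JND, keep warping locally.
print(should_transmit(0.1, 1.0, jnd_threshold=0.5,
                      frame_bits=2_000_000, bandwidth_bits=8_000_000))  # False
# Large move in a complex scene: distortion noticeable, send a new reference.
print(should_transmit(0.9, 2.0, jnd_threshold=0.5,
                      frame_bits=2_000_000, bandwidth_bits=8_000_000))  # True
```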
Interacting with New York City Data by HoloLens through Remote Rendering
In the digital era, Extended Reality (XR) is considered the next frontier. However, XR systems are computationally intensive and must operate within strict latency constraints. Thus, XR devices with finite computing resources are limited in the quality of experience (QoE) they can offer, particularly for large 3D data. This problem can be effectively addressed by offloading the highly intensive rendering tasks to a remote server. We therefore propose a remote-rendering-enabled XR system that presents the 3D city model of New York City on the Microsoft HoloLens. Experimental results indicate that remote rendering outperforms local rendering for the New York City model, improving average QoE by at least 21%. Additionally, we clarify the network traffic pattern of the proposed XR system, developed under the OpenXR standard.
Remote rendering control using Python scripts and Dropbox technology
Rendering a 3D animation often takes a very long time. When the process would take several hours or even days, it is inconvenient to stay near the rendering computer to control and oversee it. Through their work with 3D computer graphics technologies, the authors found that there is no simple solution on the market for monitoring a remote computer running a rendering process. This paper describes the development of a system that enables those tasks to be performed from a remote computer or any mobile device. The developed proof-of-concept system consists of two Python programs communicating over the Dropbox service and a computer running the Autodesk Maya software.
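The two-program arrangement described above amounts to a file-based command channel through a synced folder: the controller drops a command file into the shared folder, and a watcher on the rendering machine polls that folder, acts on new commands, and writes back a status file. The sketch below illustrates this pattern with plain local files standing in for the Dropbox-synced folder; the folder path, file names, and command vocabulary are assumptions for illustration, and the actual system drives Autodesk Maya through its own scripts.

```python
import json
import time
from pathlib import Path

# Stand-in for the Dropbox-synced folder shared by both machines.
SHARED = Path("dropbox_sync")
SHARED.mkdir(exist_ok=True)

def send_command(name, **args):
    """Controller side: write a command file for the rendering machine."""
    (SHARED / "command.json").write_text(json.dumps({"cmd": name, **args}))

def poll_once():
    """Watcher side: consume one pending command and report status."""
    cmd_file = SHARED / "command.json"
    if not cmd_file.exists():
        return None
    command = json.loads(cmd_file.read_text())
    cmd_file.unlink()  # mark the command as consumed
    status = {"received": command["cmd"], "at": time.time()}
    (SHARED / "status.json").write_text(json.dumps(status))
    return status

send_command("pause_render", frame=120)
print(poll_once()["received"])  # pause_render
```

In the real system the watcher would run in a loop with a sleep interval, relying on the Dropbox client on each machine to propagate the files.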
Prediction-Based Prefetching for Remote Rendering Streaming in Mobile Virtual Environments
Remote image-based rendering (IBR) is the most suitable solution for rendering complex 3D scenes on mobile devices, where the server renders the 3D scene and streams the rendered images to the client. However, sending a large number of images is inefficient due to the possible limitations of wireless connections. In this paper, we propose a prefetching scheme at the server side that predicts client movements and prefetches the corresponding images. In addition, an event-driven simulator was designed and implemented to evaluate the performance of the proposed scheme. The simulator was used to compare prediction-based prefetching with prefetching images based on spatial locality. Several experiments were conducted to study the performance with different movement patterns as well as with different virtual environments (VEs). The results show that the hit ratio of the prediction-based scheme exceeds that of the locality-based scheme by approximately 35% and 17% for random and circular walk movement patterns, respectively. In addition, for a VE with a high level of detail, the proposed scheme outperforms the locality-based scheme by approximately 13%. However, for a VE with a low level of detail, the locality-based scheme outperforms the proposed scheme by only 5%.
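The two policies compared above can be sketched on a grid of pre-rendered viewpoints: the prediction-based policy extrapolates the client's last move to guess the next cell and prefetches around it, while the spatial-locality policy simply prefetches the current cell's neighbours. The grid layout and policy details below are assumptions for illustration, not the paper's implementation.

```python
def predict_next(prev, curr):
    """Linear extrapolation: assume the client keeps moving the same way."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    return (curr[0] + dx, curr[1] + dy)

def prefetch_prediction(prev, curr):
    """Prefetch the predicted cell and its immediate neighbours."""
    px, py = predict_next(prev, curr)
    return {(px + i, py + j) for i in (-1, 0, 1) for j in (-1, 0, 1)}

def prefetch_locality(curr):
    """Spatial locality: prefetch all cells around the current one."""
    cx, cy = curr
    return {(cx + i, cy + j) for i in (-1, 0, 1) for j in (-1, 0, 1)}

# Client walking steadily east, fast enough to skip a cell: the prediction
# set is centred one cell ahead and still covers the move; the locality set
# is centred on the current cell and misses it.
prev, curr, actual_next = (3, 5), (4, 5), (6, 5)
print(actual_next in prefetch_prediction(prev, curr))  # True
print(actual_next in prefetch_locality(curr))          # False
```

This also matches the reported results: prediction helps most when movement is consistent enough to extrapolate, while locality alone suffices for small, erratic moves.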
…