
    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion by estimating the user's head position and rendering the scene from the user's perspective. To this end, such approaches usually apply face-tracking algorithms to the front camera of the mobile device. However, this demands high computational resources and therefore commonly degrades application performance beyond the already high computational load of AR. In this paper, we present a method that reduces the computational demands of user perspective rendering by applying lightweight optical flow tracking and estimating the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, head-tracked user perspective rendering, and fixed point-of-view user perspective rendering.
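    The core idea, gating an expensive head tracker behind cheap optical flow, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the threshold `MOTION_THRESHOLD_PX` and the `face_tracker` callable are hypothetical, and OpenCV's pyramidal Lucas-Kanade tracker stands in for whatever lightweight flow method the authors used.

```python
# Hedged sketch: run full face tracking on the front camera only when
# lightweight Lucas-Kanade optical flow reports significant user motion;
# otherwise reuse the last head-pose estimate.
import cv2
import numpy as np

MOTION_THRESHOLD_PX = 2.0  # assumed tuning parameter, not from the paper

def median_flow(prev_gray, gray):
    """Median feature displacement between two consecutive front-camera frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return 0.0
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return 0.0
    return float(np.median(np.linalg.norm(nxt[good] - pts[good], axis=-1)))

def update_head_pose(prev_gray, gray, face_tracker, last_pose):
    """face_tracker is a placeholder for the expensive head-tracking routine."""
    if median_flow(prev_gray, gray) > MOTION_THRESHOLD_PX:
        return face_tracker(gray)   # expensive path: re-estimate the head pose
    return last_pose                # cheap path: motion too small, keep estimate
```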

    Remote Execution for 3D Graphics

    Mobile clients such as PDAs, laptops, wristwatches, and smartphones are rapidly emerging in the consumer market, and an increasing number of graphics applications are being developed for them. However, current hardware technology limits the processing power of these mobile devices, and wireless network bandwidth can be scarce and unreliable. A modern photorealistic graphics application is resource-hungry: it consumes large amounts of CPU cycles and memory, and network bandwidth if distributed. Besides, running such applications on mobile devices may also drain their battery power. The bulk of graphics computation involves floating-point operations, and the lack of hardware support for these on PDAs imposes further restrictions. Remote execution, wherein part or all of the rendering process is offloaded to a powerful surrogate server, is an attractive solution. We propose pipeline-splitting, a paradigm whereby 15 sub-stages of the graphics pipeline are isolated and instrumented with networking code so that each can run on either a graphics client or a surrogate server. To validate our concepts, we instrument Mesa3D, a popular implementation of the OpenGL graphics API, to support pipeline-splitting, creating Remote Mesa (RMesa). We further extend the Remote Execution model with an analytical model for predicting the rendering time and memory consumption of Remote Execution. Mobile devices have limited battery power, so it is important to understand whether, during Remote Execution, communication consumes more power than computation. To study this, we develop PowerSpy, a real-time power profiler for I/O devices and applications. Finally, we add Remote Execution to an existing distributed graphics framework targeted at mobile devices, namely MADGRAF. In addition to Remote Execution, MADGRAF has another policy known as the Transcoder Based Approach, in which the original 3D graphics image is modified to suit the mobile device's rendering capacity. Though this speeds up the rendering process, it affects photorealism. We propose an intelligent runtime decision-making engine, Intelligraph, which evaluates the runtime performance of the mobile client and decides between Remote Execution and the Transcoder Based Approach.
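    The following sketch illustrates the flavor of pipeline-splitting and the offload decision. It is not RMesa's code: the wire protocol, the `stage` parameter, and the cost model are assumptions made for illustration, showing how a pipeline stage boundary can be instrumented with networking code and how a simple analytical model might decide between local and remote rendering.

```python
# Conceptual sketch of pipeline-splitting: geometry is shipped past a chosen
# split point to a surrogate server, which runs the remaining pipeline stages
# and streams the rendered frame back to the mobile client.
import pickle
import socket
import struct

def send_msg(sock, obj):
    data = pickle.dumps(obj)
    sock.sendall(struct.pack(">I", len(data)) + data)  # length-prefixed message

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return pickle.loads(recv_exact(sock, length))

def render_remote(host, port, vertices, stage="rasterize"):
    """Client side: send geometry up to the (hypothetical) split point,
    receive the framebuffer rendered by the surrogate server."""
    with socket.create_connection((host, port)) as sock:
        send_msg(sock, {"stage": stage, "vertices": vertices})
        return recv_msg(sock)

def should_offload(local_ms, remote_ms, bytes_to_send, bandwidth_bps):
    """Analytical-model flavor: offload only if predicted transfer time plus
    remote rendering time beats local rendering time."""
    transfer_ms = bytes_to_send * 8 / bandwidth_bps * 1000
    return transfer_ms + remote_ms < local_ms
```

    A runtime engine like Intelligraph would evaluate a predicate of this kind per frame or per scene, using measured bandwidth and profiled rendering times as inputs.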

    Adaptive transfer functions: improved multiresolution visualization of medical models

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9
    Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice the models are usually up to 512x512x2000 voxels. These resolutions exceed the capabilities of conventional GPUs, the ones usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The data loss reduces visualization quality, and this is not commonly compensated for by other actions that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that the quality of renderings is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also present an evaluation of these results based on perceptual metrics.
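    To convey the underlying problem, here is a simplified numpy sketch of why naive downsampling breaks classification and how adapting it helps. The paper's actual algorithm differs in detail; this only contrasts classifying the averaged value, `tf(mean)`, against averaging the classification over the block, `mean(tf)`, which better preserves the full-resolution appearance.

```python
# Illustrative sketch (not the paper's algorithm): compare naive
# classification of a downsampled volume with a block-averaged
# classification that accounts for the fine voxels each coarse voxel replaces.
import numpy as np

def downsample_with_adapted_tf(volume, tf, block=2):
    """volume: 3D float array; tf: maps scalar values to RGBA, shape (..., 4)."""
    z, y, x = (s // block for s in volume.shape)
    v = volume[:z * block, :y * block, :x * block]
    v = v.reshape(z, block, y, block, x, block)

    # Naive path: downsample first, classify the averaged value -> tf(mean).
    coarse = v.mean(axis=(1, 3, 5))
    naive_rgba = tf(coarse)

    # Adapted path: classify the fine values, then average -> mean(tf(values)).
    adapted_rgba = tf(v).mean(axis=(1, 3, 5))
    return naive_rgba, adapted_rgba

# Usage with a toy step transfer function (bone-like threshold):
# tf = lambda s: np.stack([s, 0 * s, 1 - s, (s > 0.5).astype(float)], axis=-1)
# naive, adapted = downsample_with_adapted_tf(np.random.rand(64, 64, 64), tf)
```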

    Point Cloud Framework for Rendering 3D Models Using Google Tango

    This project seeks to demonstrate the feasibility of point cloud meshing for capturing and modeling three-dimensional objects on consumer smartphones and tablets. Traditional methods of capturing objects require hundreds of images, are very slow, and consume a large amount of cellular data for the average consumer. As hardware manufacturers provide the tools to capture point cloud data, software developers need a starting point for capturing and meshing point clouds to create 3D models. The project uses Google's Tango computer vision library for Android to capture point clouds on devices with depth-sensing hardware. The point clouds are combined and meshed into models for use in 3D rendering projects. We expect our results to be embraced by the Android market because capturing point clouds is fast and does not carry a large data footprint.
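    The combine-and-mesh step can be sketched with the Open3D library, which is an assumption here: the project's own framework runs on Android with Tango, whereas this shows the equivalent merge-downsample-mesh flow over XYZ point arrays such as Tango depth dumps. The `scan0.xyz` / `scan1.xyz` file names are hypothetical.

```python
# Hedged sketch: merge several captured point clouds and mesh them via
# Poisson surface reconstruction using Open3D (illustrative, not the
# project's Tango/Android code).
import numpy as np
import open3d as o3d

def mesh_from_point_clouds(clouds):
    """clouds: list of (N, 3) numpy arrays of XYZ points."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.vstack(clouds))
    # Thin out overlapping captures, then estimate normals for meshing.
    pcd = pcd.voxel_down_sample(voxel_size=0.01)
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    return mesh

# mesh = mesh_from_point_clouds([np.loadtxt("scan0.xyz"), np.loadtxt("scan1.xyz")])
# o3d.io.write_triangle_mesh("model.ply", mesh)
```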

    GPU-based Image Analysis on Mobile Devices

    With the rapid advances in mobile technology, many mobile devices are capable of capturing high-quality images and video with their embedded cameras. This paper investigates techniques for real-time processing of the resulting images, particularly on-device processing utilizing a graphical processing unit. Issues and limitations of image processing on mobile devices are discussed, and the performance of graphical processing units on a range of devices is measured through a programmable shader implementation of Canny edge detection.
    Comment: Proceedings of Image and Vision Computing New Zealand 201
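    The per-frame timing methodology behind such measurements can be sketched as below. Note the assumption: the paper benchmarks a GLSL shader pipeline on-device, while this sketch times OpenCV's CPU Canny as a stand-in reference implementation, which is the natural baseline to compare a shader version against.

```python
# Illustrative benchmark in the spirit of the paper's measurements: mean
# per-frame edge-detection time, here using OpenCV's CPU Canny rather than
# the paper's programmable-shader implementation.
import time
import cv2

def time_canny(frames, low=50, high=150):
    """Return mean per-frame Canny time in milliseconds."""
    total = 0.0
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        t0 = time.perf_counter()
        cv2.Canny(gray, low, high)
        total += time.perf_counter() - t0
    return 1000.0 * total / len(frames)

# cap = cv2.VideoCapture(0)
# frames = [cap.read()[1] for _ in range(100)]
# print(f"mean Canny time: {time_canny(frames):.2f} ms/frame")
```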