
    Analysis of 3D Animation Video Rendering Using the Blender Application Based on Network Render

    Rendering is the process of generating a 2D or 3D image. Because rendering demands high computer specifications, it motivates the provision of supercomputing resources. This research analyzes 3D animation video rendering using the Blender application based on Network Render. An experimental method was used: the number of frames was varied to measure the execution time of the 3D animation video rendering process. Each computer in the workstation was configured in the Blender application as a Master, a Slave, or a Client. The results show that the execution time of the rendering process increases with the number of frames, but decreases as the number of Slave computers grows.
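
    The study configures Blender's Network Render roles (Master, Slave, Client) through Blender itself, and the abstract does not give that configuration, so the sketch below only illustrates the underlying idea under stated assumptions: split the frame range into one contiguous chunk per slave, launch Blender in background mode on each host through its standard command-line flags, and time the whole run. The hostnames, the .blend file path, and the ssh-based dispatch are hypothetical, not the paper's setup.

```python
# Minimal sketch, not the Network Render addon itself: split a frame range
# across "slave" hosts and time the distributed render.
# Hostnames, the .blend path, and the ssh dispatch are assumptions.
import subprocess
import time

BLEND_FILE = "animation.blend"        # hypothetical project file
SLAVES = ["slave-01", "slave-02"]     # hypothetical slave hostnames
FRAME_START, FRAME_END = 1, 240

def frame_chunks(start, end, workers):
    """Split [start, end] into one contiguous chunk per worker."""
    total = end - start + 1
    size = (total + workers - 1) // workers
    for i in range(workers):
        s = start + i * size
        e = min(s + size - 1, end)
        if s <= e:
            yield s, e

def render_on(host, first, last):
    """Render frames [first, last] on a remote host with Blender's CLI flags."""
    cmd = ["ssh", host, "blender", "-b", BLEND_FILE,
           "-s", str(first), "-e", str(last), "-a"]
    return subprocess.Popen(cmd)

t0 = time.time()
jobs = zip(SLAVES, frame_chunks(FRAME_START, FRAME_END, len(SLAVES)))
procs = [render_on(host, s, e) for host, (s, e) in jobs]
for p in procs:
    p.wait()
print(f"Execution time with {len(SLAVES)} slaves: {time.time() - t0:.1f} s")
```

    Timing such runs while varying FRAME_END and the number of entries in SLAVES mirrors the experiment described above: execution time grows with the frame count and shrinks as slaves are added.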

    Low-Bandwidth, Client-Based, Rendering for Gaming Videos

    A system for low-bandwidth, client-based rendering of gaming videos is described. The system may include a gaming device, a server device, and user devices. The gaming device may include a processing device and a graphics processing unit (GPU). The processing device receives user input and generates rendering commands from the user input. A first rendering unit of the GPU generates gaming video from the rendering commands. The server device receives the gaming video and the rendering commands from the gaming device. The server device determines that the first user device is not compatible with the rendering commands, compresses the gaming video, and transmits the compressed gaming video to the first user device. The server device determines that the second user device is compatible with the rendering commands and transmits the rendering commands to the second user device. A second rendering unit of the second user device generates rendered gaming video from the rendering commands.
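
    The abstract describes an architectural decision rather than an API, so the following is only a minimal sketch of the server-side routing it implies; the UserDevice class, the dispatch function, and the use of zlib for video compression are illustrative assumptions, not the described system.

```python
# Illustrative sketch of the routing decision described above: the server sends
# raw rendering commands to clients that can execute them, and compressed video
# to clients that cannot. Names and the compression call are assumptions.
import zlib
from dataclasses import dataclass

@dataclass
class UserDevice:
    name: str
    supports_rendering_commands: bool

def dispatch(device: UserDevice, rendering_commands: bytes, gaming_video: bytes):
    """Return (payload_kind, payload) to transmit to the device."""
    if device.supports_rendering_commands:
        # Compatible device: send the (much smaller) command stream and let
        # its local rendering engine produce the video.
        return "commands", rendering_commands
    # Incompatible device: compress the pre-rendered video and send that.
    return "video", zlib.compress(gaming_video)

phone = UserDevice("phone", supports_rendering_commands=False)
console = UserDevice("console", supports_rendering_commands=True)
for dev in (phone, console):
    kind, payload = dispatch(dev, b"<draw calls>", b"<raw frames>")
    print(dev.name, kind, len(payload), "bytes")
```

    Routing the command stream to capable clients is what keeps bandwidth low; only incompatible clients fall back to receiving compressed video.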

    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms to the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands of user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, to head-tracked user perspective rendering, and to fixed-point-of-view user perspective rendering.
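
    The abstract does not specify how the lightweight optical flow stage is implemented; the sketch below shows one common way such a stage could look, using sparse Lucas-Kanade optical flow from OpenCV on the front-camera feed and the mean feature displacement as a cheap motion proxy. The camera index and the motion threshold are assumptions, not values from the paper.

```python
# Sketch of the lightweight stage only: sparse optical flow on the front-camera
# feed to estimate apparent motion before a full face/head tracker is started.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # assumed front-camera index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                              qualityLevel=0.01, minDistance=8)

while ok and pts is not None and len(pts) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_old = pts[status.flatten() == 1]
    good_new = nxt[status.flatten() == 1]
    # Mean feature displacement as a proxy for user/device motion.
    motion = float(np.mean(np.linalg.norm(good_new - good_old, axis=2))) if len(good_new) else 0.0
    if motion > 2.0:                           # hypothetical threshold, pixels per frame
        print("significant motion detected: hand off to the full head tracker")
    prev_gray = gray
    pts = good_new.reshape(-1, 1, 2) if len(good_new) else None
```

    The point of such a stage is that tracking a few dozen corners is far cheaper than running a face tracker on every frame, which matches the paper's goal of deferring head tracking until it is actually needed.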

    MoSculp: Interactive Visualization of Shape and Time

    We present a system that allows users to visualize complex human motion via 3D motion sculptures: a representation that conveys the 3D structure swept by a human body as it moves through space. Given an input video, our system computes the motion sculpture and provides a user interface for rendering it in different styles, including options to insert the sculpture back into the original video, render it in a synthetic scene, or physically print it. To provide this end-to-end workflow, we introduce an algorithm that estimates the human's 3D geometry over time from a set of 2D images and develop a 3D-aware image-based rendering approach that embeds the sculpture back into the scene. By automating the process, our system takes motion sculpture creation out of the realm of professional artists and makes it applicable to a wide range of existing video material. By providing viewers with 3D information, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye, and allow viewers to interpret how different parts of the object interact over time. We validate the effectiveness of this approach with user studies, finding that our motion sculpture visualizations are significantly more informative about motion than existing stroboscopic and space-time visualization methods. UIST 2018. Project page: http://mosculp.csail.mit.edu
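
    As a rough illustration of the swept-shape idea only (not the MoSculp pipeline, whose core contribution is estimating the 3D geometry from 2D images in the first place), the toy sketch below merges per-frame 3D point clouds, assumed to be given, into a single time-coded sculpture point cloud; all names and the synthetic input are hypothetical.

```python
# Toy sketch of the "shape swept over time" idea: given per-frame 3D point
# clouds of the subject (assumed already estimated), union them into a single
# space-time sculpture, tagging each point with a normalized time value.
import numpy as np

def motion_sculpture(per_frame_points):
    """Merge per-frame (N_i, 3) point arrays into one sculpture point cloud.

    Returns (P, T): points stacked across frames and a per-point scalar in
    [0, 1] encoding time, useful for color-coding the rendering.
    """
    points, times = [], []
    n_frames = len(per_frame_points)
    for t, pts in enumerate(per_frame_points):
        pts = np.asarray(pts, dtype=float)
        points.append(pts)
        times.append(np.full(len(pts), t / max(n_frames - 1, 1)))
    return np.concatenate(points, axis=0), np.concatenate(times, axis=0)

# Hypothetical input: a small blob moving along an arc, 10 points per frame.
frames = [np.random.randn(10, 3) * 0.05 + np.array([t * 0.1, np.sin(t * 0.3), 0.0])
          for t in range(30)]
P, T = motion_sculpture(frames)
print(P.shape, T.min(), T.max())
```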

    An object-based approach to plenoptic videos

    This paper proposes an object-based approach to plenoptic videos, where the plenoptic video sequences are segmented into image-based rendering (IBR) objects, each with its image sequence, depth map and other relevant information such as shape information. This allows desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects to be supported. A portable capturing system consisting of two linear camera arrays, each hosting 6 JVC video cameras, was developed to verify the proposed approach. Rendering and compression results of real-world scenes demonstrate the usefulness and good quality of the proposed approach. © 2005 IEEE.
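
    As a sketch of what the object-based representation groups together, the snippet below bundles an image sequence, per-frame depth maps, and per-frame shape (alpha) masks into one IBR object; the class and field names are illustrative, not the paper's data format.

```python
# Sketch of the per-object data the object-based representation described above
# groups together. Field and class names are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class IBRObject:
    """One image-based-rendering object segmented from a plenoptic video."""
    name: str
    images: List[np.ndarray] = field(default_factory=list)      # per-frame texture
    depth_maps: List[np.ndarray] = field(default_factory=list)  # per-frame depth
    alpha_masks: List[np.ndarray] = field(default_factory=list) # per-frame shape

    def add_frame(self, image, depth, alpha):
        self.images.append(image)
        self.depth_maps.append(depth)
        self.alpha_masks.append(alpha)

# A plenoptic video then becomes a list of independently coded, independently
# renderable objects, which is what enables scalability and per-object interaction.
scene = [IBRObject("foreground"), IBRObject("background")]
h, w = 480, 640
scene[0].add_frame(np.zeros((h, w, 3), np.uint8),
                   np.zeros((h, w), np.float32),
                   np.zeros((h, w), np.uint8))
print(scene[0].name, len(scene[0].images), "frame(s)")
```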