
    Shaping the Future of Animation towards Role of 3D Simulation Technology in Animation Film and Television

    The application of 3D simulation technology has revolutionized the field of animation film and television art, providing new possibilities and creative opportunities for visual storytelling. This research explores the various aspects of applying 3D simulation technology in animation film and television art. It examines how 3D simulation technology enhances the creation of realistic characters, environments, and special effects, contributing to immersive and captivating storytelling experiences. The research also investigates the technical aspects of integrating 3D cloud simulation technology into the animation production pipeline, including modeling, texturing, rigging, and animation techniques. This paper explores the application of two metaheuristic optimization algorithms, Black Widow Optimization and Spider Monkey Optimization, in the context of cloud-based 3D environments, focusing on enhancing the efficiency and performance of 3D simulations. These algorithms can be used to optimize the placement and distribution of 3D assets in cloud storage systems, improving data access and retrieval times, and to optimize the scheduling of rendering tasks in cloud-based rendering pipelines, leading to more efficient and cost-effective rendering processes. The integration of 3D cloud environments and optimization algorithms enables real-time optimization and adaptation of 3D simulations, allowing dynamic adjustment of simulation parameters under changing conditions and thereby improving accuracy and responsiveness. Moreover, the research explores the impact of 3D cloud simulation technology on the artistic process, examining how it influences the artistic vision, aesthetics, and narrative possibilities in animation film and television. The findings highlight the advantages and challenges of using 3D simulation technology in animation, shedding light on its potential future developments and its role in shaping the future of animation film and television art.
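    The abstract names Black Widow and Spider Monkey Optimization for render-task scheduling but does not give their update rules. As a minimal sketch of the optimization target only, the code below uses a plain randomized local search (a stand-in, not the authors' algorithms) to assign render tasks to cloud nodes so that the busiest node's total time, the makespan, is minimized. All task costs and node counts are hypothetical.

```python
import random

def makespan(assignment, task_costs, n_nodes):
    """Total schedule time is driven by the busiest node (the makespan)."""
    loads = [0.0] * n_nodes
    for task, node in enumerate(assignment):
        loads[node] += task_costs[task]
    return max(loads)

def optimize_schedule(task_costs, n_nodes, iters=2000, seed=0):
    """Randomized local search over task-to-node assignments.

    A stand-in for the population-based metaheuristics named in the
    abstract; it only illustrates the objective being optimized.
    """
    rng = random.Random(seed)
    best = [rng.randrange(n_nodes) for _ in task_costs]
    best_cost = makespan(best, task_costs, n_nodes)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(len(cand))] = rng.randrange(n_nodes)  # move one task
        cost = makespan(cand, task_costs, n_nodes)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

costs = [5.0, 3.0, 8.0, 2.0, 7.0, 4.0]   # hypothetical render time per batch
assignment, span = optimize_schedule(costs, n_nodes=3)
```

    The same objective-plus-search skeleton applies to the asset-placement problem mentioned in the abstract: only the cost function changes (data access latency instead of makespan).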

    Point Cloud Framework for Rendering 3D Models Using Google Tango

    This project seeks to demonstrate the feasibility of point cloud meshing for capturing and modeling three-dimensional objects on consumer smartphones and tablets. Traditional methods of capturing objects require hundreds of images, are very slow, and consume a large amount of cellular data for the average consumer. As hardware manufacturers provide the tools to capture point cloud data, software developers need a starting point for capturing and meshing point clouds to create 3D models. The project uses Google's Tango computer vision library for Android to capture point clouds on devices with depth-sensing hardware. The point clouds are combined and meshed as models for use in 3D rendering projects. We expect our results to be embraced by the Android market because capturing point clouds is fast and does not carry a large data footprint.
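    The abstract says successive point cloud captures are "combined and meshed" but does not show the pipeline. A common pre-meshing step is to merge the captures and thin the result with a voxel grid so duplicated points from overlapping captures collapse to one representative per voxel. The NumPy sketch below is a hypothetical helper illustrating that step, not the project's actual Tango/Java code.

```python
import numpy as np

def merge_and_downsample(clouds, voxel_size=0.05):
    """Merge point clouds from successive captures, then keep one
    centroid per occupied voxel -- a common step before meshing."""
    points = np.vstack(clouds)
    keys = np.floor(points / voxel_size).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = uniq.shape[0]
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)   # accumulate points per voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]      # per-voxel centroids

a = np.random.default_rng(0).uniform(0.0, 1.0, (500, 3))
b = a + 0.001  # a second, slightly offset capture of the same object
merged = merge_and_downsample([a, b], voxel_size=0.1)
```

    Because the two captures land in mostly the same voxels, the merged cloud is far smaller than the 1000 raw points, which also keeps the data footprint low, as the abstract emphasizes.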

    Interaction and locomotion techniques for the exploration of massive 3D point clouds in VR environments

    Emerging virtual reality (VR) technology allows immersively exploring digital 3D content on standard consumer hardware. Using in-situ or remote sensing technology, such content can be automatically derived from real-world sites. External memory algorithms allow for the non-immersive exploration of the resulting 3D point clouds on a diverse set of devices with vastly different rendering capabilities. Applications for VR environments raise additional challenges for those algorithms, as they are highly sensitive to visual artifacts that are typical for point cloud depictions (i.e., overdraw and underdraw) while simultaneously requiring higher frame rates (i.e., around 90 fps instead of 30–60 fps). We present a rendering system for the immersive exploration and inspection of massive 3D point clouds on state-of-the-art VR devices. Based on a multi-pass rendering pipeline, we combine point-based and image-based rendering techniques to simultaneously improve the rendering performance and the visual quality. A set of interaction and locomotion techniques allows users to inspect a 3D point cloud in detail, for example by measuring distances and areas or by scaling and rotating visualized data sets. All rendering, interaction, and locomotion techniques can be selected and configured dynamically, so the rendering system can be adapted to different use cases. Tests on data sets with up to 2.6 billion points show the feasibility and scalability of our approach.
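    The measurement interactions mentioned above (distances and areas from user-picked points) reduce to small geometric computations. The sketch below shows one plausible form, assuming picked points arrive as an ordered array; the function names are hypothetical, not the authors' API.

```python
import numpy as np

def path_length(points):
    """Sum of segment lengths along user-picked points (a distance tool)."""
    diffs = np.diff(points, axis=0)
    return float(np.linalg.norm(diffs, axis=1).sum())

def polygon_area(points):
    """Area of a planar polygon picked point by point in 3D, via a
    triangle fan and the cross-product (shoelace) formula."""
    p0 = points[0]
    total = np.zeros(3)
    for a, b in zip(points[1:-1], points[2:]):
        total += np.cross(a - p0, b - p0)
    return float(np.linalg.norm(total) / 2.0)

# A 2x2 square picked on a wall: three segments of length 2, area 4.
square = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 0], [0, 2, 0]], dtype=float)
length = path_length(square)
area = polygon_area(square)
```

    Scaling and rotating the data set, the other interactions named in the abstract, would similarly be a single 4x4 transform applied to the rendered cloud rather than to the stored points.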

    Improving Neural Radiance Field using Near-Surface Sampling with Point Cloud Generation

    Neural radiance field (NeRF) is an emerging view synthesis method that samples points in a three-dimensional (3D) space and estimates their existence and color probabilities. The disadvantage of NeRF is that it requires a long training time, since it samples many 3D points. In addition, if one samples points from occluded regions or in space where an object is unlikely to exist, the rendering quality of NeRF can be degraded. These issues can be solved by estimating the geometry of the 3D scene. This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF. To this end, the proposed method estimates the surface of a 3D object using depth images of the training set, and sampling is performed only around the estimated surface. To obtain depth information for a novel view, the paper proposes a 3D point cloud generation method and a simple method for refining the depth projected from a point cloud. Experimental results show that the proposed near-surface sampling NeRF framework can significantly improve the rendering quality compared to the original NeRF and a state-of-the-art depth-based NeRF method. In addition, one can significantly accelerate the training of a NeRF model with the proposed near-surface sampling framework. (Comment: 13 figures, 2 tables)
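    The core idea, sampling ray points only in a band around an estimated surface depth instead of uniformly over the whole ray, can be sketched in a few lines. This is an illustrative reading of the abstract, not the paper's implementation; the band width `delta` and all parameter names are assumptions.

```python
import numpy as np

def near_surface_samples(depth, n_samples=16, delta=0.1, rng=None):
    """Sample ray distances only in [depth - delta, depth + delta]
    around the estimated surface, instead of uniformly over [near, far].
    `depth` holds one estimated surface depth per ray."""
    rng = rng or np.random.default_rng(0)
    lo = np.maximum(depth - delta, 0.0)   # clamp the band to non-negative t
    hi = depth + delta
    u = rng.uniform(0.0, 1.0, size=(depth.shape[0], n_samples))
    t = lo[:, None] + u * (hi - lo)[:, None]
    return np.sort(t, axis=1)             # NeRF integration expects sorted t

depths = np.array([2.0, 3.5])  # estimated surface depth for two rays
t = near_surface_samples(depths)
```

    Concentrating all samples in the band is what saves training time: the network is never queried in empty or occluded space where, per the abstract, samples would only degrade quality.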