
    Production of 3D animated short films in Unity 5: can game engines replace the traditional methods?

    In 3D animation cinema, the elements of a scene are created by artists using computer software. To generate the final result, the three-dimensional models must be converted (rendered) into two-dimensional images (frames) that are later joined together and edited into a video format. 3D animation films have traditionally been rendered using pre-rendering engines, a time-consuming and expensive process that usually requires multiple computers rendering at the same time (render farms), and renders may need to be repeated if the results are not ideal. Videogames, on the other hand, are reactive applications where the player may take different possible courses of action that generate distinct results. In those cases, the engine must wait for the player's input before it calculates the following frames. To allow for fast calculations in real time, 3D game developers use game engines that incorporate real-time rendering methods, which can generate images much faster than the pre-rendering engines mentioned above. To generate a large number of frames per second, the entire scene must be optimized in order to reduce the number of necessary calculations. That optimization relies on techniques, practices, and tools that are not commonly used by animation cinema professionals. Because of that need for optimization, videogames have always had lower graphic quality than animated films, where each frame is rendered separately and takes as long as necessary to obtain the required result. Physically Based Rendering (PBR) is one of the methods incorporated by some rendering engines for the generation of physically accurate results, using calculations that follow the laws of physics as they apply in the real world and creating more realistic images that require less effort, not only from the artist but also from the equipment.
The incorporation of PBR in game engines allowed for high-quality graphics generated in real time, gradually closing the visual quality gap between videogames and animated cinema. Recently, game engines such as Unity and Unreal Engine started to be used – mostly by the companies that created the engines, as a proof of concept – for rendering 3D animated films. This could lead to changes in the animation cinema production methods of studios that, until now, have used traditional pre-rendering methods.
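The physically based shading discussed in this abstract can be illustrated with a minimal sketch: a Cook-Torrance microfacet specular term (GGX distribution, Schlick Fresnel, Smith geometry) plus a Lambertian diffuse term, evaluated from precomputed dot products. This is a generic textbook PBR formulation for illustration only, not the specific shading model used by Unity or Unreal Engine; the function names and the direct-lighting roughness remapping are assumptions.

```python
import math

def ggx_ndf(n_dot_h, roughness):
    # GGX/Trowbridge-Reitz normal distribution function (alpha = roughness^2).
    a2 = roughness ** 4
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def fresnel_schlick(cos_theta, f0):
    # Schlick's approximation of the Fresnel reflectance.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def smith_g1(n_dot_x, roughness):
    # Smith masking/shadowing term, direct-lighting k remapping (assumed).
    k = (roughness + 1.0) ** 2 / 8.0
    return n_dot_x / (n_dot_x * (1.0 - k) + k)

def pbr_shade(n_dot_l, n_dot_v, n_dot_h, v_dot_h, albedo, f0, roughness):
    # Cook-Torrance specular + Lambertian diffuse for one light.
    d = ggx_ndf(n_dot_h, roughness)
    f = fresnel_schlick(v_dot_h, f0)
    g = smith_g1(n_dot_l, roughness) * smith_g1(n_dot_v, roughness)
    specular = d * f * g / max(4.0 * n_dot_l * n_dot_v, 1e-6)
    diffuse = albedo / math.pi
    return (diffuse + specular) * n_dot_l
```

Because the inputs are physical quantities (albedo, base reflectance, roughness), the same material description renders consistently under any lighting, which is what lets real-time engines approach pre-rendered quality.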

    HairBrush for Immersive Data-Driven Hair Modeling

    While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skill and effort, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring and the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, it is inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained on a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study to confirm that our system can significantly reduce manual labor while improving the output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.
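The global blending step described above can be sketched as a linear blend-shape combination, assuming each hairstyle variation is a list of 3D points with identical topology. `fit_weights_two` below is a hypothetical closed-form least-squares fit between two variations and a user guide, standing in for the paper's actual fitting pipeline (which predicts hairstyles from guide strips via a deep neural network); all names here are assumptions.

```python
def blend_hairstyles(variations, weights):
    # Linear blend-shape combination: weighted sum of corresponding
    # 3D points across variations (weights sum to 1).
    blended = []
    for pts in zip(*variations):
        blended.append(tuple(
            sum(w * p[k] for w, p in zip(weights, pts)) for k in range(3)
        ))
    return blended

def fit_weights_two(va, vb, target):
    # 1D least squares: minimize ||(1-t)*A + t*B - target||^2 over t,
    # then clamp t to [0, 1] so the blend stays inside the shape space.
    num = den = 0.0
    for a, b, t in zip(va, vb, target):
        for k in range(3):
            d = b[k] - a[k]
            num += d * (t[k] - a[k])
            den += d * d
    t = min(1.0, max(0.0, num / den)) if den > 0 else 0.0
    return (1.0 - t, t)
```

Local deformation would then refine the blended result per region, but the global step alone already snaps a rough user drawing onto a plausible point in the dataset's shape space.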

    A Survey of Interaction Techniques and Devices for Large High Resolution Displays

    Innovations in large high-resolution wall-sized displays have been yielding benefits to visualizations in industry and academia, leading to a rapid growth in their adoption. In scenarios such as these, the displayed visual information tends to be larger than the user's field of view, hence the need to move away from traditional interaction methods towards more suitable interaction devices and techniques. This paper aspires to explore the state of the art with respect to such technologies for large high-resolution displays.

    Applications of Face Analysis and Modeling in Media Production

    Facial expressions play an important role in day-to-day communication as well as in media production. This article surveys automatic facial analysis and modeling methods using computer vision techniques and their applications in media production. The authors give a brief overview of the psychology of face perception and then describe some of the applications of computer vision and pattern recognition applied to face recognition in media production. This article also covers the automatic generation of face models, which are used in movie and TV productions for special effects in order to manipulate people's faces or combine real actors with computer graphics.

    A Study on Methods for Improving the Expression of 3DCG Characters and Their Real-Time Manipulation

    Waseda University degree number: Shin 8176. Waseda University

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure, and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes. This is because it is difficult to accurately deform massive amounts of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and similarly gaps occur where deformation stretches the elements further than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the area of deforming and rendering massive amounts of dynamic volumetric content.
The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
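The hierarchical storage described in this abstract can be sketched as a sparse octree in which density values live at progressive resolutions: a query descends only where children exist, so sparse regions answer with a coarse value while detailed regions refine toward the leaves. This is a generic illustration of the idea, not the thesis's actual data structure; all names and the unit-cube coordinate convention are assumptions.

```python
class OctreeNode:
    def __init__(self, value=None):
        self.value = value    # density stored at this resolution level
        self.children = {}    # octant index (0-7) -> OctreeNode, sparse

def octant(x, y, z, cx, cy, cz):
    # Index of the child octant containing point (x, y, z) relative
    # to the node centre (cx, cy, cz): one bit per axis.
    return (x >= cx) | ((y >= cy) << 1) | ((z >= cz) << 2)

def sample(node, x, y, z, cx=0.5, cy=0.5, cz=0.5, half=0.25, max_depth=8):
    # Descend only where children exist; missing children mean the
    # coarse value at this node already represents the region.
    if not node.children or max_depth == 0:
        return node.value
    child = node.children.get(octant(x, y, z, cx, cy, cz))
    if child is None:
        return node.value
    ncx = cx + half if x >= cx else cx - half
    ncy = cy + half if y >= cy else cy - half
    ncz = cz + half if z >= cz else cz - half
    return sample(child, x, y, z, ncx, ncy, ncz, half / 2, max_depth - 1)
```

Deforming such a structure is the hard part the thesis addresses: moving elements invalidates the spatial hierarchy, which is why naive per-voxel deformation produces the overlaps and gaps described above.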

    Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos

    The success of Neural Radiance Fields (NeRFs) for modeling and free-view rendering of static objects has inspired numerous attempts on dynamic scenes. Current techniques that utilize neural rendering for facilitating free-view videos (FVVs) are either restricted to offline rendering or capable of processing only brief sequences with minimal motion. In this paper, we present a novel technique, the Residual Radiance Field (ReRF), as a highly compact neural representation to achieve real-time FVV rendering of long-duration dynamic scenes. ReRF explicitly models the residual information between adjacent timestamps in the spatial-temporal feature space, with a global coordinate-based tiny MLP as the feature decoder. Specifically, ReRF employs a compact motion grid along with a residual feature grid to exploit inter-frame feature similarities. We show such a strategy can handle large motions without sacrificing quality. We further present a sequential training scheme to maintain the smoothness and the sparsity of the motion/residual grids. Based on ReRF, we design a special FVV codec that achieves a three-orders-of-magnitude compression rate and provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes. Extensive experiments demonstrate the effectiveness of ReRF for compactly representing dynamic radiance fields, enabling an unprecedented free-viewpoint viewing experience in speed and quality. (Accepted by CVPR 2023. Project page: https://aoliao12138.github.io/ReRF)
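The residual idea can be sketched on a toy one-dimensional feature grid: each frame's features are reconstructed by warping the previous frame's grid with a compact motion field and adding a residual, so only the motion and residual grids need to be stored per frame. This is a simplified illustration of the inter-frame scheme, not ReRF's actual representation (which uses 3D feature grids decoded by a tiny MLP); the function names and the nearest-neighbour gather are assumptions.

```python
def warp_grid(grid, motion):
    # Gather features from the previous frame using per-cell integer
    # motion offsets (nearest-neighbour, clamped to the grid bounds).
    n = len(grid)
    return [grid[max(0, min(n - 1, i + motion[i]))] for i in range(n)]

def next_frame(prev_grid, motion, residual):
    # ReRF-style update: feature_t = warp(feature_{t-1}) + residual_t.
    # Smooth motion and sparse residuals are what make this compact.
    warped = warp_grid(prev_grid, motion)
    return [w + r for w, r in zip(warped, residual)]
```

When adjacent frames are similar, most residual entries are near zero and compress well, which is the intuition behind the codec's large compression rate.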

    Atmospheric cloud modeling methods in computer graphics: A review, trends, taxonomy, and future directions

    The modeling of atmospheric clouds is one of the crucial elements of natural phenomena visualization systems. Over the years, a wide range of approaches has been proposed on this topic to deal with the challenging issues associated with visual realism and performance. However, the lack of recent review papers on the atmospheric cloud modeling methods available in computer graphics makes it difficult for researchers and practitioners to understand and choose well-suited solutions for developing an atmospheric cloud visualization system. Hence, we conducted a comprehensive review to identify, analyze, classify, and summarize the existing atmospheric cloud modeling solutions. We selected 113 research studies from recognizable data sources and analyzed the research trends on this topic. We defined a taxonomy by categorizing the atmospheric cloud modeling methods based on their shared characteristics and summarized each of the particular methods. Finally, we underlined several research issues and directions for potential future work. The review results provide an overview and general picture of the atmospheric cloud modeling methods that would be beneficial for researchers and practitioners.