
    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
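
    As a rough illustration of the runtime LoD selection and frustum culling discussed above, the Python sketch below picks a representation for each character from its camera distance and rejects characters outside a simple view cone. The distance bands, the Character fields and the helper names are hypothetical placeholders, not taken from any surveyed system.

```python
import math
from dataclasses import dataclass

# Hypothetical LoD bands (metres); real crowd renderers tune these per scene.
LOD_BANDS = [(10.0, "full-resolution mesh"),
             (40.0, "simplified mesh"),
             (math.inf, "impostor / image-based")]

@dataclass
class Character:
    position: tuple         # (x, y, z) world-space position
    bounding_radius: float  # radius of the bounding sphere

def select_lod(character, camera_pos):
    """Pick the first LoD band whose maximum distance contains the character."""
    d = math.dist(character.position, camera_pos)
    for max_dist, lod in LOD_BANDS:
        if d <= max_dist:
            return lod

def in_view_cone(character, camera_pos, camera_dir, fov_cos):
    """Coarse stand-in for frustum culling: keep characters whose bounding sphere
    may intersect a view cone around camera_dir; real systems test 6 frustum planes."""
    to_char = [c - p for c, p in zip(character.position, camera_pos)]
    dist = math.sqrt(sum(v * v for v in to_char))
    if dist <= character.bounding_radius:   # camera is inside the bounding sphere
        return True
    cos_angle = sum(v * d for v, d in zip(to_char, camera_dir)) / dist
    return cos_angle >= fov_cos

if __name__ == "__main__":
    crowd = [Character((5.0, 0.0, 0.0), 1.0), Character((60.0, 0.0, 2.0), 1.0)]
    cam_pos, cam_dir = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
    for ch in crowd:
        if in_view_cone(ch, cam_pos, cam_dir, fov_cos=0.5):
            print(select_lod(ch, cam_pos))
```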

    High-efficiency texture coding and synthesis on point-based pear surface

    © 2017 IOS Press and the authors. Fruit images on point clouds acquired by current 3D scanners in the field exhibit visible seams, inconvenient data acquisition, or large storage requirements due to unorganized backgrounds. We propose a SAOW method to address the space efficiency and realism of texture synthesis on a point-based pear model. First, a point quadtree is proposed to simplify the division of the pear image. Then, an adaptive multi-granularity Morton coding scheme is presented to optimize the memory footprint of the pear image. Finally, a weighted oversampling mixing method focuses on the texture quality of the pear surface. Experimental results show that our adaptive division reduces memory usage dramatically, by about 90.7% compared with no division and 92.9% compared with general division, respectively; the adaptive coding scheme reduces memory to 72.1% of that of ordinary Morton coding; and weighted oversampling keeps the mixed texture more realistic and smoother than current methods.
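
    The adaptive coding step builds on standard Morton (Z-order) indexing of quadtree cells. The sketch below shows only that baseline bit interleaving, not the paper's multi-granularity adaptation; the 16-bit coordinate width is an assumption.

```python
def morton_encode_2d(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of two quadtree cell coordinates into a Z-order index."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # even bit positions hold x
        code |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions hold y
    return code

def morton_decode_2d(code: int, bits: int = 16) -> tuple:
    """Recover (x, y) cell coordinates from a Z-order index."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y

if __name__ == "__main__":
    c = morton_encode_2d(5, 9)
    assert morton_decode_2d(c) == (5, 9)
    print(f"cell (5, 9) -> Morton code {c}")
```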

    Three Dimensional Modeling and Animation of Facial Expressions

    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed, subtle facial expressions; the process of rigging a face; and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to manipulate it directly and see immediate results. Two unique methods for generating real-time, vivid, and animated tears have been developed and implemented. One method generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, a kind of tear that seamlessly connects with the skin as it flows along the surface of the face, but remains an individual object. Both methods broaden the scope of computer graphics and increase the realism of facial expressions. A new method to automatically set the bones on facial/head models to speed up the rigging process of a human face is also developed. To accomplish this, vertices that describe the face/head, as well as relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face with multiple densities, the mean value of the vertices in a group is measured. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
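
    The transfer step can be pictured as applying per-vertex displacement vectors from the source expression to the target face once both meshes share the same topology. The minimal sketch below assumes that correspondence has already been established, which is the role of the source-to-target transformation described above; the function name and the toy data are hypothetical.

```python
import numpy as np

def transfer_expression(source_neutral, source_expression, target_neutral, scale=1.0):
    """Apply per-vertex displacement vectors (expression minus neutral source pose)
    to a target face that shares the source's vertex ordering/topology.

    All inputs are (N, 3) arrays of vertex positions.
    """
    displacements = source_expression - source_neutral  # per-vertex motion of the expression
    return target_neutral + scale * displacements

if __name__ == "__main__":
    # Tiny toy "meshes": 3 vertices, the source raises its middle vertex (e.g. a brow).
    src_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    src_raised  = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.0], [2.0, 0.0, 0.0]])
    tgt_neutral = np.array([[0.0, 0.1, 0.0], [1.0, 0.1, 0.0], [2.0, 0.1, 0.0]])
    print(transfer_expression(src_neutral, src_raised, tgt_neutral))
```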

    Rigidity controllable as-rigid-as-possible shape deformations

    Shape deformation is one of the fundamental techniques in geometric processing. One principle of deformation is to preserve the geometric details while distributing the necessary distortions uniformly. To achieve this, state-of-the-art techniques deform shapes in a locally as-rigid-as-possible (ARAP) manner. Existing ARAP deformation methods optimize rigid transformations in the 1-ring neighborhoods and maintain the consistency between adjacent pairs of rigid transformations through single overlapping edges. In this paper, we go one step further and propose to use larger local neighborhoods to enhance the consistency of adjacent rigid transformations. This helps preserve the geometric details better and distribute the distortions more uniformly. Moreover, the size of the expanded local neighborhoods provides an intuitive parameter for adjusting physical stiffness: the larger the neighborhood is, the more rigid the material is. Based on this, we propose a novel rigidity-controllable mesh deformation method in which shape rigidity can be flexibly adjusted. The size of the local neighborhoods can be learned automatically from datasets of deforming objects or specified by the user, and may vary over the surface to simulate shapes composed of mixed materials. Various examples are provided to demonstrate the effectiveness of our method.
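
    For context, the classic 1-ring ARAP formulation that this work generalises alternates a local step, which fits a best rotation to each vertex neighborhood's edge vectors, with a global position solve. The sketch below shows only that local rotation fit, with uniform edge weights and made-up data; the paper's idea is essentially to feed larger, overlapping neighborhoods into the same kind of fit.

```python
import numpy as np

def fit_rotation(rest_edges, deformed_edges):
    """Local ARAP step: the rotation R minimising sum ||R e_rest - e_def||^2
    over a neighborhood's edge vectors, via SVD of the 3x3 covariance matrix."""
    S = rest_edges.T @ deformed_edges     # sum of outer products e_rest e_def^T
    U, _, Vt = np.linalg.svd(S)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R

if __name__ == "__main__":
    # Edge vectors of a small neighborhood, then the same edges rotated 90° about z.
    rest = np.array([[1.0, 0, 0], [0, 1, 0], [-1, 0, 0], [0, 0, 1]])
    Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
    deformed = rest @ Rz.T
    print(np.round(fit_rotation(rest, deformed), 3))  # recovers the imposed rotation Rz
```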

    Streaming of Plants in Distributed Virtual Environments

    Just as in the real world, plants are important objects in virtual worlds for creating pleasant and realistic environments, especially those involving natural scenes. As such, much effort has been made in realistic modeling of plants. As the trend moves towards networked and distributed virtual environments, however, the current models are inadequate as they are not designed for progressive transmission. In this paper, we fill this gap by proposing a progressive representation for plants based on generalized cylinders. To facilitate the transmission of the plants, we quantify the visual contribution of each branch and use this weight in packet scheduling. We show the efficiency of our representation and the effectiveness of our packet scheduler through simulations.
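
    A minimal sketch of the weight-driven packet scheduling described above: branch packets are drained from a priority queue keyed by their visual contribution, so the most visually important branches are streamed first. The data fields and weights are hypothetical placeholders rather than the paper's actual contribution metric.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class BranchPacket:
    priority: float                  # negated visual contribution (heapq is a min-heap)
    branch_id: int = field(compare=False)
    payload: bytes = field(compare=False, default=b"")

def schedule_packets(branches):
    """Yield branch packets in order of descending visual contribution.

    `branches` is an iterable of (branch_id, visual_weight, payload) tuples;
    the weight would come from a per-branch visual-contribution metric."""
    heap = [BranchPacket(-w, bid, data) for bid, w, data in branches]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)

if __name__ == "__main__":
    branches = [(0, 0.9, b"trunk"), (1, 0.2, b"twig"), (2, 0.5, b"limb")]
    for pkt in schedule_packets(branches):
        print(pkt.branch_id, -pkt.priority)   # trunk first, twig last
```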

    Authoring virtual crowds: a survey

    Recent advancements in crowd simulation unravel a wide range of functionalities for virtual agents, delivering highly realistic, natural virtual crowds. Such systems are of particular importance to a variety of applications in fields such as: entertainment (e.g., movies, computer games); architectural and urban planning; and simulations for sports and training. However, providing their capabilities to untrained users necessitates the development of authoring frameworks. Authoring virtual crowds is a complex and multi-level task, varying from assuming control and assisting users to realise their creative intents, to delivering intuitive and easy to use interfaces facilitating such control. In this paper, we present a categorisation of the authorable crowd simulation components, ranging from high-level behaviours and path-planning to local movements, as well as animation and visualisation. We provide a review of the most relevant methods in each area, emphasising the amount and nature of influence that the users have over the final result. Moreover, we discuss the currently available authoring tools (e.g., graphical user interfaces, drag-and-drop), identifying the trends of early and recent work. Finally, we suggest promising directions for future research that mainly stem from the rise of learning-based methods, and the need for a unified authoring framework. This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie SkƂodowska Curie grant agreement No 860768 (CLIPE project). This project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 739578 and the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy.