235 research outputs found

    Image-based crowd rendering


    Output-Sensitive Rendering of Detailed Animated Characters for Crowd Simulation

    High-quality, detailed animated characters are often represented as textured polygonal meshes. The problem with this technique is the high cost of rendering and animating each of these characters, which has become a major limiting factor in crowd simulation. Since we want to render a huge number of characters in real time, the purpose of this thesis is to study the existing approaches in crowd rendering and derive a novel approach from them. The main limitations we have found when using impostors are (1) the large amount of memory needed to store them, which also has to be sent to the graphics card, (2) the lack of visual quality in close-up views, and (3) some visibility problems. As we wanted to overcome these limitations and improve performance, these conclusions led us to present a new representation for 3D animated characters using relief mapping, thus supporting output-sensitive rendering. The basic idea of our approach is to encode each character through a small collection of textured boxes storing color and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone in the animated skeleton. A fragment shader is used to recover the original geometry using an adapted version of relief mapping. Unlike competing output-sensitive approaches, our compact representation is able to recover high-frequency surface details and reproduces view-motion parallax effects. Furthermore, the proposed approach ensures correct visibility among different animated parts, and it does not require us to predefine the animation sequences nor to select a subset of discrete views. Finally, a user study demonstrates that our approach allows for a large number of simulated agents with negligible visual artifacts.
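
    The abstract above describes an algorithmic idea: per-bone textured boxes whose surface is recovered by relief mapping in a fragment shader. As a purely editorial illustration (not the paper's code), the following Python sketch mimics the inner relief-mapping loop for one box face; the texture layout, function names, and step count are assumptions.

        # Shader-style relief-mapping lookup for one impostor box face (sketch).
        # depth_tex stores normalized depth into the box (0 = face, 1 = far side),
        # color_tex stores the color captured from the original mesh.
        import numpy as np

        def sample(tex, uv):
            """Nearest-neighbour texture fetch with clamped coordinates."""
            h, w = tex.shape[:2]
            x = int(np.clip(uv[0], 0.0, 1.0) * (w - 1))
            y = int(np.clip(uv[1], 0.0, 1.0) * (h - 1))
            return tex[y, x]

        def relief_march(depth_tex, color_tex, uv_in, view_dir_ts, steps=32):
            """March the view ray (in the face's tangent space, z pointing into
            the box) until its depth passes the stored depth; return the color
            there, or None when the ray leaves the box without a hit."""
            p = np.array([uv_in[0], uv_in[1], 0.0])
            step = np.asarray(view_dir_ts, dtype=float)
            step = step / abs(step[2]) / steps        # depth grows by 1/steps per step
            for _ in range(steps):                    # linear search for the crossing
                p += step
                if not (0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0 and p[2] <= 1.0):
                    return None                       # ray exits the box: discard fragment
                if p[2] >= sample(depth_tex, p[:2]):
                    return sample(color_tex, p[:2])   # hit: recovered surface color
            return None

    In a real renderer this loop would run in a GPU fragment shader, typically with a binary-search refinement after the linear search; the sketch only shows the control flow.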

    SPRITE TREE: AN EFFICIENT IMAGE-BASED REPRESENTATION FOR NETWORKED VIRTUAL ENVIRONMENTS

    Ph.D. thesis (Doctor of Philosophy)

    Efficient modeling of entangled details for natural scenes

    Proceedings of Pacific Graphics 2016 (Okinawa). Digital landscape realism often comes from the multitude of details that are hard to model, such as fallen leaves, rock piles, or entangled fallen branches. In this article, we present a method for augmenting natural scenes with a huge amount of detail such as grass tufts, stones, leaves, or twigs. Our approach takes advantage of the observation that those details can be approximated by replications of a few similar objects and therefore relies on mass-instancing. We propose an original structure, the Ghost Tile, that stores a huge number of overlapping candidate objects in a tile, along with a pre-computed collision graph. Details are created by traversing the scene with the Ghost Tile and generating instances according to user-defined density fields, which allow the user to sculpt layers and piles of entangled objects while providing control over their density and distribution.
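
    As an editorial sketch only (the tile layout and names below are assumptions, not the authors' implementation), density-driven selection from a pre-populated tile might look like this in Python:

        # Select instances from one "ghost tile": every candidate placement and the
        # collision graph between overlapping candidates are assumed precomputed.
        import random

        def instantiate_tile(candidates, collision_graph, density, tile_origin, rng=random):
            """candidates: list of dicts with tile-local 'position' and 'kind'.
            collision_graph: dict mapping a candidate index to the indices it collides with.
            density: callable returning the target acceptance probability at a world position.
            Returns the world-space instances kept for this tile."""
            accepted, blocked = [], set()
            for i, cand in enumerate(candidates):      # candidates assumed pre-shuffled offline
                if i in blocked:
                    continue                           # would intersect an accepted instance
                world_pos = (tile_origin[0] + cand["position"][0],
                             tile_origin[1] + cand["position"][1])
                if rng.random() < density(world_pos):  # user-defined density field
                    accepted.append({"kind": cand["kind"], "position": world_pos})
                    blocked |= collision_graph.get(i, set())
            return accepted

    The point of precomputing the collision graph is that no geometric intersection tests are needed at placement time; rejecting a conflicting neighbour reduces to a set lookup.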

    Removing non visible objects in scenes of clusters of particles

    The aim of this work is to partially solve the problem of visualizing the clusters of particles produced by numerical methods in engineering. Two methods were developed for removing particles that are not visible from the camera's viewpoint, distinguished by the type of information available: particle models with contour information (a surface mesh) and particle models without contour information (meshless methods). In both cases the results are good: a large number of particles that do not contribute to the final image are removed, making it possible to interact with the results of the numerical methods.
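
    As a hedged illustration of the general idea (not the method described in the paper), a screen-space variant of such culling can be sketched in Python: particles are splatted into a coarse depth buffer from the camera, and only those near the front depth of their cell are kept.

        # Cull particles hidden behind nearer particles, using a coarse per-cell depth buffer.
        import numpy as np

        def cull_hidden_particles(positions, radii, project, grid=(256, 256), slack=1.0):
            """positions: (N, 3) camera-space centers; radii: (N,) particle radii.
            project: maps a camera-space point to screen coords in [0, 1)^2 plus depth.
            Returns the indices of particles likely to contribute to the image."""
            depth = np.full(grid, np.inf)
            cells, depths = [], []
            for p, r in zip(positions, radii):
                u, v, z = project(p)
                ix = min(int(u * grid[0]), grid[0] - 1)
                iy = min(int(v * grid[1]), grid[1] - 1)
                cells.append((ix, iy))
                depths.append(z)
                depth[ix, iy] = min(depth[ix, iy], z - r)   # nearest front face per cell
            return [i for i, ((ix, iy), z) in enumerate(zip(cells, depths))
                    if z <= depth[ix, iy] + 2.0 * radii[i] * slack]

    The tolerance keeps a thin front layer per cell, erring on the conservative side; the paper's two methods instead exploit, respectively, surface-mesh contour information and purely point-based information.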

    Wind and nebula of the M33 variable GR290 (WR/LBV)

    Context: GR290 (M33/V532, Romano's Star) is a suspected post-LBV star located in the M33 galaxy that shows a rare Wolf-Rayet spectrum during its minimum light phase. In spite of many studies, its atmospheric structure, its circumstellar environment, and its place in the general context of massive-star evolution are poorly known. Aims: A detailed study of its wind and mass loss, and of the circumstellar environment associated with the star. Methods: Long-slit spectra of GR290 were obtained during its present minimum-luminosity phase with the GTC, together with contemporaneous BVRI photometry. The data were compared with non-LTE model-atmosphere synthetic spectra computed with CMFGEN and with CLOUDY models for ionized interstellar medium regions. Results: The current $m_V = 18.8$ mag is the faintest at which this source has ever been observed. The non-LTE models indicate an effective temperature $T_{\rm eff} = 27$-$30$ kK at radius $R_{2/3} = 27$-$21\,R_\odot$ and a mass loss rate $\dot{M} = 1.5\times10^{-5}\,M_\odot\,{\rm yr}^{-1}$. The terminal wind speed $V_\infty = 620\,{\rm km\,s^{-1}}$ is faster than ever before recorded, while the current luminosity $L_* = (3.1$-$3.7)\times10^{5}\,L_\odot$ is the lowest ever deduced. The star is overabundant in He and N and underabundant in C and O. It is surrounded by an unresolved compact HII region with dimensions $\leq 4$ pc, from which H-Balmer, HeI, [OIII], and [NII] lines are detected. In addition, we find emission from a more extended interstellar medium (ISM) region which appears to be asymmetric, with a larger extent to the East (16-40 pc) than to the West. Conclusions: In the present long-lasting visual minimum, GR290 is in a lower bolometric luminosity state with a higher mass loss rate. The nearby nebular emission suggests that the star has undergone significant mass loss over the past $10^4$-$10^5$ years and is nearing the end stages of its evolution.

    Preserving attribute values on simplified meshes by re-sampling detail textures

    Many sophisticated solutions have been proposed to reduce the geometric complexity of 3D meshes. A slightly less studied problem is how to preserve attribute detail on simplified meshes (e.g., color, high-frequency shape details, scalar fields, etc.). We present a general approach that is completely independent of the simplification technique adopted to reduce the mesh size. We use resampled textures (RGB, bump, displacement, or shade maps) to decouple attribute detail representation from geometry simplification. The original contribution is that preservation is performed after simplification by building a set of triangular texture patches that are then packed into a single texture map. This general solution can be applied to the output of any topology-preserving simplification code, and it allows any attribute value defined on the high-resolution mesh to be recovered. Moreover, decoupling shape simplification from detail preservation (and encoding the latter with texture maps) leads to high simplification rates and highly efficient rendering. We also describe an alternative application: the conversion of 3D models with 3D procedural textures (which generally force the use of software renderers) into standard 3D models with 2D bitmap textures.
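
    The key step, building a texture patch per simplified triangle by resampling the original mesh, can be illustrated with a small Python sketch; the names and the attribute_at helper are hypothetical, and the real system additionally packs these patches into a single atlas.

        # Resample an attribute of the original high-resolution mesh into a
        # triangular texture patch for one triangle of the simplified mesh.
        import numpy as np

        def resample_patch(tri_vertices, attribute_at, resolution=16):
            """tri_vertices: (3, 3) corners of a simplified-mesh triangle.
            attribute_at(point): attribute (e.g. RGB color) of the nearest point on
            the original mesh. Texel (i, j) with i + j <= resolution covers the
            barycentric sample (i/resolution, j/resolution)."""
            v0, v1, v2 = (np.asarray(v, dtype=float) for v in tri_vertices)
            probe = np.asarray(attribute_at(v0))
            patch = np.zeros((resolution + 1, resolution + 1) + probe.shape)
            for i in range(resolution + 1):
                for j in range(resolution + 1 - i):
                    a, b = i / resolution, j / resolution
                    point = (1.0 - a - b) * v0 + a * v1 + b * v2   # barycentric sample position
                    patch[i, j] = attribute_at(point)              # detail fetched from the original mesh
            return patch

    Because the sampling only queries the original mesh, the procedure is independent of how the simplified mesh was produced, which matches the decoupling argued for in the abstract.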

    Atmospheric cloud representation methods in computer graphics: A review

    Cloud representation is one of the important components in an atmospheric cloud visualization system. The lack of review papers on the cloud representation methods available in computer graphics has made it difficult for researchers to identify appropriate solutions. Therefore, this paper aims to provide a comprehensive review of the atmospheric cloud representation methods that have been proposed in the computer graphics domain, covering both classical and current state-of-the-art approaches. The review was conducted by searching, selecting, and analyzing prominent articles collected from online digital libraries and search engines. We highlight a taxonomic classification of the existing cloud representation methods for solving atmospheric cloud-related problems. Finally, research issues and directions in the area of cloud representation and visualization are discussed. This review should help researchers clearly understand the general picture of the existing methods and thus choose the best-suited approach for their future research and development.

    Reward Enhances Online Participants’ Engagement With a Demanding Auditory Task

    Online recruitment platforms are increasingly used for experimental research. Crowdsourcing is associated with numerous benefits but also notable constraints, including lack of control over participants’ environment and engagement. In the context of auditory experiments, these limitations may be particularly detrimental to threshold-based tasks that require effortful listening. Here, we ask whether incorporating a performance-based monetary bonus improves speech reception performance of online participants. In two experiments, participants performed an adaptive matrix-type speech-in-noise task (where listeners select two key words out of closed sets). In Experiment 1, our results revealed worse performance in online (N = 49) compared with in-lab (N = 81) groups. Specifically, relative to the in-lab cohort, significantly fewer participants in the online group achieved very low thresholds. In Experiment 2 (N = 200), we show that a monetary reward improved listeners’ thresholds to levels similar to those observed in the lab setting. Overall, the results suggest that providing a small performance-based bonus increases participants’ task engagement, facilitating a more accurate estimation of auditory ability under challenging listening conditions.