A survey of real-time crowd rendering
In this survey we review, classify, and compare existing approaches to real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly coupled to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
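Palette skinning, mentioned above, stores the bone transforms of animated characters in a shared matrix palette so that many instances can be skinned in a single pass. A minimal CPU sketch of the underlying linear-blend skinning, with all names, shapes, and data purely illustrative:

```python
import numpy as np

# Minimal linear-blend (palette) skinning sketch: each vertex blends a small
# set of affine bone matrices drawn from a shared palette. Shapes and names
# are illustrative, not taken from any specific crowd-rendering system.

def skin_vertices(rest_positions, palette, bone_ids, weights):
    """rest_positions: (V, 3); palette: (B, 3, 4) affine bone matrices;
    bone_ids: (V, K) palette indices; weights: (V, K) blend weights."""
    V = rest_positions.shape[0]
    homo = np.concatenate([rest_positions, np.ones((V, 1))], axis=1)  # (V, 4)
    # Transform each vertex by each of its K bones, then blend by weight.
    per_bone = np.einsum('vkij,vj->vki', palette[bone_ids], homo)     # (V, K, 3)
    return np.einsum('vk,vki->vi', weights, per_bone)                 # (V, 3)

# Demo: two bones, the identity and a +1 translation along x.
palette = np.zeros((2, 3, 4))
palette[0, :, :3] = np.eye(3)
palette[1, :, :3] = np.eye(3)
palette[1, 0, 3] = 1.0
verts = np.array([[0.0, 0.0, 0.0]])
ids = np.array([[0, 1]])
w = np.array([[0.5, 0.5]])
print(skin_vertices(verts, palette, ids, w))  # blends the vertex to x = 0.5
```

On hardware, the palette lives in a texture or buffer and the blend runs in the vertex shader; the math is the same weighted sum.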
Master of Science in Computing thesis
This document introduces the Soft Shadow Mip-Maps technique, which consists of three methods for overcoming the fundamental limitations of filtering-oriented soft shadows. Filtering-oriented soft shadowing techniques filter shadow maps with varying filter sizes determined by the desired penumbra widths. Different varieties of this approach have been commonly applied in interactive and real-time applications. Nonetheless, they share some fundamental limitations. First, the soft shadow filter size is not always guaranteed to be the correct size for producing the right penumbra width based on the light source size. Second, filtering with large kernels for soft shadows requires a large number of samples, thereby increasing the cost of filtering; stochastic approximations for filtering introduce noise, and prefiltering leads to inaccuracies. Finally, calculating shadows based on a single blocker estimation can produce significantly inaccurate penumbra widths when the shadow penumbras of different blockers overlap. We discuss three methods to overcome these limitations. First, we introduce a method for computing the soft shadow filter size for a receiver from its blocker distance. Then, we present a filtering scheme based on shadow mip-maps. Mipmap-based filtering uses shadow mip-maps to efficiently generate soft shadows, using a constant-size filter kernel for each layer and linear interpolation between layers. Next, we introduce an improved blocker estimation approach: we evaluate the shadow contribution of every blocker by calculating the light occluded by potential blockers, so that the computed penumbra areas correspond correctly to their blockers. Finally, we discuss how to select filter kernels for filtering. These approaches successively solve issues regarding shadow penumbra width calculation apparent in prior techniques.
Our results show that we can produce correct penumbra widths, as evident in our comparisons to ray-traced soft shadows. Nonetheless, the Soft Shadow Mip-Maps technique suffers from light bleeding, because our method only computes shadows from the geometry available in the shadow depth map; occluded geometry is not taken into consideration. Another limitation is that using lower-resolution shadow mip-map layers limits the resolution of shadow placement: when a blocker moves slowly, its shadow follows it in discrete steps, the size of which is determined by the resolution of the corresponding mip-map layer.
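The filter-size computation described above builds on the standard PCSS-style penumbra estimate, which follows from similar triangles between the light, the average blocker, and the receiver. The sketch below shows that estimate and one plausible mapping from filter width to mip level; the mip-level mapping is our illustrative assumption, not the thesis's exact scheme:

```python
import math

# PCSS-style penumbra width from similar triangles, plus a hypothetical
# mapping of filter width (in texels) to a shadow mip-map layer whose
# constant-size kernel covers it. The mip mapping is an assumption for
# illustration only.

def penumbra_width(light_size, blocker_depth, receiver_depth):
    # Similar triangles: w = light_size * (d_receiver - d_blocker) / d_blocker
    return light_size * (receiver_depth - blocker_depth) / blocker_depth

def mip_level(filter_texels, base_kernel=3):
    # Choose the mip layer at which a base_kernel-wide filter spans
    # filter_texels of the finest layer.
    return max(0.0, math.log2(max(filter_texels, base_kernel) / base_kernel))

w = penumbra_width(light_size=2.0, blocker_depth=5.0, receiver_depth=10.0)
print(w)              # 2.0 * (10 - 5) / 5 = 2.0
print(mip_level(12))  # log2(12 / 3) = 2.0
```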
Cloud base height estimates from sky imagery and a network of pyranometers
Cloud base height (CBH) is an important parameter for physics-based high resolution solar radiation modeling. In sky imager-based forecasts, a ceilometer or stereographic setup is needed to derive the CBH; otherwise erroneous CBHs lead to incorrect physical cloud velocity and incorrect projection of cloud shadows, causing solar power forecast errors due to incorrect shadow positions and timing of shadowing events. In this paper, two methods to estimate cloud base height from a single sky imager and distributed ground solar irradiance measurements are proposed. The first method (Time Series Correlation, denoted as “TSC”) is based upon the correlation between ground-observed global horizontal irradiance (GHI) time series and a modeled GHI time series generated from a sequence of sky images geo-rectified to a candidate set of CBHs. The estimated CBH is taken as the candidate that produces the highest correlation coefficient. The second method (Geometric Cloud Shadow Edge, denoted as “GCSE”) integrates a numerical ramp detection method for ground-observed GHI time series with solar and cloud geometry applied to cloud edges in a sky image. The CBHs are benchmarked against a collocated ceilometer and against stereographically estimated CBHs from two sky imagers, using 15 min median-filtered CBHs. Over 30 days covering all seasons, the TSC method performs similarly to the GCSE method, with an nRMSD of 18.9% versus 20.8%. A key limitation of both proposed methods is the requirement of sufficient variation in GHI to enable reliable correlation and ramp detection. The advantage of the two proposed methods is that they can be applied when measurements from only a single sky imager and pyranometers are available.
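The TSC selection step can be pictured as a one-dimensional search over candidate heights, keeping the height whose modeled GHI series correlates best with the measurement. In the sketch below the geo-rectification is replaced by a synthetic stand-in model (a height-dependent time shift); the real method derives the modeled series from sky images:

```python
import numpy as np

# Sketch of the TSC selection: score each candidate cloud base height by the
# correlation between measured GHI and a modeled GHI series for that height,
# and keep the best. `model_ghi_for_height` stands in for the geo-rectified
# sky-image model; here it is a synthetic placeholder.

def estimate_cbh(measured_ghi, candidate_heights, model_ghi_for_height):
    best_h, best_r = None, -np.inf
    for h in candidate_heights:
        r = np.corrcoef(measured_ghi, model_ghi_for_height(h))[0, 1]
        if r > best_r:
            best_h, best_r = h, r
    return best_h

# Synthetic demo: cloud shadows arrive with a height-dependent phase shift.
t = np.arange(100)
measured = np.sin(2 * np.pi * t / 50.0 - 2.0)  # "true" height is 2000 m

def model(h):
    return np.sin(2 * np.pi * t / 50.0 - (h / 1000.0))

print(estimate_cbh(measured, [1000, 2000, 3000], model))  # picks 2000
```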
Real-Time Hair Filtering with Convolutional Neural Networks
Rendering of realistic-looking hair is in general still too costly for real-time applications, from simulating the physics to rendering the fine details required for it to look natural, including self-shadowing. We show how an autoencoder network that can be evaluated in real time can be trained to filter an image of a few stochastic samples, including self-shadowing, to produce a much more detailed image that takes into account real hair thickness and transparency.
Effects of Local Soil Conditions on the Topographic Aggravation of Seismic Motion: Parametric Investigation and Recorded Field Evidence from the 1999 Athens Earthquake
During the 1999 Athens earthquake, the town of Adàmes, located on the eastern side of the Kifissos river canyon, experienced unexpectedly heavy damage. Despite the particular geometry of the slope that caused significant motion amplification, topography effects alone cannot explain the uneven damage distribution within a 300-m zone parallel to the canyon’s crest, which is characterized by a rather uniform structural quality. In this article, we illustrate the important role of soil stratigraphy and material heterogeneity in the topographic aggravation of surface ground motion. For this purpose, we first conduct an extensive time-domain parametric study using idealized stratified profiles and Gaussian stochastic fields to characterize the spatial distribution of soil properties, and using Ricker wavelets to describe the seismic input motion; the results show that both topography and local soil conditions significantly affect the spatial variability of seismic motion. We next perform elastic two-dimensional wave propagation analyses based on available local geotechnical and seismological data and validate our results by comparison with aftershock recordings.
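The Ricker wavelet used as input motion in the parametric study has the standard closed form w(t) = (1 − 2π²f²t²)·exp(−π²f²t²) for central frequency f; a short sketch with illustrative sampling parameters:

```python
import numpy as np

# Ricker wavelet, the standard zero-mean pulse used as seismic input motion:
# w(t) = (1 - 2*pi^2*f^2*t^2) * exp(-pi^2*f^2*t^2). The time window and
# central frequency below are illustrative, not values from the article.

def ricker(t, f):
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.linspace(-0.5, 0.5, 1001)
w = ricker(t, f=5.0)
print(w[500])  # unit peak amplitude at t = 0
```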
Beyond the Spectral Theorem: Spectrally Decomposing Arbitrary Functions of Nondiagonalizable Operators
Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, the linear operator techniques that one would then use often simply fail, since the operators cannot be diagonalized. This curse is well known. It also occurs for finite-dimensional linear operators. We circumvent it by developing a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. It extends the spectral theorem of normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics are relevant, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics.
The technical contributions include the first full treatment of arbitrary powers of an operator. In particular, we show that the Drazin inverse, previously only defined axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus, and we give a general method to construct it. We provide new formulae for constructing projection operators and delineate the relations between projection operators, eigenvectors, and generalized eigenvectors.
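To make the Drazin-inverse claim concrete: applying the meromorphic calculus to f(z) = z⁻¹ on a singular operator A, with spectral projectors A_λ and eigenvalue index ν_λ, a form consistent with the abstract (using f⁽ᵐ⁾(λ)/m! = (−1)ᵐ λ⁻⁽ᵐ⁺¹⁾; notation here is generic, not necessarily the paper's) is:

```latex
A^{\mathcal{D}}
  = \sum_{\lambda \in \Lambda_A \setminus \{0\}}
    \sum_{m=0}^{\nu_\lambda - 1}
    \frac{(-1)^m}{\lambda^{m+1}} \, A_\lambda \, (A - \lambda I)^m ,
```

i.e. the zero eigenvalue is simply dropped from the sum, which is what makes the negative-one power well defined for singular operators.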
By way of illustrating its application, we explore several, rather distinct examples.
Comment: 29 pages, 4 figures, expanded historical citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/bst.ht
Acoustic characterization of void distributions across carbon-fiber composite layers
Carbon Fiber Reinforced Polymer (CFRP) composites are often used as aircraft structural components, mostly due to their superior mechanical properties. In order to improve the efficiency of these structures, it is important to detect and characterize any defects occurring during the manufacturing process, avoiding the need to mitigate the risk of defects through increased structural thickness. Such defects include porosity, which is well known to reduce the mechanical performance of composite structures, particularly the inter-laminar shear strength. Previous work by the authors considered the determination of porosity distributions in a fiber-metal laminate structure [1]. This paper investigates the use of wave-propagation modeling to invert the ultrasonic response and characterize the void distribution within the plies of a CFRP structure. Finite Element (FE) simulations are used to simulate the ultrasonic response of a porous composite laminate to a typical transducer signal. This simulated response is then used as input to an inversion method that calculates the distribution of porosity across the layers. The inversion is a multi-dimensional optimization built on an analytical model that combines a normal-incidence plane-wave recursive method with appropriate mixture rules to estimate the acoustic properties of the structure, including the effects of plies and porosity. The effect of porosity is captured through an effective wavenumber obtained from a scattering model. Although a single-scattering approach is applied in this initial study, the limitations of the method in terms of the considered porous layer, percentage porosity, and void radius are discussed in relation to single- and multiple-scattering methods. A comparison between the properties of the modeled structure and the void distribution obtained from the inversion is discussed.
This work supports the general study of the use of ultrasound methods with inversion to characterize material properties and any defects occurring in composite structures in three dimensions. This research is part of a Fellowship in Manufacturing funded by the UK Engineering and Physical Sciences Research Council (EPSRC), aimed at underpinning the design of more efficient composite structures and reducing the environmental impact of travel.
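A normal-incidence plane-wave recursive model of the kind the inversion builds on can be sketched as the classic bottom-up reflection-coefficient recursion over a layered medium. The sketch below is a generic textbook recursion with illustrative inputs, not the authors' model; porosity would enter through an effective (complex) wavenumber, whereas here the wavenumber is simply ω/c:

```python
import numpy as np

# Generic normal-incidence recursive reflectivity of a layer stack. Each
# layer i has impedance Z[i] = rho[i] * c[i] and thickness d[i]; the first
# and last entries are half-spaces. Illustrative only.

def reflectivity(omega, rho, c, d):
    Z = np.asarray(rho, dtype=float) * np.asarray(c, dtype=float)
    R = (Z[-1] - Z[-2]) / (Z[-1] + Z[-2])        # deepest interface
    for i in range(len(Z) - 2, 0, -1):           # walk up through the stack
        r = (Z[i] - Z[i - 1]) / (Z[i] + Z[i - 1])
        phase = np.exp(2j * (omega / c[i]) * d[i])  # two-way travel in layer i
        R = (r + R * phase) / (1.0 + r * R * phase)
    return R

# With no impedance contrast the stack reflects nothing:
print(abs(reflectivity(1e6, [1.0, 1.0, 1.0],
                       [1500.0, 1500.0, 1500.0],
                       [0.0, 1e-3, 0.0])))  # 0.0
```

An inversion then adjusts the per-layer properties until this modeled response matches the measured (here, FE-simulated) one.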
Fast and interactive ray-based rendering
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Despite their age, ray-based rendering methods are still a very active field of research, with many challenges when it comes to interactive visualization. In this thesis, we present our work on Guided High-Quality Rendering, Foveated Ray Tracing for Head-Mounted Displays, and Hash-based Hierarchical Caching and Layered Filtering.
Our system for Guided High-Quality Rendering allows for guiding the sampling rate of ray-based rendering methods by a user-specified Region of Interest (RoI). We propose two interaction methods for setting such an RoI when using a large display system and a desktop display, respectively. This makes it possible to compute images with a heterogeneous sample distribution across the image plane. Using such a non-uniform sample distribution, the rendering performance inside the RoI can be significantly improved in order to judge specific image features. However, a modified scheduling method is required to achieve sufficient performance. To solve this issue, we developed a scheduling method based on sparse matrix compression, which has shown significant improvements in our benchmarks. By filtering the sparsely sampled image appropriately, large brightness variations in areas outside the RoI are avoided and the overall image brightness is similar to the ground truth early in the rendering process.
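The sparse-matrix-compression idea can be pictured as storing the per-pixel sample-count matrix, which is mostly sparse outside the RoI, in CSR-like form so the scheduler can hand out contiguous work items. A rough sketch of that compaction (the thesis's actual scheduler is more involved):

```python
import numpy as np

# Compact a per-pixel sample-count matrix into CSR-like arrays: row pointers,
# column indices, and sample counts. This is a rough sketch of the idea, not
# the thesis's scheduling method.

def compress_samples(counts):
    """counts: 2D array of requested samples per pixel -> (row_ptr, cols, vals)."""
    row_ptr = [0]
    cols, vals = [], []
    for row in counts:
        nz = np.nonzero(row)[0]
        cols.extend(nz.tolist())
        vals.extend(row[nz].tolist())
        row_ptr.append(len(cols))
    return row_ptr, cols, vals

counts = np.array([[0, 4, 0],
                   [0, 0, 0],
                   [1, 0, 8]])
row_ptr, cols, vals = compress_samples(counts)
print(row_ptr, cols, vals)  # [0, 1, 1, 3] [1, 0, 2] [4, 1, 8]
```

Workers can then iterate `vals` linearly, which avoids scanning the many empty pixels outside the RoI.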
When using ray-based methods in a VR environment on head-mounted display devices, it is crucial to provide sufficient frame rates in order to reduce motion sickness. This is a challenging task when moving through highly complex environments and the full image has to be rendered for each frame. With our foveated rendering system, we provide a perception-based method for adjusting the sample density to the user’s gaze, measured with an eye tracker integrated into the HMD. In order to avoid disturbances through visual artifacts from low sampling rates, we introduce a reprojection-based rendering pipeline that allows for fast rendering and temporal accumulation of the sparsely placed samples. In our user study, we analyse the impact our system has on visual quality. We then take a closer look at the recorded eye tracking data in order to determine tracking accuracy and connections between different fixation modes and perceived quality, leading to surprising insights.
For previewing the global illumination of a scene interactively while allowing free scene exploration, we present a hash-based caching system. Building upon the concept of linkless octrees, which allow for constant-time queries of spatial data, our framework is suited for rendering such previews of static scenes. Non-diffuse surfaces are supported by our hybrid reconstruction approach, which allows for the visualization of view-dependent effects. In addition to our caching and reconstruction technique, we introduce a novel layered filtering framework, acting as a hybrid between path space and image space filtering, that allows for the high-quality denoising of non-diffuse materials. Also, being designed as a framework instead of a concrete filtering method, it is possible to adapt most available denoising methods to our layered approach instead of relying only on the filtering of primary hit points.
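The constant-time query behind a linkless octree can be sketched as a hash map keyed by (level, Morton code): a lookup walks from the finest level upward without any child pointers. All names and the cached payload below are illustrative, not the thesis's data structures:

```python
# Sketch of a linkless-octree-style cache: nodes live in a hash map keyed by
# (level, Morton code). A query tries the finest cell first, then coarser
# ancestors, with no explicit tree links. Illustrative names and payloads.

def morton3(x, y, z, level):
    """Interleave the low `level` bits of the integer cell coordinates."""
    code = 0
    for i in range(level):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def query(cache, x, y, z, max_level):
    """Return the cached value of the finest populated ancestor cell."""
    for level in range(max_level, -1, -1):
        shift = max_level - level
        key = (level, morton3(x >> shift, y >> shift, z >> shift, level))
        if key in cache:
            return cache[key]
    return None

# One coarse (level-1) cell is populated; a finer query falls back to it.
cache = {(1, morton3(1, 0, 0, 1)): 'coarse irradiance'}
print(query(cache, x=3, y=1, z=0, max_level=2))  # 'coarse irradiance'
```

Each probe is a plain hash lookup, so the cost per query is bounded by the octree depth rather than by scene complexity.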
Robust object-based algorithms for direct shadow simulation
Direct shadow algorithms generate shadows by simulating the direct lighting interaction in a virtual environment. The main challenge of accurate direct shadow computation is its cost. In this dissertation, we develop a new robust object-based shadow framework that provides realistic shadows at interactive frame rates on dynamic scenes. Our contributions include new robust object-based soft shadow algorithms and their efficient GPU implementations. We start by formalizing the direct shadow problem: we first define direct shadows in the general context of light transport, then study existing interactive direct shadow techniques and show that real-time direct shadow simulation remains an open problem, since even the so-called physically plausible soft shadow algorithms still rely on approximations. Nevertheless, we show that, despite their geometric constraints, object-based approaches are well suited for accurate solutions. Starting from this analysis, we investigate the existing object-based shadow framework and discuss its robustness issues. We propose a new technique that improves the quality of the shadows generated by this framework by adding a penumbra-blending stage, and we present a practical implementation of this approach. The results show that, despite desirable properties, inherent theoretical and implementation limitations reduce the overall quality and performance of the proposed algorithm. We then present a new object-based soft shadow algorithm that merges the efficiency of real-time object-based shadows with the accuracy of their offline generalization. The proposed algorithm relies on a new local evaluation of the number of occluders between two points, i.e. the depth complexity. We describe how we use this evaluation to sample the depth complexity between the visible surfaces of a scene and the light source. From this information, we compute shadows either by modulating the direct lighting or by numerically integrating the direct illumination equation, with an accuracy that depends on the light sampling strategy. We then extend our algorithm to handle shadows cast by semi-opaque occluders. Finally, we present an efficient implementation of this framework, demonstrating that object-based shadows can be generated efficiently even on complex dynamic scenes. In real-time rendering, it is common to represent highly detailed objects with few triangles and transmittance textures that encode their binary opacity. Object-based shadow techniques do not handle such "perforated" triangles: by nature, they only evaluate shadows cast by models whose shape is explicitly defined by geometric primitives. We describe a new robust object-based algorithm that lifts this limitation, and we show that it can be efficiently combined with the previous object-based frameworks into a unified system that generates shadows for both common meshes and perforated geometry. The proposed implementation shows that such a combination provides an elegant, efficient, and robust solution to the general direct lighting problem, well suited to applications ranging from quality-sensitive to performance-critical.
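Depth complexity, as used above, is just the number of occluders crossed by the segment from a receiver point to a light sample. A minimal sketch with sphere occluders (purely illustrative; the thesis operates on triangle meshes):

```python
# Count the occluders intersected by the segment from a receiver to a light
# sample: the depth complexity of that segment. Spheres stand in for real
# occluder geometry purely for illustration.

def segment_hits_sphere(p, q, center, radius):
    """Standard quadratic segment/sphere intersection test."""
    d = [q[i] - p[i] for i in range(3)]
    m = [p[i] - center[i] for i in range(3)]
    a = sum(di * di for di in d)
    b = 2.0 * sum(m[i] * d[i] for i in range(3))
    c = sum(mi * mi for mi in m) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False
    t1 = (-b - disc ** 0.5) / (2.0 * a)
    t2 = (-b + disc ** 0.5) / (2.0 * a)
    return t1 <= 1.0 and t2 >= 0.0  # hit interval overlaps [0, 1]

def depth_complexity(receiver, light_sample, occluders):
    return sum(segment_hits_sphere(receiver, light_sample, c, r)
               for c, r in occluders)

occluders = [((0.0, 0.0, 1.0), 0.25), ((0.0, 0.0, 2.0), 0.25)]
print(depth_complexity((0.0, 0.0, 0.0), (0.0, 0.0, 3.0), occluders))  # 2
```

Averaging this count over many light samples yields the occlusion estimate from which the shadow term is derived, either by modulating the direct lighting or inside a numerical integration of the direct illumination.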