Playing for Data: Ground Truth from Computer Games
Recent progress in computer vision has been driven by high-capacity models
trained on large datasets. Unfortunately, creating large datasets with
pixel-level labels has been extremely costly due to the amount of human effort
required. In this paper, we present an approach to rapidly creating
pixel-accurate semantic label maps for images extracted from modern computer
games. Although the source code and the internal operation of commercial games
are inaccessible, we show that associations between image patches can be
reconstructed from the communication between the game and the graphics
hardware. This enables rapid propagation of semantic labels within and across
images synthesized by the game, with no access to the source code or the
content. We validate the presented approach by producing dense pixel-level
semantic annotations for 25 thousand images synthesized by a photorealistic
open-world computer game. Experiments on semantic segmentation datasets show
that using the acquired data to supplement real-world images significantly
increases accuracy and that the acquired data enables reducing the amount of
hand-labeled real-world data: models trained with game data and just 1/3 of the
CamVid training set outperform models trained on the complete CamVid training
set.
Comment: Accepted to the 14th European Conference on Computer Vision (ECCV 2016).
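The propagation mechanism can be sketched as follows. Suppose an interception layer between the game and the graphics hardware has recorded, for every pixel, an ID for the combination of rendering resources (mesh, texture, shader) the game used to draw it; one human annotation per combination then labels every pixel drawn with it, within and across frames. The sketch below is a minimal illustration of that step under these assumptions; the buffer layout and names are illustrative, not the paper's actual interface.

```python
import numpy as np

def propagate_labels(mts_ids, label_of):
    """mts_ids: (H, W) array of per-pixel resource-combination IDs captured
    while the game rendered the frame; label_of maps such an ID to a
    semantic class. Returns an (H, W) label map, -1 where nothing is known."""
    labels = np.full(mts_ids.shape, -1, dtype=np.int32)
    for mts_id, cls in label_of.items():
        labels[mts_ids == mts_id] = cls
    return labels

# Two frames that reuse the same game content share IDs, so one annotation
# (e.g. ID 7 -> class 0) labels matching pixels in both frames at once.
frame1 = np.array([[7, 7, 3], [7, 3, 3]])
frame2 = np.array([[3, 7, 7], [3, 3, 7]])
annotations = {7: 0, 3: 1}   # hypothetical classes, e.g. 0 = road, 1 = building
print(propagate_labels(frame1, annotations))
print(propagate_labels(frame2, annotations))
```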
Parallel Write-Efficient Algorithms and Data Structures for Computational Geometry
In this paper, we design parallel write-efficient geometric algorithms that
perform asymptotically fewer writes than standard algorithms for the same
problem. This is motivated by emerging non-volatile memory technologies with
read performance being close to that of random access memory but writes being
significantly more expensive in terms of energy and latency. We design
algorithms for planar Delaunay triangulation, k-d trees, and static and
dynamic augmented trees. Our algorithms are designed in the recently introduced
Asymmetric Nested-Parallel Model, which captures the parallel setting in which
there is a small symmetric memory where reads and writes are unit cost as well
as a large asymmetric memory where writes are ω times more expensive
than reads. In designing these algorithms, we introduce several techniques for
obtaining write-efficiency, including DAG tracing, prefix doubling,
reconstruction-based rebalancing, and α-labeling, which we believe will
be useful for designing other parallel write-efficient algorithms.
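As a rough illustration of the model's accounting (the class and interface below are my own sketch; the Asymmetric Nested-Parallel Model is a formal cost model, not a library), one can charge unit cost per read and ω per write to the large memory:

```python
class AsymmetricMemory:
    """Array wrapper charging 1 per read and omega per write, mimicking the
    large asymmetric memory where writes cost omega times as much as reads."""

    def __init__(self, data, omega):
        self.data = list(data)
        self.omega = omega
        self.cost = 0

    def read(self, i):
        self.cost += 1
        return self.data[i]

    def write(self, i, value):
        self.cost += self.omega
        self.data[i] = value

# Counting elements below a threshold reads n cells but writes nothing to the
# asymmetric memory, so it costs n; an algorithm that wrote a per-element flag
# instead would pay n * omega, the kind of cost write-efficient designs avoid.
mem = AsymmetricMemory(range(10), omega=100)
below = sum(1 for i in range(10) if mem.read(i) < 5)
print(below, mem.cost)   # 5 10
```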
Bounding Volume Hierarchies of Slab Cut Balls
We introduce a bounding volume hierarchy based on the Slab Cut Ball. This novel type of enclosing shape provides an attractive balance between tightness of fit, cost of overlap testing, and memory requirement. The hierarchy construction algorithm includes a new method for building tight bounding volumes in worst-case O(n) time, which means our tree data structure is constructed in O(n log n) time using traditional top-down building methods. We also propose a fast overlap test between two slab cut balls, requiring only 28 to 99 arithmetic operations, including the transformation cost. Practical collision detection experiments confirm that our tree data structure is well suited to high-performance collision queries. In all tested benchmarks, our bounding volume hierarchy consistently outperforms the sphere tree, and it is also faster than the OBB tree in five of the six scenes. In particular, our method is asymptotically faster than the sphere tree, and it also outperforms the OBB tree, in close-proximity situations.
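For intuition, a slab cut ball can be represented as a sphere clipped by two parallel planes, and a conservative overlap test can combine a sphere-sphere check with interval tests along each shape's slab axis. The sketch below is such a conservative test under an assumed representation (center, radius, slab normal, two plane offsets); it is not the paper's 28-to-99-operation test and may accept some non-overlapping pairs, though it never rejects a truly overlapping pair.

```python
import numpy as np

class SlabCutBall:
    def __init__(self, center, radius, axis, t_min, t_max):
        self.c = np.asarray(center, dtype=float)
        self.r = float(radius)
        self.n = np.asarray(axis, dtype=float)
        self.n /= np.linalg.norm(self.n)        # slab normal, unit length
        self.t_min, self.t_max = t_min, t_max   # plane offsets along n, from c

    def extent(self, axis):
        """Conservative interval of the shape projected onto a unit axis."""
        d = float(axis @ self.c)
        lo, hi = d - self.r, d + self.r         # bounding-sphere extent
        if np.allclose(axis, self.n):           # own axis: the slab clips tighter
            lo = max(lo, d + self.t_min)
            hi = min(hi, d + self.t_max)
        return lo, hi

def overlap(a, b):
    # Sphere-sphere rejection first: cheap and exact for the bounding balls.
    if np.linalg.norm(a.c - b.c) > a.r + b.r:
        return False
    # Then try each shape's slab axis as a candidate separating axis.
    for axis in (a.n, b.n):
        lo_a, hi_a = a.extent(axis)
        lo_b, hi_b = b.extent(axis)
        if hi_a < lo_b or hi_b < lo_a:
            return False
    return True

a = SlabCutBall((0, 0, 0), 1.0, (0, 0, 1), -0.5, 0.5)
b = SlabCutBall((1.2, 0, 0), 1.0, (1, 0, 0), -0.3, 0.3)
print(overlap(a, b))   # True: the balls overlap and no tested axis separates them
```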
Soft Shadow Volumes for Ray Tracing
We present a new, fast algorithm for rendering physically-based soft shadows in ray tracing-based renderers. Our method replaces the hundreds of shadow rays commonly used in stochastic ray tracers with a single shadow ray and a local reconstruction of the visibility function. Compared to tracing the shadow rays, our algorithm produces exactly the same image while executing one to two orders of magnitude faster in the test scenes used. Our first contribution is a two-stage method for quickly determining the silhouette edges that overlap an area light source, as seen from the point to be shaded. Secondly, we show that these partial silhouettes of occluders, along with a single shadow ray, are sufficient for reconstructing the visibility function between the point and the light source.
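The per-edge predicate at the heart of the first stage can be illustrated with the classic silhouette test: an edge shared by two triangles is a silhouette as seen from a point if exactly one of the two adjacent faces is front-facing from that point. The sketch below shows only this predicate with an assumed data layout; the paper's contribution is the two-stage structure for finding such edges quickly against an area light.

```python
import numpy as np

def front_facing(tri, p):
    """True if triangle tri (3x3 array of vertices) faces point p."""
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])   # geometric normal
    return float(n @ (p - tri[0])) > 0.0

def is_silhouette_edge(tri_a, tri_b, p):
    """An edge shared by tri_a and tri_b is a silhouette from p iff the
    two adjacent faces disagree on front-facing."""
    return front_facing(tri_a, p) != front_facing(tri_b, p)

# Usage idea: evaluate this predicate for every interior edge of an occluder
# mesh against a point on the light; the passing edges bound its silhouette.
```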
Parallel Poisson disk sampling
Sampling is important for a variety of graphics applications, including rendering, imaging, and geometry processing. However, producing sample sets with desired efficiency and blue noise statistics has been a major challenge, as existing methods are either sequential with limited speed, or parallel but reliant on pre-computed datasets and thus fall short of producing samples with blue noise statistics. We present a Poisson disk sampling algorithm that runs in parallel and produces all samples on the fly with the desired blue noise properties. Our main idea is to subdivide the sample domain into grid cells and draw samples concurrently from multiple cells that are sufficiently far apart that their samples cannot conflict with one another. We present a parallel implementation of our algorithm running on a GPU with constant cost per sample and a constant number of computation passes for a target number of samples. Our algorithm also works in arbitrary dimension and allows adaptive sampling from a user-specified importance field. Furthermore, our algorithm is simple and easy to implement, and runs faster than existing techniques.
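A minimal sequential sketch of the phase-group idea in 2D, with illustrative constants: with grid cell size r/√2, cells whose indices agree modulo 3 in both dimensions are farther than r apart, so their trial samples can never conflict, and in a GPU implementation each such group could be processed concurrently.

```python
import math, random

def poisson_disk(r, width=1.0, height=1.0, tries=8, seed=0):
    rng = random.Random(seed)
    cell = r / math.sqrt(2)                  # each cell holds at most one sample
    nx, ny = int(width / cell), int(height / cell)
    grid = [[None] * ny for _ in range(nx)]

    def ok(x, y, i, j):
        # A conflicting sample must lie within two cells of (i, j).
        for di in range(max(0, i - 2), min(nx, i + 3)):
            for dj in range(max(0, j - 2), min(ny, j + 3)):
                s = grid[di][dj]
                if s and (s[0] - x) ** 2 + (s[1] - y) ** 2 < r * r:
                    return False
        return True

    # Nine phases; same-phase cells are at least 3 cells (> r) apart, so a
    # parallel implementation could sample every cell of a phase at once.
    for pi in range(3):
        for pj in range(3):
            for i in range(pi, nx, 3):
                for j in range(pj, ny, 3):
                    for _ in range(tries):
                        x = (i + rng.random()) * cell
                        y = (j + rng.random()) * cell
                        if ok(x, y, i, j):
                            grid[i][j] = (x, y)
                            break
    return [s for col in grid for s in col if s]

print(len(poisson_disk(0.05)))   # number of accepted samples
```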