A Parallel Rendering Algorithm for MIMD Architectures
Applications such as animation and scientific visualization demand high-performance rendering of complex three-dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed-memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
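The scaling behavior the abstract reports, where performance grows with processor count until communication overheads dominate, can be illustrated with a toy analytic model. This sketch is not from the paper; the cost function and constants are invented for illustration, assuming per-processor work that shrinks as 1/p and communication cost that grows linearly with p:

```python
# Hypothetical scaling model (not the paper's analysis): frame time on p
# processors is the evenly divided rendering work plus a communication
# term that grows with the processor count.

def render_time(p, work=1000.0, comm_per_proc=0.05):
    """Estimated frame time on p processors."""
    return work / p + comm_per_proc * p

def speedup(p, **kw):
    """Speedup relative to a single processor."""
    return render_time(1, **kw) / render_time(p, **kw)

if __name__ == "__main__":
    for p in (1, 16, 128, 1024):
        print(p, round(speedup(p), 1))
```

Under these invented constants, speedup still rises at 128 processors but falls off by 1024, matching the qualitative claim that communication overhead ultimately limits scaling.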
Neural 3D Mesh Renderer
For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks, because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision, and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of integrating a mesh renderer into neural networks and the effectiveness of our proposed renderer.
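The obstacle the abstract describes, that hard rasterization has zero gradient almost everywhere, can be shown with a 1D toy. This sketch is illustrative only and is not the paper's actual gradient scheme: it replaces the hard coverage test with a sigmoid so that the edge position becomes differentiable and gradient descent can fit it to a target silhouette:

```python
import math

# Hypothetical 1D toy (not the paper's method): hard coverage "x < t" is a
# step function with no usable gradient in t; a sigmoid relaxation gives a
# smooth coverage whose analytic gradient can drive t toward a target.

def soft_coverage(x, t, tau=0.1):
    """Smooth stand-in for the hard pixel test (x < t)."""
    return 1.0 / (1.0 + math.exp(-(t - x) / tau))

def grad_t(x, t, tau=0.1):
    """Analytic derivative of soft_coverage with respect to t."""
    s = soft_coverage(x, t, tau)
    return s * (1.0 - s) / tau

def fit_edge(target_t, t=0.0, lr=0.2, steps=200):
    """Gradient descent on squared coverage error over sample pixels."""
    xs = [i / 10.0 for i in range(10)]
    for _ in range(steps):
        g = 0.0
        for x in xs:
            err = soft_coverage(x, t) - soft_coverage(x, target_t)
            g += 2.0 * err * grad_t(x, t)
        t -= lr * g / len(xs)
    return t
```

With the hard step function the same descent loop would receive zero gradient and never move; the relaxation is what lets rendering sit inside a trained network.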
Hardware Acceleration of Progressive Refinement Radiosity using Nvidia RTX
A vital component of photo-realistic image synthesis is the simulation of indirect diffuse reflections, which remain a quintessential hurdle that modern rendering engines struggle to overcome. Real-time applications typically pre-generate diffuse lighting information offline using radiosity to avoid performing costly computations at run-time. In this thesis we present a variant of progressive refinement radiosity that utilizes Nvidia's RTX technology to accelerate form-factor computation without compromising visual fidelity. Through a modern implementation built on DirectX 12, we demonstrate that offloading radiosity's visibility component to RT cores significantly improves the lightmap generation process and potentially propels it into the domain of real-time.
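The outer loop of progressive refinement radiosity can be sketched in a few lines. This is an illustrative sketch, not the thesis implementation: it repeatedly "shoots" the patch with the most unshot energy through a precomputed form-factor matrix `F[i][j]` (the visibility term inside those form factors is what the RTX acceleration above targets):

```python
# Hypothetical sketch (not the thesis code): progressive refinement
# radiosity on a tiny patch scene. Each iteration distributes the largest
# remaining unshot energy to all other patches via the form factors.

def progressive_radiosity(emission, reflectance, F, steps=100):
    n = len(emission)
    radiosity = emission[:]   # current radiosity per patch
    unshot = emission[:]      # energy not yet distributed
    for _ in range(steps):
        i = max(range(n), key=lambda k: unshot[k])
        if unshot[i] < 1e-9:
            break             # converged: nothing left to shoot
        for j in range(n):
            if j == i:
                continue
            dB = reflectance[j] * F[i][j] * unshot[i]
            radiosity[j] += dB
            unshot[j] += dB
        unshot[i] = 0.0
    return radiosity
```

Shooting from the brightest patch first is what makes the method "progressive": each iteration produces a usable, steadily improving image, which is why it suits offline lightmap baking.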
Real-Time Ray Traced Global Illumination using Fast Sphere Intersection Approximation for Dynamic Objects
Realistic lighting models are an important component of modern computer generated, interactive 3D applications. One of the more difficult to emulate aspects of real-world lighting is the concept of indirect lighting, often referred to as global illumination in computer graphics. Balancing speed and accuracy requires carefully considered trade-offs to achieve plausible results and acceptable framerates.
We present a novel technique for supporting global illumination within the constraints of the new DirectX Raytracing (DXR) API used with DirectX 12. By pre-computing spherical textures to approximate the diffuse color of dynamic objects, we build a smaller set of approximate geometry used for second-bounce lighting calculations for diffuse light rays. This both speeds up the necessary intersection tests and reduces the amount of geometry that needs to be updated within the GPU's acceleration structure.
Our results show that our approach to diffuse bounced light is, in some cases, faster than using the conservative mesh for triangle-ray intersection. Because the technique is applied only to diffuse bounced light, the lower-resolution spheres yield quality close to that of traditional ray-tracing techniques for most materials.
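The speed advantage of sphere proxies comes from how cheap an analytic ray-sphere test is compared with tracing triangle meshes. As a minimal sketch (standard textbook intersection, not the thesis code), the test reduces to solving one quadratic per sphere:

```python
import math

# Standard analytic ray-sphere intersection (illustrative; not the thesis
# implementation). Solves |o + t*d - c|^2 = r^2 for the nearest positive t,
# assuming `direction` is normalized.

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None           # ray misses the sphere entirely
    t = -b - math.sqrt(disc)  # nearer root first
    if t < 0.0:
        t = -b + math.sqrt(disc)
    return t if t >= 0.0 else None
```

A single quadratic per proxy replaces a traversal over many triangles, which is why a sphere set is attractive for secondary diffuse rays where geometric precision matters less.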
Flexible Smart Display with Integrated Graphics Rasterizor using Single Grain TFTs
Flexible electronics is a fast-emerging market that includes electronics fabricated on flexible substrates, large-area displays, and low-cost, disposable electronics. Research and commercial institutions around the world have been trying to develop low-temperature processes that enable the fabrication of electronic devices on arbitrary substrates, including glass and plastic. While most of these technologies are still in the research phase, many approaches have shown promising results. One such technology, being developed at DIMES, TU Delft, uses single-grain silicon crystals to fabricate Single Grain Thin Film Transistors (SG-TFTs) at plastic-compatible temperatures. SG-TFTs and similar technologies can potentially enable fabricating electronics directly on arbitrary substrates, which would in turn allow embedded intelligence to be integrated into devices, enhancing the current functionality of displays. This paper is an effort in that direction: it undertakes a study to design a flexible display with an integrated graphics rasterizor unit. The paper introduces the novel idea of moving parts of the graphics pipeline from the CPU/GPU to the display, adding intelligence to the display so as to realize a smart display. The paper proposes several architectures for implementing a rasterizor unit on the smart display, conceptually fabricated on a flexible substrate using SG-TFT technology. While transistors fabricated with SG-TFT and similar technologies are slower than standard CMOS, the paper proposes and concludes that a tile-based system design can potentially result in enhanced system performance.