Importance driven environment map sampling
In this paper we present an automatic and efficient method for supporting Image Based Lighting (IBL) in bidirectional rendering methods which improves both the sampling of the environment map and the detection and sampling of important regions of the scene, such as windows and doors. These regions often occupy a small area relative to the entire scene, so paths passing through them are generated with low probability. Our method addresses this by taking view importance into account and modifying the lighting distribution using light transport information; it automatically constructs a sampling distribution concentrated in locations relevant to the camera position, thereby improving sampling. We present results for our method applied to bidirectional rendering techniques, in particular Bidirectional Path Tracing, Metropolis Light Transport and Progressive Photon Mapping. Efficiency results demonstrate speed-ups of orders of magnitude over other methods, depending on the rendering technique used.
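The environment-map sampling the abstract builds on can be illustrated with the standard luminance-weighted scheme: build a discrete CDF over texels and invert it. This is a minimal generic sketch, not the paper's view-importance method, and all function names are ours:

```python
import numpy as np

def build_env_sampler(env_map):
    """Build a discrete PDF/CDF over the texels of a latitude-longitude
    environment map, weighted by luminance times the sin(theta)
    solid-angle factor of each row."""
    h, w, _ = env_map.shape
    lum = env_map @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance
    theta = (np.arange(h) + 0.5) / h * np.pi            # row centers in [0, pi]
    weights = lum * np.sin(theta)[:, None]              # poles cover less solid angle
    pdf = weights / weights.sum()
    cdf = np.cumsum(pdf.ravel())
    return pdf, cdf

def sample_env(pdf, cdf, width, u):
    """Map a uniform u in [0, 1) to a texel (row, col) and its probability."""
    idx = int(np.searchsorted(cdf, u))
    row, col = divmod(idx, width)
    return (row, col), pdf[row, col]
```

The sin(theta) factor compensates for the lat-long parameterization, where rows near the poles subtend far less solid angle than rows near the equator; the paper's contribution is, in effect, replacing the luminance weights above with view-importance-driven weights.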
The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system renders complex scenes at the
push of a button and thus makes accurate light transport simulation widely
available. In this document we discuss the challenges and implementation
choices that follow from our primary design decisions, demonstrating that such
a rendering system can be made a practical, scalable, and efficient real-world
application, one that has been adopted by various companies across many fields
and is in use by many industry professionals today.
Recommendations and illustrations for the evaluation of photonic random number generators
The never-ending quest to improve the security of digital information,
combined with recent improvements in hardware technology, has caused the field
of random number generation to undergo a fundamental shift from relying solely
on pseudo-random algorithms to employing optical entropy sources. Despite these
significant advances on the hardware side, commonly used statistical measures
and evaluation practices remain ill-suited to understand or quantify the
optical entropy that underlies physical random number generation. We review the
state of the art in the evaluation of optical random number generation and
recommend a new paradigm: quantifying entropy generation and understanding the
physical limits of the optical sources of randomness. To this end, we advocate
separating the physical entropy source from deterministic post-processing in
the evaluation of random number generators, and explicitly considering the
impact of the measurement and digitization process on the rate of entropy
production. We present the Cohen-Procaccia estimate of the entropy rate as one
way to do this. To illustrate our recommendations, we apply the Cohen-Procaccia
estimate as well as the entropy estimates from the new NIST draft standards for
physical random number generators to evaluate and compare three common optical
entropy sources: single-photon time-of-arrival detection, chaotic lasers, and
amplified spontaneous emission.
Fast Monte Carlo Simulation for Patient-specific CT/CBCT Imaging Dose Calculation
Recently, X-ray imaging dose from computed tomography (CT) or cone beam CT
(CBCT) scans has become a serious concern. Patient-specific imaging dose
calculation has been proposed for the purpose of dose management. While Monte
Carlo (MC) dose calculation can be quite accurate for this purpose, it suffers
from low computational efficiency. In response to this problem, we have
developed an MC dose calculation package, gCTD, on GPU architecture
under the NVIDIA CUDA platform for fast and accurate estimation of the x-ray
imaging dose received by a patient during a CT or CBCT scan. Techniques have
been developed particularly for the GPU architecture to achieve high
computational efficiency. Dose calculations using CBCT scanning geometry in a
homogeneous water phantom and a heterogeneous Zubal head phantom have shown
good agreement between gCTD and EGSnrc, indicating the accuracy of our code. In
terms of improved efficiency, it is found that gCTD attains a speed-up of ~400
times in the homogeneous water phantom and ~76.6 times in the Zubal phantom
compared to EGSnrc. As for absolute computation time, imaging dose calculation
for the Zubal phantom can be accomplished in ~17 sec with an average relative
standard deviation of 0.4%. Though our gCTD code has been developed and tested
in the context of CBCT scans, with simple modification of the geometry it can
be used for assessing imaging dose in CT scans as well.
Comment: 18 pages, 7 figures, and 1 table
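The core of any MC dose engine is sampling photon interaction sites from the attenuation law. The sketch below is a drastically simplified illustration of that one step (first-interaction depth in a homogeneous slab, no scattering, no secondaries, no GPU); it is not gCTD, and all names and parameters are ours:

```python
import math
import random

def mc_depth_dose(n_photons, mu, depth, n_bins, seed=42):
    """Toy Monte Carlo depth-dose curve in a homogeneous slab: each
    photon's first-interaction depth is drawn from the exponential
    attenuation law p(s) = mu * exp(-mu * s), and the photon deposits
    all its energy there."""
    rng = random.Random(seed)
    dose = [0.0] * n_bins
    bin_width = depth / n_bins
    for _ in range(n_photons):
        s = -math.log(1.0 - rng.random()) / mu  # inverse-CDF free-path sampling
        if s < depth:                            # photons past the slab escape
            dose[int(s / bin_width)] += 1.0
    return [d / n_photons for d in dose]         # dose per incident photon
```

A real engine like gCTD tracks scattered photons and secondary electrons through a voxelized heterogeneous phantom, which is exactly the workload that maps well onto thousands of independent GPU threads.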
Refactoring, reengineering and evolution: paths to Geant4 uncertainty quantification and performance improvement
Ongoing investigations into improving Geant4 accuracy and computational
performance through refactoring and reengineering parts of the code are
discussed. Issues in refactoring that are specific to the domain
of physics simulation are identified and their impact is elucidated.
Preliminary quantitative results are reported.
Comment: To be published in the Proc. CHEP (Computing in High Energy Physics)
201