
    Real-Time Volumetric Shadows using 1D Min-Max Mipmaps

    Light scattering in a participating medium is responsible for several important effects we see in the natural world. In the presence of occluders, computing single scattering requires integrating the illumination scattered towards the eye along the camera ray, modulated by the visibility towards the light at each point. Unfortunately, incorporating volumetric shadows into this integral while maintaining real-time performance remains challenging. In this paper we present a new real-time algorithm for computing volumetric shadows in single-scattering media on the GPU. This computation requires evaluating the scattering integral over the intersections of camera rays with the shadow map, expressed as a 2D height field. We observe that by applying epipolar rectification to the shadow map, each camera ray only travels through a single row of the shadow map (an epipolar slice), which allows us to find the visible segments by considering only 1D height fields. At the core of our algorithm is an acceleration structure (a 1D min-max mipmap) which allows us to quickly find the lit segments for all pixels in an epipolar slice in parallel. The simplicity of this data structure and its traversal allows for efficient implementation using only pixel shaders on the GPU.
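
    The data structure at the heart of the method is simple enough to sketch on the CPU. Below is a minimal Python sketch of a 1D min-max mipmap and a recursive traversal that classifies shadow-map spans as fully lit, fully shadowed, or mixed; it assumes a power-of-two slice length and a ray whose light-space depth varies monotonically across the slice, and it illustrates the idea rather than reproducing the paper's pixel-shader implementation.

```python
import numpy as np

def build_minmax_mipmap(depths):
    """Build a 1D min-max mipmap over one epipolar slice (row) of the
    shadow map. Level 0 stores (depth, depth) per texel; each coarser
    level stores the (min, max) of its two children."""
    levels = [np.stack([depths, depths], axis=1).astype(float)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        cur = np.empty((len(prev) // 2, 2))
        cur[:, 0] = np.minimum(prev[0::2, 0], prev[1::2, 0])
        cur[:, 1] = np.maximum(prev[0::2, 1], prev[1::2, 1])
        levels.append(cur)
    return levels

def classify(levels, ray_depth, lo, hi, level, out):
    """Classify the shadow-map span [lo, hi) against one camera ray.
    ray_depth(i) is the ray's light-space depth over texel i; a sample
    is lit where the ray is closer to the light than the stored depth."""
    d0, d1 = ray_depth(lo), ray_depth(hi - 1)
    dmin, dmax = min(d0, d1), max(d0, d1)
    node_min, node_max = levels[level][lo >> level]
    if dmax <= node_min:
        out.append((lo, hi, "lit"))        # ray in front of every occluder
    elif dmin >= node_max:
        out.append((lo, hi, "shadowed"))   # ray behind every occluder
    elif level == 0:
        out.append((lo, hi, "mixed"))      # ray crosses the height field here
    else:
        mid = (lo + hi) // 2               # descend into both children
        classify(levels, ray_depth, lo, mid, level - 1, out)
        classify(levels, ray_depth, mid, hi, level - 1, out)

# Usage: classify a linearly rising ray against a random 1024-texel slice.
depths = np.random.default_rng(1).uniform(0.0, 1.0, 1024)
levels = build_minmax_mipmap(depths)
out = []
classify(levels, lambda i: 0.2 + i / 2048.0, 0, len(depths), len(levels) - 1, out)
```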

    Fast Edge-Diffraction-Based Radio Wave Propagation Model for Graphics Hardware


    Accelerating Radio Wave Propagation Algorithms by Implementation on Graphics Hardware

    Radio wave propagation prediction is a fundamental prerequisite for planning, analysis, and optimization of radio networks. For instance, coverage analysis, interference estimation, and channel and power allocation all rely on propagation predictions. In wireless communication networks, optimal antenna sites are determined by either conducting a series…

    A scalable, efficient, and accurate solution to non-rigid structure from motion

    Most Non-Rigid Structure from Motion (NRSfM) solutions are based on factorization approaches that reconstruct objects parameterized by a sparse set of 3D points. These solutions, however, are low resolution and generally do not scale well beyond a few tens of points. While there have been recent attempts at bringing NRSfM to a dense domain, using, for instance, variational formulations, these are computationally demanding alternatives that require a certain spatial continuity of the data, preventing their use for articulated shapes with large deformations or situations with multiple discontinuous objects. In this paper, we propose incorporating existing point-trajectory low-rank models into a probabilistic framework based on matrix normal distributions. With this formalism, we can simultaneously learn shape and pose parameters using expectation maximization, and easily exploit additional priors such as known point correlations. While similar frameworks have been used before to model distributions over shapes, here we show that formulating the problem in terms of distributions over trajectories brings remarkable improvements, especially in generality and efficiency. We evaluate the proposed approach in a variety of scenarios including one or multiple objects, sparse or dense reconstructions, missing observations, and mild or sharp deformations, in all cases with minimal prior knowledge and low computational cost.
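
    The "point trajectory low-rank models" the abstract builds on are commonly realized as a truncated DCT basis over time, so each trajectory reduces to a short coefficient vector. A minimal Python sketch of that ingredient, with illustrative names and sizes rather than the paper's code:

```python
import numpy as np

def dct_trajectory_basis(num_frames, rank):
    """Truncated DCT basis: the first `rank` smooth temporal modes.
    A smooth 3D trajectory X (num_frames x 3) is then modeled as
    X ~ B @ A with a small coefficient matrix A (rank x 3)."""
    t = np.arange(num_frames)
    cols = [np.cos(np.pi * (2 * t + 1) * k / (2 * num_frames)) for k in range(rank)]
    basis = np.stack(cols, axis=1)                 # (num_frames, rank)
    return basis / np.linalg.norm(basis, axis=0)   # orthonormal columns

# Fit coefficients for one observed trajectory by least squares:
B = dct_trajectory_basis(num_frames=100, rank=10)
X = np.cumsum(np.random.default_rng(0).normal(size=(100, 3)), axis=0)  # toy path
A = np.linalg.lstsq(B, X, rcond=None)[0]           # (rank, 3) coefficients
X_lowrank = B @ A                                  # low-rank reconstruction
```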

    GPU ray tracing with CUDA

    Ray tracing is a rendering method that generates high-quality images by simulating how light rays interact with objects in a virtual scene. The technique can accurately portray advanced optical effects, such as reflections, refractions, and shadows, but at a greater computational cost and rendering time than other rendering methods. Fortunately, technological advances in GPU computing have provided the means to accelerate ray tracing so that images are produced in significantly less time. This paper illustrates the difference in rendering speed and design by developing and comparing a sequential CPU and a parallel GPU implementation of a ray tracer, written in C++ and CUDA, respectively. A performance analysis reveals that the optimized GPU ray tracer is capable of producing images with speedups of up to 1852X compared to the CPU implementation.
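
    The data parallelism being exploited is easiest to see in the per-pixel intersection test that dominates a ray tracer's work. Here is a vectorized Python sketch of ray-sphere intersection (names and shapes are illustrative); each array element corresponds to the work one CUDA thread would do for one pixel:

```python
import numpy as np

def ray_sphere_hits(origins, dirs, center, radius):
    """Nearest-hit distances for N rays against one sphere.
    origins, dirs: (N, 3) ray origins and unit directions.
    Returns t per ray, with np.inf marking a miss."""
    oc = origins - center                            # origin relative to sphere
    b = np.einsum("ij,ij->i", oc, dirs)              # half the linear term (a = 1)
    c = np.einsum("ij,ij->i", oc, oc) - radius ** 2
    disc = b * b - c                                 # quadratic discriminant
    t = np.full(len(origins), np.inf)
    hit = disc >= 0
    t_near = -b[hit] - np.sqrt(disc[hit])            # nearer of the two roots
    t[hit] = np.where(t_near > 0, t_near, np.inf)    # discard hits behind origin
    return t

# One primary ray per pixel of a 4x4 toy image, all pointing down -z:
origins = np.zeros((16, 3))
dirs = np.tile([0.0, 0.0, -1.0], (16, 1))
print(ray_sphere_hits(origins, dirs, center=np.array([0.0, 0.0, -5.0]), radius=1.0))
```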

    Approximating Signed Distance Field to a Mesh by Artificial Neural Networks

    Previous research has resulted in many representations of surfaces for rendering. However, for some approaches, an accurate representation comes at the expense of large data storage. Considering that Artificial Neural Networks (ANNs) have been shown in recent years to achieve good performance in approximating non-linear functions, their potential application to the problem of surface representation deserves investigation. The goal of this research is to explore how ANNs can efficiently learn the Signed Distance Field (SDF) representation of shapes. Specifically, we investigate how well different ANN architectures can learn 2D SDFs, 3D SDFs, and SDFs approximating a complex triangle mesh. We performed three main experiments to determine which ANN architectures and configurations are suitable for learning SDFs, analyzing the errors in training and testing as well as the rendering results. In addition, three different pipelines for rendering general SDFs, grid-based SDFs, and ANN-based SDFs were implemented to display the resulting images on screen. The following data are measured in this project: the errors in training different ANN architectures; the errors in rendering SDFs; and a comparison between grid-based SDFs and ANN-based SDFs. This work demonstrates the use of ANNs to approximate the SDF to a mesh by training on data sampled near the mesh surface, which could be a useful technique in 3D reconstruction and rendering. We have found that the trained neural network is also much smaller than either the triangle mesh or a grid-based SDF, which could be useful for compression applications and in software or hardware with strict memory-size requirements.
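
    As a concrete illustration of the learning setup, the toy Python example below fits a tiny two-layer MLP to the analytic 2D SDF of a circle using plain gradient descent; the architecture, sampling scheme, and hyperparameters are invented for this sketch and are not the thesis's configurations:

```python
import numpy as np

rng = np.random.default_rng(0)

def circle_sdf(p, radius=0.5):
    """Ground-truth signed distance to a circle: negative inside, positive outside."""
    return np.linalg.norm(p, axis=-1) - radius

# Tiny 2-64-1 MLP f(p) ~ sdf(p), trained with manual backprop.
W1 = rng.normal(0.0, 0.5, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.5, (64, 1)); b2 = np.zeros(1)

for step in range(2000):
    p = rng.uniform(-1.0, 1.0, (256, 2))     # sample points around the shape
    y = circle_sdf(p)[:, None]
    h = np.tanh(p @ W1 + b1)                 # hidden layer
    pred = h @ W2 + b2                       # predicted signed distance
    err = pred - y                           # gradient of 0.5 * MSE w.r.t. pred
    gW2 = h.T @ err / len(p); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    gW1 = p.T @ gh / len(p); gb1 = gh.mean(0)
    for param, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        param -= 0.1 * grad                  # plain gradient-descent update

test = rng.uniform(-1.0, 1.0, (1000, 2))
approx = np.tanh(test @ W1 + b1) @ W2 + b2
print("mean abs SDF error:", np.abs(approx[:, 0] - circle_sdf(test)).mean())
```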

    Efficient Rasterization for Outdoor Radio Wave Propagation


    Bumpmapping-Verfahren und deren Weiterentwicklung (Bump-Mapping Techniques and Their Further Development)

    The bump-mapping algorithm produces realistic renderings of wrinkled surfaces. It was invented almost 30 years ago, but only recent hardware developments have made it usable in real-time environments. State-of-the-art implementations can create many additional effects, yet they are not free of problems. The main goal of this thesis is to explore further enhancements of bump-mapping algorithms: it surveys the limits of current techniques and investigates new approaches. The first presented method creates fractal landscapes in the shader via multi-pass rendering. Parts of this algorithm are reused in a second shader that evaluates fine sub-pixel surface detail at every pixel and uses its normals to create anisotropic material effects such as brushed metal or microparticle paint. The final two algorithms take an alternative route for procedural shaders: the surface to be imitated is modeled inside the shader and ray traced per pixel. This method avoids almost all of the problems common to texture-based approaches.
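
    The step all of these variants build on is perturbing the surface normal by the gradient of a height (bump) map, in the spirit of Blinn's original formulation. A minimal Python sketch of that classic operation (array layout and the scale parameter are illustrative):

```python
import numpy as np

def perturbed_normals(height, scale=1.0):
    """Tangent-space normals for classic bump mapping.
    height: 2D bump map. Returns (H, W, 3) unit normals, where
    (0, 0, 1) would be the unperturbed surface normal."""
    dh_du = np.gradient(height, axis=1)   # finite-difference slope in u
    dh_dv = np.gradient(height, axis=0)   # finite-difference slope in v
    n = np.stack([-scale * dh_du, -scale * dh_dv, np.ones_like(height)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A radial bump: normals tilt away from the center, flat at the rim.
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
normals = perturbed_normals(np.exp(-4.0 * (xx ** 2 + yy ** 2)), scale=2.0)
```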

    Portal-s: High-resolution real-time 3D video telepresence

    The goal of telepresence is to allow a person to feel as if they are present in a location other than their true location; a common application is video conferencing, in which live video of a user is transmitted to a remote location for viewing. In conventional two-dimensional (2D) video conferencing, correct eye gaze is commonly lost due to a disparity between the capture and display optical axes. Newer systems allow for three-dimensional (3D) video conferencing, circumventing this disparity, but new challenges arise in the capture, delivery, and redisplay of 3D content across existing infrastructure. To address these challenges, a novel system is proposed which allows for 3D video conferencing across existing networks while delivering full-resolution 3D video and establishing correct eye gaze. During the development of Portal-s, many innovations were made in 3D scanning and its applications; specifically, this dissertation research achieved the following: a technique to realize 3D video processing entirely on a graphics processing unit (GPU), methods to compress 3D videos on a GPU, and the combination of these innovations with special holographic display hardware to enable the novel 3D telepresence system entitled Portal-s.

    The first challenge this dissertation addresses is the cost of real-time 3D scanning technology, from both a monetary and a computing-power perspective. Continuing advances in 3D scanning and computation are simplifying the acquisition and display of 3D data, giving users new methods of interacting with and analyzing the 3D world around them. Although the acquisition of static 3D geometry is becoming easy, the same cannot be said of dynamic geometry, since all stages of the 3D processing pipeline (capture, processing, and display) must run in real time simultaneously. Conventional approaches use workstation computers with powerful central processing units (CPUs) and GPUs to supply the large amount of processing required for a single 3D frame; the challenge is realizing real-time 3D scanning on commodity hardware such as a laptop computer. To address this cost, an entirely parallel 3D data processing pipeline based on a multi-frequency phase-shifting technique is presented. This pipeline achieves simultaneous 3D data capture, processing, and display at 30 frames per second (fps) on a laptop computer. Because the pipeline is implemented in the OpenGL Shading Language (GLSL), nearly any modern computer with a dedicated graphics device can run it; multiple threads sharing GPU resources and direct memory access transfers yield high frame rates on low-compute-power devices. The technique is not without challenges, the main one being the selection of frequencies that give high-quality phase without introducing phase jumps in the equivalent frequencies. To address this issue, a modified multi-frequency phase-shifting technique was developed that allows phase jumps to be introduced in equivalent frequencies yet unwrapped in parallel, increasing phase quality and reducing reconstruction error.
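
    A minimal Python sketch of the two standard ingredients named above, N-step phase-shifting and two-frequency temporal unwrapping; the dissertation's modified variant differs, and the function names and parameters here are illustrative:

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N >= 3 equally phase-shifted fringe images
    I_n = A + B * cos(phi + 2*pi*n/N) (standard N-step formula)."""
    n = len(images)
    shifts = 2.0 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return np.arctan2(-num, den)          # wrapped to (-pi, pi]

def unwrap_two_frequency(phi_high, phi_low, freq_ratio):
    """Temporal unwrapping: a low-frequency phase with no ambiguity over
    the field of view resolves the 2*pi jumps of a high-frequency,
    higher-quality phase. freq_ratio = f_high / f_low."""
    k = np.round((phi_low * freq_ratio - phi_high) / (2.0 * np.pi))
    return phi_high + 2.0 * np.pi * k     # jump-free high-frequency phase
```
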
    Utilizing these techniques, a real-time 3D scanner was developed that captures 3D geometry at 30 fps with a root-mean-square error (RMSE) of 0.00081 mm over a measurement area of 100 mm × 75 mm at a resolution of 800 × 600, on a laptop computer. With this pipeline the CPU is nearly idle, freeing it for additional tasks such as image processing and analysis.

    The second challenge this dissertation addresses is delivering huge amounts of 3D video data in real time across existing network infrastructure. As 3D scanning gets faster and real-time scanning is achieved on low-compute-power devices, a way of compressing the massive amounts of generated 3D data is needed. At a scan resolution of 800 × 600, streaming a 3D point cloud at 30 fps would require a throughput of over 1.3 Gbps, which is large for a PCIe bus and too much for most commodity network cards. Conventional approaches serialize the data into a compressible state such as a polygon file format (PLY) or Wavefront object (OBJ) file. While this works well for structured 3D geometry, such as that created with computer-aided drafting (CAD) or 3D modeling software, it does not hold for 3D scanned data, which is inherently unstructured; the challenge is compressing this unstructured 3D information so that it can be easily used with existing infrastructure. To address the need for real-time 3D video compression, techniques entitled Holoimage and Holovideo are presented, which compress 3D geometry and 3D video, respectively, into 2D counterparts and apply both lossless and lossy encoding. Like the 3D scanning pipeline above, these techniques use a completely parallel pipeline for encoding and decoding, affording high-speed processing on the GPU as well as compression before streaming the data over the PCIe bus. Once in the compressed 2D state, the information can be streamed and stored until the 3D information is needed, at which point the 3D geometry can be reconstructed with low reconstruction error. Further enhancements allow additional information, such as texture, to be encoded by reducing the bit rate of the data through image dithering; both the 3D video and the associated 2D texture can thus be interlaced and compressed into 2D video, synchronizing the streams automatically.

    The third challenge this dissertation addresses is achieving correct eye gaze in video conferencing. In 2D video conferencing, correct eye gaze is commonly lost due to a disparity between the capture and display optical axes. Conventional approaches mitigate this by reducing the angle of disparity between the axes, either by increasing the distance of the user from the system or by merging the axes with beam splitters. Newer approaches use 3D capture and display technology, since the angle of disparity can be corrected through transforms of the 3D data. The challenge in building such systems is that all stages of the pipeline (capture, transmission, and redisplay) must be achieved simultaneously in real time with massive amounts of 3D data.
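
    A toy Python sketch of the general idea behind the Holoimage-style packing described in the compression paragraph above: depth is encoded into two sinusoidal "fringe" channels plus a coarse stair channel that resolves their ambiguity. The channel layout and period are invented for illustration and are not the dissertation's exact format:

```python
import numpy as np

def encode_depth(depth, z_min, z_max, period=64.0):
    """Pack a depth map into a 3-channel image in [0, 1]."""
    z = (depth - z_min) / (z_max - z_min) * 255.0    # normalize to 8-bit range
    phase = 2.0 * np.pi * z / period
    r = 0.5 + 0.5 * np.sin(phase)                    # fine but ambiguous channel
    g = 0.5 + 0.5 * np.cos(phase)
    b = np.floor(z / period) * period / 255.0        # coarse stair channel
    return np.stack([r, g, b], axis=-1)

def decode_depth(rgb, z_min, z_max, period=64.0):
    """Invert encode_depth (exact up to rounding at stair boundaries)."""
    phi = np.arctan2(rgb[..., 0] - 0.5, rgb[..., 1] - 0.5)   # wrapped phase
    z = rgb[..., 2] * 255.0 + (phi % (2.0 * np.pi)) * period / (2.0 * np.pi)
    return z / 255.0 * (z_max - z_min) + z_min

depth = np.random.default_rng(2).uniform(500.0, 900.0, (600, 800))
rgb = encode_depth(depth, 500.0, 900.0)
print("max abs error:", np.abs(decode_depth(rgb, 500.0, 900.0) - depth).max())
```
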
    Finally, the Portal-s system is presented: an integration of all the aforementioned technologies into a holistic software and hardware system that enables real-time 3D video conferencing with correct mutual eye gaze. To overcome the loss of eye contact in conventional video conferencing, Portal-s uses dual structured-light scanners that capture through the same optical axis as the display. The real-time 3D video frames generated on the GPU are compressed using the Holovideo technique, allowing the 3D video to be streamed across a conventional network or the Internet and redisplayed at a remote node on the holographic display glass. Utilizing two connected Portal-s nodes, users can engage in 3D video conferencing with natural eye gaze established. In conclusion, this dissertation research substantially advances the field of real-time 3D scanning and its applications, with contributions spanning both academic and industrial practice, where this work has given users new methods of interacting with and analyzing the 3D world around them.