    Real-time Shadows for Gigapixel Displacement Maps

    Shadows convey useful information in scenes. From a scientific visualization standpoint, they add data without unnecessary clutter; in video games they add realism and depth. In common graphics pipelines, shadows are difficult to achieve because geometric primitives are rendered independently and in parallel: objects require knowledge of each other, so multiple render passes are needed to collect the necessary data, and collecting this data comes with its own set of trade-offs. Our research adds shadows to a lunar rendering framework developed by Dr. Robert Kooima. The NASA-collected data contains a multi-gigapixel displacement map describing the lunar topography. This map does not fit entirely into main memory, so out-of-core paging is used to achieve real-time speeds. Current shadow techniques do not attempt to generate occluder data at such a scale, and we have therefore developed a novel approach for this situation. Using a chain of pre-processing steps, we analyze the structure of the displacement map and calculate horizon lines at each vertex. This information is saved into several images and used to generate shadows in a single pass, maintaining real-time speeds. The algorithm is even capable of generating soft shadows without extra information or loss of speed. We compare our algorithm with common approaches in the field as well as two forms of ground truth: one from ray tracing and the other from the gigapixel lunar texture data, which shows the real shadows present at the time it was collected.
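    To make the horizon-line idea concrete, here is a minimal sketch, not the paper's actual pre-processing chain: for each heightmap texel, record the maximum terrain elevation angle along a set of azimuth directions; at render time a fragment is lit only if the sun sits above the stored horizon angle for the sun's azimuth. Function names, the azimuth count, and the simple fixed-step marching are illustrative assumptions.

        import numpy as np

        def horizon_angles(height, cell_size, azimuths=8, max_steps=64):
            """For each texel of a heightmap, find the maximum elevation
            angle of the terrain along each azimuth (the 'horizon line').
            The result can be baked into textures, giving shadows from a
            single lookup per fragment."""
            h, w = height.shape
            horizon = np.zeros((azimuths, h, w), dtype=np.float32)
            for a in range(azimuths):
                theta = 2.0 * np.pi * a / azimuths
                dx, dy = np.cos(theta), np.sin(theta)
                for step in range(1, max_steps + 1):
                    # Sample the terrain 'step' cells away along this azimuth
                    # (clamped at borders; a real implementation pages tiles).
                    xs = np.clip(np.round(np.arange(w) + dx * step).astype(int), 0, w - 1)
                    ys = np.clip(np.round(np.arange(h) + dy * step).astype(int), 0, h - 1)
                    sampled = height[ys[:, None], xs[None, :]]
                    # Elevation angle from each texel up to the sampled terrain.
                    angle = np.arctan2(sampled - height, step * cell_size)
                    horizon[a] = np.maximum(horizon[a], angle)
            return horizon

        def lit(horizon, azimuth_index, sun_elevation):
            # Hard shadow test; soft shadows can blend around the threshold.
            return sun_elevation > horizon[azimuth_index]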

    Image Quality Metrics for Stochastic Rasterization

    We develop a simple perceptual image quality metric for images resulting from stochastic rasterization. The new metric is based on the frequency selectivity of cortical cells, using ideas derived from existing perceptual metrics and research on the human visual system. Masking is not taken into account in the metric, since it does not have a significant effect in this specific application. The new metric achieves high correlation with results from HDR-VDP2 while being conceptually simple and reflecting smaller quality differences more accurately than the existing metrics. In addition to HDR-VDP2, measurement results are compared against MS-SSIM results. The new metric is applied to a set of images produced with different sampling schemes to provide quantitative information about the relative quality, strengths, and weaknesses of the different schemes. Several purpose-built three-dimensional test scenes are used for this quality analysis in addition to a few widely used natural scenes. The star discrepancy of sampling patterns is found to be correlated with average perceptual quality, although discrepancy cannot be recommended as the sole method for estimating perceptual quality. A hardware-friendly low-discrepancy sampling scheme achieves generally good results, but its quality advantage over simpler per-pixel stratified sampling decreases as the sample count increases. A comprehensive mathematical model of rendering discrete frames from dynamic 3D scenes is provided as background to the quality analysis.
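    As a toy stand-in for the frequency-selective approach, not the thesis's actual formulation, the sketch below decomposes the reference/test error into difference-of-Gaussian frequency bands, weights each band, and pools the result; the band count and weights are made-up parameters.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def bandpass_metric(ref, test, levels=4, weights=(0.3, 0.35, 0.25, 0.1)):
            """Toy frequency-selective quality metric: compare the two
            images band by band, mimicking the frequency selectivity of
            cortical cells. Masking is deliberately ignored, as in the
            metric described above. Lower scores mean higher quality."""
            err = 0.0
            prev_r, prev_t = ref.astype(np.float64), test.astype(np.float64)
            for lvl in range(levels):
                sigma = 2.0 ** lvl
                blur_r = gaussian_filter(prev_r, sigma)
                blur_t = gaussian_filter(prev_t, sigma)
                # One difference-of-Gaussian frequency band per level.
                band_r, band_t = prev_r - blur_r, prev_t - blur_t
                err += weights[lvl] * np.sqrt(np.mean((band_r - band_t) ** 2))
                prev_r, prev_t = blur_r, blur_t
            return err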

    Visual Data Representation using Context-Aware Samples

    The rapid growth in the complexity of geometry models has necessitated revision of several conventional techniques in computer graphics. At the heart of this trend is the representation of geometry with locally constant approximations using independent sample primitives. This generally leads to a higher sampling rate and thus a high cost of representation, transmission, and rendering. We advocate an alternative approach involving context-aware samples that capture the local variation of the geometry. We detail two approaches: one based on differential geometry and the other based on statistics. Our differential-geometry-based approach captures the context of the local geometry using an estimate of the local Taylor series expansion. We render such samples on a programmable Graphics Processing Unit (GPU) by fast approximation of the geometry in screen space. The benefits of this representation can also be seen in other applications such as simulation of light transport. In our statistics-based approach we capture the context of the local geometry using Principal Component Analysis (PCA). This allows us to achieve hierarchical detail by modeling the geometry non-deterministically as a hierarchical probability distribution. We approximate the geometry and its attributes using quasi-random sampling. Our results show a significant rendering speedup and savings in geometric bandwidth compared to current approaches.
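    A minimal sketch of the statistics-based idea, under our own assumptions rather than the thesis's exact formulation: fit a Gaussian to a neighborhood of surface points via PCA, then resample it to approximate the geometry. Function names are illustrative, and plain pseudo-random draws stand in for the quasi-random sampling the abstract mentions.

        import numpy as np

        def pca_sample(points):
            """Fit a local Gaussian 'context-aware sample' to a neighborhood
            of surface points: the mean gives the sample position; the
            principal axes and variances capture the local shape, with the
            smallest-variance axis approximating the surface normal."""
            mean = points.mean(axis=0)
            centered = points - mean
            cov = centered.T @ centered / len(points)
            evals, evecs = np.linalg.eigh(cov)   # eigenvalues ascending
            normal = evecs[:, 0]                 # smallest-variance direction
            return mean, evecs, evals, normal

        def resample(mean, evecs, evals, n):
            # Draw n approximation points from the fitted Gaussian; a true
            # quasi-random (e.g. Halton) sequence could replace randn here.
            z = np.random.randn(n, 3) * np.sqrt(np.maximum(evals, 0.0))
            return mean + z @ evecs.T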

    Enabling Neural Radiance Fields (NeRF) for Large-scale Aerial Images -- A Multi-tiling Approach and the Geometry Assessment of NeRF

    Neural Radiance Fields (NeRF) offer the potential to benefit 3D reconstruction tasks, including aerial photogrammetry. However, the scalability and accuracy of the inferred geometry are not well documented for large-scale aerial assets, since such datasets usually result in very high memory consumption and slow convergence. In this paper, we aim to scale NeRF to large-scale aerial datasets and provide a thorough geometry assessment of NeRF. Specifically, we introduce a location-specific sampling technique as well as a multi-camera tiling (MCT) strategy to reduce memory consumption during image loading (RAM) and representation training (GPU memory), and to increase the convergence rate within tiles. MCT decomposes a large-frame image into multiple tiled images with different camera models, allowing these small-frame images to be fed into the training process as needed for specific locations without loss of accuracy. We implement our method on a representative approach, Mip-NeRF, and compare its geometry performance with three photogrammetric MVS pipelines on two typical aerial datasets against LiDAR reference data. Both qualitative and quantitative results suggest that the proposed NeRF approach produces better completeness and object details than traditional approaches, although, as of now, it still falls short in terms of accuracy.
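    The tiling step rests on a standard pinhole-camera fact: cropping an image only shifts the principal point of its intrinsic matrix, so each tile can be treated as an independent small camera. The sketch below is a plausible reconstruction of that decomposition, not the paper's code; all names are assumptions.

        import numpy as np

        def tile_camera(K, width, height, tiles_x, tiles_y):
            """Split one large-frame image into tiles_x * tiles_y sub-images,
            each with its own intrinsic matrix, so tiles can be streamed
            into training independently. For a pinhole model, cropping the
            image only shifts the principal point (cx, cy)."""
            tw, th = width // tiles_x, height // tiles_y
            cameras = []
            for ty in range(tiles_y):
                for tx in range(tiles_x):
                    Kt = K.copy()
                    Kt[0, 2] -= tx * tw   # shift principal point cx
                    Kt[1, 2] -= ty * th   # shift principal point cy
                    cameras.append({"K": Kt, "crop": (tx * tw, ty * th, tw, th)})
            return cameras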

    Computer-assisted animation creation techniques for hair animation and shade, highlight, and shadow

    Degree system: new; report number: Kou 3062; degree type: Doctor of Engineering; date conferred: 2010/2/25; Waseda University degree record number: Shin 532

    Motion Segmentation Aided Super Resolution Image Reconstruction

    This dissertation addresses Super Resolution (SR) Image Reconstruction with a focus on motion segmentation. The main thrust is Information Complexity guided Gaussian Mixture Models (GMMs) for Statistical Background Modeling. In developing our framework we also focus on two other topics: motion trajectory estimation for global and local scene change detection, and image reconstruction to obtain high resolution (HR) representations of the moving regions. Such a framework is used for dynamic scene understanding and for recognition of individuals and threats from image sequences recorded with either stationary or non-stationary camera systems. We introduce a new technique called Information Complexity guided Statistical Background Modeling, successfully employing GMMs that are optimal with respect to information complexity criteria. Moving objects are segmented out through background subtraction using the computed background model. This technique produces superior results to competing background modeling strategies. State-of-the-art SR Image Reconstruction studies combine the information from a set of slightly different low resolution (LR) images of a static scene to construct an HR representation. The crucial challenge not handled in these studies is accumulating the corresponding information from highly displaced moving objects. To address this, we develop a framework for SR Image Reconstruction of moving objects with such high levels of displacement. Our assumption is that LR images differ from each other due to local motion of the objects and the global motion of the scene imposed by a non-stationary imaging system. Contrary to traditional SR approaches, we employ several steps: suppression of the global motion, motion segmentation accompanied by background subtraction to extract moving objects, suppression of the local motion of the segmented regions, and super-resolving the accumulated information coming from the moving objects rather than the whole scene. This results in a reliable offline SR Image Reconstruction tool which handles several types of dynamic scene changes, compensates for the effects of the camera system, and provides data redundancy by removing the background. The framework proved superior to state-of-the-art algorithms that make no significant effort toward dynamic scene representation with non-stationary camera systems.
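    The background-modeling step belongs to the family of per-pixel Gaussian mixture background subtraction. The sketch below is a generic, Stauffer-Grimson-style update, not the dissertation's information-complexity-guided variant: the component count K and all thresholds are fixed assumptions here, whereas the thesis selects the model by an information complexity criterion.

        import numpy as np

        def gmm_update(frame, mu, var, w, lr=0.01):
            """One update step of a per-pixel background GMM.
            frame: (H, W) grayscale; mu, var, w: (H, W, K) mixture state.
            Simplification: all matching components are updated, not just
            the best match. Returns a boolean foreground mask plus the
            updated mixture state."""
            d2 = (frame[..., None] - mu) ** 2
            match = d2 < 6.25 * var                  # within 2.5 sigma
            matched_any = match.any(axis=-1)
            # Pull matched components toward the new observation.
            rho = lr * match
            mu += rho * (frame[..., None] - mu)
            var += rho * (d2 - var)
            w = (1 - lr) * w + lr * match
            w /= w.sum(axis=-1, keepdims=True)
            # Pixels explained only by low-weight components are foreground.
            bg_weight = (w * match).sum(axis=-1)
            foreground = ~matched_any | (bg_weight < 0.5)
            return foreground, mu, var, w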

    Interactive visualization tools for topological exploration

    Thesis (Ph.D.) - Indiana University, Computer Science, 1992

    This thesis concerns using computer graphics methods to visualize mathematical objects. Abstract mathematical concepts are extremely difficult to visualize, particularly when higher dimensions are involved; I therefore concentrate on subject areas such as the topology and geometry of four dimensions, which provide a very challenging domain for visualization techniques. In the first stage of this research, I applied existing three-dimensional computer graphics techniques to visualize projected four-dimensional mathematical objects in an interactive manner. I carried out experiments with direct object manipulation and constraint-based interaction and implemented tools for visualizing mathematical transformations. As an application, I applied these techniques to visualizing the conjecture known as Fermat's Last Theorem. Four-dimensional objects would best be perceived through four-dimensional eyes. Even though we do not have four-dimensional eyes, we can use computer graphics techniques to simulate the effect of a virtual four-dimensional camera viewing a scene where four-dimensional objects are illuminated by four-dimensional light sources. I extended standard three-dimensional lighting and shading methods to work in the fourth dimension. This involved replacing the standard "z-buffer" algorithm by a "w-buffer" algorithm for handling occlusion, and replacing the standard "scan-line" conversion method by a new "scan-plane" conversion method. Furthermore, I implemented a new "thickening" technique that made it possible to illuminate surfaces correctly in four dimensions. Our new techniques generate smoothly shaded, highlighted view-volume images of mathematical objects as they would appear from a four-dimensional viewpoint. These images reveal fascinating structures of mathematical objects that could not be seen with standard 3D computer graphics techniques. As applications, we generated still images and animation sequences for mathematical objects such as the Steiner surface, the four-dimensional torus, and a knotted 2-sphere. The images of surfaces embedded in 4D that have been generated using our methods are unique in the history of mathematical visualization. Finally, I adapted these techniques to visualize volumetric data (3D scalar fields) generated by other scientific applications. Compared to other volume visualization techniques, this method provides a new approach that researchers can use to look at and manipulate certain classes of volume data.
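    To make the "w-buffer" idea concrete, here is a toy sketch under our own assumptions (point samples instead of scan-plane-converted surfaces, invented names): occlusion in 4D is resolved per voxel of a 3D view volume by keeping the point nearest to the camera along w, exactly as a z-buffer does one dimension down.

        import numpy as np

        def w_buffer(points4d, res=64):
            """Resolve 4D occlusion for points viewed down the w axis:
            project (x, y, z, w) into a 3D view volume and keep, per voxel,
            the point with the smallest w (nearest to the 4D camera)."""
            buf = np.full((res, res, res), np.inf)
            visible = {}
            for i, (x, y, z, w) in enumerate(points4d):
                # Map coordinates in [-1, 1] to voxel indices.
                ix, iy, iz = (int((c + 1) * 0.5 * (res - 1)) for c in (x, y, z))
                inside = 0 <= ix < res and 0 <= iy < res and 0 <= iz < res
                if inside and w < buf[ix, iy, iz]:
                    buf[ix, iy, iz] = w
                    visible[(ix, iy, iz)] = i
            return buf, visible  # maps each voxel to its visible 4D point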