
    Creating Simplified 3D Models with High Quality Textures

    This paper presents an extension to the KinectFusion algorithm that allows creating simplified 3D models with high-quality RGB textures. This is achieved by (i) creating model textures using images from an HD RGB camera calibrated with the Kinect depth camera, (ii) using a modified scheme to update model textures in an asymmetric colour volume that contains a higher number of voxels than the geometry volume, (iii) simplifying the dense polygon mesh model using a quadric-based mesh decimation algorithm, and (iv) creating and mapping 2D textures to every polygon in the output 3D model. The proposed method runs in real time by means of GPU parallel processing. Visualization via ray casting of both the geometry and colour volumes gives users real-time feedback on the currently scanned 3D model. Experimental results show that the proposed method preserves model texture quality even for a heavily decimated model and that, when reconstructing small objects, photorealistic RGB textures can still be reconstructed.
    Comment: 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Page 1 -
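
    To illustrate the asymmetric-volume idea in (ii), the following minimal NumPy sketch accumulates colour in a finer volume than the geometry (TSDF) volume using a KinectFusion-style weighted running average. The resolutions, array names, and update rule are illustrative assumptions, not the paper's actual implementation.

        import numpy as np

        GEOM_RES = 64             # geometry (TSDF) volume resolution (assumed)
        COLOR_RES = 128           # finer colour volume resolution (assumed)

        tsdf    = np.ones((GEOM_RES,) * 3, dtype=np.float32)    # truncated SDF
        colour  = np.zeros((COLOR_RES,) * 3 + (3,), dtype=np.float32)
        weights = np.zeros((COLOR_RES,) * 3, dtype=np.float32)

        def integrate_colour(voxel_idx, rgb_sample, max_weight=64.0):
            """Fold one HD-RGB sample into a colour voxel with a running average.

            voxel_idx  -- (i, j, k) index into the colour volume
            rgb_sample -- length-3 array: the calibrated HD camera's colour
                          observed for this voxel in the current frame
            """
            i, j, k = voxel_idx
            w = weights[i, j, k]
            # KinectFusion-style weighted average, applied here to colour
            colour[i, j, k] = (w * colour[i, j, k] + rgb_sample) / (w + 1.0)
            # clamp the weight so the model can still adapt to new observations
            weights[i, j, k] = min(w + 1.0, max_weight)

        def geom_index(colour_idx, scale=COLOR_RES // GEOM_RES):
            """Map a colour-volume index onto its coarser geometry voxel."""
            return tuple(c // scale for c in colour_idx)

        integrate_colour((10, 20, 30), np.array([0.8, 0.4, 0.2]))
        assert geom_index((10, 20, 30)) == (5, 10, 15)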

    Interactive Vegetation Rendering with Slicing and Blending

    Detailed, interactive 3D rendering of vegetation is one of the challenges of traditional polygon-oriented computer graphics, due to the large geometric complexity of even simple plants. In this paper we introduce a simplified image-based rendering approach based solely on alpha-blended textured polygons. The simplification exploits the limitations of human perception of complex geometry. Our approach renders dozens of detailed trees in real time with off-the-shelf hardware, while providing significantly improved image quality over existing real-time techniques. The method uses ordinary mesh-based rendering for the solid parts of a tree, its trunk and limbs. The sparse parts of a tree, its twigs and leaves, are instead represented with a set of slices, an image-based representation. A slice is a planar layer, represented with an ordinary alpha or color-keyed texture; a set of parallel slices is a slicing. Rendering from an arbitrary viewpoint in a 360-degree circle around the center of a tree is achieved by blending between the nearest two slicings. In our implementation, only 6 slicings with 5 slices each are sufficient to visualize a tree for a moving or stationary observer with quality perceptually similar to the original model.
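
    The blending step lends itself to a small sketch: given the camera's azimuth around the tree, pick the two nearest slicings and weight them by angular proximity. Only the slicing count comes from the abstract; the function and variable names below are our own.

        NUM_SLICINGS = 6                  # from the abstract: 6 slicings per tree
        SECTOR = 360.0 / NUM_SLICINGS     # angular spacing between slicings

        def blend_slicings(azimuth_deg):
            """Return ((idx_a, w_a), (idx_b, w_b)) for a given view azimuth.

            The two weights sum to 1 and drive the alpha-blend of the two
            textured slice stacks nearest to the current viewpoint.
            """
            a = azimuth_deg % 360.0
            lower = int(a // SECTOR)               # slicing just "behind" the view
            upper = (lower + 1) % NUM_SLICINGS     # slicing just "ahead" of the view
            t = (a - lower * SECTOR) / SECTOR      # 0 at lower, 1 at upper
            return (lower, 1.0 - t), (upper, t)

        print(blend_slicings(75.0))   # ((1, 0.75), (2, 0.25))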

    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
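
    As a concrete illustration of runtime LoD selection of the kind surveyed here, the sketch below picks a character representation from its projected screen size. The thresholds, level names, and size heuristic are illustrative assumptions, not taken from any particular surveyed paper.

        import math

        def projected_size(radius, distance, fov_y_rad, screen_h_px):
            """Approximate on-screen height in pixels of a bounding sphere."""
            if distance <= radius:
                return float(screen_h_px)             # camera is inside the bound
            ang = 2.0 * math.atan(radius / distance)  # angle the sphere subtends
            return ang / fov_y_rad * screen_h_px

        def select_lod(radius, distance,
                       fov_y_rad=math.radians(60.0), screen_h_px=1080):
            px = projected_size(radius, distance, fov_y_rad, screen_h_px)
            if px > 300.0:
                return "full-mesh"        # close-up: fully skinned polygon mesh
            if px > 80.0:
                return "reduced-mesh"     # mid-range: decimated mesh
            if px > 15.0:
                return "impostor"         # far: image-based impostor
            return "point-sample"         # very far: a handful of points

        print(select_lod(radius=0.9, distance=10.0))   # "reduced-mesh"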

    Rendering process and methods for creating stylized and photorealistic computer-generated 3D characters for video games development with their comparison

    Computer graphics, combined with modern technology and complex rendering solutions for 3D visualisation, has enabled digital art to become one of the most widespread forms of visual art in the world. The video games industry and movie production with computer-generated imagery opened a new field for graphics technology built on strong rendering infrastructure. The human figure has been a main point of interest of art for millennia, and remains so today. There are two basic ways of depicting characters: stylized and photorealistic. The process of creating digital characters is not linear, which means that in most cases it is necessary to jump between steps. The workflow depends on two key factors: the style and the intended use of the 3D character. More complex designs require more attention to detail, which affects the process and the length of the workflow. Digital 3D characters can be used in video games, movies, visual graphics, and 3D printing. Characters used in games have significantly fewer polygons than movie-ready characters, while 3D-printable characters follow their own set of rules that make them printable. Complex graphics combined with strong computing power and high-end performance enable huge progress in digital art. Regardless of the different styles and use cases, there are certain workflow steps that stylized and photorealistic characters share, namely: design and references, retopology, UV unwrapping, texturing and materials, scene setup, lighting, rendering, and post-production. This paper compares the workflows for stylized and photorealistic characters and discusses their advantages and disadvantages.

    Realtime projective multi-texturing of pointclouds and meshes for a realistic street-view web navigation

    Street-view web applications have now gained widespread popularity. Targeting the general public, they offer ease of use, but while they allow efficient navigation from a pedestrian level, the immersive quality of such renderings is still low. The user is usually stuck at specific positions, and transitions bring out artefacts, in particular parallax and aliasing. We propose a method to enhance the realism of street-view navigation systems using hybrid rendering based on real-time projective texturing of meshes and point clouds with occlusion handling. It requires extremely light pre-processing, allowing fast data updates, progressive streaming (a mesh-based approximation, with point-cloud details), and precise visualization of unaltered raw data.
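
    A compact sketch of projective texturing with occlusion handling, the core operation described above: a world-space point is projected into a calibrated street-view image, and a shadow-map-style depth comparison rejects points the camera never saw. The calibration matrices and depth-map source are assumptions for illustration, not the authors' pipeline.

        import numpy as np

        def project_point(p_world, K, R, t):
            """Project a 3-vector into pixel coords; return (u, v, cam_depth)."""
            p_cam = R @ p_world + t
            z = p_cam[2]
            uvw = K @ p_cam
            return uvw[0] / z, uvw[1] / z, z

        def sample_colour(p_world, image, depth_map, K, R, t, eps=0.05):
            """Fetch the image colour for a surface point, or None if the point
            is outside the frustum or occluded (its camera depth exceeds the
            depth stored for that pixel)."""
            u, v, z = project_point(p_world, K, R, t)
            if z <= 0:
                return None                          # behind the camera
            ui, vi = int(round(u)), int(round(v))
            h, w = depth_map.shape
            if not (0 <= ui < w and 0 <= vi < h):
                return None                          # outside the image
            if z > depth_map[vi, ui] + eps:
                return None                          # occluded by nearer geometry
            return image[vi, ui]

        # toy calibration: identity pose, simple pinhole intrinsics
        K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
        R, t = np.eye(3), np.zeros(3)
        img   = np.zeros((480, 640, 3), dtype=np.uint8)
        depth = np.full((480, 640), 10.0)
        print(sample_colour(np.array([0.0, 0.0, 5.0]), img, depth, K, R, t))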

    STV-based Video Feature Processing for Action Recognition

    In comparison to still-image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made in the last decade on image processing, with successful applications in face matching and object recognition, video-based event detection still remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient-factor-boosted 3D region intersection and matching mechanism developed in this research. The paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that need to be processed in each operational cycle of the implemented system. The encouraging features and the improvements in operational performance registered in the experiments are discussed at the end.
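
    A loose sketch of the region-intersection idea: two binary spatio-temporal volumes (x, y, t silhouette occupancy) are compared by voxel overlap, with a plain coefficient standing in for the paper's boosted matching mechanism, plus a naive temporal filter in the spirit of the STV data filtering mentioned above. All names and the Jaccard-style score are our own illustrative choices.

        import numpy as np

        def region_intersection_score(stv_a, stv_b, coeff=1.0):
            """Overlap of two boolean (X, Y, T) volumes in [0, 1], scaled by coeff."""
            inter = np.logical_and(stv_a, stv_b).sum()
            union = np.logical_or(stv_a, stv_b).sum()
            return coeff * inter / union if union else 0.0

        def filter_stv(stv, min_run=2):
            """Naive voxel filtering: keep a voxel only once it has been occupied
            for at least min_run consecutive frames, shrinking the volume that
            must be matched in each operational cycle."""
            keep = np.zeros_like(stv, dtype=bool)
            run = np.zeros(stv.shape[:2], dtype=int)
            for tidx in range(stv.shape[2]):
                run = np.where(stv[:, :, tidx], run + 1, 0)
                keep[:, :, tidx] = run >= min_run
            return keep

        a = np.random.rand(32, 32, 16) > 0.5
        b = np.random.rand(32, 32, 16) > 0.5
        print(region_intersection_score(filter_stv(a), filter_stv(b)))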

    Incorporating interactive 3-dimensional graphics in astronomy research papers

    Most research data collections created or used by astronomers are intrinsically multi-dimensional. In contrast, all visual representations of data presented within research papers are exclusively 2-dimensional. We present a resolution of this dichotomy that uses a novel technique for embedding 3-dimensional (3-d) visualisations of astronomy data sets in electronic-format research papers. Our technique uses the latest Adobe Portable Document Format extensions together with a new version of the S2PLOT programming library. The 3-d models can be easily rotated and explored by the reader and, in some cases, modified. We demonstrate example applications of this technique including: 3-d figures exhibiting subtle structure in redshift catalogues, colour-magnitude diagrams and halo merger trees; 3-d isosurface and volume renderings of cosmological simulations; and 3-d models of instructional diagrams and instrument designs.
    Comment: 18 pages, 7 figures, submitted to New Astronomy. For paper with 3-dimensional embedded figures, see http://astronomy.swin.edu.au/s2plot/3dpd