8 research outputs found

    Visual Importance-Biased Image Synthesis Animation

    Current ray-tracing algorithms are computationally intensive, requiring hours of computing time for complex scenes. Our previous work developed an overall approach to applying visual attention to progressive and adaptive ray-tracing techniques. The approach achieves large computational savings by modulating the supersampling rate in an image according to the visual importance of the region being rendered. This paper extends the approach by incorporating temporal changes into the models and techniques developed, as further efficiency savings are expected for animated scenes. Applications of this approach include entertainment, visualisation and simulation.
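    To illustrate the idea of modulating supersampling rates by visual importance, the following Python sketch maps a per-region importance score in [0, 1] to a samples-per-pixel count and averages jittered sub-pixel samples. The linear mapping, the sample bounds and the shade_sample callback are illustrative assumptions, not the model described in the paper.

        import numpy as np

        def samples_per_pixel(importance, min_spp=1, max_spp=16):
            # Map a visual-importance score in [0, 1] to a supersampling rate.
            # Illustrative linear mapping; the paper's modulation may differ.
            importance = np.clip(importance, 0.0, 1.0)
            return np.rint(min_spp + importance * (max_spp - min_spp)).astype(int)

        def render_adaptive(width, height, importance_map, shade_sample):
            # Spend more rays on visually important regions.  importance_map is a
            # (height, width) array of scores; shade_sample(x, y) returns an RGB
            # sample for a sub-pixel position (placeholder for the real renderer).
            image = np.zeros((height, width, 3))
            spp = samples_per_pixel(importance_map)
            for y in range(height):
                for x in range(width):
                    n = spp[y, x]
                    offsets = np.random.rand(n, 2)   # jittered sub-pixel offsets
                    samples = [shade_sample(x + dx, y + dy) for dx, dy in offsets]
                    image[y, x] = np.mean(samples, axis=0)
            return image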

    Spatially-encoded far-field representations for interactive walkthroughs


    Algorithms for fast implementation of high efficiency video coding

    Recently, there has been growing demand for video content in multimedia communication, leading to increased storage and bandwidth requirements for internet service providers. Due to this, it became necessary for the telecommunication standardization sector of the International Telecommunication Union (ITU-T) to launch a new video compression standard that would address the twin challenges of lowering both digital file sizes in storage media and transmission bandwidths in networks. The High Efficiency Video Coding (HEVC) standard, also known as H.265, was launched in November 2013 to address these challenges. This new standard cuts existing media file sizes and bandwidths by about 50%, but its computational complexity increases HEVC encoding time by about 400%. This study proposes a solution to the above problem based on three key areas of the HEVC. Firstly, two fast motion estimation algorithms are proposed, based on triangle and pentagon structures, to implement motion estimation and compensation in a shorter time. Secondly, an enhanced and optimized inter-prediction mode selection is proposed. Thirdly, an enhanced intra-prediction mode scheme with reduced latency is suggested. Based on the test model of the HEVC reference software, each individual algorithm reduces the encoding time across all video classes by an average of 20-30%, with a best-case reduction of 70%, at a negligible loss in coding efficiency and video quality. In practice, these algorithms would enhance the performance of the HEVC compression standard and enable higher-resolution, higher-frame-rate video encoding compared to the state-of-the-art technique.
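    As a rough illustration of pattern-based fast motion estimation, the Python sketch below performs block matching with a fixed pentagon of candidate offsets and a sum-of-absolute-differences cost, moving to the best neighbour until no improvement is found. The offset geometry, the termination rule and the function names are assumptions for illustration; they are not the triangle and pentagon algorithms proposed in the thesis.

        import numpy as np

        # Candidate offsets roughly arranged as a pentagon around the current best
        # match; the exact search geometry in the thesis may differ.
        PENTAGON = [(0, -2), (2, -1), (1, 2), (-1, 2), (-2, -1)]

        def sad(block, ref, x, y):
            # Sum of absolute differences between block and the same-sized window
            # of ref whose top-left corner is (x, y).
            h, w = block.shape
            window = ref[y:y + h, x:x + w]
            return np.abs(block.astype(int) - window.astype(int)).sum()

        def pattern_search(block, ref, x0, y0, max_iters=16):
            # Repeatedly move to the lowest-cost pentagon neighbour of the current
            # position; stop when no neighbour improves (illustrative stopping rule).
            h, w = block.shape
            best, best_cost = (x0, y0), sad(block, ref, x0, y0)
            for _ in range(max_iters):
                candidates = []
                for dx, dy in PENTAGON:
                    x, y = best[0] + dx, best[1] + dy
                    if 0 <= x <= ref.shape[1] - w and 0 <= y <= ref.shape[0] - h:
                        candidates.append((sad(block, ref, x, y), (x, y)))
                if not candidates or min(candidates)[0] >= best_cost:
                    break
                best_cost, best = min(candidates)
            # Motion vector relative to the block's original position.
            return best[0] - x0, best[1] - y0, best_cost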

    Model-based motion estimation for synthetic animations

    One approach to performing motion estimation on synthetic animations is to treat them as video sequences and use standard image-based motion estimation methods. Alternatively, we can take advantage of information used in rendering the animation to guide the motion estimation algorithm. This information includes the 3D movements of the objects in the scene and the projection transformations from 3D world space into screen space. In this paper we examine how to use this high-level object motion information to perform fast, accurate block-based motion estimation for synthetic animations. The optical flow field is a 2D vector field describing the translational motion of each pixel from frame to frame. Our motion estimation algorithm first computes the optical flow field, based on the object motion information. We then combine the per-pixel motion information for a block of pixels to create a single 2D projective matrix that best encodes the motion of all the pixels in the block. The entries of the 2D matrix are determined using a least squares formulation. Our algorithms are more accurate and much faster in algorithmic complexity than many image-based motion estimation algorithms.
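    The least-squares step described above can be sketched as follows: given a dense optical flow field and the pixel coordinates of one block, fit a single 3x3 projective matrix with the standard direct linear transform (DLT). This is a generic DLT formulation written in Python for illustration; the paper's actual least-squares setup may differ.

        import numpy as np

        def fit_projective_motion(src_pts, dst_pts):
            # Least-squares fit of a 3x3 projective matrix mapping src_pts (Nx2)
            # to dst_pts (Nx2) using the direct linear transform.
            rows = []
            for (x, y), (u, v) in zip(src_pts, dst_pts):
                rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
                rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
            A = np.asarray(rows, dtype=float)
            # Homogeneous least squares: the right singular vector belonging to
            # the smallest singular value minimises ||A h||.
            _, _, vt = np.linalg.svd(A)
            H = vt[-1].reshape(3, 3)
            return H / H[2, 2]

        def block_motion(xs, ys, flow):
            # Fit one projective matrix to all pixels of a block, where xs and ys
            # are integer pixel coordinates inside the block and flow is a dense
            # (H x W x 2) per-pixel displacement field.
            src = np.stack([xs, ys], axis=1).astype(float)
            dst = src + flow[ys, xs]
            return fit_projective_motion(src, dst)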

    Image synthesis based on a model of human vision

    Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers attend only to certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis presents a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions according to their visual importance. Efficiency gains are therefore reaped without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal: firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, the concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from this method of progressive rendering. This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency as long as the overall visual impression of the scene is maintained. Different aspects of the approach should also find applications in image compression, image retrieval, progressive data transmission and active robotic vision.
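    As one concrete reading of the importance-guided progressive refinement described above, the Python sketch below keeps regions in a priority queue keyed by their importance score and refines the highest-scoring region first until a sample budget is exhausted. The region representation, the refine callback and the score-decay rule are illustrative placeholders, not the thesis's actual rendering algorithm.

        import heapq

        def progressive_refine(regions, importance, refine, budget):
            # regions: iterable of region identifiers
            # importance[r]: visual-importance score for region r
            # refine(r): performs one refinement pass and returns its sample cost
            heap = [(-importance[r], r) for r in regions]   # max-heap via negation
            heapq.heapify(heap)
            spent = 0
            while heap and spent < budget:
                score, region = heapq.heappop(heap)
                spent += refine(region)
                # Re-queue with a decayed priority so other regions get their turn.
                heapq.heappush(heap, (score * 0.5, region))
            return spent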

    Multiple viewpoint rendering for three-dimensional displays

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (leaves 159-164). Author: Michael W. Halle.

    Accelerated MPEG Compression of Dynamic Polygonal Scenes

    This paper describes a methodology for using the matrix-vector multiply and scan-conversion hardware present in many graphics workstations to rapidly approximate the optical flow in a scene. The optical flow is a 2-dimensional vector field describing the on-screen motion of each pixel. An application of the optical flow to MPEG compression is described which results in improved compression with minimal overhead.
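    A rough sketch of the idea in Python: project the scene's vertices with the previous and current frame's model-view-projection matrices (the per-vertex matrix-vector multiply that graphics hardware accelerates), treat the screen-space displacement as an approximate flow, and average it per 16x16 macroblock to seed MPEG motion vectors. The averaging step stands in for the scan-conversion pass used in the paper, and all names here are illustrative.

        import numpy as np

        def project(points_h, mvp, width, height):
            # Transform homogeneous 3D points (Nx4) by a 4x4 model-view-projection
            # matrix and map the result to pixel coordinates.
            clip = points_h @ mvp.T
            ndc = clip[:, :2] / clip[:, 3:4]
            return (ndc * 0.5 + 0.5) * np.array([width, height], dtype=float)

        def macroblock_motion_vectors(points_h, mvp_prev, mvp_curr,
                                      width, height, mb_size=16):
            # Approximate per-macroblock motion vectors from the screen-space
            # displacement of scene vertices between two frames.
            prev_xy = project(points_h, mvp_prev, width, height)
            curr_xy = project(points_h, mvp_curr, width, height)
            flow = prev_xy - curr_xy            # where each point "came from"
            mvs = {}
            for (x, y), d in zip(curr_xy, flow):
                mb = (int(x) // mb_size, int(y) // mb_size)
                mvs.setdefault(mb, []).append(d)
            return {mb: np.mean(ds, axis=0) for mb, ds in mvs.items()}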