
    Visual Importance-Biased Image Synthesis Animation

    Current ray-tracing algorithms are computationally intensive, requiring hours of computing time for complex scenes. Our previous work developed an overall approach for applying visual attention to progressive and adaptive ray-tracing techniques. The approach yields large computational savings by modulating the supersampling rate in each image region according to its visual importance. This paper extends the approach by incorporating temporal changes into the models and techniques developed, as further efficiency savings are expected for animated scenes. Applications of this approach include entertainment, visualisation and simulation.
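As a rough illustration of the idea in this abstract, the sketch below (all names hypothetical, not the paper's actual scheme) modulates per-pixel supersampling rates by a visual-importance map in [0, 1]:

```python
import numpy as np

def samples_per_pixel(importance, min_spp=1, max_spp=16):
    """Assign more rays to visually important pixels.

    `importance` is a per-pixel map in [0, 1]; low-importance regions
    fall back to `min_spp`, which is where the savings come from.
    """
    importance = np.clip(importance, 0.0, 1.0)
    return np.rint(min_spp + importance * (max_spp - min_spp)).astype(int)

# Toy importance map: a radial falloff peaking at the image centre.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
dist = np.hypot(yy - h / 2, xx - w / 2)
importance = 1.0 - dist / dist.max()
spp = samples_per_pixel(importance)
```

In a real renderer the importance map would come from a visual-attention model rather than a fixed radial falloff, but the budget allocation step has the same shape.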

    Offline and Online Optical Flow Enhancement for Deep Video Compression

    Video compression relies heavily on exploiting the temporal redundancy between video frames, which is usually achieved by estimating and using motion information. In most existing deep video compression networks, the motion information is represented as optical flows, and the networks often adopt pre-trained optical flow estimation networks for motion estimation. These optical flows, however, may be less suitable for video compression for two reasons. First, the optical flow estimation networks were trained to perform inter-frame prediction as accurately as possible, but the flows themselves may cost too many bits to encode. Second, the optical flow estimation networks were trained on synthetic data and may not generalize well to real-world videos. We address these two limitations by enhancing the optical flows in two stages: offline and online. In the offline stage, we fine-tune a trained optical flow estimation network with the motion information produced by a traditional (non-deep) video compression scheme, e.g. H.266/VVC, as we believe the motion information of H.266/VVC achieves a better rate-distortion trade-off. In the online stage, we further optimize the latent features of the optical flows with a gradient descent-based algorithm for the specific video to be compressed, so as to enhance the adaptivity of the optical flows. We conduct experiments on a state-of-the-art deep video compression scheme, DCVC. Experimental results demonstrate that the proposed offline and online enhancements together achieve an average bitrate saving of 12.8% on the tested videos, without increasing the model or computational complexity on the decoder side.
    Comment: 9 pages, 6 figures
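The online stage described above can be sketched in miniature. Assuming a toy linear "decoder" and a squared-norm rate proxy (both stand-ins, not the paper's actual DCVC model), the latent features are refined by gradient descent on a rate-distortion objective for one specific input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a fixed linear "decoder" A maps latent
# features y to a reconstructed flow; the bit cost is proxied by ||y||^2.
A = rng.standard_normal((8, 4))
target_flow = rng.standard_normal(8)
lam = 0.1  # rate-distortion trade-off weight

def rd_loss(y):
    distortion = np.sum((A @ y - target_flow) ** 2)
    rate_proxy = np.sum(y ** 2)
    return distortion + lam * rate_proxy

def rd_grad(y):
    # Analytic gradient of the quadratic objective above.
    return 2 * A.T @ (A @ y - target_flow) + 2 * lam * y

# Online refinement: adapt the latent to this particular "video".
y = np.zeros(4)
for _ in range(200):
    y -= 0.01 * rd_grad(y)
```

In the actual method the objective would be the codec's learned rate and distortion terms, and the gradients would come from backpropagation through the (frozen) decoder rather than a closed form.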

    Motion compensated micro-CT reconstruction for in-situ analysis of dynamic processes

    This work presents a framework that exploits the synergy between Digital Volume Correlation (DVC) and iterative CT reconstruction to enhance the quality of high-resolution dynamic X-ray CT (4D-μCT) and to obtain quantitative results from the acquired dataset in the form of 3D strain maps that can be directly correlated to material properties. Furthermore, we show that the developed framework strongly reduces motion artifacts even in a dataset containing only a single 360° rotation.
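The strain maps mentioned above are derived from the DVC displacement field. As a minimal sketch under a small-deformation assumption (the displacement field here is a synthetic 2D example, not DVC output), the strain tensor is the symmetric part of the displacement gradient:

```python
import numpy as np

# Synthetic displacement field u(x, y) on a grid: a uniform 1% stretch
# along x, which should produce a constant strain exx = 0.01.
n = 32
y, x = np.mgrid[0:n, 0:n].astype(float)
ux = 0.01 * x          # displacement component along x
uy = np.zeros((n, n))  # displacement component along y

# Finite-difference displacement gradients (np.gradient: axis 0 = y).
dux_dy, dux_dx = np.gradient(ux)
duy_dy, duy_dx = np.gradient(uy)

# Small-strain tensor: symmetric part of the displacement gradient.
exx = dux_dx
eyy = duy_dy
exy = 0.5 * (dux_dy + duy_dx)
```

Real DVC yields a 3D displacement field, so the same construction would add the z components, but the strain definition is identical.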

    Fast Subpixel Full Search Motion Estimation

    Motion estimation is one of the most important parts of video coding, where only the difference between the current and reference frames is coded by the encoder. There have been many advances in motion estimation techniques. The proposed algorithm provides high-precision matching and reduces errors during compensation. It also reduces computation time compared to traditional block matching techniques. It mainly aims at motion estimation with subpixel accuracy without interpolation, combining block matching with the optical flow method. Fast computation is evaluated by experimental results, while the more accurate motion vectors improve the PSNR.
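A minimal sketch of the combination this abstract describes: integer-pel full-search block matching followed by a Lucas-Kanade-style optical-flow step that yields a subpixel correction without interpolating the reference frame. Function names and the single-step refinement are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def full_search(cur_block, ref, top, left, radius=4):
    """Integer-pel full search: test every candidate within the radius
    and keep the displacement with the lowest sum of absolute differences."""
    bh, bw = cur_block.shape
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + bh, x:x + bw] - cur_block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

def subpixel_refine(cur_block, ref, top, left, mv):
    """One Lucas-Kanade step around the integer-pel match: solve the
    2x2 normal equations of the brightness-constancy constraint to get
    a fractional correction, with no reference-frame interpolation."""
    dy, dx = mv
    bh, bw = cur_block.shape
    patch = ref[top + dy:top + dy + bh, left + dx:left + dx + bw].astype(float)
    gy, gx = np.gradient(patch)           # spatial gradients
    it = patch - cur_block.astype(float)  # temporal difference
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * it), np.sum(gy * it)])
    du, dv = np.linalg.solve(A, b)
    return dy + dv, dx + du

# Toy usage: the current block is the reference shifted by (dy=2, dx=1).
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
top, left = 10, 20
cur_block = ref[top + 2:top + 10, left + 1:left + 9]
mv = full_search(cur_block, ref, top, left)
ry, rx = subpixel_refine(cur_block, ref, top, left, mv)
```

The full search guarantees the best integer-pel match; the optical-flow step then replaces the usual interpolate-and-search half/quarter-pel stage, which is where the computation savings come from.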