Visual Importance-Biased Image Synthesis Animation
Present ray-tracing algorithms are computationally intensive, requiring hours of computing time for complex scenes. Our previous work dealt with the development of an overall approach to applying visual attention to progressive and adaptive ray-tracing techniques. The approach yields large computational savings by modulating the supersampling rates in an image according to the visual importance of the region being rendered. This paper extends the approach by incorporating temporal changes into the models and techniques developed, as further efficiency savings are expected for animated scenes. Applications for this approach include entertainment, visualisation and simulation.
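The core idea of importance-biased supersampling can be sketched in a few lines (the function and parameter names here are illustrative, not the authors' implementation): a saliency map drives how many rays each pixel receives.

```python
import numpy as np

# Sketch of visual-importance-biased supersampling (hypothetical names, not
# the authors' code): each pixel's sample count is scaled by a saliency map
# so that visually important regions receive more rays.

def samples_per_pixel(saliency, min_spp=1, max_spp=16):
    """Map a [0, 1] saliency map to integer supersampling rates."""
    s = np.clip(saliency, 0.0, 1.0)
    return np.rint(min_spp + s * (max_spp - min_spp)).astype(int)

saliency = np.array([[0.0, 0.5], [1.0, 0.25]])
spp = samples_per_pixel(saliency)
print(spp)  # least salient pixel gets 1 sample, the most salient gets 16
```

For an animated extension, the saliency map would itself change over time, and the sample budget could additionally be reduced in regions with little temporal change.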
A novel filter for block-based motion estimation
Noise, in the form of false motion vectors, cannot be avoided when capturing block motion vectors using block-based motion estimation techniques. Similar noise is further introduced when global motion compensation is applied to obtain 'true' object motion from video sequences in which both camera and object motion are present. We observe that the performance of the mean and median filters in removing false motion vectors, for estimating 'true' object motion, is not satisfactory, especially when the object is significantly smaller than the scene. In this paper we introduce a novel filter, named the Mean-Accumulated-Thresholded (MAT) filter, to capture 'true' object motion vectors from video sequences with or without camera motion (zoom and/or pan). Experimental results on representative standard video sequences are included to establish the superiority of our filter over the traditional median and mean filters.
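The MAT filter's details are in the full text; as a point of reference, this sketch shows the component-wise 3x3 median filter over a motion-vector field that such filters are typically compared against when suppressing false motion vectors.

```python
import numpy as np

# Baseline vector-median filtering of a block motion-vector field (the
# comparison filter from the abstract, not the proposed MAT filter itself).

def median_filter_mvs(mv_field):
    """mv_field: (H, W, 2) array of block motion vectors (dy, dx)."""
    h, w, _ = mv_field.shape
    padded = np.pad(mv_field, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.empty_like(mv_field)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3].reshape(-1, 2)
            out[y, x] = np.median(window, axis=0)  # per-component median
    return out

field = np.zeros((3, 3, 2))
field[1, 1] = [10, -10]                # an isolated false motion vector
print(median_filter_mvs(field)[1, 1])  # the outlier is suppressed to [0, 0]
```

The abstract's observation corresponds to the failure mode of this baseline: when a small object's true vectors are themselves a minority in the window, the median suppresses them along with the noise.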
Offline and Online Optical Flow Enhancement for Deep Video Compression
Video compression relies heavily on exploiting the temporal redundancy between video frames, which is usually achieved by estimating and using the motion information. The motion information is represented as optical flows in most existing deep video compression networks, which often adopt pre-trained optical flow estimation networks for motion estimation. The optical flows, however, may be less suitable for video compression for two reasons. First, the optical flow estimation networks were trained to perform inter-frame prediction as accurately as possible, but the optical flows themselves may cost too many bits to encode. Second, the optical flow estimation networks were trained on synthetic data and may not generalize well to real-world videos. We address these two limitations by enhancing the optical flows in two stages: offline and online. In the offline stage, we fine-tune a trained optical flow estimation network with the motion information provided by a traditional (non-deep) video compression scheme, e.g. H.266/VVC, as we believe the motion information of H.266/VVC achieves a better rate-distortion trade-off. In the online stage, we further optimize the latent features of the optical flows with a gradient descent-based algorithm for the video to be compressed, so as to enhance the adaptivity of the optical flows. We conduct experiments on a state-of-the-art deep video compression scheme, DCVC. Experimental results demonstrate that the proposed offline and online enhancements together achieve an average 12.8% bitrate saving on the tested videos, without increasing the model or computational complexity of the decoder side.
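The online stage amounts to per-video latent refinement by gradient descent on a rate-distortion objective. The real method operates on DCVC's flow latents with autodiff; the following is only a toy analogue with a fixed linear "decoder" and an L2 rate proxy, all names being illustrative assumptions.

```python
import numpy as np

# Toy analogue of online latent refinement: minimise
#   L(z) = ||D z - target||^2 + lam * ||z||^2
# over the latent z by plain gradient descent, with D a frozen "decoder".
# (Illustrative stand-in for the paper's autodiff-based optimisation.)

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 4))   # frozen decoder weights
target = rng.standard_normal(8)   # motion signal to reconstruct
lam = 0.01                        # rate-penalty weight
z = np.zeros(4)                   # latent initialised at zero

for _ in range(500):
    grad = 2 * D.T @ (D @ z - target) + 2 * lam * z
    z -= 0.01 * grad              # gradient descent step

loss0 = np.sum(target**2)         # objective at the initial latent (z = 0)
loss = np.sum((D @ z - target)**2) + lam * np.sum(z**2)
print(loss < loss0)  # True: the refined latent lowers the objective
```

Because the refinement runs only at encoding time, the decoder is unchanged, which matches the abstract's claim of no added decoder-side complexity.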
Motion compensated micro-CT reconstruction for in-situ analysis of dynamic processes
This work presents a framework to exploit the synergy between Digital Volume Correlation (DVC) and iterative CT reconstruction to enhance the quality of high-resolution dynamic X-ray CT (4D-µCT) and to obtain quantitative results from the acquired dataset in the form of 3D strain maps that can be directly correlated to the material properties. Furthermore, we show that the developed framework strongly reduces motion artifacts even in a dataset containing a single 360° rotation.
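The compensation step can be pictured as warping each deformed frame back to a reference configuration, using the DVC displacement field, so that data from different motion states reinforce rather than blur. This is only a hedged 2D nearest-neighbour sketch of that idea, not the paper's framework.

```python
import numpy as np

# Hedged sketch of motion compensation with a DVC-style displacement field:
# backward-warp a deformed frame to the reference state before combining it
# with other frames in an iterative reconstruction update.

def warp_to_reference(frame, disp):
    """Nearest-neighbour backward warp of a 2D frame by a (H, W, 2) field."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + disp[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + disp[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

frame = np.zeros((4, 4))
frame[1, 1] = 1.0                  # feature in the deformed frame
disp = np.zeros((4, 4, 2))
disp[..., 1] = 1                   # DVC says: shifted right by one pixel
ref = warp_to_reference(frame, disp)
print(np.argwhere(ref == 1.0))     # feature moved back to its reference spot
```

In the actual framework the warp is 3D and subvoxel-accurate, and it is applied inside the iterative reconstruction loop rather than as a post-processing step.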
Fast Subpixel Full Search Motion Estimation
Motion estimation is one of the most important parts of video coding: only the difference between the current and reference frames is coded by the encoder. Many advancements are being made in motion estimation techniques. The proposed algorithm provides high-precision matching and reduces errors during compensation. It also reduces the computation time compared to traditional block-matching techniques. It mainly aims at motion estimation with subpixel accuracy without interpolation, combining block matching with the optical flow method. Experimental results demonstrate the fast computation, while the motion vectors obtained are more accurate, improving the PSNR.
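Full-search block matching with interpolation-free sub-pixel refinement can be sketched as follows. Here the sub-pixel step uses a parabola fit on the SAD surface, a common interpolation-free technique; the paper's own method uses optical flow for that step, so this is an analogous sketch, not the proposed algorithm.

```python
import numpy as np

# Exhaustive (full-search) block matching by SAD, plus a parabolic fit on
# the cost surface for sub-pixel offsets without interpolating pixel values.
# (The parabola-fit step stands in for the paper's optical-flow refinement.)

def subpixel_offset(sad_left, sad_best, sad_right):
    """1-D parabola through three SAD samples; offset in (-0.5, 0.5)."""
    denom = sad_left - 2 * sad_best + sad_right
    return 0.0 if denom == 0 else 0.5 * (sad_left - sad_right) / denom

def full_search(block, ref, top, left, radius=2):
    """Exhaustive integer-pel search for `block` in `ref` around (top, left)."""
    best, best_dy, best_dx = np.inf, 0, 0
    h, w = block.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                sad = np.abs(ref[y:y + h, x:x + w] - block).sum()
                if sad < best:
                    best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

ref = np.arange(64, dtype=float).reshape(8, 8)
block = ref[2:4, 3:5]                 # block shifted one column to the right
dy, dx = full_search(block, ref, 2, 2)
print(dy, dx)                         # integer motion vector (0, 1)
print(round(subpixel_offset(3.0, 1.0, 2.0), 3))  # 0.167 sub-pixel offset
```

The appeal of the parabola fit is that it reuses SAD values already computed during the full search, so the sub-pixel accuracy comes at almost no extra cost.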