
    Towards Blood Flow in the Virtual Human: Efficient Self-Coupling of HemeLB

    Many scientific and medical researchers are working towards the creation of a virtual human - a personalised digital copy of an individual - that will assist in a patient's diagnosis, treatment and recovery. The complex nature of living systems means that developing such a model remains a major challenge. We describe progress in enabling the HemeLB lattice Boltzmann code to simulate 3D macroscopic blood flow on a full human scale. Significant developments in memory management and load balancing allow near-linear scaling performance of the code on hundreds of thousands of computer cores. Integral to the construction of a virtual human, we also outline the implementation of a self-coupling strategy for HemeLB. This allows simultaneous simulation of arterial and venous vascular trees based on human-specific geometries.
    Comment: 30 pages, 10 figures, to be published in Interface Focus (https://royalsocietypublishing.org/journal/rsfs)
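
    HemeLB itself is a large, MPI-parallel production code, but the kernel it scales is the standard lattice Boltzmann update. As a rough illustration only, the sketch below implements a single-relaxation-time (BGK) D2Q9 step in NumPy; the grid size and relaxation time are placeholder values and bear no relation to HemeLB's D3Q19 lattices, sparse vascular domains or coupling machinery.

```python
# Minimal D2Q9 BGK lattice Boltzmann step (illustrative only; HemeLB uses a
# D3Q19 lattice, sparse vascular geometries and MPI domain decomposition).
import numpy as np

# D2Q9 velocity set and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
nx, ny, tau = 64, 32, 0.8                      # placeholder grid size / relaxation time

def equilibrium(rho, ux, uy):
    """Second-order truncated Maxwell-Boltzmann equilibrium per lattice direction."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux ** 2 + uy ** 2)
    return rho * w[:, None, None] * (1.0 + cu + 0.5 * cu ** 2 - usq)

def step(f):
    rho = f.sum(axis=0)                                  # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho     # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau            # BGK collision
    for i in range(9):                                   # streaming (periodic here)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

# Start at rest with a small density bump so the flow actually evolves.
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
f[:, nx // 2, ny // 2] *= 1.05
for _ in range(200):
    f = step(f)
print("total mass (conserved by collision and streaming):", f.sum())
```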

    Proceedings - IEEE International Conference on Multimedia and Expo

    We propose a method to obtain a high-quality motion field from decoded HEVC motion. We use the block motion vectors to establish a sparse set of correspondences, and then employ an affine, edge-preserving interpolation of correspondences (EPIC) to obtain a dense optical flow. Experimental results on a variety of sequences coded at a range of QP values show that the proposed HEVC-EPIC is over five times as fast as the original EPIC flow, which uses a sophisticated correspondence estimator, while only slightly decreasing the flow accuracy. The proposed work opens the door to leveraging HEVC motion in video enhancement and analysis methods. To provide some evidence of what can be achieved, we show that when used as input to a frame-rate upsampling scheme, the average Y-PSNR of the interpolated frames obtained using HEVC-EPIC motion is only slightly lower (0.2 dB) than when the original EPIC flow is used, with hardly any visible differences.
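
    As a rough illustration of the pipeline described above, the sketch below converts a hypothetical per-block motion-vector grid (as a decoder might expose it) into sparse correspondences at block centres and then densifies them. A plain inverse-distance-weighted fill-in stands in for the edge-preserving EPIC interpolation of the actual method; the function names and block layout are assumptions made for the example.

```python
# Sketch: densify block motion vectors from a decoder into a per-pixel flow.
# A simple inverse-distance-weighted fill-in is used here purely for
# illustration; HEVC-EPIC feeds the sparse correspondences into an
# edge-preserving (EPIC-style) interpolation instead.
import numpy as np

def block_mvs_to_correspondences(mv_grid, block_size=16):
    """mv_grid: (H_b, W_b, 2) motion vectors, one per coded block (assumed layout).
    Returns sparse anchor points at block centres and their motion vectors."""
    hb, wb, _ = mv_grid.shape
    ys, xs = np.mgrid[0:hb, 0:wb]
    pts = np.stack([xs * block_size + block_size // 2,
                    ys * block_size + block_size // 2], axis=-1).reshape(-1, 2)
    return pts.astype(np.float32), mv_grid.reshape(-1, 2).astype(np.float32)

def densify_idw(pts, mvs, height, width, eps=1e-3):
    """Inverse-distance-weighted densification (stand-in for edge-preserving EPIC)."""
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float32)
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)   # squared distances
    wgt = 1.0 / (d2 + eps)
    flow = (wgt @ mvs) / wgt.sum(axis=1, keepdims=True)
    return flow.reshape(height, width, 2)

# Toy example: a 4x4 grid of 16x16 blocks, all moving 2 px right and 1 px down.
mv_grid = np.tile(np.array([2.0, 1.0]), (4, 4, 1))
pts, mvs = block_mvs_to_correspondences(mv_grid)
flow = densify_idw(pts, mvs, 64, 64)
print(flow.mean(axis=(0, 1)))   # ~[2, 1]
```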

    2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP)

    This paper continues our work on occlusion-aware temporal frame interpolation (TFI) that employs piecewise-smooth motion with sharp motion boundaries. In this work, we propose a triangular mesh sparsification algorithm, which allows computational complexity to be traded off against reconstruction quality. Furthermore, we propose a method to create a background motion layer in regions that become disoccluded between the two reference frames; this layer is used to obtain temporally consistent interpolations across the frames interpolated between the two reference frames. Experimental results on a large data set show that the proposed mesh sparsification reduces the processing time by 75%, with a minor drop in PSNR of 0.02 dB. The proposed TFI scheme outperforms various state-of-the-art TFI methods in terms of the quality of the interpolated frames, while having the lowest processing times. Further experiments on challenging synthetic sequences highlight the temporal consistency in traditionally difficult regions of disocclusion.
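
    The following toy sketch conveys only the complexity/quality trade-off behind mesh sparsification: vertices whose motion is already well predicted by their neighbours are pruned. It operates on a regular vertex grid with a made-up tolerance, not on the triangular mesh or the error criterion of the proposed algorithm.

```python
# Toy sketch of motion-mesh sparsification: drop vertices whose motion is
# already well approximated by the average of their 4-neighbours. This only
# illustrates the complexity/quality trade-off; the paper's algorithm works on
# a triangular mesh with a proper reconstruction-error criterion.
import numpy as np

def sparsify_vertex_grid(motion, tol=0.25):
    """motion: (H, W, 2) per-vertex motion field on a regular vertex grid.
    Returns a boolean keep-mask; border vertices are always kept."""
    keep = np.ones(motion.shape[:2], dtype=bool)
    pred = np.zeros_like(motion)
    # Predict each interior vertex from its 4-neighbours.
    pred[1:-1, 1:-1] = 0.25 * (motion[:-2, 1:-1] + motion[2:, 1:-1] +
                               motion[1:-1, :-2] + motion[1:-1, 2:])
    err = np.linalg.norm(motion - pred, axis=-1)
    keep[1:-1, 1:-1] = err[1:-1, 1:-1] > tol   # prune well-predicted vertices
    return keep

# Example: smooth motion everywhere except a small fast-moving patch.
motion = np.full((32, 32, 2), [1.0, 0.0])
motion[10:14, 10:14] = [4.0, 2.0]
keep = sparsify_vertex_grid(motion)
print(f"kept {keep.sum()} of {keep.size} vertices")
```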

    Temporal Frame Interpolation with Motion-divergence-guided Occlusion Handling

    We present a high-quality temporal frame interpolation (TFI) method that employs piecewise-smooth motion, and handles (dis-)occluded regions using the observation that motion discontinuities travel with the foreground object. We derive a "motion discontinuity" likelihood map from the divergence of a motion field between the input frames. Motion which is modelled at the reference frame is mapped to the target frame using a cellular-affine mapping strategy – a process during which regions of disocclusion are readily observed. This information is then used to guide the occlusion-aware, bidirectional frame interpolation process. Furthermore, we propose two computationally inexpensive texture optimizations that selectively improve the quality of the interpolated frames in regions around moving objects. The scheme produces very high-quality interpolated frames, and outperforms current high-quality state-of-the-art TFI schemes by 2-2.5 dB; the method works with a very low-complexity motion estimation scheme, and runs orders of magnitude faster than its competitors.
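
    The divergence cue at the heart of the method lends itself to a short worked example. The sketch below computes div(v) = dvx/dx + dvy/dy of a toy motion field and maps its magnitude to a rough discontinuity score; the exponential mapping and its scale are placeholders, not the likelihood model of the paper.

```python
# Sketch: motion-divergence cue for (dis-)occlusion. Strongly negative
# divergence (converging motion) hints at occlusion, strongly positive
# divergence at disocclusion. The mapping below is a placeholder, not the
# paper's likelihood model.
import numpy as np

def motion_divergence(flow):
    """flow: (H, W, 2) motion field in (dx, dy) order. Returns div(flow)."""
    dvx_dx = np.gradient(flow[..., 0], axis=1)   # d(dx)/dx
    dvy_dy = np.gradient(flow[..., 1], axis=0)   # d(dy)/dy
    return dvx_dx + dvy_dy

def discontinuity_likelihood(flow, scale=2.0):
    """Map |divergence| to a [0, 1) 'motion discontinuity' score."""
    div = motion_divergence(flow)
    return 1.0 - np.exp(-scale * np.abs(div))

# Toy field: background at rest, a foreground square moving 5 px to the right.
flow = np.zeros((64, 64, 2))
flow[20:40, 20:40, 0] = 5.0
lik = discontinuity_likelihood(flow)
print("peak likelihood near the moving edge:", lik.max())
```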

    2019 Picture Coding Symposium (PCS)

    For efficient compression of light fields that involve many views, it has been found preferable to explicitly communicate disparity/depth information at only a small subset of the view locations. In this study, we focus solely on inter-view prediction, which is fundamental to multi-view imagery compression and itself depends upon the synthesis of disparity at new view locations. Current HDCA standardization activities consider a framework known as WaSP, which hierarchically predicts views, independently synthesizing the required disparity maps at the reference views for each prediction step. A potentially better approach is to progressively construct a unified multi-layered base-model for consistent disparity synthesis across many views. This paper improves significantly upon an existing base-model approach, demonstrating superior performance to WaSP. More generally, the paper investigates the implications of texture warping and disparity synthesis methods.
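
    To make the disparity-synthesis step concrete, the sketch below forward-warps a reference disparity map to a target view on the same horizontal camera line, letting larger (nearer) disparities win where pixels collide and leaving disocclusions as holes. This is a deliberately simplified stand-in, not the multi-layered base-model of the paper or the WaSP reference scheme; the baseline parameterisation is an assumption.

```python
# Sketch: forward-warp a reference disparity map to a new view location on the
# same horizontal camera line. Nearer surfaces (larger disparity) overwrite
# farther ones, and unfilled pixels remain as holes (disocclusions).
import numpy as np

def synthesize_disparity(disp_ref, baseline_ratio):
    """disp_ref: (H, W) disparity at the reference view; baseline_ratio is the
    signed horizontal offset of the target view (assumed parameterisation)."""
    h, w = disp_ref.shape
    disp_tgt = np.full((h, w), -1.0)             # -1 marks holes / disocclusions
    order = np.argsort(disp_ref, axis=None)      # far (small disparity) first
    ys, xs = np.unravel_index(order, disp_ref.shape)
    for y, x in zip(ys, xs):
        xt = int(round(x + baseline_ratio * disp_ref[y, x]))
        if 0 <= xt < w:
            disp_tgt[y, xt] = disp_ref[y, x]     # nearer surfaces overwrite farther
    return disp_tgt

# Toy scene: background disparity 2, a nearer rectangle with disparity 8.
disp = np.full((48, 64), 2.0)
disp[16:32, 20:36] = 8.0
warped = synthesize_disparity(disp, baseline_ratio=1.0)
print("hole (disoccluded) pixels:", int((warped < 0).sum()))
```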