
    An Effective Ultrasound Video Communication System Using Despeckle Filtering and HEVC

    The recent emergence of the high-efficiency video coding (HEVC) standard promises to deliver significant bitrate savings over current and prior video compression standards, while also supporting higher resolutions that can meet the clinical acquisition spatiotemporal settings. The effective application of HEVC to medical ultrasound necessitates a careful evaluation of strict clinical criteria that guarantee that clinical quality will not be sacrificed in the compression process. Furthermore, the potential use of despeckle filtering prior to compression provides for the possibility of significant additional bitrate savings that have not been previously considered. This paper provides a thorough comparison of the use of MPEG-2, H.263, MPEG-4, H.264/AVC, and HEVC for compressing atherosclerotic plaque ultrasound videos. For the comparisons, we use both subjective and objective criteria based on plaque structure and motion. For comparable clinical video quality, experimental evaluation on ten videos demonstrates that HEVC reduces bitrate requirements by as much as 33.2% compared to H.264/AVC and up to 71% compared to MPEG-2. The use of despeckle filtering prior to compression is also investigated as a method that can reduce bitrate requirements through the removal of higher frequency components without sacrificing clinical quality. Based on the use of three despeckle filtering methods with both H.264/AVC and HEVC, we find that prior filtering can yield additional significant bitrate savings. The best performing despeckle filter (DsFlsmv) achieves bitrate savings of 43.6% and 39.2% compared to standard nonfiltered HEVC and H.264/AVC encoding, respectively.
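    A minimal sketch of the idea behind despeckle filtering before encoding: DsFlsmv belongs to the family of local-statistics (Lee-type) filters, and the generic formulation below smooths speckle in flat regions while leaving edges largely intact, which removes the high-frequency components mentioned above. The function name, window size and noise-variance estimate are illustrative assumptions, not the paper's exact DsFlsmv definition.

        # Illustrative local-statistics (Lee-type) despeckle filter applied before encoding.
        # The DsFlsmv variant used in the paper may differ in detail; this is a generic sketch.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def despeckle_lsmv(frame, window=5, noise_var=None):
            """Smooth speckle while preserving structure via local mean/variance statistics."""
            f = frame.astype(np.float64)
            local_mean = uniform_filter(f, size=window)
            local_sq_mean = uniform_filter(f * f, size=window)
            local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
            if noise_var is None:
                # crude global noise estimate: median of the local variances (assumption)
                noise_var = np.median(local_var)
            # weighting factor k -> 0 in flat (speckle-dominated) regions, -> 1 near edges
            k = local_var / (local_var + noise_var + 1e-12)
            return local_mean + k * (f - local_mean)

        # Frames filtered this way carry fewer high-frequency speckle components,
        # which is what allows the additional bitrate savings reported above.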

    Multi-Frame Quality Enhancement for Compressed Video

    The past few years have witnessed great success in applying deep learning to enhance the quality of compressed images and video. Existing approaches mainly focus on enhancing the quality of a single frame, ignoring the similarity between consecutive frames. In this paper, we observe that heavy quality fluctuation exists across compressed video frames, and thus low-quality frames can be enhanced using neighboring high-quality frames, an idea we refer to as Multi-Frame Quality Enhancement (MFQE). Accordingly, this paper proposes an MFQE approach for compressed video, as a first attempt in this direction. In our approach, we first develop a Support Vector Machine (SVM) based detector to locate Peak Quality Frames (PQFs) in compressed video. Then, a novel Multi-Frame Convolutional Neural Network (MF-CNN) is designed to enhance the quality of compressed video, in which the non-PQF and its nearest two PQFs serve as the input. The MF-CNN compensates motion between the non-PQF and the PQFs through the Motion Compensation subnet (MC-subnet). Subsequently, the Quality Enhancement subnet (QE-subnet) reduces compression artifacts of the non-PQF with the help of its nearest PQFs. Finally, the experiments validate the effectiveness and generality of our MFQE approach in advancing the state of the art in quality enhancement of compressed video. The code of our MFQE approach is available at https://github.com/ryangBUAA/MFQE.git. (To appear in CVPR 2018.)
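    A simplified stand-in for the frame-pairing step, assuming per-frame quality scores are already available: the paper detects PQFs with an SVM over no-reference features, whereas the sketch below simply treats local quality peaks as PQFs and pairs every non-PQF with its nearest preceding and following PQFs, the triple that is fed to the MF-CNN. All function names are hypothetical.

        # Simplified stand-in for MFQE frame pairing: the paper detects PQFs with an SVM
        # over no-reference features; here PQFs are just local maxima of a quality score.
        from typing import List, Tuple

        def detect_pqfs(quality: List[float]) -> List[int]:
            """Indices of frames whose quality is a local peak (sequence ends may count)."""
            n = len(quality)
            pqfs = []
            for i in range(n):
                left_ok = i == 0 or quality[i] >= quality[i - 1]
                right_ok = i == n - 1 or quality[i] >= quality[i + 1]
                if left_ok and right_ok:
                    pqfs.append(i)
            return pqfs

        def pair_non_pqfs(num_frames: int, pqfs: List[int]) -> List[Tuple[int, int, int]]:
            """(previous PQF, non-PQF, next PQF) triples, the input layout used by the MF-CNN."""
            triples = []
            for i in range(num_frames):
                if i in pqfs:
                    continue
                prev_pqf = max((p for p in pqfs if p < i), default=pqfs[0])
                next_pqf = min((p for p in pqfs if p > i), default=pqfs[-1])
                triples.append((prev_pqf, i, next_pqf))
            return triples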

    Can we ID from CCTV? Image quality in digital CCTV and face identification performance

    CCTV is used for an increasing number of purposes, and the new generation of digital systems can be tailored to serve a wide range of security requirements. However, configuration decisions are often made without considering specific task requirements, e.g. the video quality needed for reliable person identification. Our study investigated the relationship between video quality and the ability of untrained viewers to identify faces from digital CCTV images. The task required 80 participants to identify 64 faces belonging to 4 different ethnicities. Participants compared face images taken from high quality photographs and low quality CCTV stills, which were recorded at 4 different video quality bit rates (32, 52, 72 and 92 kbps). We found that the number of correct identifications decreased by 12 (~18%) as MPEG-4 quality decreased from 92 to 32 kbps, and by 4 (~6%) as Wavelet video quality decreased from 92 to 32 kbps. To achieve reliable and effective face identification, we recommend that MPEG-4 CCTV systems be used over Wavelet, and that video quality not be lowered below 52 kbps during video compression. We discuss the practical implications of these results for security, and contribute a contextual methodology for assessing CCTV video quality.
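    Assuming the reported drops are counted out of the 64 faces in the identification task, the quoted percentages follow directly:

        \frac{12}{64} = 18.75\% \approx 18\%, \qquad \frac{4}{64} = 6.25\% \approx 6\%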

    Semantic multimedia remote display for mobile thin clients

    Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user interactivity handling. The experimental setup considers the Linux X Window System and BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft RDP) for text editing and WWW browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB or SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user event management expressed by network round-trip time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, larger than in the RDP case but reduced by a factor of 1.5 with respect to VNC-HEXTILE.
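    The bandwidth gains come from sending semantic scene-graph updates instead of re-encoded image regions. The sketch below is purely illustrative (it is not the BiFS/LASeR protocol used in the paper): it contrasts the size of a raw raster update for a small dirty region with the size of a single text-replacement message addressing one scene-graph node; all names and message fields are hypothetical.

        # Illustrative contrast between a raster remote display (VNC-style) and a
        # semantic scene-graph update (BiFS/LASeR-style); names here are hypothetical.
        import json

        def raster_update(width, height, bytes_per_pixel=3):
            """Size of resending a dirty screen region as raw pixels (before image coding)."""
            return width * height * bytes_per_pixel

        def semantic_update(node_id, new_text):
            """Size of a semantic update: only the changed scene-graph node is described."""
            message = {"op": "replace_text", "node": node_id, "value": new_text}
            return len(json.dumps(message).encode("utf-8"))

        # Typing one word into a text editor: a 300x40 dirty region vs. one node update.
        print(raster_update(300, 40))                   # 36000 bytes of raw pixels
        print(semantic_update("textbox_12", "hello"))   # a few dozen bytes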

    A reduced-reference perceptual image and video quality metric based on edge preservation

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric that accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback information to the system controller. The original image/video sequence, prior to compression and transmission, is usually not available at the receiver side, so it is important to rely there on an objective video quality metric that requires no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to edge and contour information of an image underpins the proposal of our reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric.
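    A minimal sketch of the edge-preservation idea, assuming full access to both frames: Sobel edge maps are computed for the reference and the distorted image, and the score is the fraction of strong reference edges that survive distortion. The paper's actual metric transmits only a compact reduced-reference feature set rather than full edge maps, and the threshold choice below is an assumption.

        # Illustration of the edge-preservation idea behind the RR metric: compare edge
        # maps of reference and distorted frames. The paper's metric sends a compact
        # reduced-reference feature set instead of full edge maps.
        import numpy as np
        from scipy.ndimage import sobel

        def edge_magnitude(img):
            g = img.astype(np.float64)
            gx = sobel(g, axis=1)
            gy = sobel(g, axis=0)
            return np.hypot(gx, gy)

        def edge_preservation_score(reference, distorted, threshold=None):
            """Fraction of strong reference edges that survive in the distorted frame."""
            ref_edges = edge_magnitude(reference)
            dist_edges = edge_magnitude(distorted)
            if threshold is None:
                threshold = 0.5 * ref_edges.max()   # assumed threshold rule
            strong = ref_edges >= threshold
            if not strong.any():
                return 1.0
            return float(np.mean(dist_edges[strong] >= threshold))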

    Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric

    Compression artifacts removal is a challenging problem because videos can be compressed at different qualities. In this paper, a least squares approach that is self-adaptive to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients, trained beforehand, is selected for optimally removing coding artifacts and reconstructing object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities, in comparison to several other deblocking techniques. The proposed method outperforms the others significantly, both objectively and subjectively.
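    A sketch of the two ingredients, under simplified assumptions: filter coefficients are trained offline by least squares on pairs of decoded patches and co-located original pixels, one coefficient set per quality bin, and at run time the set whose bin matches the no-reference blockiness score is applied. Patch extraction and the blockiness metric itself are omitted; all names are illustrative.

        # Sketch of quality-adaptive least-squares filtering: train one filter per quality
        # bin offline, then pick the filter whose bin matches a no-reference blockiness
        # score at run time. Patch extraction and the blockiness metric are simplified away.
        import numpy as np

        def train_ls_filter(decoded_patches, original_pixels):
            """Least-squares filter coefficients mapping a decoded patch to the clean pixel.

            decoded_patches: (N, K) array of flattened patches from compressed frames.
            original_pixels: (N,) array of co-located pixels from the uncompressed frames.
            """
            coeffs, *_ = np.linalg.lstsq(decoded_patches, original_pixels, rcond=None)
            return coeffs                      # shape (K,)

        def select_filter(blockiness, banks):
            """banks: list of (blockiness_upper_bound, coeffs), sorted by the bound."""
            for upper_bound, coeffs in banks:
                if blockiness <= upper_bound:
                    return coeffs
            return banks[-1][1]

        def apply_filter(patches, coeffs):
            """Filter every patch; each output value estimates the artifact-free pixel."""
            return patches @ coeffs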

    Perceived quality of full HD video - subjective quality assessment

    In recent years, interest in multimedia services has become a global trend, and this trend is still rising. Video quality is a very significant part of the bundle of multimedia services, which leads to a requirement for quality assessment in the video domain. The quality of video streamed across IP networks is generally influenced by two factors: transmission link imperfections and the efficiency of the compression standards. This paper deals with subjective video quality assessment and the impact of the compression standards H.264, H.265 and VP9 on perceived video quality. The evaluation is done for four full HD sequences that differ in content; the distinction is based on the Spatial Information (SI) and Temporal Information (TI) indices of the test sequences. Finally, the experimental results show bitrate reductions of up to 30% for H.265 and VP9 compared with the reference H.264.
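    SI and TI are commonly computed as defined in ITU-T Rec. P.910: SI is the maximum over frames of the spatial standard deviation of the Sobel-filtered frame, and TI is the maximum over frames of the standard deviation of the frame difference. A sketch, assuming grayscale frames as numpy arrays (the paper may use a variant of these definitions):

        # Spatial Information (SI) and Temporal Information (TI) following the common
        # ITU-T P.910 definitions; frames are assumed to be grayscale numpy arrays.
        import numpy as np
        from scipy.ndimage import sobel

        def spatial_information(frames):
            values = []
            for f in frames:
                g = f.astype(np.float64)
                grad = np.hypot(sobel(g, axis=1), sobel(g, axis=0))
                values.append(grad.std())
            return max(values)

        def temporal_information(frames):
            diffs = [
                (b.astype(np.float64) - a.astype(np.float64)).std()
                for a, b in zip(frames[:-1], frames[1:])
            ]
            return max(diffs) if diffs else 0.0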