
    A bit rate adaptation model for 3D video

    Although significant research effort has been devoted to 3 Dimensional (3D) video display, transmission, coding, and related topics, the same cannot be said for 3D video adaptation. Moreover, ambient illumination, spatial resolution, and 3D video content related contexts have not yet been considered jointly for 3D video adaptation in the literature. In this paper, an adaptation decision-taking technique is designed to predict the bit rate of 3D video sequences to be adapted by a proposed adaptation model. The ambient illumination condition of the viewing environment is considered in the proposed technique and model, together with the spatial resolution, video quality, and depth perception related contexts of the 3D video. Experimental results supported by subjective experiments show that the proposed model adapts 3D video sequences efficiently without compromising the users' 3D video perception.
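    As an illustration of the adaptation decision step described above, the sketch below maps viewing-context and content-related features to a target bit rate with a simple linear model. The feature set, weights, and function name are hypothetical placeholders, not the paper's fitted model.

    def predict_adapted_bitrate(ambient_lux, width, height, quality_score, depth_score,
                                weights=(0.002, 1.5e-6, 0.8, 0.6), base_kbps=500.0):
        # Toy adaptation-decision function: combines context features linearly
        # into a target bit rate (kbps). In practice the weights would be fitted
        # to subjective experiment data.
        w_lux, w_res, w_q, w_d = weights
        pixels = width * height
        scale = 1.0 + w_lux * ambient_lux + w_res * pixels + w_q * quality_score + w_d * depth_score
        return base_kbps * scale

    # Example: dim viewing room, 1080p content, mid-range quality and depth targets
    print(predict_adapted_bitrate(ambient_lux=50, width=1920, height=1080,
                                  quality_score=3.5, depth_score=3.0))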

    A novel depth perception prediction metric for advanced multimedia applications

    Ubiquitous multimedia applications permeate our everyday activities and are valued for improving our experiences. The proliferation of multimedia applications that enhance these experiences therefore deserves researchers' close attention. With this motivation, and to remove a possible barrier to the proliferation of 3D video multimedia applications offering enhanced quality of experience (QoE) to end users, an objective metric is proposed in this study. The proposed metric addresses depth perception prediction, the most important aspect of 3D video QoE from the user's point of view. Since the no-reference metric type is the most practical compared to its counterparts, the proposed metric is developed as a no-reference metric. Guided by the view that human visual system related cues are critical for developing accurate metrics, the metric focuses on the association of the z-direction motion and stereopsis depth cues. These cues are derived from depth map contents with stressed significant depth levels. In addition, the results of subjective experiments, currently the "gold standard" for reliable depth perception assessment, are incorporated into the proposed metric. Given the correlation coefficient and root mean square error results obtained with the proposed metric in comparison to quality assessment metrics widely used in the literature, the development of improved 3D video multimedia applications can be accelerated using it.
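    The sketch below shows one simple way the two cues named above could be extracted from a depth map sequence: z-direction motion as the mean absolute temporal change of depth values, and a stereopsis-related cue as the spread of depth levels within a frame. Both definitions are assumptions made for illustration; the paper's exact cue formulations are not reproduced.

    import numpy as np

    def z_motion_and_stereopsis_cues(depth_frames):
        # depth_frames: array-like of shape (T, H, W) holding 8-bit depth maps.
        depth = np.asarray(depth_frames, dtype=np.float32)
        # z-direction motion: average frame-to-frame change of depth values.
        z_motion = float(np.abs(np.diff(depth, axis=0)).mean()) if depth.shape[0] > 1 else 0.0
        # Stereopsis cue: average spread of depth levels inside each frame.
        stereopsis = float(depth.std(axis=(1, 2)).mean())
        return z_motion, stereopsis

    # Example with a synthetic sequence of ten random QCIF-sized depth maps
    print(z_motion_and_stereopsis_cues(np.random.randint(0, 256, (10, 144, 176))))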

    Depth Perception Prediction of 3D Video QoE for Future Internet Services

    32nd International Conference on Information Networking (ICOIN) -- JAN 10-12, 2018 -- Chiang Mai, THAILAND
    3 Dimensional (3D) video Quality of Experience (QoE) metrics are of the utmost importance for enabling the enhancement of Future Internet services. This enhancement can only be supported when the 3D video is characterized as reliably as possible in these metrics. In light of this fact, a QoE metric that includes the significant depth level and the aerial perspective cue is developed to predict the depth perception of 3D video. Considering that the No Reference (NR) metric type is the most efficient compared to the other types (i.e., Full Reference (FR) and Reduced Reference (RR)) in terms of transmission requirements, the proposed metric is developed as an NR metric. Subjective experiments, currently the "gold standard" for reliable depth perception assessment, are used to evaluate the performance of the proposed metric. Given the effectiveness of the performance results, it can be concluded that the advancement of 3D video communication technologies can be ensured to assist Future Internet services in a timely fashion.
    This work has been supported by the Scientific and Technological Research Council of Turkey (TUBITAK), Project Number: 114E551.
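    A minimal sketch of the two features named in this abstract is given below. The significant depth levels are approximated by quantizing the depth map into a few dominant levels, and the aerial perspective cue is approximated by the correlation between depth and local luminance contrast (distant regions of natural scenes tend to show lower contrast). Both proxies, and the window size, are assumptions for illustration, not the paper's definitions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def significant_depth_levels(depth, n_levels=8):
        # Quantize the depth map into a handful of dominant levels.
        edges = np.quantile(depth, np.linspace(0.0, 1.0, n_levels + 1))
        return np.digitize(depth, edges[1:-1])

    def aerial_perspective_cue(luma, depth, win=9):
        # Local contrast via a sliding-window standard deviation, then its
        # correlation with depth; a more negative value means distant regions
        # are less contrasty, i.e. a stronger aerial perspective cue.
        luma = luma.astype(np.float32)
        depth = depth.astype(np.float32)
        local_var = uniform_filter(luma**2, win) - uniform_filter(luma, win)**2
        contrast = np.sqrt(np.clip(local_var, 0.0, None))
        return float(np.corrcoef(depth.ravel(), contrast.ravel())[0, 1])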

    A DEPTH PERCEPTION EVALUATION METRIC FOR IMMERSIVE 3D VIDEO SERVICES

    3DTV Conference - The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) -- JUN 07-09, 2017 -- Copenhagen, DENMARK
    Burgeoning advances in 3 Dimensional (3D) video services have spurred investigations into reliable and competent perceptual 3D video Quality of Experience (QoE) metrics. These investigations can only succeed if they exploit key features that characterize the nature of 3D video. In this paper, a Reduced Reference (RR) metric is developed based on the observation that spatial resolution and perceptually significant depth level are two effective features for evaluating the depth perception of 3D video. To determine the perceptually significant depth levels in the depth map sequences, an abstraction filter is used in the development of the proposed metric. Because depth perception differs significantly for depth map sequences with dissimilar relative depth levels, this feature is also incorporated into the proposed metric through a normalized standard deviation. The Structural SIMilarity (SSIM) metric is utilized to predict the depth perception degraded by changes in the perceptually important levels of compressed depth maps with dissimilar spatial resolutions and relative depth levels. The performance assessment of the proposed RR metric demonstrates its effectiveness for ensuring immersive 3D video services.
    This work has been supported by the Scientific and Technological Research Council of Turkey (TUBITAK), Project Number: 114E551.
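    A sketch of an SSIM-based reduced-reference depth score of the kind described above follows. A bilateral filter stands in for the abstraction filter (it keeps the coarse, perceptually significant depth levels), and the normalized standard deviation of the reference depth map weights the result so sequences with different relative depth ranges are treated differently. The filter choice, parameters, and weighting are assumptions for illustration.

    import cv2
    import numpy as np
    from skimage.metrics import structural_similarity

    def rr_depth_score(ref_depth, dist_depth):
        # ref_depth, dist_depth: 8-bit single-channel depth maps of equal size.
        # Abstraction step: smooth away fine detail while keeping depth levels.
        ref_abs = cv2.bilateralFilter(ref_depth, d=9, sigmaColor=75, sigmaSpace=75)
        dist_abs = cv2.bilateralFilter(dist_depth, d=9, sigmaColor=75, sigmaSpace=75)
        # Structural similarity between the abstracted depth maps.
        ssim = structural_similarity(ref_abs, dist_abs, data_range=255)
        # Normalized spread of depth levels in the reference depth map.
        rel_depth = float(np.std(ref_depth)) / 255.0
        return ssim * (0.5 + 0.5 * rel_depth)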

    Scene Detection via Depth Maps Of 3 Dimensional Videos

    25th Signal Processing and Communications Applications Conference (SIU) -- MAY 15-18, 2017 -- Antalya, TURKEY
    Scene detection by processing multimedia data is a significant research area for the advancement of video technologies and applications. Currently, scene detection is mostly performed manually, which is time consuming and costly. It is therefore important to develop algorithms that can segment scenes automatically to support the advancement of these technologies and applications. With the widespread use of 3 Dimensional (3D) videos, researchers working on video scene detection have started using them in this field as well. However, there is still a gap in applying scene detection algorithms to Depth Maps (DMs), which are part of the 3D video and important for temporal video scene detection. In this study, a method based on dominant clusters and K-means clustering is proposed to detect temporal 3D video segments using the DMs. Experiments with the proposed scene detection method show that video scenes can be edited efficiently without human assistance. Moreover, unlike similar studies in the literature, the proposed method provides successful results on video sequences thanks to the dominant clusters and the K-means clustering approach.
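    A simplified reading of the clustering idea is sketched below: each frame's depth map is summarized by a histogram, the histograms are clustered with K-means, and a scene boundary is marked wherever consecutive frames fall into different clusters. The histogram features, cluster count, and boundary rule are assumptions for illustration, not the paper's exact dominant-cluster procedure.

    import numpy as np
    from sklearn.cluster import KMeans

    def detect_scene_changes(depth_frames, n_clusters=4, n_bins=32):
        # depth_frames: iterable of 2-D 8-bit depth maps.
        hists = np.array([np.histogram(f, bins=n_bins, range=(0, 255), density=True)[0]
                          for f in depth_frames])
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(hists)
        # A boundary is declared wherever the cluster label changes between frames.
        return [i + 1 for i in range(len(labels) - 1) if labels[i] != labels[i + 1]]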

    No-Reference Evaluation of 3 Dimensional Video Quality Using Spatial and Frequency Domain Components

    26th IEEE Signal Processing and Communications Applications Conference (SIU) -- MAY 02-05, 2018 -- Izmir, TURKEY
    Video Quality Assessment (VQA) plays an important role both in evaluating the performance of the transmitter-receiver system and in delivering the video efficiently via the feedback it provides to the transmitter side. The Full Reference (FR) VQA metrics currently used in the literature are impractical in real applications because they require the original video sequence at the receiver side. Researchers have therefore recently turned to developing Reduced Reference (RR) or No-Reference (NR) VQA metrics. In this paper, an NR VQA metric is developed that considers the spatial and frequency domain components of the color and depth map based 3 Dimensional (3D) video that are important to the Human Visual System (HVS). The Canny operator, an efficient edge extraction algorithm, is used to obtain the components in the spatial domain, and the Discrete Cosine Transform (DCT) is exploited to obtain the components in the frequency domain. The results show that the proposed algorithm is capable of superseding the FR metrics existing in the literature.
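    The sketch below extracts a feature pair of the kind this abstract describes from a single frame: edge density from the Canny operator (spatial domain) and a high-frequency energy ratio from the DCT (frequency domain), applicable to both the color view and the depth map. The thresholds, the frequency split, and the pooling into a final score are assumptions; the paper's actual feature definitions and combination are not reproduced.

    import cv2
    import numpy as np
    from scipy.fft import dctn

    def spatial_frequency_features(image):
        # image: 8-bit grayscale or BGR frame (color view or depth map).
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
        # Spatial-domain component: density of Canny edge pixels.
        edge_density = cv2.Canny(gray, 100, 200).mean() / 255.0
        # Frequency-domain component: share of energy in the high-frequency
        # quadrant of the 2-D DCT.
        coeffs = dctn(gray.astype(np.float32) / 255.0, norm='ortho')
        h, w = coeffs.shape
        high = np.abs(coeffs[h // 2:, w // 2:]).sum()
        return edge_density, float(high / (np.abs(coeffs).sum() + 1e-8))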

    A depth perception evaluation metric for immersive user experience towards 3D multimedia services

    The interest of users in three-dimensional (3D) video is gaining momentum thanks to recent breakthroughs in 3D video entertainment, education, networking, and related technologies. To speed up the advancement of these technologies, monitoring the quality of experience of 3D video, which focuses on the end user's point of view rather than service-oriented provisions, has become a central concern among researchers. Given the stereoscopic viewing ability of the human visual system (HVS), depth perception evaluation of 3D video can be considered one of the most critical parts of this concern. Because the literature lacks efficient and widely adopted objective metrics, depth perception assessment can currently only be ensured by cost- and time-intensive subjective measurements. Therefore, a no-reference objective metric, which is especially effective for on-the-fly depth perception assessment, is developed in this paper. Three proposed algorithms (i.e., z-direction motion, structural average depth, and depth deviation), each significant for how the HVS perceives the depth of 3D video, are integrated in the proposed metric. Considering the outcomes of the proposed metric, the provision of a better 3D video experience to end users can be accelerated in a timely fashion for Future Internet multimedia services.

    Blind video quality assessment via spatiotemporal statistical analysis of adaptive cube size 3D-DCT coefficients

    There is an urgent need for a robust video quality assessment (VQA) model that can efficiently evaluate the quality of video content varying in distortion and content type in the absence of the reference video. Considering this need, a novel no-reference (NR) model relying on the spatiotemporal statistics of the distorted video in the three-dimensional (3D) discrete cosine transform (DCT) domain is proposed in this study. While developing the model, as the first contribution, the video contents are adaptively segmented into cubes of different sizes and spatiotemporal contents in line with human visual system (HVS) properties, and the 3D-DCT is applied to these cubes. As the second contribution, several efficient features (i.e., spectral behaviour, energy variation, distances between spatiotemporal frequency bands, and DC variation) associated with the contents of these cubes are extracted. These features are then associated with the subjective experimental results obtained from the EPFL-PoliMi video database using linear regression analysis to build the model. The evaluation results show that the proposed model, unlike many top-performing NR-VQA models (e.g., V-BLIINDS, VIIDEO, and SSEQ), achieves high and stable performance across videos with different contents and distortions.
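    The pipeline described above can be pictured with the sketch below: the video is split into cubes (fixed-size here, whereas the paper sizes them adaptively), a 3D-DCT is applied to each cube, two simple spectral statistics are pooled over the clip, and a linear regression maps the pooled features to subjective scores. The cube size, the two statistics, and the placeholder scores are assumptions for illustration; they stand in for the paper's adaptive segmentation, richer feature set, and the EPFL-PoliMi ground truth.

    import numpy as np
    from scipy.fft import dctn
    from sklearn.linear_model import LinearRegression

    def cube_dct_features(video, cube=8):
        # video: grayscale clip as an array of shape (T, H, W), values in [0, 1].
        T, H, W = (s - s % cube for s in video.shape)
        feats = []
        for t in range(0, T, cube):
            for y in range(0, H, cube):
                for x in range(0, W, cube):
                    c = dctn(video[t:t+cube, y:y+cube, x:x+cube].astype(np.float32),
                             norm='ortho')
                    dc = abs(float(c[0, 0, 0]))                    # DC magnitude of the cube
                    ac = float((c ** 2).sum() - c[0, 0, 0] ** 2)   # AC energy of the cube
                    feats.append((dc, ac))
        return np.array(feats).mean(axis=0)                        # pool cube features over the clip

    # Fit a linear model from pooled features to subjective scores
    # (random clips and placeholder MOS values instead of a real database).
    X = np.array([cube_dct_features(np.random.rand(16, 64, 64)) for _ in range(5)])
    y = np.array([4.2, 3.1, 2.5, 3.8, 1.9])
    model = LinearRegression().fit(X, y)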

    DEPTH PERCEPTION PREDICTION OF 3D VIDEO FOR ENSURING ADVANCED MULTIMEDIA SERVICES

    3DTV Conference - The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) -- JUN 03-05, 2018 -- Helsinki, FINLAND
    A key role in the advancement of 3 Dimensional (3D) TV services is played by the development of 3D video quality metrics used to assess the perceived quality. This key role can only be fulfilled when the features associated with the nature of 3D video are characterized reliably and efficiently in these metrics. In this study, z-direction motion combined with significant depth levels in depth map sequences is considered the main characterization of the 3D nature. 3D video quality metrics can be classified into three categories based on the need for the reference video during the assessment process at the user end: Full Reference (FR), Reduced Reference (RR), and No Reference (NR). In this study we propose an NR quality metric, PNRM, suitable for on-the-fly 3D video services. To evaluate the reliability and effectiveness of the proposed metric, subjective experiments are conducted. Observing the high correlation with the subjective experimental results, it can be stated that the proposed metric is able to mimic the Human Visual System (HVS).
    This work has been supported by the Scientific and Technological Research Council of Turkey (TUBITAK), Project Number: 114E551.

    An Abstraction and Structural Information Based Depth Perception Evaluation Metric

    25th Signal Processing and Communications Applications Conference (SIU) -- MAY 15-18, 2017 -- Antalya, TURKEY
    Developing reliable and efficient 3 Dimensional (3D) video depth perception evaluation metrics is currently a trending research topic in support of the advancement of 3D video services. This support can be strengthened by utilizing effective 3D video features when modeling these metrics. In this study, a Reduced Reference (RR) depth perception evaluation metric is developed that uses the significant depth level and structural information as effective 3D video features. The significant depth level and the structural information in the Depth Maps (DMs) are determined using an abstraction filter and the Canny edge detection algorithm, respectively. The performance assessment results of the proposed RR metric show that it is quite effective for ensuring advanced 3D video services.