82 research outputs found

    Integration of Multisensorial Stimuli and Multimodal Interaction in a Hybrid 3DTV System

    This article proposes the integration of multisensorial stimuli and multimodal interaction components into a sports multimedia asset along two dimensions: immersion and interaction. The first dimension comprises a binaural audio system and a set of sensory effects synchronized with the audiovisual content, whereas the second explores interaction through the insertion of interactive 3D objects into the main screen and the on-demand presentation of additional information on a second touchscreen. We present an end-to-end solution integrating these components into a hybrid (internet-broadcast) television system using current 3DTV standards. Results from an experimental study analyzing the perceived quality of these stimuli and their influence on the Quality of Experience are presented.

    Quality of experience for 3-d immersive media streaming

    Recent advances in media capture and processing technologies have enabled new forms of true 3-D media content that increase the degree of user immersion. The demand for more engaging forms of entertainment means that content distributors and broadcasters need to fine-tune their delivery mechanisms over the Internet as well as develop new models for quantifying and predicting user experience of these new forms of content. In the work described in this paper, we undertake one of the first studies into the quality of experience (QoE) of real-time 3-D media content streamed to virtual reality (VR) headsets for entertainment purposes, in the context of game spectating. Our focus is on tele-immersive media that embed real users within the virtual environments of interactive games. A key feature of engaging and realistic experiences in full 3-D media environments is allowing users unrestricted viewpoints. However, this comes at the cost of increased network bandwidth and the need to limit network effects in order to transmit a realistic, real-time representation of the participants. The visual quality of 3-D media is affected by geometry and texture parameters, while the temporal aspects of smooth movement and synchronization are affected by lag introduced by network transmission effects. In this paper, we investigate varying network conditions for a set of tele-immersive media sessions produced at a range of visual quality levels. Further, we investigate user navigation issues that inhibit free-viewpoint VR spectating of live 3-D media. After reporting on a study with multiple users, we analyze the results and assess the overall QoE with respect to a range of visual quality and latency parameters. We propose a neural network QoE prediction model for 3-D media, constructed from a combination of visual and network parameters.
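    A model of the kind described above maps visual quality and network parameters to a predicted opinion score. The following is a minimal illustrative sketch of that idea, not the paper's trained model: the architecture, feature normalization, and all weights are hypothetical.

    ```python
    # Illustrative feedforward QoE predictor combining visual and network
    # parameters, in the spirit of the model described above.
    # All weights and the latency normalization are hypothetical.
    import math

    def predict_qoe(geometry_quality, texture_quality, latency_ms, weights):
        """Map normalized visual quality levels (0..1) and latency (ms)
        to a predicted mean opinion score on a 1..5 scale."""
        # Higher delay should lower the predicted QoE.
        latency_norm = 1.0 / (1.0 + latency_ms / 100.0)
        features = [geometry_quality, texture_quality, latency_norm]
        # One hidden layer with tanh activation.
        hidden = [math.tanh(sum(w * f for w, f in zip(row, features)) + b)
                  for row, b in weights["hidden"]]
        out = sum(w * h for w, h in zip(weights["out"], hidden)) + weights["out_bias"]
        # Squash the output to the 1..5 MOS range.
        return 1.0 + 4.0 / (1.0 + math.exp(-out))

    # Hypothetical weights, for illustration only (a real model would be
    # trained on subjective-study data).
    WEIGHTS = {
        "hidden": [([1.2, 0.8, 1.5], -0.5), ([0.5, 1.1, 0.9], -0.2)],
        "out": [1.0, 0.7],
        "out_bias": -0.3,
    }

    good = predict_qoe(0.9, 0.9, 20, WEIGHTS)    # high quality, low latency
    bad = predict_qoe(0.3, 0.3, 500, WEIGHTS)    # low quality, high latency
    ```

    Even with made-up weights, the sketch shows the shape of the model: monotone in visual quality, decreasing in latency, bounded to the MOS scale.
    
    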

    Human-centric quality management of immersive multimedia applications

    Augmented Reality (AR) and Virtual Reality (VR) multimodal systems are the latest trend within the field of multimedia. As they emulate the senses by means of omni-directional visuals, 360-degree sound, motion tracking and touch simulation, they are able to create a strong feeling of presence and interaction with the virtual environment. These experiences can be applied to virtual training (Industry 4.0), tele-surgery (healthcare) or remote learning (education). However, given the strong time and task sensitiveness of these applications, it is of great importance to sustain the end-user quality, i.e., the Quality of Experience (QoE), at all times. Lack of synchronization and quality degradation need to be reduced to a minimum to avoid feelings of cybersickness or loss of immersiveness and concentration. This means that there is a need to shift quality management from system-centered performance metrics towards a more human, QoE-centered approach. However, this requires novel techniques in the three areas of the QoE-management loop (monitoring, modelling and control). This position paper identifies open areas of research to fully enable human-centric management of immersive multimedia. To this extent, four main dimensions are put forward: (1) Task and well-being driven subjective assessment; (2) Real-time QoE modelling; (3) Accurate viewport prediction; (4) Machine Learning (ML)-based quality optimization and content recreation. This paper discusses the state of the art and provides possible solutions to tackle the open challenges.
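    For the "accurate viewport prediction" dimension mentioned above, a common baseline is to extrapolate the user's head orientation from recent samples. The sketch below shows simple linear extrapolation of yaw; the function name and sample trace are hypothetical, and real systems use richer models (e.g., ML-based ones, as the paper suggests).

    ```python
    # Minimal viewport-prediction baseline: linearly extrapolate the head's
    # yaw angle from the last two orientation samples.
    # The sample trace below is hypothetical.

    def predict_yaw(samples, horizon_ms):
        """Extrapolate a future yaw angle (degrees) from timestamped samples.

        samples: chronological list of (timestamp_ms, yaw_deg), length >= 2.
        horizon_ms: how far ahead to predict, in milliseconds.
        """
        (t0, y0), (t1, y1) = samples[-2], samples[-1]
        velocity = (y1 - y0) / (t1 - t0)   # degrees per millisecond
        predicted = y1 + velocity * horizon_ms
        return predicted % 360.0           # wrap to [0, 360)

    # A user turning steadily right at 30 degrees per second.
    trace = [(0, 10.0), (100, 13.0)]
    ```

    Predicting 200 ms ahead lets the streaming pipeline prefetch and prioritize the tiles the user is about to look at, trading prediction error against bandwidth savings.
    
    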

    Quality of experience in telemeetings and videoconferencing: a comprehensive survey

    Telemeetings such as audiovisual conferences or virtual meetings play an increasingly important role in our professional and private lives. For that reason, system developers and service providers will strive for an optimal experience for the user, while at the same time optimizing technical and financial resources. This leads to the discipline of Quality of Experience (QoE), an active field originating from the telecommunication and multimedia engineering domains, that strives for understanding, measuring, and designing the quality of experience with multimedia technology. This paper provides the reader with an entry point to the large and still growing field of QoE of telemeetings, by taking a holistic perspective, considering both technical and non-technical aspects, and by focusing on current and near-future services. Addressing both researchers and practitioners, the paper first provides a comprehensive survey of factors and processes that contribute to the QoE of telemeetings, followed by an overview of relevant state-of-the-art methods for QoE assessment. To embed this knowledge into recent technology developments, the paper continues with an overview of current trends, focusing on the field of eXtended Reality (XR) applications for communication purposes. Given the complexity of telemeeting QoE and the current trends, new challenges for QoE assessment of telemeetings are identified. To overcome these challenges, the paper presents a novel Profile Template for characterizing telemeetings from the holistic perspective endorsed in this paper.

    "I'm the Jedi!" - A Case Study of User Experience in 3D Tele-immersive Gaming

    In this paper, we present the results from a quantitative and qualitative study of distributed gaming in 3D tele-immersive (3DTI) environments. We explore the Quality of Experience (QoE) of users in the new cyber-physical gaming environment. Guided by a theoretical QoE model, we conducted a case study and evaluated the impact of various Quality of Service (QoS) metrics (e.g., end-to-end delay, visual quality, etc.) on the 3DTI gaming experience. We also identified a number of non-technical factors that are not captured by the original theoretical model, such as age, social interaction, and physical setup. Our analysis highlights new implications for next-generation gaming system design, as well as a more comprehensive conceptual framework that captures non-technical influences on user experience in such environments.

    Immersive interconnected virtual and augmented reality : a 5G and IoT perspective

    Despite remarkable advances, current augmented and virtual reality (AR/VR) applications are a largely individual and local experience. Interconnected AR/VR, where participants can virtually interact across vast distances, remains a distant dream. The great barrier that stands between current technology and such applications is the stringent end-to-end latency requirement, which should not exceed 20 ms in order to avoid motion sickness and other discomforts. Bringing AR/VR to the next level to enable immersive interconnected AR/VR will require significant advances towards 5G ultra-reliable low-latency communication (URLLC) and a Tactile Internet of Things (IoT). In this article, we articulate the technical challenges to enable a future AR/VR end-to-end architecture that combines 5G URLLC and Tactile IoT technology to support this next generation of interconnected AR/VR applications. Through the use of IoT sensors and actuators, AR/VR applications will be aware of the environmental and user context, supporting human-centric adaptations of the application logic, and lifelike interactions with the virtual environment. We present potential use cases and the required technological building blocks. For each of them, we delve into the current state of the art and the challenges that need to be addressed before the dream of remote AR/VR interaction can become reality.
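    The 20 ms end-to-end requirement quoted above is a budget that every pipeline stage must share. A back-of-the-envelope check like the one below makes the constraint concrete; the stage names and per-stage latencies are hypothetical, chosen only to illustrate how quickly the budget is consumed.

    ```python
    # Back-of-the-envelope check of the end-to-end motion-to-photon budget
    # (<= 20 ms, as stated above). Component values are hypothetical.

    BUDGET_MS = 20.0

    def within_budget(components):
        """components: dict mapping pipeline stage -> latency in ms.
        Returns (total latency, whether it fits the 20 ms budget)."""
        total = sum(components.values())
        return total, total <= BUDGET_MS

    # A hypothetical breakdown of a remote AR/VR pipeline.
    pipeline = {
        "sensor_sampling": 2.0,
        "encoding": 4.0,
        "network_uplink": 3.0,
        "rendering": 6.0,
        "display_scanout": 4.0,
    }
    ```

    With only 3 ms left for the network leg in this breakdown, it becomes clear why interconnecting distant participants pushes the requirement onto 5G URLLC rather than best-effort transport.
    
    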

    Semantics-aware content delivery framework for 3D Tele-immersion

    3D Tele-immersion (3DTI) technology allows full-body, multimodal interaction among geographically dispersed users, which opens a variety of possibilities in cyber collaborative applications such as art performance, exergaming, and physical rehabilitation. However, along with its great potential, the resource and quality demands of 3DTI rise inevitably, especially when advanced applications target resource-limited computing environments with stringent scalability demands. Under these circumstances, the tradeoffs between 1) resource requirements, 2) content complexity, and 3) user satisfaction in the delivery of 3DTI services are magnified. In this dissertation, we argue that these tradeoffs of 3DTI systems are actually avoidable when the underlying delivery framework of 3DTI takes semantic information into consideration. We introduce the concept of semantic information into 3DTI, which encompasses information about three factors: environment, activity, and user role in 3DTI applications. With semantic information, 3DTI systems are able to 1) identify the characteristics of the computing environment so as to allocate computing power and bandwidth to the delivery of prioritized contents, 2) pinpoint and discard dispensable content in activity capturing according to the properties of the target application, and 3) differentiate contents by their contribution to fulfilling the objectives and expectations of the user's role in the application, so that the adaptation module can allocate the resource budget accordingly. With these capabilities we can change the tradeoffs into synergy between resource requirements, content complexity, and user satisfaction. We implement semantics-aware 3DTI systems to verify the performance gain across the three phases of the 3DTI delivery chain: the capturing phase, the dissemination phase, and the receiving phase.
    By introducing semantic information into distinct 3DTI systems, the efficiency improvements brought by our semantics-aware content delivery framework are validated under different application requirements, different scalability bottlenecks, and different user and application models. To sum up, in this dissertation we aim to change the tradeoff between requirements, complexity, and satisfaction in 3DTI services by exploiting semantic information about the computing environment, the activity, and the user role within the underlying delivery systems of 3DTI. The devised mechanisms will enhance the efficiency of 3DTI systems serving different purposes and supporting 3DTI applications with different computation and scalability requirements.
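    The first capability above, allocating bandwidth to prioritized contents, can be sketched as a greedy admission policy: streams are ranked by their semantic priority and admitted until the budget runs out. The function, stream names, priorities, and rates below are hypothetical illustrations, not the dissertation's actual mechanism.

    ```python
    # Illustrative sketch of semantics-driven resource allocation: streams
    # are ranked by their contribution to the user's role/activity and the
    # bandwidth budget is filled greedily.
    # Stream names, priorities, and rates are hypothetical.

    def allocate_bandwidth(streams, budget_kbps):
        """Select which content streams to deliver within a bandwidth budget.

        streams: list of (name, priority, rate_kbps); higher priority wins.
        Returns the names of the admitted streams, highest priority first.
        """
        admitted = []
        remaining = budget_kbps
        for name, _priority, rate in sorted(streams, key=lambda s: -s[1]):
            if rate <= remaining:
                admitted.append(name)
                remaining -= rate
        return admitted

    # Hypothetical session: the active performer's mesh matters more to the
    # user's role than the audience or the static background.
    session = [
        ("presenter_mesh", 10, 4000),
        ("audience_mesh", 5, 3000),
        ("background", 1, 2000),
    ]
    ```

    Under a tight budget the low-priority background stream is dropped first, which is exactly the "discard the dispensable content" behavior the semantic layer is meant to enable.
    
    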

    Toward hyper-realistic and interactive social VR experiences in live TV scenarios

    Social Virtual Reality (VR) allows multiple distributed users to get together in shared virtual environments to socially interact and/or collaborate. This article explores the applicability and potential of Social VR in the broadcast sector, focusing on a live TV show use case. For such a purpose, a novel and lightweight Social VR platform is introduced. The platform provides three key features compared to state-of-the-art solutions. First, it allows real-time integration of remote users into shared virtual environments, using realistic volumetric representations and affordable capturing systems, thus not relying on synthetic avatars. Second, it supports a seamless and rich integration of heterogeneous media formats, including 3D scenarios, dynamic volumetric representations of users and (live/stored) stereoscopic 2D and 180º/360º videos. Third, it enables low-latency interaction between the volumetric users and a video-based presenter (Chroma keying), and dynamic control of the media playout to adapt to the session's evolution. The production process of an immersive TV show used to evaluate the experience is also described. On the one hand, results from objective tests show the satisfactory performance of the platform. On the other hand, promising results from user tests support the potential impact of the presented platform, opening up new opportunities in the broadcast sector, among others.

    Inter-Destination Multimedia Synchronization: Schemes, Use Cases and Standardization

    Traditionally, the media consumption model has been a passive and isolated activity. However, the advent of media streaming technologies, interactive social applications, and synchronous communications, as well as the convergence between these three developments, points to an evolution towards dynamic shared media experiences. In this new model, geographically distributed groups of consumers, independently of their location and the nature of their end-devices, can be immersed in a common virtual networked environment in which they can share multimedia services, interact and collaborate in real time within the context of simultaneous media content consumption. In most of these multimedia services and applications, apart from the well-known intra- and inter-stream synchronization techniques that are important inside the consumers' playout devices, the synchronization of playout processes across several distributed receivers, known as multipoint, group or inter-destination multimedia synchronization (IDMS), also becomes essential. Due to the increasing popularity of social networking, this type of multimedia synchronization has gained in popularity in recent years. Although Social TV is perhaps the most prominent use case in which IDMS is useful, in this paper we present up to 19 use cases for IDMS, each one having its own synchronization requirements. Different approaches used in the (recent) past by researchers to achieve IDMS are described and compared. As further proof of the significance of IDMS nowadays, standardization efforts on IDMS by relevant organizations (such as ETSI TISPAN and the IETF AVTCORE Group), in which the authors have been participating actively, defining architectures and protocols, are summarized.
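    The core IDMS idea surveyed above can be sketched in a few lines: receivers report their current playout points to a synchronization entity, which picks a reference (here, the most lagged receiver) and tells each client how far it must adjust its playout. The receiver names and positions below are hypothetical, and real schemes (e.g., the RTCP-based approach mentioned in the abstract) add clock synchronization and smooth playout adaptation on top.

    ```python
    # Minimal sketch of inter-destination media synchronization (IDMS):
    # align all receivers' playout points on a common reference.
    # Receiver ids and playout positions are hypothetical.

    def idms_offsets(reports):
        """reports: dict mapping receiver id -> current playout position (ms).

        Returns per-receiver offsets (ms) to align everyone on the most
        lagged receiver (synchronizing "down" avoids skipping content)."""
        reference = min(reports.values())
        return {rid: pos - reference for rid, pos in reports.items()}

    # Three viewers watching the same stream, slightly out of sync.
    reports = {"alice": 12_500, "bob": 12_380, "carol": 12_410}
    offsets = idms_offsets(reports)
    # Each receiver should slow down (or pause) by its offset, e.g. via
    # adaptive media playout, rather than hard-seeking.
    ```

    Choosing the most lagged receiver as reference is only one policy; schemes in the literature also synchronize to the mean, to a master receiver, or to a virtual reference clock.
    
    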