
    360° mulsemedia experience over next generation wireless networks - a reinforcement learning approach

    The next generation of wireless networks targets ambitious key performance indicators, such as very low latency, higher data rates and more capacity, paving the way for new generations of video streaming technologies, such as 360° or omnidirectional video. One application that could revolutionize streaming technology is 360° MULtiple SEnsorial MEDIA (MULSEMEDIA), which enriches 360° video content with other media objects, such as olfactory, haptic or even thermoceptic ones. However, the adoption of 360° Mulsemedia applications might be hindered by their strict Quality of Service (QoS) requirements, such as very large bandwidth and low latency for fast responsiveness to users' inputs, which could impact their Quality of Experience (QoE). To this end, this paper introduces the new concept of 360° Mulsemedia and proposes the use of Reinforcement Learning to enable QoS provisioning over next generation wireless networks, which in turn influences the QoE of the end users.
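The abstract does not disclose the paper's algorithm, but the general idea of reinforcement-learning-based QoS provisioning can be illustrated with a minimal tabular Q-learning sketch. Everything below is an assumption for illustration: the bandwidth states, quality levels and reward shaping are hypothetical, not taken from the paper.

```python
import random

# Hypothetical sketch: a tabular Q-learning agent that picks a 360° stream
# quality level under a coarsely observed bandwidth state. All constants
# (qualities, bandwidths, reward shaping) are illustrative assumptions.

QUALITIES = [5, 10, 20, 40]    # Mbps needed per quality level (assumed)
BANDWIDTHS = [10, 25, 50]      # observed-bandwidth states in Mbps (assumed)
ALPHA, GAMMA, EPSILON = 0.05, 0.9, 0.1

def reward(quality_idx, bandwidth):
    """Reward higher quality; penalise exceeding the available bandwidth."""
    need = QUALITIES[quality_idx]
    return quality_idx - (5 if need > bandwidth else 0)

def train(episodes=20000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0
         for s in range(len(BANDWIDTHS)) for a in range(len(QUALITIES))}
    for _ in range(episodes):
        s = rng.randrange(len(BANDWIDTHS))
        # epsilon-greedy action selection
        if rng.random() < EPSILON:
            a = rng.randrange(len(QUALITIES))
        else:
            a = max(range(len(QUALITIES)), key=lambda x: q[(s, x)])
        r = reward(a, BANDWIDTHS[s])
        s2 = rng.randrange(len(BANDWIDTHS))  # next bandwidth state (random here)
        best_next = max(q[(s2, x)] for x in range(len(QUALITIES)))
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
    return q

q = train()
# Greedy policy: for each bandwidth state, the quality index the agent prefers.
policy = {BANDWIDTHS[s]: max(range(len(QUALITIES)), key=lambda a: q[(s, a)])
          for s in range(len(BANDWIDTHS))}
print(policy)
```

Under this toy reward, the learned greedy policy selects the highest quality level whose bitrate fits the observed bandwidth; a real system would add latency, haptic/olfactory synchronisation and QoE feedback to the state and reward.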

    Do I smell coffee? The tale of a 360º Mulsemedia experience

    One of the main challenges in current multimedia networking environments is to find solutions that help accommodate the next generation of mobile application classes with stringent Quality of Service (QoS) requirements whilst enabling Quality of Experience (QoE) provisioning for users. One such application class, featured in this paper, is 360º mulsemedia (multiple sensorial media), which enriches 360º video by adding sensory effects that stimulate human senses beyond sight and hearing, such as the tactile and olfactory ones. In this paper, we present a conceptual framework for 360º mulsemedia delivery and a 360º mulsemedia-based prototype that enables users to experience 360º mulsemedia content. User evaluations revealed that higher video resolutions do not necessarily lead to the highest QoE levels in our experimental setup. Therefore, bandwidth savings can be leveraged with no detrimental impact on QoE.

    Low-latency Cloud-based Volumetric Video Streaming Using Head Motion Prediction

    Volumetric video is an emerging key technology for the immersive representation of 3D spaces and objects. Rendering volumetric video requires substantial computational power, which is challenging especially for mobile devices. To mitigate this, we developed a streaming system that renders a 2D view from the volumetric video at a cloud server and streams a 2D video stream to the client. However, such network-based processing increases the motion-to-photon (M2P) latency due to the additional network and processing delays. In order to compensate for the added latency, prediction of the future user pose is necessary. We developed a head motion prediction model and investigated its potential to reduce the M2P latency for different look-ahead times. Our results show that the presented model reduces the rendering errors caused by the M2P latency compared to a baseline system in which no prediction is performed.
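The abstract does not reproduce the paper's prediction model, but the underlying idea of compensating M2P latency by extrapolating head pose can be sketched with a constant-velocity baseline. The function, sample trace and 60 ms look-ahead below are illustrative assumptions, not the paper's method or data.

```python
# Hypothetical sketch: constant-velocity extrapolation of head yaw, a common
# baseline for hiding motion-to-photon (M2P) latency. The server renders the
# view for the *predicted* pose so it arrives roughly when the user is there.

def predict_yaw(samples, look_ahead_ms):
    """Extrapolate head yaw (degrees) look_ahead_ms into the future.

    samples: time-ordered list of (timestamp_ms, yaw_deg) measurements.
    Uses the angular velocity estimated from the last two samples.
    """
    (t0, y0), (t1, y1) = samples[-2], samples[-1]
    velocity = (y1 - y0) / (t1 - t0)   # degrees per millisecond
    return y1 + velocity * look_ahead_ms

# A head turning at 0.05 deg/ms, predicted 60 ms ahead (an assumed
# cloud round-trip plus rendering budget):
trace = [(0, 10.0), (20, 11.0)]
print(predict_yaw(trace, 60))
```

A production predictor would filter sensor noise and handle all three rotation axes; the paper's point is that even imperfect look-ahead prediction reduces the rendering error relative to using the last known pose.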