3 research outputs found

    Platform for innovative content distribution

    Nowadays users are starting to consume new, innovative content in a domestic environment due to the introduction of head-mounted displays. The problem we face in this work is how to access this virtual reality (VR) content. This work consists in developing a platform for innovative content distribution, using different techniques, for the European Horizon 2020 project ImmersiaTV, which is redefining the end-to-end broadcast chain: production, distribution and delivery of multi-platform synchronized content based on omnidirectional video. Within ImmersiaTV, this work focuses on the second piece of the puzzle: the creation of the content distribution platform. Now that the production of VR content is booming, these kinds of distribution platforms are beginning to come to light; as the demand for VR content grows, the supply has to follow the same path. What we want to create with this platform is a place on the internet where users who want to consume VR content have a quick and easy way to access it. This work has focused on content based on omnidirectional video (for ImmersiaTV) and sensor-generated meshes (for this particular work), whose publication and distribution is done through a web application. The omnidirectional video-based content is produced with an Adobe Premiere Pro plug-in developed in ImmersiaTV. The sensor-generated mesh content is produced in this work with sensors such as the Kinect 2 and Structure. Traditional content such as omnidirectional video and computer-generated imagery is either very quick to produce at the expense of experience quality (omnidirectional video) or very costly in production time (traditional CGI); sensor-generated mesh content, by contrast, is quick to produce with satisfactory results in terms of VR experience quality. In order to test the platform we have used an existing player created in the ImmersiaTV project.
However, to test the innovative formats, we have developed a specific application. This application also allows users to compare the VR experience across three different content formats: omnidirectional video, traditional CGI and sensor-generated mesh. Users can also experience the difference between being represented inside the VR environment by a computer-modelled avatar or by a live 3D model created with the Kinect 2. This work does not attempt to measure the user's VR experience but rather to distribute the content so it can be consumed; even so, during the development of the platform we have observed a big difference, in terms of experience quality, between the different content scenarios.
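The publication and distribution workflow described above could be sketched as a minimal content catalog that tags each item with one of the three formats the comparison application exposes. This is purely an illustrative assumption: `ContentFormat`, `ContentItem` and `filter_by_format` are hypothetical names, not part of the ImmersiaTV platform's actual API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

# Hypothetical enumeration of the three content formats named in the abstract.
class ContentFormat(Enum):
    OMNIDIRECTIONAL_VIDEO = "omnidirectional_video"  # Premiere Pro plug-in output
    TRADITIONAL_CGI = "traditional_cgi"              # computer-generated imagery
    SENSOR_MESH = "sensor_mesh"                      # Kinect 2 / Structure capture

@dataclass
class ContentItem:
    title: str
    fmt: ContentFormat
    url: str  # where the web application serves the asset

def filter_by_format(catalog: List[ContentItem],
                     fmt: ContentFormat) -> List[ContentItem]:
    """Return the catalog entries published in the requested format."""
    return [item for item in catalog if item.fmt == fmt]

catalog = [
    ContentItem("Demo scene", ContentFormat.OMNIDIRECTIONAL_VIDEO, "/content/1"),
    ContentItem("Avatar room", ContentFormat.SENSOR_MESH, "/content/2"),
]
print([i.title for i in filter_by_format(catalog, ContentFormat.SENSOR_MESH)])
# → ['Avatar room']
```

A real distribution platform would back the catalog with a database and serve it over HTTP, but the format tag is the key piece that lets a player or comparison app pick the right rendering path.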

    Service specific management and orchestration for a content delivery network

    Any non-trivial network service requires service-specific orchestration to meet its carrier-grade requirements regarding resiliency, availability, etc. Only network service/function developers know how to execute workflows such as mapping the network service components onto the substrate, reconfiguring VNFs after a monitored event, or scaling them, in a way that guarantees optimal QoS. It is therefore of paramount importance that NFV Service Platforms allow developer-specified input when performing such life-cycle events, instead of defining only generic workflows. Within the scope of the SONATA and 5GTANGO projects, a mechanism was designed that allows developers to create and execute Service and Function Specific Managers. These managers are processes, created by the developer, that define service- or function-specific orchestration behaviour. The SONATA Service Platform executes these managers to override generic Service Platform behaviour, creating developer-customised life-cycle workflows. We demonstrate the development, testing and operational execution of these managers using a Content Delivery Network, which requires specific placement and scaling behaviour.
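The override mechanism described here can be sketched as a platform that falls back to a generic life-cycle workflow unless a developer-supplied manager is registered. All class and function names below are illustrative assumptions, not the real SONATA/5GTANGO API; the CDN manager simply pins cache VNFs to edge nodes to stand in for "specific placement behaviour".

```python
class GenericLifecycleManager:
    """Default platform behaviour: naive round-robin placement of VNFs."""
    def place(self, vnfs, nodes):
        return {vnf: nodes[i % len(nodes)] for i, vnf in enumerate(vnfs)}

class CdnSpecificManager(GenericLifecycleManager):
    """Hypothetical developer-written manager: prefer edge nodes for caches."""
    def place(self, vnfs, nodes):
        edge = [n for n in nodes if n.startswith("edge")]
        mapping = {}
        for i, vnf in enumerate(vnfs):
            # Cache VNFs go to edge nodes when any exist; others use all nodes.
            pool = edge if ("cache" in vnf and edge) else nodes
            mapping[vnf] = pool[i % len(pool)]
        return mapping

def orchestrate(vnfs, nodes, manager=None):
    # The platform executes the specific manager when the developer provides
    # one, overriding the generic workflow; otherwise it uses the default.
    mgr = manager or GenericLifecycleManager()
    return mgr.place(vnfs, nodes)

print(orchestrate(["cache-1", "origin"], ["edge-a", "core-b"],
                  CdnSpecificManager()))
# → {'cache-1': 'edge-a', 'origin': 'core-b'}
```

The design point is that the platform owns *when* life-cycle events run, while the manager owns *how* they are resolved, which matches the developer-customised workflows the abstract describes.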
