    Proxy Caching for Video-on-Demand Using Flexible Starting Point Selection

    Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support.

    A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of Cloud Computing in such networks does not provide an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be triggered by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. In addition, we evaluate such a migration for video services. Finally, we present potential research challenges and trends.
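    The request-driven migration trigger mentioned in this abstract can be illustrated with a minimal sketch. The threshold, window size, and function names below are illustrative assumptions, not the policy evaluated in the article.

```python
# Minimal sketch of a request-driven cloud-to-fog migration trigger.
# REQUESTS_THRESHOLD, WINDOW_SECONDS and should_migrate() are assumed
# names and values for illustration only.
from collections import defaultdict
import time

REQUESTS_THRESHOLD = 50   # assumed request-rate threshold per region
WINDOW_SECONDS = 60       # assumed sliding-window length

request_log = defaultdict(list)   # region -> timestamps of recent video requests


def record_request(region: str) -> None:
    """Record one video request from a region served by a fog tier."""
    now = time.time()
    request_log[region].append(now)
    # Keep only requests inside the sliding window.
    request_log[region] = [t for t in request_log[region] if now - t <= WINDOW_SECONDS]


def should_migrate(region: str) -> bool:
    """Suggest migrating the video service from the cloud to the fog tier
    once the recent request rate for the region exceeds the threshold."""
    return len(request_log[region]) >= REQUESTS_THRESHOLD
```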

    An experimental dynamic RAM video cache

    As technological advances continue to be made, the demand for more efficient distributed multimedia systems continues to grow. Current support for end-to-end QoS is still limited; consequently, mechanisms are required to provide flexibility in resource loading. One such mechanism, caching, may be introduced both in the end-system and in the network to facilitate intelligent load balancing and resource management. We introduce new work at Lancaster University investigating the use of transparent network caches for MPEG-2. A novel architecture is proposed, based on router-oriented caching and the employment of large-scale dynamic RAM as the sole caching medium. The architecture also proposes the use of the ISO/IEC standardised DSM-CC protocol as a basic control infrastructure and the caching of pre-built transport packets (UDP/IP) in the data plane. Finally, the work discussed is in its infancy and consequently focuses upon the design and implementation of the caching architecture rather than an investigation into performance gains, which we intend to make in a continuation of the work.
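    The idea of keeping pre-built UDP/IP transport packets in RAM can be sketched as a simple in-memory store keyed by stream and packet index. This is only an illustration under assumed names (PacketCache, put, get), not the Lancaster implementation.

```python
# Minimal sketch of a RAM cache of pre-built transport packets, keyed by
# (stream_id, packet_index) with LRU eviction. Class and method names are
# assumptions for illustration.
from collections import OrderedDict
from typing import Optional


class PacketCache:
    def __init__(self, capacity_packets: int):
        self.capacity = capacity_packets
        self.packets = OrderedDict()          # (stream_id, index) -> raw UDP/IP datagram bytes

    def put(self, stream_id: str, index: int, datagram: bytes) -> None:
        """Store a pre-built datagram, evicting the least recently used one if full."""
        key = (stream_id, index)
        self.packets[key] = datagram
        self.packets.move_to_end(key)
        if len(self.packets) > self.capacity:
            self.packets.popitem(last=False)  # evict the LRU packet

    def get(self, stream_id: str, index: int) -> Optional[bytes]:
        """Return a cached datagram ready to forward, or None on a cache miss."""
        key = (stream_id, index)
        if key in self.packets:
            self.packets.move_to_end(key)
            return self.packets[key]
        return None
```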

    Cooperative Interval Caching in Clustered Multimedia Servers

    In this project, we design a cooperative interval caching (CIC) algorithm for clustered video servers and evaluate its performance through simulation. The CIC algorithm describes how distributed caches in the cluster cooperate to serve a given request. With CIC, a clustered server can accommodate almost twice as many (95% more) cached streams as a clustered server without cache cooperation. CIC uses two major processes to find available cache space for a given request in the cluster: finding the server that holds information about the preceding request of the given request, and finding another server that may have available cache space if the current server turns out not to have enough. The performance study shows that it is better to direct requests for the same movie to the same server, so that a request can always find the information about its preceding request on that server. The CIC algorithm uses a scoreboard mechanism to achieve this goal. The performance results also show that when the current server fails to find cache space for a given request, randomly selecting a server works well for finding the next server that may have available cache space. The combination of the scoreboard and random selection to find the preceding request information and the next available server outperforms other combinations of approaches by 86%. With CIC, the cooperative distributed caches can support as many cached streams as one integrated cache does. In some cases, the cooperative distributed caches accommodate more cached streams than one integrated cache would. The CIC algorithm makes every server in the cluster perform identical tasks to eliminate any single point of failure, thereby increasing the availability of the server cluster. The CIC algorithm also specifies how to smoothly add a server to or remove a server from the cluster, providing scalability.
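    The two look-up steps described in this abstract, scoreboard routing followed by random fallback, can be sketched as follows. The Server class, has_cache_space() check, and scoreboard layout are assumptions; the abstract does not specify these details.

```python
# Minimal sketch of CIC's server selection: the scoreboard sends requests for
# the same movie to the same server, and a random other server is tried when
# that server has no cache space. Names and data structures are illustrative.
import random


class Server:
    def __init__(self, name: str, free_cache_blocks: int):
        self.name = name
        self.free_cache_blocks = free_cache_blocks

    def has_cache_space(self, needed_blocks: int) -> bool:
        return self.free_cache_blocks >= needed_blocks


def choose_server(movie_id: str, needed_blocks: int,
                  scoreboard: dict, servers: list) -> Server:
    """Pick the server that should try to cache the interval for this request."""
    # Step 1: the scoreboard routes the request to the server that already
    # holds information about the preceding request for the same movie.
    primary = scoreboard.setdefault(movie_id, random.choice(servers))
    if primary.has_cache_space(needed_blocks):
        return primary
    # Step 2: if the primary server lacks space, try a randomly chosen other server.
    candidates = [s for s in servers if s is not primary]
    return random.choice(candidates) if candidates else primary
```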