
    An Overview of the Networking Issues of Cloud Gaming: A Literature Review

    With the increasing prevalence of video games come innovations that aim to evolve them. Cloud gaming is poised as the next phase of gaming: it enables users to play video games on any internet-enabled device. By moving processing to remote servers, it could compensate for the limited processing power of existing devices and remove the need to spend large amounts of money on the latest gaming equipment. However, others argue that it may be far from practically functional. Since cloud gaming depends heavily on networks, new issues emerge. This paper therefore reviews cloud gaming from a networking perspective. Specifically, it analyzes the issues and challenges of cloud gaming along with possible solutions. The study was carried out as a literature review. Results show that there are numerous issues and challenges regarding cloud gaming networks. In general, cloud gaming struggles with network quality of service (QoS) and quality of experience (QoE). The poor QoS and QoE of cloud gaming can be linked to unsatisfactory latency, bandwidth, delay, packet loss, and graphics quality. Moreover, the cost of providing the service and the complexity of implementing cloud gaming are considered challenges. Solutions to these issues and challenges were identified, including lag or latency compensation, compression and encoding techniques, client computing power, edge computing, machine learning, frame adaptation, and GPU-based server selection. However, these have limitations and may not always be applicable. Thus, even though solutions exist, it would be beneficial to analyze the networking side of cloud gaming further.
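
    The QoS factors the review highlights (latency, bandwidth, packet loss) lend themselves to a simple feasibility check. Below is a minimal, illustrative Python sketch that flags whether a measured link is likely adequate for cloud gaming; the threshold values and names are assumptions chosen for illustration, not figures from the paper.

```python
from dataclasses import dataclass

# Illustrative thresholds only (assumed, not taken from the reviewed paper).
MAX_LATENCY_MS = 80.0      # round-trip latency budget often cited for cloud gaming
MIN_BANDWIDTH_MBPS = 15.0  # rough floor for 1080p streamed gameplay
MAX_PACKET_LOSS = 0.01     # 1% loss

@dataclass
class LinkMetrics:
    latency_ms: float
    bandwidth_mbps: float
    packet_loss: float  # fraction in [0, 1]

def cloud_gaming_ready(m: LinkMetrics) -> list[str]:
    """Return a list of QoS problems; an empty list means the link looks adequate."""
    problems = []
    if m.latency_ms > MAX_LATENCY_MS:
        problems.append(f"latency {m.latency_ms:.0f} ms exceeds {MAX_LATENCY_MS:.0f} ms")
    if m.bandwidth_mbps < MIN_BANDWIDTH_MBPS:
        problems.append(f"bandwidth {m.bandwidth_mbps:.1f} Mbps below {MIN_BANDWIDTH_MBPS:.1f} Mbps")
    if m.packet_loss > MAX_PACKET_LOSS:
        problems.append(f"packet loss {m.packet_loss:.1%} above {MAX_PACKET_LOSS:.0%}")
    return problems

if __name__ == "__main__":
    link = LinkMetrics(latency_ms=95, bandwidth_mbps=25, packet_loss=0.002)
    issues = cloud_gaming_ready(link)
    print("OK for cloud gaming" if not issues else "; ".join(issues))
```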

    TAME: an Efficient Task Allocation Algorithm for Integrated Mobile Gaming

    We consider an integrated mobile gaming platform in which the mobile device (e.g., a smartphone) of a player can offload some game tasks to a server as well as to neighboring mobile devices. The advantages of such a platform are manifold: it can lead to an improved gaming experience and a better use of energy resources, and, by offloading tasks to other mobile users, it exploits the unused computing and storage resources of their devices, thus reducing the bandwidth and computing costs of the overall system. In this context, we formulate the offloading of game computational tasks as an optimization problem that minimizes the maximum energy consumption across a set of mobile devices, under the constraints of a maximum response time and limited availability of computation, communication, and storage resources. In light of the problem's complexity, we then propose a heuristic, called TAME, which is shown to closely approximate the optimal solution in all scenarios we considered. TAME also outperforms state-of-the-art algorithms under both synthetic and real scenarios, which were devised based on a realistic and detailed energy consumption model for computation and communication resources. Our results, although tailored to mobile gaming, could be extended to other applications where it may be beneficial to offload computational and storage tasks through device-to-device communications, as enabled by Wi-Fi, Bluetooth, or the upcoming 5G technology.
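
    The min-max energy objective behind TAME can be illustrated with a simple greedy baseline: assign each task to the device that keeps the peak per-device energy lowest while respecting its response-time cap. The sketch below is an assumed, simplified stand-in, not the TAME heuristic itself; the device names and the linear energy/latency model are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    energy_per_unit: float   # Joules per unit of task work (assumed linear model)
    time_per_unit: float     # seconds per unit of task work
    energy_used: float = 0.0

@dataclass
class Task:
    work: float              # abstract work units
    deadline_s: float        # maximum acceptable response time

def greedy_min_max_allocation(tasks: list[Task], devices: list[Device]) -> dict[int, str]:
    """Toy greedy baseline for min-max energy task allocation (not the paper's TAME):
    each task goes to the feasible device that keeps the maximum energy lowest."""
    assignment: dict[int, str] = {}
    for i, task in enumerate(tasks):
        best, best_peak = None, float("inf")
        for dev in devices:
            if task.work * dev.time_per_unit > task.deadline_s:
                continue  # this device would miss the task's deadline
            candidate = dev.energy_used + task.work * dev.energy_per_unit
            peak = max(candidate,
                       max((d.energy_used for d in devices if d is not dev), default=0.0))
            if peak < best_peak:
                best, best_peak = dev, peak
        if best is None:
            raise ValueError(f"task {i} cannot meet its deadline on any device")
        best.energy_used += task.work * best.energy_per_unit
        assignment[i] = best.name
    return assignment

devices = [Device("phone-A", 2.0, 0.4), Device("phone-B", 1.5, 0.6), Device("phone-C", 3.0, 0.3)]
tasks = [Task(5, 2.0), Task(3, 1.8), Task(8, 4.0)]
print(greedy_min_max_allocation(tasks, devices))  # spreads load to balance peak energy
```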

    Towards Mobile Edge Computing: Taxonomy, Challenges, Applications and Future Realms

    The realm of cloud computing has revolutionized access to cloud resources and their use in applications over the Internet. However, deploying cloud computing for delay-critical applications and reducing the delay in accessing resources remain challenging. The Mobile Edge Computing (MEC) paradigm is one effective solution: it brings cloud computing services into the proximity of the edge network and leverages the resources available there. This paper presents a survey of the latest, state-of-the-art algorithms, techniques, and concepts of MEC. The work is unique in that it covers the most recent algorithms, which are not considered by existing surveys. Moreover, the selected literature is classified in terms of performance metrics, describing both the areas of promising performance and the areas where a margin of improvement remains for future investigation. This classification also eases the choice of a particular algorithm for a particular application. Unlike existing surveys, a bibliometric overview is provided, which further helps researchers, engineers, and scientists gain a thorough insight, select applications, and identify targets for improvement. In addition, applications related to the MEC platform are presented. Open research challenges, future directions, and lessons learned in the area of MEC are provided for further investigation.
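
    MEC's core motivation, trading raw cloud capacity for proximity, can be shown with a tiny response-time comparison. The numbers and the simple model (network RTT plus compute time) below are assumptions for illustration only and do not come from the survey.

```python
# Illustrative comparison of end-to-end response time for running a task in a
# remote cloud versus a nearby MEC host. All figures are assumed.

def response_time_ms(rtt_ms: float, task_mi: float, host_mips: float) -> float:
    """Network round trip plus compute time: task size in million instructions,
    host speed in MIPS (million instructions per second)."""
    compute_ms = task_mi / host_mips * 1000.0
    return rtt_ms + compute_ms

task_mi = 500.0                                                    # assumed task size
cloud = response_time_ms(rtt_ms=120.0, task_mi=task_mi, host_mips=50_000)
edge = response_time_ms(rtt_ms=10.0, task_mi=task_mi, host_mips=20_000)
print(f"cloud: {cloud:.1f} ms, edge: {edge:.1f} ms")
# Even with a slower edge host, the shorter network path can win for
# delay-critical applications, which is the core argument for MEC.
```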

    Managing IT Operations in a Cloud-driven Enterprise: Case Studies

    Enterprise IT needs a new approach to managing processes, applications, and infrastructure that are distributed across a mix of environments. In a traditional enterprise, a request to deliver an application to the business can take weeks or months because of the decision-making functions, multiple approval bodies, and processes that exist within IT departments. These delays in delivering a requested service can lead to dissatisfaction, with the result that a line-of-business group may seek alternative sources of IT capabilities. In addition, the complex IT infrastructure of these enterprises cannot keep up with the demand for new applications and services from an increasingly dispersed and mobile workforce, which results in slower rollout of critical applications and services, limited resources, and poor operational visibility and control. In such scenarios it is better to adopt cloud services for new application deployments; otherwise, most enterprise IT organizations face the risk of losing 'market share' to the public cloud. With a cloud model, organizations can increase ROI, lower TCO, and run seamless IT operations. It also helps to curb shadow IT and the practice of over- or under-provisioning resources. In this research paper we present two case studies in which we migrated two enterprise IT applications to public clouds with the goal of lower TCO and higher ROI. Through the migration, the IT organizations improved IT agility while retaining enterprise-class performance, security, and control. We also discuss the advantages of, and challenges in, adopting cloud services.
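
    The TCO/ROI argument can be made concrete with a back-of-the-envelope comparison. The figures below are purely hypothetical and are not drawn from the case studies; the sketch only shows how such a comparison is typically structured.

```python
# Hypothetical 3-year TCO/ROI comparison: on-premises versus public cloud.
# All figures are invented for illustration, not taken from the paper.

YEARS = 3

on_prem = {
    "hardware": 120_000,             # upfront servers and storage
    "datacenter_per_year": 30_000,   # power, cooling, floor space
    "ops_staff_per_year": 80_000,    # administration and maintenance
}

cloud = {
    "migration": 40_000,             # one-time migration effort
    "service_per_year": 70_000,      # pay-as-you-go compute and storage
    "ops_staff_per_year": 40_000,    # reduced operational burden
}

tco_on_prem = on_prem["hardware"] + YEARS * (
    on_prem["datacenter_per_year"] + on_prem["ops_staff_per_year"])
tco_cloud = cloud["migration"] + YEARS * (
    cloud["service_per_year"] + cloud["ops_staff_per_year"])

savings = tco_on_prem - tco_cloud
roi = savings / tco_cloud
print(f"3-year TCO on-prem: ${tco_on_prem:,}  cloud: ${tco_cloud:,}")
print(f"savings: ${savings:,}  ROI on cloud spend: {roi:.0%}")
```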

    Risk-aware Adaptive Virtual CPU Oversubscription in Microsoft Cloud via Prototypical Human-in-the-loop Imitation Learning

    Oversubscription is a prevalent practice in cloud services whereby the system offers users or applications more virtual resources, such as virtual cores in virtual machines, than its available physical capacity, in order to reduce the revenue loss due to unused or redundant capacity. While oversubscription can significantly improve resource utilization, the caveat is that it carries the risk of overloading physical nodes and introducing jitter if all co-located virtual machines have high utilization. Suitable oversubscription policies that maximize utilization while mitigating risk are therefore paramount for cost-effective, seamless cloud experiences. Most cloud platforms presently rely on static, heuristics-driven decisions about oversubscription activation and limits, which lead either to overloading or to stranded resources. Designing an intelligent oversubscription policy that can adapt to resource utilization patterns and jointly optimize benefits and risks is largely an unsolved problem. We address this challenge with our proposed Prototypical Human-in-the-loop Imitation Learning (ProtoHAIL) framework, which exploits approximate symmetries in utilization patterns to learn suitable policies. Moreover, our human-in-the-loop (knowledge-infused) training allows for learning safer policies that are robust to noise and sparsity. Our empirical investigations on real data show an orders-of-magnitude reduction in risk and a significant increase in benefits (saved stranded cores) on the Microsoft cloud platform for first-party (internal) services. Comment: 9 pages, 3 figures.
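
    The tension the abstract describes, packing more virtual cores than physical ones while keeping overload rare, can be sketched with a simple utilization-based check. The following is a toy Monte Carlo model under assumed node sizes, utilization distribution, and ratios; it is not the ProtoHAIL policy or its risk model.

```python
import random

# Toy check of candidate oversubscription ratios against simulated VM utilization.
# The core counts, the Beta utilization distribution, and the ratios are all
# assumptions for illustration; this is not the policy learned in the paper.

PHYSICAL_CORES = 64
VCPUS_PER_VM = 8

def overload_probability(oversub_ratio: float, samples: int = 10_000) -> float:
    """Estimate how often physical cores are exceeded when vCPUs are
    oversubscribed by `oversub_ratio`, assuming each VM's utilization is an
    independent Beta(2, 3) draw (a deliberately crude stand-in for real traces)."""
    n_vms = int(PHYSICAL_CORES * oversub_ratio / VCPUS_PER_VM)
    overloads = 0
    for _ in range(samples):
        demanded = sum(VCPUS_PER_VM * random.betavariate(2, 3) for _ in range(n_vms))
        if demanded > PHYSICAL_CORES:
            overloads += 1
    return overloads / samples

for ratio in (1.0, 1.5, 2.0, 2.5):
    print(f"oversubscription x{ratio:.1f}: "
          f"estimated overload probability {overload_probability(ratio):.1%}")
# Higher ratios strand fewer cores but raise the overload risk, which is the
# benefit/risk trade-off an adaptive policy has to balance.
```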

    Monitoring in fog computing: state-of-the-art and research challenges

    Fog computing has rapidly become a widely accepted computing paradigm that mitigates the limitations of cloud computing-based infrastructures, such as scarce bandwidth, high latency, and security and privacy issues. Fog computing resources and applications vary dynamically at run time; they are highly distributed and mobile, and they can appear and disappear rapidly at any time over the internet. Therefore, to ensure quality of service and experience for end users, a comprehensive monitoring approach is necessary. However, the volatility and dynamism of fog resources make the design of such monitoring complex and cumbersome. The aim of this article is therefore threefold: 1) to analyse fog computing-based infrastructures and existing monitoring solutions; 2) to highlight the main requirements and challenges through a taxonomy; and 3) to identify open issues and potential future research directions. This work has been (partially) funded by H2020 EU/TW 5G-DIVE (Grant 859881) and H2020 5Growth (Grant 856709). It has also been funded by the Spanish State Research Agency (TRUE5G project, PID2019-108713RB-C52 / AEI / 10.13039/501100011033).
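
    The volatility the survey emphasizes, with resources appearing and disappearing at run time, is exactly what a monitoring layer must tolerate. Below is a minimal heartbeat-style sketch under assumed node names, intervals, and an in-memory registry; it illustrates the general idea only and is not a design proposed by the article.

```python
import time

# Minimal heartbeat-based liveness tracking for volatile fog nodes.
# Node identifiers, the staleness threshold, and the in-memory registry are
# assumptions for illustration; surveyed monitoring systems differ widely.

STALE_AFTER_S = 15.0  # node considered gone if silent this long

class FogMonitor:
    def __init__(self) -> None:
        self._last_seen: dict[str, float] = {}

    def heartbeat(self, node_id: str) -> None:
        """Record that a fog node reported in (e.g., alongside CPU/memory metrics)."""
        self._last_seen[node_id] = time.monotonic()

    def live_nodes(self) -> list[str]:
        """Nodes heard from recently; stale entries are dropped, reflecting how
        fog resources can disappear without deregistering."""
        now = time.monotonic()
        self._last_seen = {n: t for n, t in self._last_seen.items()
                           if now - t <= STALE_AFTER_S}
        return sorted(self._last_seen)

monitor = FogMonitor()
monitor.heartbeat("edge-gw-1")
monitor.heartbeat("camera-node-7")
print(monitor.live_nodes())
```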