
    Maximizing Service Reliability in Distributed Computing Systems with Random Node Failures: Theory and Implementation

    In distributed computing systems (DCSs) where server nodes can fail permanently with nonzero probability, system performance can be assessed by means of the service reliability, defined as the probability of serving all the tasks queued in the DCS before all the nodes fail. This paper presents a rigorous probabilistic framework to analytically characterize the service reliability of a DCS in the presence of communication uncertainties and stochastic topological changes due to node deletions. The framework considers a system composed of heterogeneous nodes with stochastic service and failure times and a communication network imposing random, tangible delays. The framework also permits arbitrarily specified, distributed load-balancing actions to be taken by the individual nodes in order to improve the service reliability. The presented analysis is based upon a novel use of the concept of stochastic regeneration, which is exploited to derive a system of difference-differential equations characterizing the service reliability. The theory is further utilized to optimize certain load-balancing policies for maximal service reliability; the optimization is carried out by means of an algorithm that scales linearly with the number of nodes in the system. The analytical model is validated using both Monte Carlo simulations and experimental data collected from a DCS testbed.
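    The service-reliability definition above lends itself to a quick Monte Carlo check. The sketch below is not the paper's regeneration-based analysis; it only illustrates the quantity being computed, assuming a shared task queue, exponentially distributed service and failure times, and independent nodes. The rates, task count, and two-node setup are invented for illustration.

    ```python
    import random

    def simulate_once(num_tasks, service_rates, failure_rates):
        """One trial: are all queued tasks served before every node fails?

        Toy model: shared queue, exponential service/failure times,
        all live nodes work in parallel (memorylessness lets us redraw
        exponentials at every event).
        """
        tasks_left = num_tasks
        alive = set(range(len(service_rates)))
        while tasks_left > 0 and alive:
            events = []
            for i in alive:
                events.append((random.expovariate(service_rates[i]), "serve", i))
                events.append((random.expovariate(failure_rates[i]), "fail", i))
            _, kind, i = min(events)          # next event across all live nodes
            if kind == "serve":
                tasks_left -= 1               # one queued task completed
            else:
                alive.discard(i)              # node i fails permanently
        return tasks_left == 0

    def service_reliability(trials=20_000, **kw):
        """Monte Carlo estimate of the service reliability."""
        return sum(simulate_once(**kw) for _ in range(trials)) / trials

    # Example: two heterogeneous nodes, 50 queued tasks (illustrative numbers).
    print(service_reliability(num_tasks=50,
                              service_rates=[1.0, 0.5],
                              failure_rates=[0.01, 0.02]))
    ```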

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience by delivering resources as services via the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of using Evolutionary Computation (EC) algorithms for this purpose is emerging rapidly. Through an analysis of the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are identified, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.

    Scheduling in cloud and fog architecture: identification of limitations and suggestion of improvement perspectives

    The applications that must be executed in cloud and fog architectures are generally heterogeneous in terms of device and application contexts. Scheduling these requirements on such architectures is an optimization problem with multiple constraints. Despite countless efforts, task scheduling in these architectures continues to present enticing challenges, which lead to the question of how tasks are routed between physical devices, fog nodes, and the cloud. In the fog, because of the density and heterogeneity of its devices, scheduling is very complex, and few studies have been conducted in the literature so far. Scheduling in the cloud, by contrast, has been widely studied; however, many surveys address the issue from the perspective of the service provider or of optimizing application quality of service (QoS) levels, and they ignore contextual information at the level of devices and end users, as well as their user experience. In this paper, we conduct a systematic review of the literature on task scheduling algorithms in existing cloud and fog architectures, study and discuss their limitations, and explore and suggest some perspectives for improvement. (Calouste Gulbenkian Foundation, PhD scholarship No. 234242, 2019.)

    Maximizing Computational Profit in Grid Resource Allocation Using Dynamic Algorithm

    Grid computing, one of the trendiest phrases in IT, is an emerging, vastly distributed computational paradigm. A computational grid provides a collaborative environment in which a large number of resources deliver high computing performance toward a common goal. Grid computing can be viewed as a super virtual computer that assembles large-scale, geographically distributed, heterogeneous resources. Resource allocation is a key element of grid computing, and a grid resource may leave the environment at any time. Despite the many benefits of grid computing, resource allocation remains a challenging task. This work investigates how to maximize profit by allocating tasks to grid resources effectively according to quality of service parameters while satisfying user requests. A fused SS-GA algorithm is introduced to address this resource allocation problem based on grid user requests. The scheduler uses genetic algorithm heuristic functions to make the resource allocation process in the grid environment effective. The results show that the proposed fused SS-GA algorithm improves grid resource allocation.
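    The abstract does not spell out the SS-GA fusion, so the sketch below only illustrates the genetic-algorithm side of such a scheduler: a chromosome assigns each task to a resource, and the fitness rewards profit while penalizing deadline (QoS) violations. The execution times, revenues, deadlines, and penalty form are all invented placeholders, not the paper's actual model.

    ```python
    import random

    # Illustrative problem data (invented): execution time of each task on each
    # resource, per-task revenue, and per-task deadline as the QoS constraint.
    EXEC_TIME = [[4, 6, 3], [2, 5, 4], [7, 3, 6], [5, 4, 2]]   # tasks x resources
    REVENUE   = [10, 8, 12, 9]
    DEADLINE  = [8, 8, 10, 9]

    def fitness(chrom):
        """Profit minus a penalty for tasks that miss their deadline (toy QoS model)."""
        finish = [0.0] * len(EXEC_TIME[0])      # completion time per resource
        profit = 0.0
        for task, res in enumerate(chrom):
            finish[res] += EXEC_TIME[task][res]
            profit += REVENUE[task]
            if finish[res] > DEADLINE[task]:
                profit -= 2 * REVENUE[task]     # QoS violation penalty
        return profit

    def evolve(pop_size=30, generations=100, mutation=0.1):
        n_tasks, n_res = len(EXEC_TIME), len(EXEC_TIME[0])
        pop = [[random.randrange(n_res) for _ in range(n_tasks)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]           # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_tasks)   # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mutation:       # random reassignment mutation
                    child[random.randrange(n_tasks)] = random.randrange(n_res)
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print("best allocation:", best, "profit:", fitness(best))
    ```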

    Game-theoretic, market and meta-heuristics approaches for modelling scheduling and resource allocation in grid systems

    Task scheduling and resource allocation are crucial issues in any large-scale distributed system, such as Computational Grids (CGs). However, traditional computational models and resolution methods cannot effectively tackle the complex nature of the Grid, where resources and users belong to many administrative domains with their own access policies, user privileges, etc. Recently, researchers have been investigating the use of game-theoretic approaches for modelling task and resource allocation problems in CGs. In this paper, we present a compact survey of the most relevant research proposals in the literature that use game-based models for resource allocation problems and solve them with metaheuristic methods. We emphasize the need to translate traditional economic models into game scenarios and to use metaheuristic schedulers for solving such games in order to address new, complex scheduling and allocation criteria. We study the case of an asymmetric Stackelberg game used for modelling the Grid users' behavior, where the security and reliability criteria are aggregated and defined as the users' cost functions. The obtained results show the efficiency of hybridizing heuristic-based approaches with game models, which makes it possible to include additional requirements and features in the computational models and to tackle the resolution of the applied schedulers more effectively.
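    As a toy illustration of the leader-follower structure mentioned above (not the paper's actual Grid model), the sketch below enumerates a tiny discrete asymmetric Stackelberg game: the leader commits to an action, the follower plays its best response, and the leader chooses the commitment that minimizes its own cost given that anticipated response. The action labels and cost tables are invented placeholders for the aggregated security/reliability costs.

    ```python
    # Invented cost tables: cost[(leader_action, follower_action)] for each player.
    LEADER_COST   = {("low", "low"): 5, ("low", "high"): 9,
                     ("high", "low"): 4, ("high", "high"): 6}
    FOLLOWER_COST = {("low", "low"): 3, ("low", "high"): 2,
                     ("high", "low"): 7, ("high", "high"): 4}
    ACTIONS = ("low", "high")   # e.g. a hypothetical security/verification level

    def follower_best_response(leader_action):
        """Follower minimizes its own cost given the leader's committed action."""
        return min(ACTIONS, key=lambda f: FOLLOWER_COST[(leader_action, f)])

    def stackelberg_equilibrium():
        """Leader anticipates the follower's best response and commits accordingly."""
        best = None
        for l in ACTIONS:
            f = follower_best_response(l)
            cost = LEADER_COST[(l, f)]
            if best is None or cost < best[2]:
                best = (l, f, cost)
        return best

    leader, follower, cost = stackelberg_equilibrium()
    print(f"leader plays {leader}, follower responds {follower}, leader cost {cost}")
    ```

    In the Grid setting described by the abstract, this enumeration would be replaced by a metaheuristic scheduler searching the (much larger) strategy spaces.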

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.