
    Maximizing Service Reliability in Distributed Computing Systems with Random Node Failures: Theory and Implementation

    In distributed computing systems (DCSs) where server nodes can fail permanently with nonzero probability, the system performance can be assessed by means of the service reliability, defined as the probability of serving all the tasks queued in the DCS before all the nodes fail. This paper presents a rigorous probabilistic framework to analytically characterize the service reliability of a DCS in the presence of communication uncertainties and stochastic topological changes due to node deletions. The framework considers a system composed of heterogeneous nodes with stochastic service and failure times and a communication network imposing random tangible delays. The framework also permits arbitrarily specified, distributed load-balancing actions to be taken by the individual nodes in order to improve the service reliability. The presented analysis is based upon a novel use of the concept of stochastic regeneration, which is exploited to derive a system of difference-differential equations characterizing the service reliability. The theory is further utilized to optimize certain load-balancing policies for maximal service reliability; the optimization is carried out by means of an algorithm that scales linearly with the number of nodes in the system. The analytical model is validated using both Monte Carlo simulations and experimental data collected from a DCS testbed.
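
    As a rough illustration of the quantity being analyzed, the following Monte Carlo sketch estimates service reliability under strong simplifying assumptions (exponentially distributed service and failure times, a single shared queue, no communication delays or load balancing); the function name and all rates are illustrative, not the paper's analytical model.

        import random

        def estimate_service_reliability(tasks, service_rates, failure_rates, runs=10000):
            # Illustrative sketch: fraction of simulated runs in which all queued
            # tasks finish before every node has failed permanently.
            successes = 0
            for _ in range(runs):
                remaining = tasks
                alive = list(range(len(service_rates)))
                while remaining > 0 and alive:
                    # Competing exponential clocks: one service and one failure clock per live node.
                    events = [("serve", i, service_rates[i]) for i in alive] + \
                             [("fail", i, failure_rates[i]) for i in alive]
                    total = sum(rate for _, _, rate in events)
                    draw = random.uniform(0.0, total)
                    acc = 0.0
                    for kind, node, rate in events:
                        acc += rate
                        if draw <= acc:
                            if kind == "serve":
                                remaining -= 1         # a queued task completes
                            else:
                                alive.remove(node)     # a permanent node failure
                            break
                if remaining == 0:
                    successes += 1
            return successes / runs

        # Hypothetical 3-node system with 20 queued tasks (rates are made up).
        print(estimate_service_reliability(20, [1.0, 0.8, 0.5], [0.05, 0.10, 0.02]))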

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology that is fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: resources delivered as services via the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. By analyzing the cloud computing architecture, this survey first presents a taxonomy of cloud resource scheduling at two levels. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
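
    To make the evolutionary-scheduling idea concrete, here is a minimal sketch of a genetic-style search that assigns independent tasks to virtual machines so as to minimize makespan; the representation, operators, and parameters are assumptions made for illustration and do not correspond to any particular algorithm from the surveyed literature.

        import random

        def makespan(assignment, task_lengths, vm_speeds):
            # Completion time of the most loaded VM for a task-to-VM assignment.
            load = [0.0] * len(vm_speeds)
            for task, vm in enumerate(assignment):
                load[vm] += task_lengths[task] / vm_speeds[vm]
            return max(load)

        def evolve_schedule(task_lengths, vm_speeds, pop=40, gens=200, mut=0.1):
            # Tiny illustrative evolutionary loop: truncation selection,
            # one-point crossover, per-gene mutation (all choices assumed).
            n, m = len(task_lengths), len(vm_speeds)
            population = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=lambda a: makespan(a, task_lengths, vm_speeds))
                parents = population[: pop // 2]
                children = []
                for _ in range(pop - len(parents)):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n)
                    child = a[:cut] + b[cut:]
                    for i in range(n):
                        if random.random() < mut:
                            child[i] = random.randrange(m)
                    children.append(child)
                population = parents + children
            best = min(population, key=lambda a: makespan(a, task_lengths, vm_speeds))
            return best, makespan(best, task_lengths, vm_speeds)

        # Hypothetical workload: 10 tasks on 3 VMs of different speeds.
        print(evolve_schedule([random.uniform(1, 10) for _ in range(10)], [1.0, 2.0, 4.0]))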

    Maximizing Computational Profit in Grid Resource Allocation Using Dynamic Algorithm

    Grid computing, one of the trendiest phrases in IT, is an emerging, vastly distributed computational paradigm. A computational grid provides a collaborative environment in which a large number of resources capable of high-performance computing work toward a common goal. Grid computing can be viewed as a super virtual computer that assembles large-scale, geographically distributed, heterogeneous resources. Resource allocation is a key element of grid computing, and a grid resource may leave the grid environment at any time. Despite the many benefits of grid computing, resource allocation remains a challenging task. This work investigates how to maximize profit by analyzing how tasks can be allocated to grid resources effectively according to quality-of-service parameters while satisfying user requests. A hybrid SS-GA algorithm is introduced to address this resource allocation problem based on grid users' requests. The hybrid scheduler uses genetic-algorithm heuristic functions to make the resource allocation process in the grid environment effective. The results show that the proposed hybrid SS-GA algorithm improves grid resource allocation.
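
    The sketch below illustrates the kind of profit-oriented, QoS-aware allocation the abstract describes, using an assumed profit model (revenue earned only when a task meets its deadline, minus the cost of the resource time consumed) and a simple greedy baseline standing in for the SS-GA hybrid; all names and numbers are hypothetical.

        def allocation_profit(assignment, tasks, resources):
            # Assumed model: tasks = (length, deadline, revenue); resources = (speed, cost_per_sec).
            # Tasks queue on their assigned resource in the given order.
            finish = [0.0] * len(resources)
            profit = 0.0
            for (length, deadline, revenue), r in zip(tasks, assignment):
                speed, cost = resources[r]
                runtime = length / speed
                finish[r] += runtime
                profit -= runtime * cost
                if finish[r] <= deadline:        # QoS met: the user pays
                    profit += revenue
            return profit

        def greedy_allocate(tasks, resources):
            # Baseline: each task goes to the resource with the best marginal profit;
            # a GA would instead search over complete assignments.
            finish = [0.0] * len(resources)
            assignment = []
            for length, deadline, revenue in tasks:
                def marginal(r):
                    speed, cost = resources[r]
                    runtime = length / speed
                    gain = revenue if finish[r] + runtime <= deadline else 0.0
                    return gain - runtime * cost
                best = max(range(len(resources)), key=marginal)
                finish[best] += length / resources[best][0]
                assignment.append(best)
            return assignment

        tasks = [(8.0, 10.0, 5.0), (4.0, 6.0, 3.0), (12.0, 20.0, 7.0)]   # hypothetical tasks
        resources = [(1.0, 0.1), (2.0, 0.3)]                             # hypothetical resources
        plan = greedy_allocate(tasks, resources)
        print(plan, allocation_profit(plan, tasks, resources))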

    Game-theoretic, market and meta-heuristics approaches for modelling scheduling and resource allocation in grid systems

    Task scheduling and resource allocation are crucial issues in any large-scale distributed system, such as Computational Grids (CGs). However, traditional computational models and resolution methods cannot effectively tackle the complex nature of the Grid, where resources and users belong to many administrative domains with their own access policies, user privileges, and so on. Recently, researchers have been investigating the use of game-theoretic approaches for modelling task and resource allocation problems in CGs. In this paper, we present a compact survey of the most relevant research proposals in the literature that use game-based models for resource allocation problems and solve them using metaheuristic methods. We emphasize the need to translate traditional economic models into game scenarios and to use metaheuristic schedulers to solve such games in order to address new, complex scheduling and allocation criteria. We study the case of an asymmetric Stackelberg game used for modelling Grid users' behavior, where the security and reliability criteria are aggregated and defined as the users' cost functions. The obtained results show the efficiency of hybridizing heuristic-based approaches with game models, which makes it possible to include additional requirements and features in the computational models and to tackle the resolution of the applied schedulers more effectively. Peer reviewed. Postprint (published version).
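
    A toy leader-follower example may help fix ideas: the sketch below solves a two-machine load-balancing Stackelberg game by enumerating the leader's candidate splits and computing the follower's best response to each, using an assumed linear congestion cost in place of the paper's security- and reliability-based cost functions.

        SPEEDS = [2.0, 1.0]   # two machines of different speeds (assumed)

        def player_cost(own, other):
            # Expected delay: sum over machines of (own load) * (total load / speed).
            return sum(o * (o + t) / s for o, t, s in zip(own, other, SPEEDS))

        def splits(grid=101):
            # Candidate ways to divide one unit of load across the two machines.
            return [(a / (grid - 1), 1.0 - a / (grid - 1)) for a in range(grid)]

        def best_response(x_leader):
            # Follower minimizes its own delay given the leader's committed split.
            return min(splits(), key=lambda y: player_cost(y, x_leader))

        def stackelberg_leader():
            # Leader commits first, anticipating the follower's best response; a
            # metaheuristic would replace this enumeration for realistic strategy spaces.
            return min(splits(), key=lambda x: player_cost(x, best_response(x)))

        x = stackelberg_leader()
        print("leader split:", x, "follower split:", best_response(x))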

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers clarify the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures.
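
    As a small, self-contained example of the reinforcement-learning family covered by the survey, the sketch below applies tabular Q-learning to a toy channel-selection task loosely reminiscent of cognitive radio; the environment, reward model and hyperparameters are assumptions made purely for illustration.

        import random

        def q_learning_channel_selection(channel_busy_prob, episodes=5000,
                                         alpha=0.1, gamma=0.9, eps=0.1):
            # Toy single-state problem (assumed): reward 1 when the chosen channel is idle.
            n = len(channel_busy_prob)
            q = [0.0] * n                      # one Q-value per channel (action)
            for _ in range(episodes):
                if random.random() < eps:      # epsilon-greedy exploration
                    a = random.randrange(n)
                else:
                    a = max(range(n), key=lambda i: q[i])
                reward = 0.0 if random.random() < channel_busy_prob[a] else 1.0
                q[a] += alpha * (reward + gamma * max(q) - q[a])
            return q

        # Hypothetical busy probabilities for three channels.
        print(q_learning_channel_selection([0.8, 0.3, 0.6]))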

    Edge Offloading in Smart Grid

    The energy transition supports the shift towards more sustainable energy alternatives, paving the way towards decentralized smart grids, where energy is generated closer to the point of use. Decentralized smart grids foresee novel data-driven, low-latency applications for improving resilience and responsiveness, such as peer-to-peer energy trading, microgrid control, fault detection, or demand response. However, traditional cloud-based smart grid architectures are unable to meet the low-latency and high-reliability requirements of these new emerging applications, so alternative architectures such as edge, fog, or hybrid ones need to be adopted. Moreover, edge offloading can play a pivotal role for next-generation smart grid AI applications because it enables efficient utilization of computing resources and addresses the challenge of the increasing data generated by IoT devices, optimizing response time, energy consumption, and network performance. However, a comprehensive overview of the current state of research is needed to support sound decisions on offloading energy-related applications from the cloud to the fog or edge, focusing on open smart grid challenges and potential impacts. In this paper, we delve into smart grid and computational distribution architectures, including edge-fog-cloud models, orchestration architecture, and serverless computing, and analyze the decision-making variables and optimization algorithms used to assess the efficiency of edge offloading. Finally, the work contributes to a comprehensive understanding of edge offloading in the smart grid, providing a SWOT analysis to support decision making. Comment: to be submitted to a journal.
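
    To give a concrete flavour of the decision-making variables involved, the sketch below scores a task against local, edge and cloud tiers with an assumed weighted latency-energy cost; the tier parameters, weights and field names are hypothetical and not taken from the paper.

        def offload_decision(task_cycles, data_bits, tiers, w_latency=0.7, w_energy=0.3):
            # Each tier is a dict with uplink_bps, cpu_hz, tx_j_per_bit, cpu_j_per_cycle
            # (all assumed); returns the tier name with the lowest weighted cost.
            def cost(t):
                latency = data_bits / t["uplink_bps"] + task_cycles / t["cpu_hz"]
                energy = data_bits * t["tx_j_per_bit"] + task_cycles * t["cpu_j_per_cycle"]
                return w_latency * latency + w_energy * energy
            return min(tiers, key=lambda name: cost(tiers[name]))

        # Hypothetical tier parameters; local execution incurs no uplink cost.
        tiers = {
            "local": {"uplink_bps": float("inf"), "cpu_hz": 1e9, "tx_j_per_bit": 0.0, "cpu_j_per_cycle": 1e-9},
            "edge":  {"uplink_bps": 50e6, "cpu_hz": 10e9, "tx_j_per_bit": 1e-7, "cpu_j_per_cycle": 5e-10},
            "cloud": {"uplink_bps": 10e6, "cpu_hz": 50e9, "tx_j_per_bit": 1e-7, "cpu_j_per_cycle": 2e-10},
        }
        print(offload_decision(2e9, 8e6, tiers))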

    Decision support for build-to-order supply chain management through multiobjective optimization

    This is the post-print version of the final paper published in the International Journal of Production Economics. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document, and changes may have been made to this work since it was submitted for publication. Copyright @ 2010 Elsevier B.V. This paper aims to identify the gaps in decision-making support based on multiobjective optimization (MOO) for build-to-order supply chain management (BTO-SCM). To this end, it reviews the literature available on modelling build-to-order supply chains (BTO-SCs), with a focus on adopting MOO techniques as a decision support tool. The literature is classified based on the nature of the decisions in different parts of the supply chain, and the key decision areas across a typical BTO-SC are discussed in detail. Available software packages suitable for supporting decision making in BTO supply chains are also identified and their related solutions outlined. The gap between the modelling and optimization techniques developed in the literature and the decision support needed in practice is highlighted. Future research directions to better exploit the decision support capabilities of MOO are proposed. These include: reformulation of the extant optimization models from a MOO perspective, development of decision support for interfaces not involving manufacturers, development of scenarios around service-based objectives, development of efficient solution tools, consideration of the interests of each supply chain party as a separate objective to ensure fair treatment of their requirements, and application of the existing methodologies to real-life data sets. Brunel Research Initiative and Enterprise Fund (BRIEF).
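
    As a minimal illustration of the multiobjective view, the sketch below extracts the Pareto front from a set of hypothetical supply-chain configurations scored on cost and lead time; it stands in for the MOO formulations reviewed in the paper rather than reproducing any specific model.

        def pareto_front(solutions):
            # Keep the non-dominated (cost, lead_time) pairs; both objectives are minimized.
            # Illustrative only; not a specific BTO-SC formulation.
            front = []
            for s in solutions:
                dominated = any(o != s and o[0] <= s[0] and o[1] <= s[1] for o in solutions)
                if not dominated:
                    front.append(s)
            return sorted(front)

        # Hypothetical configurations scored as (total cost, lead time in days).
        configs = [(120, 14), (150, 7), (110, 21), (140, 9), (160, 6), (130, 14)]
        print(pareto_front(configs))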