858 research outputs found

    Energy-Efficient Fault-Tolerant Scheduling Algorithm for Real-Time Tasks in Cloud-Based 5G Networks

    Green computing has become a hot topic in both academia and industry. Fifth-generation (5G) mobile networks impose stringent requirements on energy efficiency and latency. The cloud radio access network provides efficient resource use, high performance, and high availability for 5G systems. However, hardware and software faults in cloud systems may cause failures in delivering real-time services. Developing fault-tolerance techniques can effectively enhance the reliability and availability of real-time cloud services. The core idea of fault-tolerant scheduling is to introduce redundancy so that tasks can still be completed in the event of permanent or transient system failures. Nevertheless, this redundancy incurs extra overhead for cloud systems, resulting in considerable energy consumption. In this paper, we focus on the problem of reducing energy consumption while providing fault tolerance. We first propose a novel primary-backup-based fault-tolerant scheduling architecture for real-time tasks in the cloud environment. Based on this architecture, we present an energy-efficient fault-tolerant scheduling algorithm for real-time tasks (EFTR). EFTR adopts a proactive strategy to increase the system's processing capacity and employs a rearrangement mechanism to improve resource utilization. Simulation experiments are conducted on the CloudSim platform to evaluate the feasibility and effectiveness of EFTR. Compared with existing fault-tolerant scheduling algorithms, EFTR shows excellent performance in energy conservation and task schedulability.
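
    To make the primary-backup idea concrete, the following is a minimal sketch (not the EFTR algorithm itself) of placing a primary and a passive backup copy of a real-time task on two different hosts while preferring the lower-energy host; all class and function names, the energy model, and the units are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Task:
        task_id: int
        length: float    # workload in million instructions (assumed unit)
        deadline: float  # absolute deadline in seconds

    @dataclass
    class Host:
        host_id: int
        mips: float                # processing capacity
        power_busy: float          # power draw in watts while executing
        available_at: float = 0.0  # time at which the host becomes free

    def exec_time(task: Task, host: Host) -> float:
        return task.length / host.mips

    def energy(task: Task, host: Host) -> float:
        return exec_time(task, host) * host.power_busy

    def schedule_with_backup(task: Task, hosts: List[Host]) -> Optional[Tuple[Host, Host]]:
        """Place a primary and a passive backup copy on two distinct hosts.

        The primary goes to the feasible host with the lowest energy cost; the
        backup is reserved on a different host so the task can still meet its
        deadline if the primary host fails. This is a simplified passive-backup
        rule, not the proactive/rearrangement strategy of EFTR.
        """
        feasible = [h for h in hosts
                    if h.available_at + exec_time(task, h) <= task.deadline]
        if len(feasible) < 2:
            return None  # cannot tolerate a single host failure
        feasible.sort(key=lambda h: energy(task, h))
        primary, backup = feasible[0], feasible[1]
        primary.available_at += exec_time(task, primary)
        # The backup slot is consumed only if the primary actually fails, so
        # backup.available_at is not advanced here.
        return primary, backup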

    Resource Allocation in Networking and Computing Systems: A Security and Dependability Perspective

    In recent years, there has been a trend toward integrating networking and computing systems, whose management is becoming increasingly complex. Resource allocation is one of the crucial aspects of managing such systems and is affected by this increased complexity. Resource allocation strategies aim to maximize performance, system utilization, and profit by considering virtualization technologies, heterogeneous resources, context awareness, and other features. In such a complex scenario, security and dependability are vital concerns that need to be considered in future computing and networking systems in order to provide advanced services, such as mission-critical applications. This paper provides a comprehensive survey of the existing literature that considers security and dependability for resource allocation in computing and networking systems. Current research works are categorized by the type of allocated resources for different technologies, scenarios, issues, attributes, and solutions. The paper presents research works on resource allocation that consider security and dependability, both singularly and jointly, and discusses future research directions on resource allocation. It shows that only a few works consider security and dependability, even singularly, in resource allocation for future computing and networking systems, and highlights the importance of jointly considering security and dependability and the need for intelligent, adaptive, and robust solutions. This paper aims to help researchers effectively consider security and dependability in future networking and computing systems.

    Mobile cloud computing and network function virtualization for 5G systems

    The recent growth in the number of smart mobile devices and the emergence of complex multimedia mobile applications have brought new challenges to the design of wireless mobile networks. The envisioned Fifth-Generation (5G) systems are equipped with different technical solutions that can accommodate the increasing demand for high-data-rate, latency-limited, energy-efficient, and reliable mobile communication networks. Mobile Cloud Computing (MCC) is a key technology in 5G systems that enables the offloading of computationally heavy applications, such as augmented or virtual reality, object recognition, or gaming, from mobile devices to cloudlet or cloud servers, which are connected to wireless access points either directly or through finite-capacity backhaul links. Given the battery-limited nature of mobile devices, mobile cloud computing is deemed an important enabler for the provision of such advanced applications. However, because of the variability of the communication network through which the cloud or cloudlet is accessed, offloading computational tasks may incur unpredictable energy expenditure or intolerable delay in the communication between mobile devices and the cloud or cloudlet servers. Therefore, the design of a mobile cloud computing system is investigated by jointly optimizing the allocation of radio, computational, and backhaul resources in both uplink and downlink directions. Moreover, the users selected for cloud offloading need to have an energy consumption that is smaller than the amount required for local computing, which is achieved by means of user scheduling. Motivated by the application-centric drift of 5G systems and advances in smart device manufacturing technologies, a new breed of mobile applications is being developed that is immersive, ubiquitous, and highly collaborative in nature. For example, Augmented Reality (AR) mobile applications have inherent collaborative properties in terms of data collection in the uplink, computing at the cloud, and data delivery in the downlink. Therefore, the optimization of the shared computing and communication resources in MCC not only benefits from the joint allocation of both resources, but can also be enhanced more efficiently by sharing the offloaded data and computations among multiple users. As a result, a resource allocation approach whereby transmitted, received, and processed data are partially shared among the users leads to more efficient utilization of the communication and computational resources. As a suggested architecture in 5G systems, MCC decouples the computing functionality from the platform location through software virtualization, allowing flexible provisioning of the provided services. Another virtualization-based technology in 5G systems is Network Function Virtualization (NFV), which prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution to this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. For that reason, the development of fault-tolerant virtualization strategies for MCC and NFV is necessary to ensure the reliability of the provided services.
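
    As a concrete illustration of the user-scheduling criterion described above, the sketch below compares the energy of local execution against a simple transmit-energy model for offloading; the energy model and all parameter names are assumptions for illustration, not the joint uplink/downlink optimization developed in the thesis.

    def local_energy(cycles: float, energy_per_cycle_j: float) -> float:
        # Energy to execute the task on the device itself.
        return cycles * energy_per_cycle_j

    def offload_energy(input_bits: float, uplink_rate_bps: float, tx_power_w: float) -> float:
        # Radio energy = transmit power * time needed to upload the task input.
        return tx_power_w * (input_bits / uplink_rate_bps)

    def should_offload(cycles: float, energy_per_cycle_j: float,
                       input_bits: float, uplink_rate_bps: float,
                       tx_power_w: float) -> bool:
        # Offload only if it costs the device less energy than computing locally.
        return offload_energy(input_bits, uplink_rate_bps, tx_power_w) \
            < local_energy(cycles, energy_per_cycle_j)

    # Example: a 1-Gcycle task at 1 nJ/cycle locally (1.0 J) vs. uploading
    # 2 Mbit at 10 Mbit/s with 0.5 W transmit power (0.1 J) -> offload.
    print(should_offload(1e9, 1e-9, 2e6, 10e6, 0.5))  # True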

    DeepFT: Fault-tolerant edge computing using a self-supervised deep surrogate model

    The emergence of latency-critical AI applications has been supported by the evolution of the edge computing paradigm. However, edge solutions are typically resource-constrained, posing reliability challenges due to heightened contention for compute capacity and faulty application behavior under overload conditions. Although the large amount of generated log data can be mined for fault prediction, labeling this data for training is a manual process and thus a limiting factor for automation. Due to this, many companies resort to unsupervised fault-tolerance models. Yet, failure models of this kind can lose accuracy when they need to adapt to non-stationary workloads and diverse host characteristics. Thus, we propose a novel modeling approach, DeepFT, to proactively avoid system overloads and their adverse effects by optimizing task scheduling decisions. DeepFT uses a deep surrogate model to accurately predict and diagnose faults in the system, and co-simulation-based self-supervised learning to dynamically adapt the model in volatile settings. Experimentation on an edge cluster shows that DeepFT can outperform state-of-the-art methods in fault-detection and QoS metrics. Specifically, DeepFT gives the highest F1 scores for fault detection, reducing service deadline violations by up to 37% while also improving response time by up to 9%.
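
    The sketch below illustrates only the general pattern of scoring host states with a learned surrogate and steering scheduling away from anomalous hosts, using a toy autoencoder whose reconstruction error serves as a fault score; it is an assumed example, not the DeepFT model or its co-simulation-based training loop.

    import torch
    import torch.nn as nn

    class UtilizationAE(nn.Module):
        """Toy autoencoder over per-host utilization vectors."""
        def __init__(self, n_features: int = 4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, 2), nn.ReLU())
            self.decoder = nn.Linear(2, n_features)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.decoder(self.encoder(x))

    def fault_scores(model: nn.Module, host_states: torch.Tensor) -> torch.Tensor:
        # Higher reconstruction error = more anomalous (potentially faulty) host.
        with torch.no_grad():
            recon = model(host_states)
        return ((recon - host_states) ** 2).mean(dim=1)

    # Usage: prefer placing the next task on the least anomalous host.
    model = UtilizationAE()
    states = torch.rand(8, 4)  # 8 hosts x (cpu, ram, disk, net) utilization
    best_host = int(torch.argmin(fault_scores(model, states)))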

    A Survey and Future Directions on Clustering: From WSNs to IoT and Modern Networking Paradigms

    Many Internet of Things (IoT) networks are created as an overlay over traditional ad-hoc networks such as Zigbee. Moreover, IoT networks can resemble ad-hoc networks over networks that support device-to-device (D2D) communication, e.g., D2D-enabled cellular networks and WiFi-Direct. In these ad-hoc types of IoT networks, efficient topology management is a crucial requirement, particularly in massive-scale deployments. Traditionally, clustering has been recognized as a common approach for topology management in ad-hoc networks, e.g., in Wireless Sensor Networks (WSNs). Topology management in WSNs and ad-hoc IoT networks has many design commonalities, as both need to transfer data to the destination hop by hop. Thus, WSN clustering techniques can presumably be applied to topology management in ad-hoc IoT networks. This requires a comprehensive study of WSN clustering techniques and an investigation of their applicability to ad-hoc IoT networks. In this article, we conduct a survey of this field based on the objectives of clustering, such as reducing energy consumption and load balancing, as well as the network properties relevant for efficient clustering in IoT, such as network heterogeneity and mobility. Beyond that, we investigate the advantages and challenges of clustering when IoT is integrated with modern computing and communication technologies such as Blockchain, Fog/Edge computing, and 5G. This survey provides useful insights into research on IoT clustering, allows a broader understanding of its design challenges for IoT networks, and sheds light on its future applications in modern technologies integrated with IoT.
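
    As one concrete example of the kind of WSN clustering technique such a survey covers, the sketch below implements a LEACH-style probabilistic cluster-head election; the parameter names and simplifications are assumptions, and this is classic background rather than a method proposed in the article.

    import random

    def leach_threshold(p: float, round_no: int) -> float:
        # Election threshold T(n) for nodes that have not served as cluster
        # head in the current epoch of 1/p rounds (classic LEACH formula).
        return p / (1.0 - p * (round_no % int(1.0 / p)))

    def elect_cluster_heads(node_ids, p=0.1, round_no=0, eligible=None, rng=random):
        # Each eligible node independently becomes a cluster head with
        # probability T(n); on average a fraction p of nodes are elected.
        eligible = set(node_ids) if eligible is None else eligible
        t = leach_threshold(p, round_no)
        return [n for n in node_ids if n in eligible and rng.random() < t]

    heads = elect_cluster_heads(range(100), p=0.1, round_no=3)
    # Non-head nodes would then join the nearest head and route data through it.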