
    IEEE Access Special Section Editorial: Recent Advances on Radio Access and Security Methods in 5G Networks

    Serviceability is the ability of a network to serve user equipments (UEs) within desired requirements (e.g., throughput, delay, and packet loss). High serviceability is considered one of the key foundational criteria for a successful fog radio access infrastructure satisfying the Internet of Things paradigm in the 5G era. In the article by Dao et al., "Adaptive resource balancing for serviceability maximization in fog radio access networks," the authors propose an adaptive resource balancing (ARB) scheme for serviceability maximization in fog radio access networks, wherein resource block (RB) utilization among remote radio heads (RRHs) is balanced using the backpressure algorithm with respect to a time-varying network topology arising from potential RRH mobility. The optimal UE selection for service migration from a high-RB-utilization RRH to its neighboring low-RB-utilization RRHs is determined by the Hungarian method to minimize RB occupation after moving the service. Analytical results reveal that the proposed ARB scheme provides substantial gains over the standalone capacity-aware, max-rate, and cache-aware UE association approaches in terms of serviceability, availability, and throughput.
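    The UE-to-RRH migration step described above is an assignment problem: each migrating UE must be matched to one target RRH so that total RB occupation is minimized, which is exactly what the Hungarian method solves. The sketch below illustrates the idea with a hypothetical cost matrix (the node names and values are invented for illustration); at this toy scale a brute-force search stands in for the Hungarian algorithm and returns the same optimal assignment.

    ```python
    from itertools import permutations

    def min_cost_assignment(cost):
        """Optimal assignment of migrating UEs (rows) to target RRHs (cols).
        Brute force stands in for the Hungarian method at this toy scale:
        both minimize total RB occupation after migration."""
        n = len(cost)
        best_perm, best_cost = None, float("inf")
        for perm in permutations(range(n)):
            c = sum(cost[ue][perm[ue]] for ue in range(n))
            if c < best_cost:
                best_perm, best_cost = perm, c
        return best_perm, best_cost

    # Hypothetical cost matrix: cost[ue][rrh] = RBs the UE would occupy
    # at each neighboring low-RB-utilization RRH after migration.
    cost = [
        [4, 2, 8],
        [3, 7, 5],
        [6, 4, 2],
    ]
    assignment, total = min_cost_assignment(cost)
    # assignment maps each migrating UE to one target RRH, one UE per RRH
    ```

    For realistic problem sizes one would use a true O(n^3) Hungarian implementation (e.g., SciPy's `linear_sum_assignment`) rather than factorial-time enumeration.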

    A Survey on UAV-enabled Edge Computing: Resource Management Perspective

    Edge computing facilitates low-latency services at the network's edge by distributing computation, communication, and storage resources within the geographic proximity of mobile and Internet-of-Things (IoT) devices. Recent advancements in Unmanned Aerial Vehicle (UAV) technology have opened new opportunities for edge computing in military operations, disaster response, and remote areas where traditional terrestrial networks are limited or unavailable. In such environments, UAVs can be deployed as aerial edge servers or relays to facilitate edge computing services. This form of computing is also known as UAV-enabled Edge Computing (UEC), which offers several unique benefits such as mobility, line-of-sight, flexibility, computational capability, and cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices are typically very limited in the context of UEC. Efficient resource management is, therefore, a critical research challenge in UEC. In this article, we present a survey on the existing research in UEC from the resource management perspective. We identify a conceptual architecture, different types of collaborations, wireless communication models, research directions, key techniques, and performance indicators for resource management in UEC. We also present a taxonomy of resource management in UEC. Finally, we identify and discuss some open research challenges that can stimulate future research directions for resource management in UEC. (Comment: 36 pages, accepted to ACM CSU)
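    A core resource-management decision in UEC is whether a device should execute a task locally or offload it to a UAV edge server. A minimal latency-only sketch of that trade-off is below; the model and all parameter values are illustrative assumptions, not taken from the survey (real formulations also account for energy, queuing, and UAV trajectory).

    ```python
    def offload_decision(task_bits, local_cps, uav_cps, link_bps,
                         cycles_per_bit=1000):
        """Toy latency model: compare local execution against offloading
        a task to a UAV edge server over an air-to-ground link.
        cps = CPU cycles per second; bps = link bits per second."""
        cycles = task_bits * cycles_per_bit
        t_local = cycles / local_cps                         # local compute time (s)
        t_offload = task_bits / link_bps + cycles / uav_cps  # transmit + remote compute
        return ("offload", t_offload) if t_offload < t_local else ("local", t_local)

    # Assumed scenario: 1 Mbit task, 1 GHz device CPU, 10 GHz UAV server,
    # 20 Mbps air-to-ground link.
    choice, latency = offload_decision(1e6, 1e9, 1e10, 20e6)
    # Here offloading wins: 0.05 s uplink + 0.10 s remote compute
    # beats 1.0 s of local computation.
    ```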

    Modern computing: vision and challenges

    Over the past six decades, the computing systems field has experienced significant transformations, profoundly impacting society with landmark developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have been continuously evolving and adapting to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, edge computing, and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments, such as serverless computing, quantum computing, and on-device AI. Trends emerge when one traces the technological trajectory: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.


    QoS-Aware Energy Management and Node Scheduling Schemes for Sensor Network-Based Surveillance Applications

    Recent advances in wireless technologies have led to an increased deployment of Wireless Sensor Networks (WSNs) for a plethora of diverse surveillance applications in health, military, and environmental domains. However, sensor nodes in WSNs usually suffer from short device lifetimes due to severe energy constraints and therefore cannot guarantee to meet the Quality of Service (QoS) needs of various applications. This is proving to be a major hindrance to the widespread adoption of WSNs for such applications. Therefore, to extend the lifetime of WSNs, it is critical to optimize the energy usage of sensor nodes that are often deployed in remote and hostile terrains. To this effect, several energy management schemes have been proposed recently. Node scheduling is one such strategy that can prolong the lifetime of WSNs and also helps balance the workload among the sensor nodes. In this article, we discuss energy management techniques for WSNs with a particular emphasis on node scheduling, and propose an energy management life-cycle model and an energy conservation pyramid to extend the network lifetime of WSNs. We provide a detailed classification and evaluation of various node scheduling schemes in terms of their ability to fulfill essential QoS requirements, namely coverage, connectivity, fault tolerance, and security. We consider essential design issues such as network type, deployment pattern, and sensing model in the classification process. Furthermore, we discuss the operational characteristics of the schemes with their related merits and demerits. We compare the efficacy of a few well-known graph-based scheduling schemes with suitable performance-analysis graphs. Finally, we study challenges in designing and implementing node scheduling schemes from a QoS perspective and outline open research problems.
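    One common family of coverage-preserving node-scheduling schemes keeps only a subset of nodes awake while guaranteeing that every surveillance target remains covered, letting the rest sleep to save energy. The sketch below shows a greedy version of that idea; the deployment data (node names and target sets) is hypothetical, and the greedy choice is a heuristic, not any specific scheme from the article.

    ```python
    def greedy_active_set(coverage, targets):
        """Greedy coverage-preserving scheduling sketch: activate the
        fewest nodes whose sensing ranges cover all targets; the
        remaining nodes can sleep to conserve energy."""
        uncovered = set(targets)
        active = []
        while uncovered:
            # pick the node covering the most still-uncovered targets
            node = max(coverage, key=lambda n: len(coverage[n] & uncovered))
            gained = coverage[node] & uncovered
            if not gained:
                raise ValueError(f"targets {uncovered} cannot be covered")
            active.append(node)
            uncovered -= gained
        return active

    # Hypothetical deployment: node -> set of targets in sensing range
    coverage = {
        "n1": {"t1", "t2"},
        "n2": {"t2", "t3"},
        "n3": {"t3", "t4"},
        "n4": {"t1", "t4"},
    }
    active = greedy_active_set(coverage, ["t1", "t2", "t3", "t4"])
    # Two active nodes suffice here; n2 and n4 can sleep this round.
    ```

    Rotating which minimal cover is active from round to round is what balances energy drain across the network.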

    Software-Defined Networks for Resource Allocation in Cloud Computing: A Survey

    Cloud computing has a shared set of resources, including physical servers, networks, storage, and user applications. Resource allocation is a critical issue for cloud computing, especially in Infrastructure-as-a-Service (IaaS). The decision-making process in the cloud computing network is non-trivial, as it is handled by switches and routers. Moreover, network concept drift resulting from changing user demands is among the problems affecting cloud computing. The cloud data center needs agile and elastic network control functions, together with control of computing resources, to ensure proper virtual machine (VM) operation, traffic performance, and energy conservation. Software-Defined Networking (SDN) offers new opportunities to blueprint resource management for cloud service allocation while dynamically updating the traffic requirements of running VMs. Including an SDN to manage the infrastructure in a cloud data center better empowers cloud computing, making it easier to allocate resources. In this survey, we examine resource allocation in cloud computing based on SDN. We note that various related studies do not address all the relevant requirements. This study is intended to enhance resource allocation mechanisms that involve both the cloud computing and SDN domains. Consequently, we analyze resource allocation mechanisms utilized by various researchers, and categorize and evaluate them based on the measured parameters and the problems presented. This survey also contributes to a better understanding of the core of current research, allowing researchers to obtain further information about possible cloud computing strategies relevant to IaaS resource allocation.
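    A concrete instance of the IaaS allocation problem discussed above is VM placement: mapping VM resource demands onto physical hosts. The sketch below uses a classic first-fit heuristic along a single CPU dimension; it is a generic illustration with invented numbers, not a mechanism from any of the surveyed works (real SDN-aware allocators also weigh network topology and traffic).

    ```python
    def first_fit(vm_demands, host_capacity):
        """First-fit VM placement sketch: put each VM on the first
        already-open host with enough spare CPU capacity, opening a
        new host only when none fits."""
        hosts = []       # remaining CPU capacity of each opened host
        placement = []   # placement[i] = host index assigned to VM i
        for demand in vm_demands:
            for i, spare in enumerate(hosts):
                if demand <= spare:
                    hosts[i] -= demand
                    placement.append(i)
                    break
            else:  # no open host fits: open a fresh one
                hosts.append(host_capacity - demand)
                placement.append(len(hosts) - 1)
        return placement, len(hosts)

    # Assumed workload: VM CPU-core demands packed onto 16-core hosts
    placement, n_hosts = first_fit([8, 7, 6, 5, 4], 16)
    # Five VMs fit on two hosts: [8, 7] on host 0 and [6, 5, 4] on host 1.
    ```

    First-fit is popular because it is simple and within a constant factor of the optimal packing, which is NP-hard to compute exactly.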

    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring tremendous amounts of data to centralized cloud servers, fog/edge computing can reduce processing delay and network traffic significantly. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure. Aiming to explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects 10 articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, which can help readers grasp recent trends as well as state-of-the-art algorithms in fog/edge computing technologies.