
    Study on Energy Consumption and Coverage of Hierarchical Cooperation of Small Cell Base Stations in Heterogeneous Networks

    The demand for communication services in the era of intelligent terminals is unprecedented. To meet this development, modern wireless communications must provide higher-quality services with higher energy efficiency in terms of system capacity and quality of service (QoS), which can be achieved through higher data rates, wider coverage, and higher band utilization. In this paper, we propose a way to offload users from a macro base station (MBS) using a hierarchical distribution of small cell base stations (SBS). The connection probability is the key indicator for carrying out the offloading operation. Furthermore, we measure the service performance of the system through the coverage probability, a conditional probability with a given SNR threshold as the condition, that is, the probability that a user attains the minimum communication quality when connected to the different base stations. Then, the user-centered total energy consumption of the system is obtained for the cases where the MBS and the SBSs, respectively, serve each user. The simulation results show that hierarchical SBS cooperation in heterogeneous networks can provide a higher total system coverage probability at a lower overall system energy consumption than the MBS.
    Comment: 6 pages, 7 figures, accepted by ICACT201
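    As a hedged illustration of the coverage metric described above, a minimal LaTeX sketch is given below. The abstract does not state the paper's system model or notation, so the symbols P_cov, tau, and A_b follow the standard SNR-threshold definition of coverage and are assumptions, not the paper's own formulation.

```latex
% Minimal sketch of an SNR-threshold coverage definition; the notation is
% assumed, not taken from the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Given that a user is associated with base station $b$ (the MBS or an SBS tier),
an event denoted $A_b$, and given an SNR threshold $\tau$, the coverage
probability is
\[
  P_{\mathrm{cov}}(\tau) = \mathbb{P}\bigl(\mathrm{SNR}_b > \tau \,\big|\, A_b\bigr),
\]
i.e.\ the probability that the connected user attains at least the minimum
required communication quality.
\end{document}
```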

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are, to various degrees, resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services.
    Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal
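    To make the four-perspective categorization concrete, the following is a minimal Python sketch of how a surveyed work could be tagged along those axes. The enum members are limited to the resources, objectives, and layers named in the abstract; the ALLOCATION member, the free-text resource_use field, and the example work are illustrative assumptions, not the paper's actual taxonomy.

```python
# Hedged sketch: classify a surveyed work along the taxonomy's four perspectives
# (resource type, management objective, resource location, resource use).
from dataclasses import dataclass
from enum import Enum


class ResourceType(Enum):
    COMPUTATION = "computation"
    COMMUNICATION = "communication"
    DATA = "data"
    STORAGE = "storage"
    ENERGY = "energy"


class Objective(Enum):
    ESTIMATION = "estimation"
    DISCOVERY = "discovery"
    SHARING = "sharing"
    ALLOCATION = "allocation"  # assumed member, not listed in the abstract


class Location(Enum):
    END_DEVICE = "end device"
    EDGE_DEVICE = "edge device"
    CLOUD = "cloud"


@dataclass
class SurveyedWork:
    title: str
    resource_types: set[ResourceType]
    objectives: set[Objective]
    locations: set[Location]
    resource_use: str  # the abstract does not enumerate values for this axis


# Example classification of a hypothetical offloading paper.
work = SurveyedWork(
    title="Hypothetical edge offloading study",
    resource_types={ResourceType.COMPUTATION, ResourceType.COMMUNICATION},
    objectives={Objective.ALLOCATION},
    locations={Location.EDGE_DEVICE, Location.CLOUD},
    resource_use="offloading of latency-sensitive tasks",
)
print(work)
```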

    Do we all really know what a fog node is? Current trends towards an open definition

    Fog computing has emerged as a promising technology that can bring cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, how data centers can build the cloud infrastructure, and how applications can make use of this infrastructure, there is no common picture of what fog computing, and particularly a fog node as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a “mini-cloud” located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety of, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing said mini-cloud. Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, use case, or application. In this paper, we first survey the state of the art in technologies for fog computing nodes, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts and the lessons learned from their implementation, and show how a conceptual framework is emerging towards a unifying fog node definition. We focus on the core functionalities of a fog node as well as on the accompanying opportunities and challenges towards their practical realization in the near future.
    Postprint (author's final draft)