    Self organising cloud cells: a resource efficient network densification strategy

    Network densification is envisioned as the key enabler for the 2020 vision that requires cellular systems to grow in capacity by hundreds of times to cope with the unprecedented traffic growth witnessed since the advent of broadband on the move. However, increased energy consumption and complex mobility management remain the two main challenges that must be addressed before network densification can be exploited on a wide scale. In the wake of these challenges, this paper proposes and evaluates a novel dense network deployment strategy for increasing the capacity of future cellular systems without sacrificing energy efficiency or compromising mobility performance. Our deployment architecture consists of smart small cells, called cloud nodes, which provide data coverage to individual users on an on-demand basis while taking into account the spatial and temporal dynamics of user mobility and traffic. The decision to activate the cloud nodes, such that certain system-level performance objectives are met, is carried out by the overlaying macrocell based on a fuzzy-logic framework. We also compare the proposed architecture with a conventional macrocell-only deployment and a pure microcell-based dense deployment in terms of blocking probability, handover probability and energy efficiency, and discuss and quantify the trade-offs therein.
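
    The abstract does not spell out the fuzzy rule base itself; the sketch below only illustrates the general idea of a macrocell scoring a candidate cloud node for activation from fuzzified traffic-load and user-mobility inputs. All membership functions, rules and thresholds are illustrative assumptions, not values from the paper.

```python
# Illustrative fuzzy-logic activation decision for a "cloud node".
# Membership functions, rule weights and thresholds are assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def activation_score(traffic_load, user_speed):
    """Fuzzy score in [0, 1]: how strongly the macrocell should switch the node on.

    traffic_load: offered load in the node's footprint, normalised to [0, 1].
    user_speed:   mean user speed in m/s (fast users are better kept on the macrocell).
    """
    load_high = tri(traffic_load, 0.4, 1.0, 1.6)
    load_low = tri(traffic_load, -0.6, 0.0, 0.6)
    speed_low = tri(user_speed, -10, 0, 10)      # pedestrian / nomadic users
    speed_high = tri(user_speed, 5, 15, 25)      # vehicular users

    # Rule base (min for AND, weighted-average defuzzification):
    rules = [
        (min(load_high, speed_low), 1.0),   # hot spot of slow users -> activate
        (min(load_high, speed_high), 0.5),  # hot spot of fast users -> maybe
        (load_low, 0.0),                    # little demand -> keep node asleep
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

if __name__ == "__main__":
    print(activation_score(traffic_load=0.8, user_speed=2.0))   # high -> activate
    print(activation_score(traffic_load=0.1, user_speed=20.0))  # low  -> stay off
```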

    Evaluator services for optimised service placement in distributed heterogeneous cloud infrastructures

    Optimal placement of demanding real-time interactive applications in a distributed heterogeneous cloud very quickly results in a complex trade-off between the application constraints and resource capabilities. It requires very detailed information about the requirements of the applications and the capabilities of the available resources. In this paper, we present a mathematical model for the service optimization problem and study the concept of evaluator services as a flexible and efficient solution to this complex problem. An evaluator service is a service probe that is deployed in a particular runtime environment to assess the feasibility and cost-effectiveness of deploying a specific application in that environment. We discuss how this concept can be incorporated in a general framework such as the FUSION architecture, and discuss the key benefits and trade-offs of doing evaluator-based optimal service placement in widely distributed heterogeneous cloud environments.
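
    As a rough illustration of the evaluator-service idea, a probe deployed in one execution zone could report feasibility plus a cost score for a candidate service, and an orchestrator could pick the cheapest feasible zone. The class, field names and cost formula below are assumptions for illustration; they are not the FUSION API.

```python
# Illustrative sketch of an "evaluator service" probe interface (assumed names).
from dataclasses import dataclass

@dataclass
class Evaluation:
    feasible: bool      # can this runtime host the service at all?
    score: float        # lower is cheaper / better placement
    details: dict       # free-form measurements backing the verdict

class EvaluatorService:
    """Probe deployed inside one execution zone to judge a candidate service."""

    def __init__(self, gpu_available, free_cpu_cores, rtt_to_users_ms):
        self.gpu_available = gpu_available
        self.free_cpu_cores = free_cpu_cores
        self.rtt_to_users_ms = rtt_to_users_ms

    def evaluate(self, needs_gpu, cpu_cores, max_latency_ms):
        """Return feasibility and a toy cost score for one service manifest."""
        feasible = ((not needs_gpu or self.gpu_available)
                    and cpu_cores <= self.free_cpu_cores
                    and self.rtt_to_users_ms <= max_latency_ms)
        # Toy cost: latency plus a penalty for eating into scarce CPU capacity.
        score = self.rtt_to_users_ms + 10.0 * cpu_cores / max(self.free_cpu_cores, 1)
        return Evaluation(feasible, score, {
            "rtt_ms": self.rtt_to_users_ms,
            "free_cores": self.free_cpu_cores,
        })

# An orchestrator queries one evaluator per zone and picks the cheapest feasible placement.
zones = {"edge-a": EvaluatorService(True, 4, 8.0),
         "dc-b":   EvaluatorService(True, 64, 45.0)}
results = {z: e.evaluate(needs_gpu=True, cpu_cores=2, max_latency_ms=20.0)
           for z, e in zones.items()}
best = min((z for z, r in results.items() if r.feasible),
           key=lambda z: results[z].score, default=None)
print(best)  # -> "edge-a" under these illustrative numbers
```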

    Transparent and scalable client-side server selection using netlets

    Replication of web content in the Internet has been found to improve the service response time, performance and reliability offered by web services. When working with such distributed server systems, the location of servers with respect to client nodes, in addition to server load conditions, affects the service response time perceived by clients. This is due to the characteristics of the network path segments through which client requests are routed. Hence, a number of researchers have advocated making server selection decisions at the client side of the network. In this paper, we present a transparent approach to client-side server selection in the Internet using Netlet services. Netlets are autonomous, nomadic mobile software components which persist and roam in the network independently, providing predefined network services. In this application, Netlet-based services embedded with intelligence to support server selection are deployed by servers close to potential client communities to set up dynamic service decision points within the network. An anycast address is used to identify the available distributed decision points in the network. Each service decision point transparently directs client requests to the best-performing server based on its built-in intelligence, supported by real-time measurements from probes sent by the Netlet to each server. It is shown that the resulting system provides a client-side server selection solution which is server-customisable, scalable and fault-transparent.
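
    The core of a decision point is "probe each replica, redirect the client to the currently best one". The sketch below shows that idea only; probing by TCP connect time, the hostnames and the timeout are assumptions, and the real Netlet measurements and redirection mechanism may differ.

```python
# Illustrative decision-point behaviour: probe each replica and pick the fastest.
import socket
import time

REPLICAS = [("eu.example.org", 80), ("us.example.org", 80), ("ap.example.org", 80)]

def probe(host, port, timeout=1.0):
    """Return the TCP connect time in seconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def select_server(replicas=REPLICAS):
    """Pick the replica with the lowest measured connect time."""
    measured = [(probe(h, p), (h, p)) for h, p in replicas]
    reachable = [(rtt, hp) for rtt, hp in measured if rtt is not None]
    return min(reachable)[1] if reachable else None

if __name__ == "__main__":
    print("redirecting client to", select_server())
```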

    Energy efficiency in heterogeneous wireless access networks

    In this article, we bring forward the important aspect of energy savings in wireless access networks. We specifically focus on the energy-saving opportunities in the recently evolving heterogeneous networks (HetNets), both Single-RAT and Multi-RAT. Issues such as sleep/wake-up cycles and interference management are discussed for co-channel Single-RAT HetNets. In addition, a simulation-based study for LTE macro-femto HetNets is presented, indicating the need for dynamic, energy-efficient resource management schemes. Multi-RAT HetNets also come with challenges such as network integration, combined resource management and network selection. Along with a discussion of these challenges, we also investigate the performance of the conventional WLAN-first network selection mechanism in terms of energy efficiency (EE) and suggest that EE can be improved by the application of intelligent call admission control policies.
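
    To make the WLAN-first versus energy-aware contrast concrete, a toy selection policy is sketched below. The per-bit energy figures and the load penalty are illustrative assumptions, not measurements from the article, and the article's admission control policies are not reproduced here.

```python
# Illustrative contrast: "WLAN-first" selection vs. a simple energy-aware policy.
# Energy-per-bit figures and the load penalty are assumed values.

def wlan_first(wlan_available, wlan_load, **_):
    """Conventional rule: always take WLAN when it is in range."""
    return "wlan" if wlan_available else "cellular"

def energy_aware(wlan_available, wlan_load, wlan_nj_per_bit=8.0,
                 cellular_nj_per_bit=12.0, load_penalty=30.0):
    """Admit to the RAT with the lower estimated energy per delivered bit.

    A heavily loaded WLAN wastes energy on contention and retransmissions,
    so its per-bit cost is inflated by the load penalty before comparison.
    """
    if not wlan_available:
        return "cellular"
    effective_wlan_cost = wlan_nj_per_bit + load_penalty * wlan_load
    return "wlan" if effective_wlan_cost < cellular_nj_per_bit else "cellular"

for load in (0.05, 0.5, 0.9):
    print(load, wlan_first(True, load), energy_aware(True, load))
# A lightly loaded WLAN is still preferred; a congested one is handed to cellular.
```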

    Adaptive fog service placement for real-time topology changes in Kubernetes clusters

    Recent trends have caused a shift from services deployed solely in monolithic data centers in the cloud to services deployed in the fog (e.g. roadside units for smart highways, support services for IoT devices). Simultaneously, the variety and number of IoT devices have grown rapidly, along with their reliance on cloud services. Additionally, many of these devices are now themselves capable of running containers, allowing them to execute some services previously deployed in the fog. The combination of IoT devices and fog computing has many advantages in terms of efficiency and user experience, but the scale, volatile topology and heterogeneous network conditions of the fog and the edge also present problems for service deployment scheduling. Cloud service scheduling often takes a wide array of parameters into account to calculate optimal solutions. However, the algorithms used are generally not capable of handling the scale and volatility of the fog. This paper presents a scheduling algorithm, named "Swirly", for large-scale fog and edge networks, which is capable of adapting to changes in network conditions and connected devices. The algorithm details are presented and implemented as a service using the Kubernetes API. This implementation is validated and benchmarked, showing that a single-threaded Swirly service is easily capable of managing service meshes for at least 300,000 devices in soft real time.
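
    The abstract says the algorithm is implemented as a service against the Kubernetes API; the minimal sketch below only shows how such a service could watch node topology changes with the official Python client and feed them into an incremental scheduling step. It does not reproduce Swirly itself, and the handler is a stub.

```python
# Minimal sketch of reacting to cluster topology changes via the Kubernetes API,
# in the spirit of (but not reproducing) the Swirly service.
# Requires the official "kubernetes" Python client.
from kubernetes import client, config, watch

def on_topology_change(event_type, node_name, known_nodes):
    """Placeholder for the scheduler's incremental update step."""
    print(f"{event_type}: {node_name}; fog now has {len(known_nodes)} nodes")

def watch_fog_nodes():
    config.load_kube_config()          # use load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    known_nodes = set()
    for event in watch.Watch().stream(v1.list_node):
        node_name = event["object"].metadata.name
        if event["type"] == "DELETED":
            known_nodes.discard(node_name)
        else:                           # ADDED or MODIFIED
            known_nodes.add(node_name)
        on_topology_change(event["type"], node_name, known_nodes)

if __name__ == "__main__":
    watch_fog_nodes()
```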

    Is There Light at the Ends of the Tunnel? Wireless Sensor Networks for Adaptive Lighting in Road Tunnels

    Existing deployments of wireless sensor networks (WSNs) are often conceived as stand-alone monitoring tools. In this paper, we report instead on a deployment where the WSN is a key component of a closed-loop control system for adaptive lighting in operational road tunnels. WSN nodes along the tunnel walls report light readings to a control station, which closes the loop by setting the intensity of the lamps to match a legislated curve. The ability to dynamically match the lighting levels to the actual environmental conditions improves tunnel safety and reduces power consumption. The use of WSNs in a closed-loop system, combined with the real-world, harsh setting of operational road tunnels, imposes tighter requirements on the quality and timeliness of sensed data, as well as on the reliability and lifetime of the network. In this work, we test to what extent mainstream WSN technology meets these challenges, using a dedicated design that nevertheless relies on well-established techniques. The paper describes the hardware/software architecture we devised, focusing on the WSN component, and analyzes its performance through experiments in a real, operational tunnel.
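
    The closed-loop idea, stripped to its essentials, is: aggregate light readings from the WSN nodes and nudge lamp intensity toward the legislated setpoint for the current exterior conditions. The sketch below illustrates only that idea; the setpoint curve, proportional gain and clamping are illustrative assumptions, not the controller described in the paper.

```python
# Minimal sketch of the closed-loop idea: aggregate WSN light readings and move
# the lamp dimming level toward an assumed "legislated" setpoint.

def legislated_setpoint(exterior_lux, zone="entrance"):
    """Target in-tunnel luminance (toy curve): brighter entrance on bright days."""
    factor = {"entrance": 0.05, "interior": 0.01}[zone]
    return factor * exterior_lux

def control_step(readings_lux, exterior_lux, lamp_level, zone="entrance", gain=0.002):
    """One proportional-control iteration; lamp_level is a dimming ratio in [0, 1]."""
    measured = sum(readings_lux) / len(readings_lux)   # aggregate the WSN samples
    error = legislated_setpoint(exterior_lux, zone) - measured
    return min(1.0, max(0.0, lamp_level + gain * error))

# Example: entrance zone slightly under-lit for the current exterior light.
level = control_step(readings_lux=[180, 195, 170], exterior_lux=4000, lamp_level=0.4)
print(round(level, 3))   # lamps nudged up because the zone is below its setpoint
```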