
    Exploiting the power of multiplicity: a holistic survey of network-layer multipath

    The Internet is inherently a multipath network: an underlying network with only a single path connecting its nodes would be debilitatingly fragile. Unfortunately, traditional Internet technologies have been designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even when the underlying network is richly connected and offers multiple redundant paths. Computer networks can exploit the power of multiplicity, through which a diverse collection of paths is pooled as a single resource, to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent use of multiple paths) as well as increased reliability and fault tolerance (through the use of multiple paths in backup/redundant arrangements). Many emerging trends in networking signify that the Internet's future will be multipath, including the use of multipath technology in data center computing; the ready availability of multiple heterogeneous radio interfaces (such as Wi-Fi and cellular) in wireless devices; the ubiquity of mobile devices that are multihomed with heterogeneous access networks; and the development and standardization of multipath transport protocols such as Multipath TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues, namely the control-plane problem of how to compute and select the routes and the data-plane problem of how to split a flow over the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing, along with a broad-ranging survey of the vast literature on network-layer multipathing. We also highlight open issues and identify directions for future work.
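    To make the data-plane problem concrete, the sketch below shows one common way to split traffic over precomputed paths: weighted, flow-consistent splitting, where hashing a flow's 5-tuple keeps all of its packets on a single path and avoids reordering. This is an illustrative example of the general technique, not an algorithm taken from the survey; the path names, weights and flow tuple are made-up values.

    import hashlib
    from typing import Sequence, Tuple

    def pick_path(flow_id: Tuple, paths: Sequence[str], weights: Sequence[float]) -> str:
        """Map a flow to one of the candidate paths in proportion to the path weights."""
        assert len(paths) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
        digest = hashlib.sha256(repr(flow_id).encode()).digest()
        point = int.from_bytes(digest[:8], "big") / 2**64  # hash mapped uniformly to [0, 1)
        cumulative = 0.0
        for path, weight in zip(paths, weights):
            cumulative += weight
            if point < cumulative:
                return path
        return paths[-1]  # guard against floating-point rounding

    # Example: split flows roughly 70/30 between two disjoint paths.
    flow = ("10.0.0.1", "10.0.0.2", 51812, 443, "TCP")
    print(pick_path(flow, ["path_A", "path_B"], [0.7, 0.3]))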

    Advanced Technologies for Device-to-device Communications Underlaying Cellular Networks

    The past few years have seen a major change in cellular networks, as explosive growth in data demand requires ever more network capacity and backhaul capability. New wireless technologies have been proposed to tackle these challenges. One of the emerging technologies is device-to-device (D2D) communications. It enables two cellular user equipments (UEs) in proximity to communicate with each other directly, reusing cellular radio resources. D2D can thereby offload data traffic from central base stations (BSs) and significantly improve the spectrum efficiency of a cellular network, and it is thus one of the key technologies for next-generation cellular systems. Radio resource management (RRM) for D2D communications and how to effectively exploit the potential benefits of D2D are two paramount challenges to D2D communications underlaying cellular networks. In this thesis, we focus on four problems related to these two challenges. In Chapter 2, we use mixed integer non-linear programming (MINLP) to model and solve the RRM optimisation problems for D2D communications. First, we consider the RRM optimisation problem for D2D communications underlaying the single-carrier frequency division multiple access (SC-FDMA) system and devise a heuristic sub-optimal solution to it. Then we propose an optimised RRM mechanism for multi-hop D2D communications with network coding (NC). NC has been proven to be an efficient technique for improving the throughput of ad-hoc networks, and we therefore apply it to multi-hop D2D communications. We devise an optimal solution to the RRM optimisation problem for multi-hop D2D communications with NC. In Chapter 3, we investigate how the location of the D2D transmitter in a cell may affect the RRM mechanism and the performance of D2D communications. We propose two optimised location-based RRM mechanisms for D2D, which maximise the throughput and the energy efficiency of D2D, respectively. We show that, by considering the location information of the D2D transmitter, the MINLP problem of RRM for D2D communications can be transformed into a convex optimisation problem, which can be efficiently solved by the method of Lagrange multipliers. In Chapter 4, we propose a D2D-based P2P file sharing system called Iunius. The Iunius system features: 1) a wireless P2P protocol based on the BitTorrent protocol in the application layer; 2) a simple centralised routing mechanism for multi-hop D2D communications; 3) an interference cancellation technique for conventional cellular (CC) uplink communications; and 4) a radio resource management scheme to mitigate the interference between CC and D2D communications that share the cellular uplink radio resources while maximising the throughput of D2D communications. We show that with the properly designed application-layer protocol and the optimised RRM for D2D communications, Iunius can significantly improve the quality of experience (QoE) of users and offload local traffic from the base station. In Chapter 5, we combine LTE-unlicensed with D2D communications. We utilise LTE-unlicensed to enable the operation of D2D in unlicensed bands. We show that not only can this improve the throughput of D2D communications, but it also allows D2D to work in the cell-centre area, which is normally regarded as a “forbidden area” for D2D in existing works. We achieve these results mainly through numerical optimisation and simulations. We utilise a wide range of numerical optimisation theory in our work. Instead of applying general numerical optimisation algorithms directly to the optimisation problems, we adapt them to the specific problems, thereby reducing the computational complexity. Finally, we evaluate our proposed algorithms and systems through sophisticated numerical simulations. We have developed a complete system-level simulation framework for D2D communications and have open-sourced it on GitHub: https://github.com/mathwuyue/py-wireless-sys-sim
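    As a pointer to the kind of solution technique Chapter 3 names, the sketch below applies the method of Lagrange multipliers to a generic convex power-allocation problem (maximising a sum-rate subject to a total power budget), which yields the classic water-filling allocation. This is a minimal illustration under an assumed objective and constraint, not the thesis's actual D2D RRM formulation.

    from typing import List

    def water_filling(channel_gains: List[float], p_total: float, tol: float = 1e-9) -> List[float]:
        """Allocate p_total across channels to maximise sum(log(1 + g_i * p_i)).

        The KKT conditions give p_i = max(0, mu - 1/g_i) with water level
        mu = 1/lambda; we bisect on mu until the power budget is met.
        """
        lo, hi = 0.0, p_total + max(1.0 / g for g in channel_gains)
        while hi - lo > tol:
            mu = (lo + hi) / 2
            used = sum(max(0.0, mu - 1.0 / g) for g in channel_gains)
            if used > p_total:
                hi = mu
            else:
                lo = mu
        return [max(0.0, lo - 1.0 / g) for g in channel_gains]

    # Example: three resource blocks with different (made-up) channel gains, unit power budget.
    print(water_filling([2.0, 1.0, 0.5], p_total=1.0))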

    The 7th Conference of PhD Students in Computer Science


    Enabling Distributed Applications Optimization in Cloud Environment

    The past few years have seen dramatic growth in the popularity of public clouds, such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Container-as-a-Service (CaaS). In both commercial and scientific fields, quick environment setup and application deployment have become a mandatory requirement. As a result, more and more organizations choose cloud environments instead of setting up the environment themselves from scratch. Cloud computing resources such as server engines, orchestration, and the underlying server resources are served to users as a service by a cloud provider. Most of the applications that run in public clouds are distributed applications, also called multi-tier applications, which require a set of servers, a service ensemble, that cooperate and communicate to jointly provide a certain service or accomplish a task. However, only a few research efforts have been devoted to providing an overall solution for distributed application optimization in the public cloud. In this dissertation, we present three systems that enable distributed application optimization: (1) the first part introduces DocMan, a toolset for detecting containerized applications’ dependencies in CaaS clouds; (2) the second part introduces a system to deal with hot/cold blocks in distributed applications; (3) the third part introduces a system named FP4S, a novel fragment-based parallel state recovery mechanism that can handle many simultaneous failures for a large number of concurrently running stream applications.

    Smart PIN: performance and cost-oriented context-aware personal information network

    The next generation of networks will involve the interconnection of heterogeneous individual networks such as WPAN, WLAN, WMAN and cellular networks, adopting IP as the common infrastructural protocol and providing a virtually always-connected network. Furthermore, many devices enable easy acquisition and storage of information such as pictures, movies, emails, etc. The resulting information overload and the divergent characteristics of the content make it difficult for users to handle their data manually. Consequently, there is a need for personalised automatic services that enable data exchange across heterogeneous networks and devices. To support these personalised services, user-centric approaches for data delivery across the heterogeneous network are also required. In this context, this thesis proposes Smart PIN, a novel performance- and cost-oriented context-aware Personal Information Network. Smart PIN's architecture is detailed, including its network, service and management components. Within the service component, two novel schemes for efficient delivery of context and content data are proposed: the Multimedia Data Replication Scheme (MDRS) and the Quality-oriented Algorithm for Multiple-source Multimedia Delivery (QAMMD). MDRS supports efficient data accessibility among distributed devices using data replication based on a utility function and a minimum data set. QAMMD employs a buffer underflow avoidance scheme for streaming, which achieves high multimedia quality without content adaptation to network conditions. Simulation models for MDRS and QAMMD were built based on various heterogeneous network scenarios. Additionally, multiple-source streaming based on QAMMD was implemented as a prototype and tested in an emulated network environment. Comparative tests show that MDRS and QAMMD perform significantly better than other approaches.
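    For a rough sense of what utility-driven replication with a minimum data set can look like, the sketch below always replicates a mandatory minimum set first and then greedily fills the remaining device storage by a utility density (access frequency per megabyte). The utility function, field names and example items are assumptions for illustration only; the abstract does not specify MDRS's actual formulation.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Item:
        name: str
        size_mb: float
        access_freq: float      # expected accesses per day on this device (assumed metric)
        in_minimum_set: bool    # items every device must hold

    def select_replicas(items: List[Item], capacity_mb: float, alpha: float = 1.0) -> List[str]:
        """Pick items to replicate locally: minimum data set first, then by utility per MB."""
        chosen, used = [], 0.0
        # 1) The minimum data set is always replicated.
        for it in (i for i in items if i.in_minimum_set):
            chosen.append(it.name)
            used += it.size_mb
        # 2) Fill the remaining capacity greedily by utility density.
        rest = sorted((i for i in items if not i.in_minimum_set),
                      key=lambda i: alpha * i.access_freq / i.size_mb, reverse=True)
        for it in rest:
            if used + it.size_mb <= capacity_mb:
                chosen.append(it.name)
                used += it.size_mb
        return chosen

    # Example with made-up items and a 400 MB local budget.
    data = [Item("contacts.db", 5, 50, True), Item("trip.mp4", 700, 2, False),
            Item("album1", 300, 10, False)]
    print(select_replicas(data, capacity_mb=400))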

    Virtual Machine Image Management for Elastic Resource Usage in Grid Computing

    Grid Computing has evolved from an academic concept to a powerful paradigm in the area of high performance computing (HPC). Over the last few years, powerful Grid computing solutions were developed that allow the execution of computational tasks on distributed computing resources. Grid computing has recently attracted many commercial customers. To enable commercial customers to process sensitive data in the Grid, strong security mechanisms must be put in place to secure the customers' data. In contrast, the development of Cloud Computing, which entered the scene in 2006, was driven by industry: it was designed with security in mind from the beginning. Virtualization technology is used to separate users, e.g., by placing each user of a system inside a separate virtual machine, which prevents them from accessing other users' data. The use of virtualization in the context of Grid computing was examined early on and found to be a promising approach to counter the security threats that have appeared with commercial customers. One main part of the work presented in this thesis is the Image Creation Station (ICS), a component which allows users to administer their virtual execution environments (virtual machines) themselves and which is responsible for managing and distributing the virtual machines in the entire system. In contrast to Cloud computing, which was designed to allow even inexperienced users to execute their computational tasks in the Cloud easily, Grid computing is much more complex to use. The ICS makes the Grid easier to use by overcoming traditional limitations such as having to install the required software on the compute nodes used to execute the computational tasks. This allows users to bring commercial software to the Grid for the first time, without the need for local administrators to install the software on computing nodes that are accessible by all users. Moreover, the administrative burden is shifted from the local Grid site's administrator to the users or to experienced software providers, which allows individually tailored virtual machines to be provided to each user. The ICS is not only responsible for enabling users to manage their virtual machines themselves; it also ensures that the virtual machines are available on every site that is part of the distributed Grid system. A second aspect of the presented solution focuses on the elasticity of the system by automatically acquiring free external resources depending on the system's current workload. In contrast to existing systems, the presented approach allows the system's administrator to add or remove resource sets during runtime without needing to restart the entire system. Moreover, the presented solution allows users not only to use existing Grid resources but also to scale out to Cloud resources and use them on demand. By ensuring that unused resources are shut down as soon as possible, the computational costs of a given task are minimized. In addition, the presented solution allows each user to specify which resources can be used to execute a particular job. This is useful when a job processes sensitive data, e.g., data that is not allowed to leave the company. To obtain a comparable function in today's systems, a user must submit her computational task to a particular resource set, losing the ability to schedule automatically if more than one set of resources can be used.
In addition, the proposed solution prioritizes each set of resources by taking different metrics into account (e.g., the level of trust or the computational costs) and tries to schedule a job to the resources with the highest priority first. Notably, the priority often mirrors the physical distance from the resources to the user: a locally available cluster usually has a higher priority due to its high level of trust and its computational costs, which are usually lower than the costs of using Cloud resources. This scheduling strategy therefore minimizes the cost of job execution while improving security at the same time, since data is not necessarily transferred to remote resources and the probability of attacks by malicious external users is reduced. Bringing both components together results in a system that adapts automatically to the current workload by using external (e.g., Cloud) resources together with existing locally available resources or Grid sites, and that provides individually tailored virtual execution environments to the system's users.
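    As a toy illustration of the scheduling behaviour described above (not the thesis's actual algorithm), the sketch below filters the resource sets a job is allowed to use, ranks them by a priority that weighs trust against cost, and picks the highest-priority set with free capacity. The data fields, weights and example values are assumptions.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ResourceSet:
        name: str
        trust: float         # 0..1, e.g. local cluster ~ 1.0, public cloud lower (assumed scale)
        cost_per_hour: float
        free_slots: int

    def schedule(job_allowed: List[str], sets: List[ResourceSet],
                 w_trust: float = 1.0, w_cost: float = 0.1) -> Optional[str]:
        """Return the name of the highest-priority allowed resource set with capacity."""
        candidates = [s for s in sets if s.name in job_allowed and s.free_slots > 0]
        candidates.sort(key=lambda s: w_trust * s.trust - w_cost * s.cost_per_hour,
                        reverse=True)
        return candidates[0].name if candidates else None  # None -> keep the job queued

    pool = [ResourceSet("local-cluster", 1.0, 0.02, 0),
            ResourceSet("partner-grid", 0.8, 0.05, 4),
            ResourceSet("public-cloud", 0.5, 0.30, 100)]
    # A job handling sensitive data may exclude the public cloud entirely.
    print(schedule(["local-cluster", "partner-grid"], pool))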

    Collaborative streaming of on demand videos for mobile devices

    The 3G and LTE technologies have made video on demand a popular form of entertainment for users on the go. However, insufficient bandwidth is an obstacle to providing high-quality, smooth video playout in cellular networks. The objective of the proposed PhD research is to provide users with high-quality video streaming and minimal stalling time by aggregating bandwidth from ubiquitous nearby devices that may be using different radio networks.
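    One simple way to realise the bandwidth-aggregation idea is to assign upcoming video chunks to nearby helper devices so that each chunk finishes downloading as early as possible, letting the aggregate rate approach the sum of the individual links. The sketch below is illustrative only; the device names, link rates and fixed chunk size are assumptions, and the abstract does not specify the actual collaborative streaming protocol.

    from typing import Dict, List

    def assign_chunks(chunk_ids: List[int], est_bw_mbps: Dict[str, float]) -> Dict[str, List[int]]:
        """Greedy earliest-finish-time assignment of equal-sized chunks to helper devices."""
        finish_time = {dev: 0.0 for dev in est_bw_mbps}   # seconds of queued work per device
        plan: Dict[str, List[int]] = {dev: [] for dev in est_bw_mbps}
        chunk_megabits = 8.0                              # assumed 1 MB chunks
        for cid in chunk_ids:
            # Give the chunk to the device that would finish it earliest.
            dev = min(finish_time, key=lambda d: finish_time[d] + chunk_megabits / est_bw_mbps[d])
            finish_time[dev] += chunk_megabits / est_bw_mbps[dev]
            plan[dev].append(cid)
        return plan

    # Example: own LTE link plus a Wi-Fi Direct helper and a Bluetooth helper (made-up rates).
    print(assign_chunks(list(range(10)), {"lte": 12.0, "wifi_peer": 20.0, "bt_peer": 2.0}))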
