    Clustering algorithm for D2D communication in next generation cellular networks : thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering, Massey University, Auckland, New Zealand

    Next generation cellular networks will support many complex services for smartphones, vehicles, and other devices. To accommodate such services, cellular networks need to go beyond the capabilities of previous generations. Device-to-Device (D2D) communication is a key technology that can help fulfil some of the requirements of future networks. The telecommunication industry expects a significant increase in the density of mobile devices, which puts more pressure on centralized schemes and poses risks of outages, poor spectral efficiency, and low data rates. Recent studies have shown that a large part of cellular traffic pertains to sharing popular content, which highlights the need for decentralized and distributed approaches to managing multimedia traffic. Content-sharing via D2D clustered networks has emerged as a popular approach for alleviating the burden on the cellular network. Several studies have established that D2D communication in clusters can improve spectral and energy efficiency and achieve low latency while increasing network capacity. Effective content-sharing among users, however, requires appropriate clustering strategies. The aim of this thesis is therefore to design and compare clustering approaches for D2D communication targeting content-sharing applications. Currently, most researched and implemented clustering schemes are centralized or predominantly dependent on the Evolved Node B (eNB). This thesis proposes a distributed architecture that supports clustering approaches for multimedia traffic. A content-sharing network is presented in which some D2D User Equipments (DUEs) function as content distributors for nearby devices. Two promising techniques, namely Content-Centric Networking and Network Virtualization, are utilized to build a distributed architecture that supports efficient content delivery. We propose to use clustering at the user level for content distribution. A weighted multi-factor clustering algorithm is proposed for grouping the DUEs that share a common interest. Performance parameters such as energy consumption, area spectral efficiency, and throughput are considered for evaluating the proposed algorithm, and the effect of the number of clusters on these parameters is also discussed. The proposed algorithm is further modified to allow a trade-off between fairness and the other performance parameters. A comprehensive simulation study demonstrates that the proposed clustering algorithm is more flexible and outperforms several well-known and state-of-the-art algorithms. The clustering process is subsequently evaluated from an individual user's perspective for further performance improvement. We argue that some users sharing common interests are better served by the eNB than by joining clusters. We utilize machine learning algorithms, namely Deep Neural Networks, Random Forests, and Support Vector Machines, to identify the users that are better served by the eNB and form clusters for the remaining users. This user segregation scheme can be used in conjunction with most clustering algorithms, including the proposed multi-factor scheme. A comprehensive simulation study demonstrates that with such user segregation, the performance of individual users, as well as of the whole network, can be significantly improved in terms of throughput, energy consumption, and fairness.
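    To illustrate the idea of a weighted multi-factor clustering score, the sketch below combines normalized distance, residual energy, and common-interest overlap when attaching DUEs to candidate cluster heads; DUEs out of range of every head stay with the eNB. The specific factors, weights, and head-selection heuristic are assumptions made for the example, not the exact algorithm of the thesis.

        # Illustrative sketch (not the thesis's exact algorithm): score candidate
        # cluster heads by a weighted combination of normalized factors and attach
        # each remaining DUE to the best-scoring head within D2D range.
        from dataclasses import dataclass, field
        import math

        @dataclass
        class DUE:
            uid: int
            x: float
            y: float
            energy: float                      # residual battery level in [0, 1] (assumed)
            interests: set = field(default_factory=set)

        def weighted_score(head: DUE, member: DUE, w_dist=0.4, w_energy=0.3,
                           w_interest=0.3, max_range=50.0) -> float:
            """Higher is better; the weights are hypothetical tuning parameters."""
            dist = math.hypot(head.x - member.x, head.y - member.y)
            if dist > max_range:
                return -math.inf               # out of D2D range
            dist_term = 1.0 - dist / max_range # closer is better
            interest_term = (len(head.interests & member.interests)
                             / max(1, len(head.interests | member.interests)))
            return w_dist * dist_term + w_energy * head.energy + w_interest * interest_term

        def form_clusters(dues, num_heads=3):
            # Pick the highest-energy DUEs as cluster heads (a simple heuristic),
            # then attach every other DUE to the head with the best weighted score.
            heads = sorted(dues, key=lambda d: d.energy, reverse=True)[:num_heads]
            clusters = {h.uid: [] for h in heads}
            for d in dues:
                if d in heads:
                    continue
                best = max(heads, key=lambda h: weighted_score(h, d))
                if weighted_score(best, d) > -math.inf:
                    clusters[best.uid].append(d.uid)
                # otherwise the DUE is left unclustered, i.e. served by the eNB
            return clusters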

    Quality of Experience monitoring and management strategies for future smart networks

    One of the major driving forces of the service and network provider market is the user's perceived service quality and expectations, referred to as the user's Quality of Experience (QoE). QoE is particularly critical for network providers, who are already challenged with the multimedia engineering problems (e.g. processing, compression) typical of traditional networks; they need the right QoE monitoring and management mechanisms, which can have a significant impact on their budget (e.g. by reducing user churn). Moreover, due to the rapid growth of mobile networks and multimedia services, it is crucial for Internet Service Providers (ISPs) to accurately monitor and manage the QoE of the delivered services while keeping computational resources and power consumption at low levels. The objective of this thesis is to investigate QoE monitoring and management for future networks. The research, developed during the PhD programme, first reviews the state of the art and the concept of Virtual Probes (vProbes). It then proposes a QoE monitoring and management solution, two agent-based solutions for QoE monitoring in LTE-Advanced networks, a QoE monitoring solution for multimedia services in 5G networks, and an SDN-based approach for QoE management of multimedia services.
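    As a rough illustration of the monitoring idea, the sketch below shows a probe-like function that maps sampled network metrics of a flow to an estimated QoE score and flags when a management action might be warranted. The metric set, weights, and thresholds are assumptions for illustration only, not the vProbe design proposed in the thesis.

        # Illustrative sketch only: a "virtual probe" that samples network-level
        # metrics for a multimedia flow and maps them to a rough 1-5 QoE score.
        # Metric names, penalties, and thresholds are assumed for the example.
        from dataclasses import dataclass

        @dataclass
        class FlowSample:
            loss_rate: float      # packet loss ratio in [0, 1]
            delay_ms: float       # one-way delay in milliseconds
            jitter_ms: float      # delay variation in milliseconds

        def estimate_qoe(sample: FlowSample) -> float:
            """Map network metrics to a 1 (bad) .. 5 (excellent) score."""
            score = 5.0
            score -= min(2.5, sample.loss_rate * 50)                   # loss penalized most
            score -= min(1.5, max(0.0, sample.delay_ms - 100) / 200)   # delay above 100 ms
            score -= min(1.0, sample.jitter_ms / 50)                   # jitter penalty
            return max(1.0, score)

        def needs_management_action(samples, threshold=3.0) -> bool:
            # Management hook: trigger e.g. an SDN re-routing or adaptation policy
            # when the average estimated QoE drops below an acceptability threshold.
            avg = sum(estimate_qoe(s) for s in samples) / len(samples)
            return avg < threshold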

    Edge Computing for Extreme Reliability and Scalability

    The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all of these data at a central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the processing task is pushed to the network edge, introducing the concept of Edge Computing. Processing the information closer to the data source (e.g., on gateways and on edge micro-servers) not only reduces the heavy workload of the central cloud but also decreases latency for real-time applications by avoiding the unreliable and unpredictable network delay of communicating with the central cloud.
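    The sketch below illustrates this idea with an edge gateway that aggregates raw sensor readings locally and forwards only compact summaries upstream; the window size, summary fields, and upload hook are assumptions made for the example, not a specific system described in the thesis.

        # Illustrative sketch: an edge gateway that aggregates raw IoT readings
        # locally and forwards only compact summaries to the central cloud,
        # reducing upstream traffic and cloud workload.
        from statistics import mean
        from typing import Callable

        class EdgeGateway:
            def __init__(self, window: int, upload: Callable[[dict], None]):
                self.window = window      # number of readings aggregated per summary
                self.upload = upload      # e.g. an MQTT/HTTP publisher towards the cloud
                self.buffer = []

            def on_reading(self, sensor_id: str, value: float):
                self.buffer.append((sensor_id, value))
                if len(self.buffer) >= self.window:
                    self.flush()

            def flush(self):
                if not self.buffer:
                    return
                values = [v for _, v in self.buffer]
                summary = {
                    "count": len(values),
                    "mean": mean(values),
                    "min": min(values),
                    "max": max(values),
                }
                self.upload(summary)      # only the summary leaves the edge
                self.buffer.clear()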

    Digital Twin in the IoT context: a survey on technical features, scenarios and architectural models

    Digital Twin is an emerging concept that is gaining attention in various industries. It refers to the ability to clone a physical object into a software counterpart. The softwarized object, termed the logical object, reflects all the important properties and characteristics of the original object within a specific application context. To fully determine the expected properties of the Digital Twin, this paper surveys the state of the art, starting from the original definition within the manufacturing industry. It takes into account related proposals emerging in other fields, namely Augmented and Virtual Reality (e.g., avatars), multi-agent systems, and virtualization. This survey thereby allows for the identification of an extensive set of Digital Twin features that point to the “softwarization” of physical objects. To consolidate a shared Digital Twin definition, a set of foundational properties is identified and proposed as a common ground outlining the essential characteristics (must-haves) of a Digital Twin. Once the Digital Twin definition has been consolidated, its technical and business value is discussed in terms of applicability and opportunities. Four application scenarios illustrate how the Digital Twin concept can be used and how some industries are applying it; these scenarios also lead to a generic Digital Twin architectural model. The analysis is then complemented by the identification of software architecture models and guidelines in order to present a general functional framework for the Digital Twin. Finally, the paper analyses a set of possible evolution paths for the Digital Twin, considering its possible usage as a major enabler of the softwarization process.
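    A minimal sketch of such a logical object is given below: it mirrors the reported state of a physical device, keeps a history of changes, and hosts application logic that operates on the twin instead of the device. The attribute names, the update mechanism, and the failure rule are illustrative assumptions, not a prescribed Digital Twin API.

        # Illustrative sketch: a minimal "logical object" that mirrors the state
        # of a physical device and records its history. Names and thresholds are
        # assumed for the example only.
        import time

        class DigitalTwin:
            def __init__(self, device_id: str, properties: dict):
                self.device_id = device_id
                self.state = dict(properties)   # last known state of the physical object
                self.history = []               # timestamped previous states

            def sync(self, reported_state: dict):
                """Apply a state report coming from the physical counterpart."""
                self.history.append((time.time(), dict(self.state)))
                self.state.update(reported_state)

            def predict_failure(self) -> bool:
                # Placeholder for application logic (simulation, analytics, ...)
                # that runs against the twin rather than the physical device.
                return self.state.get("temperature_c", 0.0) > 90.0

        # Usage: a twin for a hypothetical motor, updated from telemetry.
        motor_twin = DigitalTwin("motor-42", {"temperature_c": 25.0, "rpm": 0})
        motor_twin.sync({"temperature_c": 95.5, "rpm": 1500})
        assert motor_twin.predict_failure()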