25 research outputs found

    An intelligent fuzzy logic-based content and channel aware downlink scheduler for scalable video over OFDMA wireless systems

    The recent advancements of wireless technology and applications make downlink scheduling and resource allocation an important research topic. In this paper, we consider the problem of downlink scheduling for multi-user scalable video streaming over OFDMA channels. The video streams are precoded using a scalable video coding (SVC) scheme. We propose a fuzzy logic-based scheduling algorithm, which prioritises transmission to different users by considering both video content and channel conditions. Furthermore, a novel analytical model and a new performance metric have been developed for the performance analysis of the proposed scheduling algorithm. The obtained results show that the proposed algorithm outperforms content-blind, channel-aware scheduling algorithms with a gain of as much as 19% in terms of the number of supported users. The proposed algorithm also allows for a fairer allocation of resources among users across the entire sector coverage, enhancing video quality at the edges of the cell while minimising the degradation experienced by users closer to the base station.
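
    As a rough illustration of how such a scheduler could combine these two inputs, the sketch below fuzzifies per-user channel quality and SVC content importance, applies a small rule base, and defuzzifies to a scheduling priority. The membership functions, rule set and user values are illustrative assumptions, not the authors' exact design.

```python
# Illustrative sketch of a fuzzy content/channel-aware scheduling priority.
# Membership functions, rule base and thresholds are assumptions for illustration.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_priority(snr_db, content_importance):
    """Map channel quality (dB) and SVC-layer importance (0..1) to a priority in [0, 1]."""
    # Fuzzify channel quality.
    ch_poor = tri(snr_db, -5, 0, 10)
    ch_good = tri(snr_db, 5, 15, 25)
    ch_excellent = tri(snr_db, 20, 30, 40)
    # Fuzzify content importance (e.g., SVC base layer ~ 1.0, top enhancement layer ~ 0.1).
    imp_low = tri(content_importance, -0.5, 0.0, 0.6)
    imp_high = tri(content_importance, 0.4, 1.0, 1.5)
    # Rule base: min for AND, weighted-average defuzzification.
    rules = [
        (min(ch_excellent, imp_high), 1.0),   # great channel, important data -> top priority
        (min(ch_good, imp_high), 0.8),
        (min(ch_poor, imp_high), 0.5),        # protect important data even on poor channels
        (min(ch_excellent, imp_low), 0.4),
        (min(ch_good, imp_low), 0.3),
        (min(ch_poor, imp_low), 0.1),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * p for w, p in rules) / total if total else 0.0

# Schedule the next OFDMA resource block to the highest-priority user.
users = {"u1": (28.0, 1.0), "u2": (12.0, 0.3), "u3": (3.0, 0.9)}
priorities = {u: fuzzy_priority(snr, imp) for u, (snr, imp) in users.items()}
print(max(priorities, key=priorities.get), priorities)
```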

    Context-Aware Self-Healing for Small Cell Networks

    Recent years have seen a continuous increase in the use of mobile communications. To cope with the growing traffic, recently deployed technologies have deepened the adoption of small cells (low-powered base stations) to serve areas with high demand or coverage issues, where macrocells can be either unsuccessful or inefficient. In addition, new cellular and non-cellular technologies (e.g. WiFi) coexist with legacy ones, across multiple deployment schemes (macrocells, small cells), in what is known as heterogeneous networks (HetNets). Due to the huge complexity of HetNets, their operation, administration and management (OAM) has become increasingly difficult. To overcome this, the NGMN Alliance and the 3GPP defined the Self-Organizing Network (SON) paradigm, aiming to automate OAM procedures in order to reduce their costs and increase the resulting performance. One key focus of SON is the self-healing of the network, covering the automatic detection of problems, the diagnosis of their causes, their compensation and their recovery. Until recently, SON mechanisms have been based solely on the analysis of alarms and performance indicators. However, on the one hand, this approach has become very limited given the complexity of the scenarios, particularly in indoor cellular environments, where the deployment of small cells, their coexistence with multiple telecommunications systems and the nature of those environments (in terms of propagation, coverage overlapping, fast demand changes and users' mobility) introduce many challenges for classic SON. On the other hand, modern user equipment (e.g. smartphones), equipped with powerful processors, sensors and applications, generates a huge amount of context information. Context refers to those variables not directly associated with the telecommunication service, but with the terminals and their environment, including the user's position, applications, social data, etc.

    This context can be an invaluable source of information for the management of the network, in an approach we have denominated context-aware SON, which is the approach proposed in this thesis. To develop this concept, the thesis follows a top-down approach. Firstly, the characteristics of cellular deployments are assessed, especially for indoor small cell networks; in those scenarios, the need for context-aware SON is evaluated and considered indispensable. Secondly, a new cellular architecture is defined to integrate both context information and SON mechanisms in the management plane of the mobile network, specifying how context becomes an integral part of cellular OAM/SON; a real-world implementation of the architecture is also proposed. Thirdly, from the established general SON architecture, a logical self-healing framework is defined to support the context-aware healing mechanisms to be developed. Fourthly, different self-healing algorithms are defined depending on the failures to be managed and the conditions of the considered scenario. These mechanisms are based on probabilistic analysis, making use of both context and network data for the detection and diagnosis of cellular issues. The conditions for the implementation of these methods are assessed, and their applicability is evaluated by means of simulators and testbed trials. The results show important improvements in performance and capabilities in comparison to previous methods, demonstrating the relevance of the proposed approach.
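
    A minimal sketch of the kind of probabilistic diagnosis step described above, written here in a naive-Bayes style that combines network indicators with context-derived symptoms. The fault classes, symptoms and probabilities are illustrative assumptions, not the thesis's actual models.

```python
# Illustrative naive-Bayes-style diagnosis combining network KPIs with context data.
# Fault classes, symptoms and probabilities are assumptions for illustration only.
from math import log, exp

priors = {"coverage_hole": 0.2, "interference": 0.3, "cell_outage": 0.1, "normal": 0.4}

# P(symptom = True | cause); in practice these would be learned from labelled history.
likelihoods = {
    "low_rsrp":           {"coverage_hole": 0.90, "interference": 0.30, "cell_outage": 0.80, "normal": 0.05},
    "high_drop_rate":     {"coverage_hole": 0.60, "interference": 0.80, "cell_outage": 0.90, "normal": 0.02},
    "users_indoors":      {"coverage_hole": 0.70, "interference": 0.50, "cell_outage": 0.50, "normal": 0.50},  # context
    "neighbour_load_up":  {"coverage_hole": 0.40, "interference": 0.40, "cell_outage": 0.90, "normal": 0.10},
}

def diagnose(observed):
    """Return the posterior distribution over fault causes given observed boolean symptoms."""
    scores = {}
    for cause, prior in priors.items():
        logp = log(prior)
        for symptom, present in observed.items():
            p = likelihoods[symptom][cause]
            logp += log(p if present else 1.0 - p)
        scores[cause] = logp
    # Normalise the log-scores back to probabilities.
    m = max(scores.values())
    unnorm = {c: exp(s - m) for c, s in scores.items()}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

obs = {"low_rsrp": True, "high_drop_rate": True, "users_indoors": True, "neighbour_load_up": True}
print(diagnose(obs))
```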

    A Survey and Future Directions on Clustering: From WSNs to IoT and Modern Networking Paradigms

    Many Internet of Things (IoT) networks are created as an overlay over traditional ad-hoc networks such as Zigbee. Moreover, IoT networks can resemble ad-hoc networks when built over networks that support device-to-device (D2D) communication, e.g., D2D-enabled cellular networks and WiFi-Direct. In these ad-hoc types of IoT networks, efficient topology management is a crucial requirement, particularly in massive-scale deployments. Traditionally, clustering has been recognized as a common approach for topology management in ad-hoc networks, e.g., in Wireless Sensor Networks (WSNs). Topology management in WSNs and ad-hoc IoT networks has many design commonalities, as both need to transfer data to the destination hop by hop. Thus, WSN clustering techniques can presumably be applied for topology management in ad-hoc IoT networks. This requires a comprehensive study of WSN clustering techniques and an investigation of their applicability to ad-hoc IoT networks. In this article, we conduct a survey of this field based on the objectives for clustering, such as reducing energy consumption and load balancing, as well as the network properties relevant for efficient clustering in IoT, such as network heterogeneity and mobility. Beyond that, we investigate the advantages and challenges of clustering when IoT is integrated with modern computing and communication technologies such as Blockchain, Fog/Edge computing, and 5G. This survey provides useful insights into research on IoT clustering, allows a broader understanding of its design challenges for IoT networks, and sheds light on its future applications in modern technologies integrated with IoT.
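
    As a concrete reference point for the clustering objectives discussed (energy saving, load balancing), the sketch below shows a LEACH-style cluster-head election biased by residual energy, followed by nearest-head member assignment. The desired cluster-head fraction, the energy weighting and the example topology are illustrative assumptions.

```python
# Illustrative LEACH-style cluster-head election sketch, biased by residual energy.
# Parameters (desired cluster-head fraction, energy weighting) are assumptions.
import random

P = 0.1  # desired fraction of nodes acting as cluster heads per round

def elect_cluster_heads(nodes, round_idx):
    """nodes: dict node_id -> {'energy': 0..1, 'was_head_recently': bool}."""
    heads = []
    threshold = P / (1 - P * (round_idx % int(1 / P)))
    for node_id, state in nodes.items():
        if state["was_head_recently"]:
            continue  # rotate the cluster-head role to balance load
        # Scale the election threshold by residual energy, favouring healthy nodes.
        if random.random() < threshold * state["energy"]:
            heads.append(node_id)
    return heads

def assign_members(nodes, heads, positions):
    """Attach every non-head node to its nearest cluster head (its hop-by-hop relay)."""
    clusters = {h: [] for h in heads}
    for node_id in nodes:
        if node_id in heads or not heads:
            continue
        nearest = min(heads, key=lambda h: sum((a - b) ** 2
                                               for a, b in zip(positions[node_id], positions[h])))
        clusters[nearest].append(node_id)
    return clusters

nodes = {i: {"energy": random.uniform(0.3, 1.0), "was_head_recently": False} for i in range(20)}
positions = {i: (random.uniform(0, 100), random.uniform(0, 100)) for i in range(20)}
heads = elect_cluster_heads(nodes, round_idx=0)
print(heads, assign_members(nodes, heads, positions))
```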

    Contribution to reliable end-to-end communication over 5G networks using advanced techniques

    5G cellular communication, especially with the vast bandwidth made available by millimeter-wave spectrum, is a promising technology to meet the coming demand for very high data rates. These networks can support new use cases, such as vehicle-to-vehicle communication and augmented reality, thanks to novel features such as network slicing along with the multi-gigabit-per-second data rates of mmWave. Nevertheless, 5G cellular networks suffer from some shortcomings, especially at high frequencies, because of the intermittent nature of the channel as the frequency rises. The non-line-of-sight state is one of the significant issues the new generation encounters; this drawback is due to the strong susceptibility of higher frequencies to blockage caused by obstacles and misalignment. This characteristic can impair the ability of the widely deployed reliable transport-layer protocol, TCP, to attain high throughput and low latency over a fair network. The protocol needs to adjust its congestion window size based on the current state of the network; however, TCP cannot adjust its congestion window efficiently, which leads to throughput degradation. This thesis presents a comprehensive analysis of reliable end-to-end communications in 5G networks and analyzes TCP's behavior in one of the 3GPP's well-known scenarios, called urban deployment. Furthermore, two novel TCP variants based on artificial intelligence have been proposed to deal with this issue. The first protocol uses fuzzy logic, a subset of artificial intelligence, and the second one is based on deep learning. Extensive simulations showed that the newly proposed protocols attain higher performance than common TCP variants, such as BBR, HighSpeed, Cubic and NewReno, in terms of throughput, RTT and sending-rate adjustment in the urban scenario. The new protocols' superiority is achieved by employing smartness in TCP's congestion control mechanism, which is a powerful enabler in fostering TCP's functionality.

    To sum up, the 5G network is a promising telecommunication infrastructure that will revolutionize various aspects of communication. However, different parts of the Internet, such as its regulations and protocol stack, will face new challenges that need to be solved in order to exploit 5G capacity; without intelligent rules and protocols, the high bandwidth of 5G, especially 5G mmWave, will be wasted. Two novel schemes have been proposed, based on a subset of artificial intelligence called fuzzy logic and a machine-learning approach called deep learning, to enhance the performance of 5G mmWave by improving the functionality of the transport layer. The obtained results indicate that the new schemes improve the functionality of TCP by giving intelligence to the protocol: as the protocol works more smartly, it can make sound decisions under different conditions.
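
    To make the congestion-window adjustment idea concrete, the sketch below shows one way a fuzzy controller could map queuing-delay growth and delivery-rate trend into a multiplicative window update. The membership functions and rule base are illustrative assumptions, not the exact protocols proposed in the thesis.

```python
# Illustrative sketch of a fuzzy congestion-window controller: maps queuing-delay
# growth and delivery-rate trend into a multiplicative cwnd adjustment.
# Membership functions and rule base are assumptions, not the thesis's exact design.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def cwnd_factor(rtt_ratio, rate_trend):
    """rtt_ratio = current_rtt / min_rtt; rate_trend = normalised delivery-rate change (-1..1)."""
    delay_low = tri(rtt_ratio, 0.5, 1.0, 1.3)
    delay_high = tri(rtt_ratio, 1.2, 2.0, 4.0)
    rate_up = tri(rate_trend, 0.0, 1.0, 2.0)
    rate_down = tri(rate_trend, -2.0, -1.0, 0.0)
    # Rule base: grow aggressively on low delay with rising rate, back off on high delay.
    rules = [
        (min(delay_low, rate_up), 1.25),    # increase cwnd by 25%
        (delay_low, 1.10),
        (min(delay_high, rate_down), 0.70), # clear backoff on strong congestion signs
        (delay_high, 0.85),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * f for w, f in rules) / total if total else 1.0

cwnd = 10.0  # segments
for rtt_ratio, rate_trend in [(1.0, 0.8), (1.1, 0.5), (1.8, -0.6), (2.5, -1.0)]:
    cwnd = max(1.0, cwnd * cwnd_factor(rtt_ratio, rate_trend))
    print(round(cwnd, 2))
```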

    Context-aware Self-Optimization in Small-Cell Networks

    Most mobile communications take place in indoor environments, especially in commercial and corporate scenarios. These places normally present coverage and capacity issues due to poor signal quality, which degrades the end-user Quality of Experience (QoE). In these cases, mobile operators are offering small cells to overcome the indoor issues, with femtocells being the main type of base station deployed. Femtocell networks provide significant benefits to mobile operators and their clients. However, the massive integration and the particularities of femtocells make the maintenance of these infrastructures a challenge for engineers. In this sense, Self-Organizing Networks (SON) techniques play an important role: they are a key feature to intelligently automate network operation, administration and management procedures. SON mechanisms are based on the analysis of the mobile network alarms, counters and indicators. In parallel, electronics, sensors and software applications evolve rapidly and are everywhere. Thanks to this, valuable context information can be gathered, which, properly managed, can improve the performance of SON techniques. Among possible context data, one of the most active topics is indoor positioning, due to the immediate interest in indoor location-based services (LBS). In indoor commercial and corporate environments, user densities and traffic vary in both the spatial and temporal domains. These situations degrade cellular network performance, with temporary traffic fluctuations and localised congestion among the most common issues. Load balancing techniques, which have been identified as a use case of the self-optimization paradigm for Long Term Evolution (LTE), can alleviate these congestion problems. This use case has been widely studied in macrocellular networks and outdoor scenarios. However, the particularities of femtocells, the characteristics of indoor scenarios and the influence of users' mobility patterns justify the development of new solutions. The goal of this PhD thesis is to design and develop novel, automatic solutions for temporary traffic fluctuations and localised network congestion in commercial and corporate femtocell environments. For that purpose, an efficient management architecture is implemented to integrate context data into the mobile network and SON mechanisms. Afterwards, an accurate indoor positioning system is developed as a possible inexpensive enabler for context-aware SON. Finally, advanced self-optimization methods are designed to shift users from overloaded cells to other cells with spare resources. These methods tune femtocell configuration parameters based on network information, such as the ratio of active users, and context information, such as users' positions. All these methods are evaluated both in a dynamic LTE system-level simulator and in a field trial.
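
    As an illustration of the parameter-tuning step, the sketch below adapts per-femtocell handover margins so that overloaded cells shed edge users towards lightly loaded neighbours. The load metric, step size and limits are illustrative assumptions, not the thesis's exact algorithms.

```python
# Illustrative load-balancing sketch: adapt per-femtocell handover margins so that
# overloaded cells push edge users towards less-loaded neighbours.
# Step size, margin limits and the load metric are assumptions for illustration.

STEP_DB = 1.0            # margin adjustment per iteration (dB)
MARGIN_RANGE = (-6.0, 6.0)
LOAD_IMBALANCE = 0.2     # minimum load difference that triggers an adjustment

def balance_margins(cells, neighbours, margins):
    """cells: id -> load in [0, 1]; neighbours: id -> adjacent cell ids;
    margins: (src, dst) -> handover margin in dB (lower => hand over earlier)."""
    new_margins = dict(margins)
    for cell, load in cells.items():
        for nb in neighbours[cell]:
            if load - cells[nb] > LOAD_IMBALANCE:
                # Lower the margin towards the lightly loaded neighbour to offload users.
                key = (cell, nb)
                lo, hi = MARGIN_RANGE
                new_margins[key] = max(lo, min(hi, margins.get(key, 0.0) - STEP_DB))
    return new_margins

cells = {"f1": 0.9, "f2": 0.4, "f3": 0.5}
neighbours = {"f1": ["f2", "f3"], "f2": ["f1"], "f3": ["f1"]}
print(balance_margins(cells, neighbours, {}))
```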

    An intelligent network selection mechanism for vertical handover decision in vehicular Ad Hoc wireless networks

    Vehicular Ad-hoc Network (VANET) technology is a modern paradigm for communication among moving vehicles. However, the vertical handover (VHO) decision needed for seamless connectivity in VANETs is a major challenge, caused by the complexity of the network topology and the large number of mobile nodes, which affect network traffic in terms of data transmission and dissemination efficiency. Furthermore, the conventional scheme uses only received signal strength as a metric, which is more suitable for horizontal handover than for VHO and reflects the lack of appropriate handover metrics. Appropriate VHO decisions result in an increase in network quality of service (QoS) in terms of delay, latency and packet loss. This study aims to design an intelligent network selection mechanism to minimize handover delay, latency and packet loss in heterogeneous Vehicle-to-Infrastructure (V2I) wireless networks. The proposed mechanism, the Adaptive Handover Decision (AHD) scheme, uses Fuzzy Logic (FL) and Simple Additive Weighting (SAW) algorithms, together referred to as the F-SAW scheme. The AHD scheme was designed to select the best-qualified access point (AP) and base station (BS) candidates without degrading the performance of ongoing applications. The F-SAW scheme provides a handover-triggering mechanism that evaluates multiple attribute parameters using the context information of the vertical handover decision in V2I heterogeneous wireless networks. This study uses the NS-2 network simulator together with the VANETMobiSim vehicular mobility traffic generator to implement a realistic VANET mobility scenario over Wi-Fi, WiMAX and LTE network technologies. The proposed AHD scheme improves handover QoS over the conventional (RSS-based) scheme by an average of 21%, 20% and 13% in delay, latency and packet loss respectively, and over the Media Independent Handover (MIH)-based scheme by 12.2%, 11% and 7% respectively. The proposed scheme assists the mobile user in selecting the best available AP or BS during the vehicles' movement without degrading the performance of ongoing applications.
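
    The Simple Additive Weighting step at the core of such a scheme can be sketched as below: candidate networks are scored over min-max normalised attributes with per-attribute weights, which an F-SAW design would derive from the fuzzy stage. The attribute set, example values and weights here are illustrative assumptions.

```python
# Illustrative Simple Additive Weighting (SAW) ranking of candidate networks.
# Attribute set, example values and weights are assumptions for illustration.

candidates = {
    # network: {attribute: value}; 'delay_ms' and 'cost' are "lower is better".
    "WiFi_AP1":  {"rss_dbm": -60, "bandwidth_mbps": 50,  "delay_ms": 30, "cost": 1},
    "WiMAX_BS1": {"rss_dbm": -75, "bandwidth_mbps": 30,  "delay_ms": 60, "cost": 3},
    "LTE_BS1":   {"rss_dbm": -85, "bandwidth_mbps": 100, "delay_ms": 40, "cost": 5},
}
weights = {"rss_dbm": 0.3, "bandwidth_mbps": 0.3, "delay_ms": 0.25, "cost": 0.15}
lower_is_better = {"delay_ms", "cost"}

def saw_rank(candidates, weights):
    """Min-max normalise each attribute, then return candidates sorted by weighted score."""
    attrs = weights.keys()
    spans = {a: (min(c[a] for c in candidates.values()),
                 max(c[a] for c in candidates.values())) for a in attrs}
    scores = {}
    for name, values in candidates.items():
        score = 0.0
        for a, w in weights.items():
            lo, hi = spans[a]
            norm = 0.5 if hi == lo else (values[a] - lo) / (hi - lo)
            if a in lower_is_better:
                norm = 1.0 - norm  # invert so that lower raw values score higher
            score += w * norm
        scores[name] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(saw_rank(candidates, weights))  # best handover target first
```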

    The Internet of Everything

    In the era before IoT, the world wide web, the internet, web 2.0 and social media made people's lives more comfortable by providing web services and enabling access to personal data irrespective of location. Further, to save time and improve efficiency, there is a need for machine-to-machine communication, automation, smart computing and ubiquitous access to personal devices. This need gave birth to the phenomenon of the Internet of Things (IoT) and, further, to the concept of the Internet of Everything (IoE).

    Recent Trends in Communication Networks

    In recent years there have been many developments in communication technology, which have greatly enhanced the computing power of small, handheld, resource-constrained mobile devices. Different generations of communication technology have evolved, leading to new research on the communication of large volumes of data over different transmission media and on the design of new communication protocols. Another direction of research concerns secure and error-free communication between sender and receiver despite the possible presence of an eavesdropper. For the communication of huge amounts of multimedia streaming data, considerable research has been carried out on the design of suitable overlay networks. The book addresses new research techniques that have evolved to handle these challenges.

    Data-Driven Methods for Data Center Operations Support

    During the last decade, cloud technologies have been evolving at an impressive pace, such that we are now living in a cloud-native era where developers can leverage an unprecedented landscape of (possibly managed) services for orchestration, compute, storage, load balancing, monitoring, etc. The possibility of on-demand access to a diverse set of configurable virtualized resources allows for building more elastic, flexible and highly resilient distributed applications. Behind the scenes, cloud providers sustain the heavy burden of maintaining the underlying infrastructures, consisting of large-scale distributed systems, partitioned and replicated among many geographically dislocated data centers to guarantee scalability, robustness to failures, high availability and low latency. The larger the scale, the more cloud providers have to deal with complex interactions among the various components, such that monitoring, diagnosing and troubleshooting issues become incredibly daunting tasks.

    To keep up with these challenges, development and operations practices have undergone significant transformations, especially in terms of improving the automation that makes releasing new software, and responding to unforeseen issues, faster and sustainable at scale. The resulting paradigm is nowadays referred to as DevOps. However, while such automation can be very sophisticated, traditional DevOps practices fundamentally rely on reactive mechanisms that typically require careful manual tuning and supervision from human experts. To minimize the risk of outages, and the related costs, it is crucial to provide DevOps teams with suitable tools that enable a proactive approach to data center operations.

    This work presents a comprehensive data-driven framework to address the most relevant problems that can be experienced in large-scale distributed cloud infrastructures. These environments are characterized by a very large availability of diverse data, collected at each level of the stack, such as: time series (e.g., physical host measurements, virtual machine or container metrics, networking component logs, application KPIs); graphs (e.g., network topologies, fault graphs reporting dependencies among hardware and software components, performance-issue propagation networks); and text (e.g., source code, system logs, version control history, code review feedback). Such data are also typically updated with relatively high frequency and are subject to distribution drifts caused by continuous configuration changes to the underlying infrastructure. In such a highly dynamic scenario, traditional model-driven approaches alone may be inadequate at capturing the complexity of the interactions among system components. DevOps teams would certainly benefit from robust data-driven methods to support their decisions based on historical information. For instance, effective anomaly detection capabilities may help in conducting more precise and efficient root-cause analysis, while accurate forecasting and intelligent control strategies would improve resource management. Given their ability to deal with high-dimensional, complex data, deep-learning-based methods are the most straightforward option for the realization of the aforementioned support tools. On the other hand, because of their complexity, this kind of model often requires huge processing power and suitable hardware to be operated effectively at scale. These aspects must be carefully addressed when applying such methods in the context of data center operations: automated operations approaches must be dependable and cost-efficient, so as not to degrade the services they are built to improve.
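
    As a simple baseline for the anomaly-detection capability mentioned above, the sketch below flags points in an infrastructure metric stream whose rolling z-score exceeds a threshold. The window size and threshold are illustrative assumptions, and this baseline only stands in for the deep-learning models the thesis actually targets.

```python
# Illustrative baseline anomaly detector for infrastructure time series (rolling z-score).
# Window size and threshold are assumptions; this is a simple stand-in for the
# deep-learning methods discussed in the thesis.
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value, zscore) for points deviating strongly from the recent window."""
    history = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0:
                z = (x - mu) / sigma
                if abs(z) > threshold:
                    yield i, x, round(z, 2)
        history.append(x)

# Example: a CPU-utilisation stream with an injected spike.
cpu = [0.30 + 0.01 * (i % 5) for i in range(100)]
cpu[70] = 0.95
print(list(detect_anomalies(cpu)))
```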