
    EVEREST IST-2002-00185: D23: final report

    Public deliverable of the European project EVEREST. This deliverable constitutes the final report of the project IST-2002-001858 EVEREST. Following the successful completion of the project, this document first summarizes the context, goals and approach of the project. It then presents a concise summary of the major objectives and results, and highlights the most valuable lessons derived from the project work. A list of deliverables and publications is included in the annex. Postprint (published version).

    How far can we go? Towards Realistic Software-Defined Wireless Networking Experiments

    Software-Defined Wireless Networking (SDWN) is an emerging approach based on decoupling radio control functions from the radio data plane through programmatic interfaces. Despite diverse ongoing efforts to realize the vision of SDWN, many questions remain open from multiple perspectives, such as how to rapidly prototype and experiment with candidate software solutions applicable to real-world deployments. To this end, emulation of SDWN has the potential to boost research and development efforts by re-using existing protocol and application stacks while mimicking the behavior of real wireless networks. In this article, we provide an in-depth discussion on that matter, focusing on the design of the Mininet-WiFi emulator to fill a gap in the experimental platform space. We showcase the applicability of our emulator in an SDN wireless context by illustrating its support for a number of use cases aiming to address the question of how far we can go in realistic SDWN experiments, including comparisons to results obtained in a wireless testbed. Finally, we discuss the ability to replay packet-level and radio signal traces captured in the real testbed within a virtual yet realistic emulation environment in support of SDWN research.
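    To give a rough idea of the kind of experiment script the abstract refers to, below is a minimal sketch of an emulated single-AP topology. It is not taken from the article: the module layout follows recent Mininet-WiFi releases and may differ between versions, and the SSID, channel and node names are arbitrary placeholders.

```python
# Hedged sketch of a Mininet-WiFi experiment: one access point, two stations.
# Module/class names follow recent Mininet-WiFi releases and may differ across versions.
from mn_wifi.net import Mininet_wifi
from mn_wifi.cli import CLI

def run():
    net = Mininet_wifi()
    sta1 = net.addStation('sta1')
    sta2 = net.addStation('sta2')
    ap1 = net.addAccessPoint('ap1', ssid='sdwn-demo', mode='g', channel='1')
    c0 = net.addController('c0')

    net.configureWifiNodes()   # set up the emulated wireless medium and links
    net.build()
    c0.start()
    ap1.start([c0])

    net.pingAll()              # quick connectivity check between stations
    CLI(net)                   # interactive CLI for further experiments
    net.stop()

if __name__ == '__main__':
    run()
```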

    On the use of Hidden Markov Processes and auto-regressive filters to incorporate indoor bursty wireless channels into network simulation platforms

    In this paper we thoroughly analyze two alternatives for replicating the bursty behavior that characterizes real indoor wireless channels within network simulation platforms. First, we study the performance of an improved Hidden Markov Process model, based on a time-wise configuration that decouples its operation from any particular traffic pattern. We compare it with the Bursty Error Model Based on an Auto-Regressive Filter, a previous proposal of ours that emulates the received Signal to Noise Ratio by means of an auto-regressive filter capturing the “memory” observed in real measurements. We also study the performance of one of the legacy approaches intrinsically offered by most network simulation frameworks. By means of a thorough simulation campaign, we demonstrate that our two models offer a much more realistic behavior while maintaining an affordable computational complexity. The authors would like to express their gratitude to the Spanish government for its funding of the project “Connectivity as a Service: Access for the Internet of the Future”, COSAIF (TEC2012-38574-C02-01).
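    To make the two modeling alternatives concrete, here is a minimal sketch, not the authors' code, of (a) a two-state Markov (Gilbert-Elliott style) packet-error process and (b) an AR(1) filter generating a correlated SNR trace; transition probabilities, the AR coefficient and noise levels are illustrative values only.

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) Two-state Markov error process: state 0 = "good", 1 = "bad".
# p_gb / p_bg are illustrative transition probabilities per packet slot.
def markov_error_trace(n, p_gb=0.02, p_bg=0.2, per_good=0.01, per_bad=0.5):
    state, errors = 0, np.empty(n, dtype=bool)
    for i in range(n):
        if state == 0 and rng.random() < p_gb:
            state = 1
        elif state == 1 and rng.random() < p_bg:
            state = 0
        per = per_bad if state else per_good
        errors[i] = rng.random() < per        # packet lost in this slot?
    return errors

# (b) AR(1) filter emulating a received SNR trace with "memory":
# snr[t] = mean + a * (snr[t-1] - mean) + noise; a close to 1 yields long bursts.
def ar1_snr_trace(n, mean_db=20.0, a=0.95, sigma_db=2.0):
    snr = np.empty(n)
    snr[0] = mean_db
    for t in range(1, n):
        snr[t] = mean_db + a * (snr[t - 1] - mean_db) + rng.normal(0.0, sigma_db)
    return snr

if __name__ == "__main__":
    print("loss ratio:", markov_error_trace(10_000).mean())
    print("SNR std   :", ar1_snr_trace(10_000).std())
```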

    D2.1 Performance evaluation framework

    This deliverable contains a proposal for a performance evaluation framework that aims at ensuring that multiple projects within the 5G-PPP wireless strand can quantitatively assess and compare the performance of different 5G RAN design concepts. The report collects the vision of several 5G-PPP projects and is conceived as a living document to be further elaborated along with the 5G-PPP framework workshops planned during 2016. Weber, A.; Agyapong, P.; Rosowski, T.; Zimmerman, G.; Fallgren, M.; Sharma, S.; Kousaridas, A.... (2016). D2.1 Performance evaluation framework. https://doi.org/10.13140/RG.2.2.35447.2192

    Addressing training data sparsity and interpretability challenges in AI based cellular networks

    To meet the diverse and stringent communication requirements of emerging network use cases, zero-touch artificial intelligence (AI) based deep automation in cellular networks is envisioned. However, the full potential of AI in cellular networks remains hindered by two key challenges: (i) training data is not as freely available in cellular networks as in other fields where AI has made a profound impact, and (ii) current AI models tend to exhibit black-box behavior, making operators reluctant to entrust the operation of multi-billion, mission-critical networks to a black-box AI engine that allows little insight into, or discovery of, the relationships between configuration and optimization parameters and key performance indicators. This dissertation systematically addresses and proposes solutions to these two key problems faced by emerging networks.

    A framework for addressing the training data sparsity challenge in cellular networks is developed that can assist network operators and researchers in choosing the optimal data enrichment technique for different network scenarios, based on the available information. The framework encompasses classical interpolation techniques, such as inverse distance weighting and kriging, more advanced ML-based methods, such as transfer learning and generative adversarial networks, and several newer techniques, such as matrix completion theory, leveraging different types of network geometries, and the use of simulators and testbeds, among others. The proposed framework will lead to more accurate ML models that rely on a sufficient amount of representative training data. Moreover, solutions are proposed to address the data sparsity challenge specifically in Minimization of Drive Tests (MDT) based automation approaches. MDT allows coverage to be estimated at the base station by exploiting measurement reports gathered by the user equipment, without the need for drive tests. Thus, MDT is a key enabling feature for data- and AI-driven autonomous operation and optimization in current and emerging cellular networks. To date, however, the utility of the MDT feature has been thwarted by issues such as sparsity of user reports and user positioning inaccuracy. For the first time, this dissertation reveals the existence of an optimal bin width for coverage estimation in the presence of inaccurate user positioning, scarcity of user reports and quantization error. The presented framework enables network operators to configure the bin size, for a given positioning accuracy and user density, that results in the most accurate MDT-based coverage estimation.

    The lack of interpretability in AI-enabled networks is addressed by proposing a first-of-its-kind neural network architecture that leverages analytical modeling, domain knowledge, big data and machine learning to turn black-box machine learning models into more interpretable ones. The proposed approach combines analytical modeling and domain knowledge to custom-design machine learning models with the aim of moving towards interpretable models that not only require less training time, but can also deal with issues such as sparsity of training data and determination of model hyperparameters. The approach is tested using both simulated and real data, and the results show that it outperforms existing mathematical models while remaining interpretable compared with black-box ML models. Thus, the proposed approach can be used to derive better mathematical models of complex systems. The findings from this dissertation can help solve the challenges in emerging AI-based cellular networks and thus aid in their design, operation and optimization.
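    As a concrete illustration of one of the classical enrichment techniques the framework covers, below is a minimal sketch of inverse-distance-weighted (IDW) interpolation of sparse MDT-style coverage reports onto a regular grid of bins. The bin width, power exponent and toy data are illustrative placeholders, not values from the dissertation.

```python
import numpy as np

def idw_grid(report_xy, report_rsrp, grid_xy, power=2.0, eps=1e-9):
    """Inverse-distance-weighted estimate of coverage (e.g. RSRP) at grid points.

    report_xy   : (N, 2) positions of sparse user reports
    report_rsrp : (N,)   measured values at those positions
    grid_xy     : (M, 2) bin-center positions to interpolate onto
    """
    d = np.linalg.norm(grid_xy[:, None, :] - report_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)          # closer reports get larger weights
    return (w @ report_rsrp) / w.sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    reports = rng.uniform(0, 100, size=(50, 2))                   # sparse report positions (m)
    rsrp = -70 - 0.3 * np.linalg.norm(reports - 50, axis=1)       # toy path-loss-like values (dBm)

    bin_width = 10.0                                               # illustrative bin size
    centers = np.arange(bin_width / 2, 100, bin_width)
    gx, gy = np.meshgrid(centers, centers)
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    estimate = idw_grid(reports, rsrp, grid)
    print(estimate.reshape(gx.shape).round(1))
```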

    Real-Time Waveform Prototyping

    Fifth-generation mobile networks are characterized by diverse requirements and deployment scenarios. Three distinct use cases are particularly relevant: 1) industrial applications demand real-time radio transmission with extremely low outage rates (Ultra-Reliable Low-Latency Communications); 2) Internet-of-Things applications require connecting a large number of distributed sensors (Massive Machine-type Communications); 3) data rates for applications such as video delivery have increased massively (Enhanced Mobile Broadband). These partly conflicting expectations drive researchers and engineers to consider new concepts and technologies for future wireless communication systems. The goal is to identify promising candidate technologies among a multitude of new ideas and to decide which are suitable for implementation in future products. The challenges of meeting these requirements, however, lie beyond what any single processing layer in a wireless network can offer, so several research domains have to exploit research ideas jointly. This thesis therefore describes a platform as a basis for future experimental research on wireless networks under real-world conditions, presenting three aspects in detail.

    First, an overview of state-of-the-art prototypes and testbed solutions is given, which attract great interest and demand as well as funding opportunities. The development effort, however, is considerable and depends strongly on the chosen platform properties, and the selection process is complex due to the number of available options and their respective (hidden) implications. A guideline based on several examples is therefore presented, with the aim of weighing expectations against the effort required for a prototype. Second, a flexible yet real-time-capable physical-layer signal processor is introduced that runs on a software-defined radio platform. The processor allows important physical-layer parameters to be reconfigured at run-time in order to generate a multitude of modern waveforms. Four parameter settings, 'LLC', 'WiFi', 'eMBB' and 'IoT', are presented to reflect the requirements of different wireless applications and are then used to evaluate the implementation presented in this thesis. Third, a generic test infrastructure is introduced that enables the remote involvement of external partners; the testbed can be flexibly tailored to the requirements of diverse wireless technologies and experiments. Using this test infrastructure, the performance of the presented transceiver is evaluated in terms of latency, achievable throughput and packet error rate. A public demonstration of a Tactile Internet prototype, using robotic arms in a multi-user environment, was carried out successfully and presented on several occasions.
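    By way of illustration of the "multitude of modern waveforms" the flexible transceiver targets, here is a minimal numpy sketch of a Generalized Frequency Division Multiplexing (GFDM) modulator, one waveform family covered in the thesis background; this is not the thesis implementation, and the block sizes and the rectangular prototype filter are placeholders.

```python
import numpy as np

def gfdm_modulate(d, g):
    """Minimal GFDM modulator.

    d : (K, M) complex data symbols (K subcarriers, M subsymbols)
    g : (N,)   prototype filter, N = K * M
    Returns the length-N transmit block
        x[n] = sum_{k,m} d[k, m] * g[(n - m*K) mod N] * exp(j*2*pi*k*n/K)
    """
    K, M = d.shape
    N = K * M
    n = np.arange(N)
    x = np.zeros(N, dtype=complex)
    for k in range(K):
        carrier = np.exp(2j * np.pi * k * n / K)
        for m in range(M):
            x += d[k, m] * np.roll(g, m * K) * carrier   # circularly shifted pulse
    return x

if __name__ == "__main__":
    K, M = 8, 4                                          # illustrative block sizes
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, (2, K, M)) * 2 - 1
    qpsk = (bits[0] + 1j * bits[1]) / np.sqrt(2)
    g = np.ones(K * M) / np.sqrt(K * M)                  # placeholder rectangular prototype filter
    x = gfdm_modulate(qpsk, g)
    print(x.shape, round(float(np.mean(np.abs(x) ** 2)), 3))
```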

    Design and evaluation of a self-configuring wireless mesh network architecture

    Wireless network connectivity plays an increasingly important role in supporting our everyday private and professional lives. For over three decades, self-organizing wireless multi-hop ad-hoc networks have been investigated as a decentralized replacement for traditional forms of wireless networks that rely on a wired infrastructure. However, despite the tremendous efforts of the international wireless research community and the widespread availability of devices able to support these networks, wireless ad-hoc networks are hardly ever used. In this work, the reasons behind this discrepancy are investigated. It is found that several basic theoretical assumptions about ad-hoc networks prove to be wrong when solutions are deployed in reality, and that several basic functionalities are still missing. It is argued that a hierarchical wireless mesh network architecture, in which specialized, multi-interface mesh nodes form a reliable multi-hop wireless backbone for less capable end-user clients, is an essential step in bringing the ad-hoc networking concept closer to reality. Therefore, in the second part of this work, algorithms that increase the reliability and support the deployment and management of these wireless mesh networks are developed, implemented and evaluated, keeping the observed limitations and practical considerations in mind, and their feasibility is verified by experiment. The performance analysis of these protocols and the ability to deploy the developed algorithms on current-generation off-the-shelf hardware demonstrate the success of the research approach followed, which combines theoretical considerations with practical implementations and observations. However, it was found that there are also many pitfalls to using real-life implementation as a research technique. Therefore, in the last part of this work, a methodology for wireless network research using real-life implementation is developed, allowing researchers to generate more reliable protocols and performance analysis results with less effort.
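    To make the hierarchical two-tier architecture concrete, here is a small sketch (my own illustration, not the author's code) that models a backbone of multi-interface mesh nodes with attached clients using networkx, and checks whether the backbone stays connected when a single mesh node fails.

```python
import networkx as nx

# Hypothetical two-tier wireless mesh topology: backbone mesh nodes ("mn*")
# interconnect each other; end-user clients ("c*") attach to one backbone node.
G = nx.Graph()
backbone = ["mn1", "mn2", "mn3", "mn4"]
G.add_edges_from([("mn1", "mn2"), ("mn2", "mn3"), ("mn3", "mn4"),
                  ("mn4", "mn1"), ("mn1", "mn3")])
for i, mn in enumerate(backbone, start=1):
    G.add_edge(f"c{i}", mn)            # each client hangs off one backbone node

def backbone_connected_after_failure(graph, failed_node):
    """Does the multi-hop backbone remain connected if one mesh node fails?"""
    h = graph.subgraph(n for n in backbone if n != failed_node)
    return nx.is_connected(h)

for mn in backbone:
    ok = backbone_connected_after_failure(G, mn)
    print(mn, "fails ->", "backbone OK" if ok else "backbone split")
```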

    Context-Aware Self-Healing for Small Cell Networks

    The last years have seen a continuous increase in the use of mobile communications. To cope with the growing traffic, recently deployed technologies have deepened the adoption of small cells (low-powered base stations) to serve areas with high demand or coverage issues, where macrocells can be either unsuccessful or inefficient. Also, new cellular and non-cellular technologies (e.g. WiFi) coexist with legacy ones, across multiple deployment schemes (macrocells, small cells), in what is known as heterogeneous networks (HetNets). Due to the huge complexity of HetNets, their operation, administration and management (OAM) has become increasingly difficult. To overcome this, the NGMN Alliance and the 3GPP defined the Self-Organizing Network (SON) paradigm, aiming to automate OAM procedures in order to reduce their costs and increase the resulting performance. One key focus of SON is the self-healing of the network, covering the automatic detection of problems, the diagnosis of their causes, their compensation and their recovery. Until recently, SON mechanisms have been based solely on the analysis of alarms and performance indicators. On the one hand, this approach has become very limited given the complexity of the scenarios, particularly in indoor cellular environments, where the deployment of small cells, their coexistence with multiple telecommunications systems and the nature of those environments (in terms of propagation, coverage overlapping, fast demand changes and users' mobility) introduce many challenges for classic SON. On the other hand, modern user equipment (e.g. smartphones), equipped with powerful processors, sensors and applications, generates a huge amount of context information. Context refers to those variables not directly associated with the telecommunication service, but with the terminals and their environment: the user's position, applications, social data, etc. This context can be an invaluable source of information for the management of the network, in what we have denominated context-aware SON, which is the approach proposed in this thesis.

    To develop this concept, the thesis follows a top-down approach. Firstly, the characteristics of cellular deployments are assessed, especially for indoor small cell networks; in those scenarios, the need for context-aware SON is evaluated and considered indispensable. Secondly, a new cellular architecture is defined to integrate both context information and SON mechanisms in the management plane of the mobile network; the specifics of making context an integral part of cellular OAM/SON are defined, and a real-world implementation of the architecture is proposed. Thirdly, from the established general SON architecture, a logical self-healing framework is defined to support the context-aware healing mechanisms to be developed. Fourthly, different self-healing algorithms are defined depending on the failures to be managed and the conditions of the considered scenario. These mechanisms are based on probabilistic analysis, making use of both context and network data for the detection and diagnosis of cellular issues. The conditions for implementing these methods are assessed, and their applicability is evaluated by means of simulators and testbed trials. The results show important improvements in performance and capabilities compared with previous methods, demonstrating the relevance of the proposed approach.
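    As a rough sketch of what a probabilistic, context-aware diagnosis step might look like (my own illustration under assumed fault classes and symptom probabilities, not the thesis's algorithm), the following scores candidate small-cell failures from a few binarized network and context indicators with a naive-Bayes-style posterior.

```python
# Hedged illustration: naive-Bayes-style diagnosis of a small cell's state from
# binarized network indicators and user-context indicators. Fault classes,
# symptoms and probabilities are invented placeholders, not values from the thesis.
PRIORS = {"normal": 0.90, "outage": 0.05, "coverage_hole": 0.05}

# P(symptom present | cause)
LIKELIHOODS = {
    "low_throughput":    {"normal": 0.05, "outage": 0.95, "coverage_hole": 0.70},
    "handover_failures": {"normal": 0.02, "outage": 0.80, "coverage_hole": 0.60},
    "users_left_area":   {"normal": 0.10, "outage": 0.90, "coverage_hole": 0.30},  # context-based
    "indoor_positions":  {"normal": 0.50, "outage": 0.50, "coverage_hole": 0.85},  # context-based
}

def diagnose(symptoms):
    """Return the normalized posterior over causes given observed (True/False) symptoms."""
    posterior = dict(PRIORS)
    for name, present in symptoms.items():
        for cause in posterior:
            p = LIKELIHOODS[name][cause]
            posterior[cause] *= p if present else (1.0 - p)
    total = sum(posterior.values())
    return {cause: value / total for cause, value in posterior.items()}

if __name__ == "__main__":
    observed = {"low_throughput": True, "handover_failures": True,
                "users_left_area": False, "indoor_positions": True}
    for cause, prob in sorted(diagnose(observed).items(), key=lambda kv: -kv[1]):
        print(f"{cause:14s} {prob:.3f}")
```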

    Adaptação da taxa de transmissão em redes veiculares (Rate adaptation in vehicular networks)

    Master's dissertation in Electronics and Telecommunications Engineering. Over the last years, progress in wireless communications has extended research into new sub-areas where wired solutions are impracticable. In this context, Vehicular Ad hoc NETworks (VANETs) arose as an emerging class of wireless ad-hoc networks that connect and allow communication between vehicles. Due to their peculiar characteristics, such as high mobility, dynamic topology and frequent loss of connectivity, VANETs face many challenges in defining reliable protocols and mechanisms, such as rate adaptation schemes. Indeed, traffic querying and road sensing applications are the core of VANETs, whose performance depends on the throughput and success ratio these networks can provide. Rate adaptation mechanisms aim to avoid network performance degradation due to rate over-selection when channel quality deteriorates, or rate under-selection when channel quality improves. Since IEEE 802.11p devices support multiple transmission rates, they must adapt their rate dynamically in order to achieve high performance. It is therefore critical to have a robust rate adaptation mechanism that can deal with high channel fluctuation and asymmetry, bursty transmissions of variable duration, and packet loss due to extreme environment conditions or hidden terminals. This dissertation evaluates and compares existing rate adaptation mechanisms for wireless networks in vehicular environments, using the Network Simulator 3 (NS-3) and the Simulator of Urban Mobility (SUMO). After analysing the main mechanisms in the literature, four were selected for comparison: Adaptive Auto Rate Fall Back with Collision Detection (AARF-CD), Collision-Aware Rate Adaptation (CARA), Minstrel and Ideal. Two types of scenarios are considered: highway and urban. The comparison of the algorithms is based on known metrics: throughput, success ratio, delay and percentage of retransmissions, for different transmission levels. Experimental results showed that AARF-CD achieved the best performance in both scenarios, with CARA the second best algorithm under the same metrics. Notably, AARF-CD obtained a higher success ratio than CARA, even though CARA offers higher throughput in certain scenarios; regarding delay, AARF-CD and CARA attained similar results. It was also concluded that algorithms with loss differentiation, such as AARF-CD and CARA, provide better rate adaptation. Finally, a rate adaptation algorithm is suggested that takes external parameters such as velocity, distance and vehicle density into account. Each parameter is weighted according to its influence on data transmission, so that the parameters that most affect rate adaptation are assigned larger weights. The rate adaptation is thus based on a weighted process reflecting the effect of external conditions on network performance.
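    As an illustration of the weighted adaptation idea suggested at the end of the abstract, here is a minimal sketch under my own assumptions: the weights, normalization ranges and the mapping onto the 802.11p rate set are placeholders, not the dissertation's parameters.

```python
# Hedged sketch of a weight-based rate selector for vehicular links.
# Weights, normalization ranges and the rate mapping are illustrative placeholders.
IEEE_80211P_RATES_MBPS = [3, 4.5, 6, 9, 12, 18, 24, 27]   # 10 MHz channel rate set

# Larger weight = the parameter hurts transmission more (per the dissertation's idea).
WEIGHTS = {"velocity": 0.3, "distance": 0.5, "density": 0.2}
RANGES  = {"velocity": (0, 40),    # m/s
           "distance": (0, 300),   # m to the receiver
           "density":  (0, 100)}   # vehicles within radio range

def normalized(name, value):
    lo, hi = RANGES[name]
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def select_rate(velocity, distance, density):
    """Map a weighted 'channel adversity' score (0 = best, 1 = worst) to a data rate."""
    adversity = (WEIGHTS["velocity"] * normalized("velocity", velocity)
                 + WEIGHTS["distance"] * normalized("distance", distance)
                 + WEIGHTS["density"] * normalized("density", density))
    index = round((1.0 - adversity) * (len(IEEE_80211P_RATES_MBPS) - 1))
    return IEEE_80211P_RATES_MBPS[index]

if __name__ == "__main__":
    print(select_rate(velocity=5, distance=50, density=10))    # benign urban conditions
    print(select_rate(velocity=33, distance=250, density=80))  # harsh highway conditions
```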