ReSCon '10, Research Student Conference: Book of Abstracts
The third SED Research Student Conference (ReSCon 2010) was hosted over three days, 21-23 June 2010, in the Hamilton Centre at Brunel University. The conference consisted of oral and poster presentations, which showcased the high quality and diversity of the research being conducted within the School of Engineering and Design. The abstracts and presentations were the result of ongoing research by postgraduate research students from the School. The conference is held annually, and ReSCon plays a key role in contributing to research and innovation within the School.
Actas da 10ª Conferência sobre Redes de Computadores (Proceedings of the 10th Conference on Computer Networks)
Universidade do Minho, CCTC, Centro Algoritmi, Cisco Systems, IEEE Portugal Section
User-centric power-friendly quality-based network selection strategy for heterogeneous wireless environments
The ‘Always Best Connected’ vision is built around the scenario of a mobile user seamlessly roaming within a multi-operator, multi-technology, multi-terminal, multi-application, multi-user environment supported by the next generation of wireless networks. In this heterogeneous environment, users equipped with multi-mode wireless mobile devices will access rich media services via one or more access networks. These access networks may differ in technology, coverage range, available bandwidth, operator, monetary cost, energy usage, etc. In this context, a smart network selection decision is needed to choose the best available network option for the user’s current application and requirements. The decision is a difficult one, especially given the number and dynamics of the possible input parameters; which parameters are used, and how they model the application requirements and user needs, is important. Game theory approaches can also be used to model and analyze the cooperative or competitive interaction between the rational decision makers involved: users seeking good service quality at good value prices, and/or network operators trying to increase their revenue.
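The network selection decision described above can be sketched as a weighted multi-criteria score; the parameter names, normalization, and weights below are illustrative assumptions, not the thesis's actual model:

```python
# Hypothetical multi-criteria network selection score.
# All field names, weights, and the additive form are illustrative.

def score_network(net, weights):
    """Weighted additive utility: higher is better.

    Benefit-type criteria (bandwidth) add to the score, while
    cost-type criteria (monetary cost, energy) subtract from it.
    All inputs are assumed pre-normalized to [0, 1].
    """
    benefit = weights["bandwidth"] * net["bandwidth_norm"]
    cost = (weights["cost"] * net["cost_norm"]
            + weights["energy"] * net["energy_norm"])
    return benefit - cost

def select_best(networks, weights):
    """Pick the candidate network with the highest score."""
    return max(networks, key=lambda n: score_network(n, weights))
```

With weights favoring bandwidth, a high-throughput, cheap WiFi candidate would outrank a costlier cellular one; changing the weights models different user preferences.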
This thesis presents a roadmap towards an ‘Always Best Connected’ environment. The proposed solution includes an Adapt-or-Handover mechanism that makes use of a Signal Strength-based Adaptive Multimedia Delivery mechanism (SAMMy) and a Power-Friendly Access Network Selection Strategy (PoFANS) in order to help the user take decisions, and to improve the energy efficiency at the end-user mobile device. A Reputation-based System is proposed, which models the user-network interaction as a repeated cooperative game following the repeated Prisoner’s Dilemma from Game Theory. It combines reputation-based systems, game theory, and a network selection mechanism in order to create a reputation-based heterogeneous environment. In this environment, the users keep track of their individual history with the visited networks. Every time a user connects to a network, the user-network interaction game is played. The outcome of the game is a network reputation factor which reflects the network’s previous behavior in assuring service guarantees to the user. This reputation factor impacts the user's decision the next time he/she has to decide whether or not to connect to that specific network. The performance of the proposed solutions was evaluated through in-depth analysis and both simulation-based and experimental testing. The results clearly show improved performance of the proposed solutions in comparison with other similar state-of-the-art solutions. An energy consumption study for a Google Nexus One streaming adaptive multimedia was performed, and a comprehensive survey of related Game Theory research is provided as part of the work.
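The repeated user-network game can be illustrated with a toy reputation update; the exponentially weighted rule, the outcome definition, and the 0.5 connection threshold are assumptions for illustration, not the thesis's exact formulation:

```python
# Toy reputation factor for the repeated user-network game.
# The EWMA update rule and threshold are illustrative assumptions.

def update_reputation(reputation, delivered_quality, promised_quality, alpha=0.3):
    """Blend the previous reputation with this round's outcome.

    outcome = 1.0 when the network fully honored its service promise,
    proportionally less when it under-delivered.
    """
    outcome = min(delivered_quality / promised_quality, 1.0)
    return (1.0 - alpha) * reputation + alpha * outcome

def should_connect(reputation, threshold=0.5):
    """Decision rule: connect only to networks whose past behavior
    keeps their reputation above the (assumed) threshold."""
    return reputation >= threshold
```

A network that repeatedly under-delivers sees its reputation decay towards its actual delivery ratio, so the user eventually stops selecting it, mirroring the punishment strategy of the repeated Prisoner's Dilemma.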
Middleware de comunicações para a internet móvel futura (Communications middleware for the future mobile Internet)
Doutoramento em Informática (MAP-I)
The constant evolution of new technologies that support the way our devices connect, as well as the way we use available on-line services and capabilities, has created a set of unprecedented new challenges that motivated the development of a recent research trend known as the Future Internet. In this research trend, new architectural aspects are being developed which, by restructuring the underlying core aspects composing the Internet, reshape it in a way capable not only of facing these new challenges, but also of preparing it to tackle tomorrow's set of complex issues. Key aspects belonging to this set of challenges are heterogeneous networking environments composed of different kinds of wireless access networks, the ever-growing shift from peer-to-peer (P2P) to video as the most used kind of traffic on the Internet, the orchestration of Internet of Things (IoT) scenarios exploiting Machine-to-Machine (M2M) interactions, and the usage of Information-Centric Networking (ICN). This thesis presents a novel framework able to simultaneously tackle these challenges, empowering connectivity procedures and entities with a middleware acting as an advanced control management mechanism. This control management mechanism brings together high-level entities (such as application services, mobility management entities, routing operations, etc.) and lower-layer components (e.g., link layers, sensor devices, actuators), allowing a joint optimization of the underlying connectivity and operational procedures. Results highlight not only the flexibility of the mechanisms composing the framework, but also their ability to provide performance increases when compared with other special-purpose solutions, while allowing a wider range of scenarios and deployment possibilities.
Software Defined Applications in Cellular and Optical Networks
Small wireless cells have the potential to overcome bottlenecks in wireless access through the sharing of spectrum resources. A novel access backhaul network architecture based on a Smart Gateway (Sm-GW) between the small cell base stations, e.g., LTE eNBs, and the conventional backhaul gateways, e.g., LTE Serving/Packet Gateways (S/P-GWs), has been introduced to address the bottleneck. The Sm-GW flexibly schedules uplink transmissions for the eNBs. Based on software defined networking (SDN), a management mechanism has been proposed that allows multiple operators to flexibly inter-operate via multiple Sm-GWs with a multitude of small cells. This dissertation also comprehensively surveys the studies that examine the SDN paradigm in optical networks. Along with the PHY functional split improvements, the performance of the Distributed Converged Cable Access Platform (DCCAP) in cable architectures, especially for the Remote-PHY and Remote-MACPHY nodes, has been evaluated. For the PHY functional split, in addition to the re-use of infrastructure with a common FFT module for multiple technologies, a novel cross-functional split interaction has been proposed that caches the repetitive QAM symbols across time at the remote node to reduce the transmission rate requirement of the fronthaul link.
Measurement-Driven Algorithm and System Design for Wireless and Datacenter Networks
The growing number of mobile devices and data-intensive applications pose unique challenges for wireless access networks as well as datacenter networks that enable modern cloud-based services. With the enormous increase in volume and complexity of traffic from applications such as video streaming and cloud computing, the interconnection networks have become a major performance bottleneck. In this thesis, we study algorithms and architectures spanning several layers of the networking protocol stack that enable and accelerate novel applications and that are easily deployable and scalable. The design of these algorithms and architectures is motivated by measurements and observations in real world or experimental testbeds.
In the first part of this thesis, we address the challenge of wireless content delivery in crowded areas. We present the AMuSe system, whose objective is to enable scalable and adaptive WiFi multicast. AMuSe is based on accurate receiver feedback and incurs a small control overhead. This feedback information can be used by the multicast sender to optimize multicast service quality, e.g., by dynamically adjusting transmission bitrate. Specifically, we develop an algorithm for dynamic selection of a subset of the multicast receivers as feedback nodes which periodically send information about the channel quality to the multicast sender. Further, we describe the Multicast Dynamic Rate Adaptation (MuDRA) algorithm that utilizes AMuSe's feedback to optimally tune the physical layer multicast rate. MuDRA balances fast adaptation to channel conditions and stability, which is essential for multimedia applications.
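The feedback-node selection idea can be sketched as follows; picking the k worst-quality receivers is a simplification (AMuSe's actual algorithm also exploits receiver location so a few nodes can represent whole quality regions), and the field names are hypothetical:

```python
# Simplified feedback-node selection for WiFi multicast.
# Field names and the "k worst receivers" rule are illustrative;
# they stand in for AMuSe's quality-and-location-based selection.

def select_feedback_nodes(receivers, k):
    """Choose k feedback nodes among the multicast receivers.

    The worst-quality receivers bound the usable multicast rate,
    so they are the most informative nodes to poll periodically.
    """
    return sorted(receivers, key=lambda r: r["quality"])[:k]
```

The multicast sender can then adjust its transmission bitrate based only on these k periodic reports instead of collecting feedback from all (possibly hundreds of) receivers.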
We implemented the AMuSe system on the ORBIT testbed and evaluated its performance in large groups with approximately 200 WiFi nodes. Our extensive experiments demonstrate that AMuSe can provide accurate feedback in a dense multicast environment. It outperforms several alternatives even in the case of external interference and changing network conditions. Further, our experimental evaluation of MuDRA on the ORBIT testbed shows that MuDRA outperforms other schemes and supports high throughput multicast flows to hundreds of nodes while meeting quality requirements. As an example application, MuDRA can support multiple high quality video streams, where 90% of the nodes report excellent or very good video quality.
Next, we specifically focus on ensuring high Quality of Experience (QoE) for video streaming over WiFi multicast. We formulate the joint adaptation of multicast transmission rate and video rate for ensuring high video QoE as a utility maximization problem and propose an online control algorithm called DYVR, which is based on Lyapunov optimization techniques. We evaluated the performance of DYVR through analysis, simulations, and experiments using a testbed composed of Android devices and off-the-shelf APs. Our evaluation shows that DYVR can ensure high video rates while guaranteeing a low but acceptable number of segment losses, buffer underflows, and video rate switches.
We leverage the lessons learnt from AMuSe for WiFi to address the performance issues with the LTE evolved Multimedia Broadcast/Multicast Service (eMBMS). We present the Dynamic Monitoring (DyMo) system, which provides low-overhead, real-time feedback about eMBMS performance. DyMo employs eMBMS for broadcasting instructions which indicate the reporting rates as a function of the observed Quality of Service (QoS) for each UE. This simple feedback mechanism collects very limited QoS reports which can be used for network optimization. We evaluated the performance of DyMo analytically and via simulations. DyMo infers the optimal eMBMS settings with extremely low overhead, while meeting strict QoS requirements under different UE mobility patterns and in the presence of network component failures.
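DyMo's quality-dependent reporting rates can be sketched as a simple threshold policy; the thresholds and rates below are illustrative assumptions, not values from the system:

```python
# Illustrative quality-to-reporting-rate policy in the spirit of DyMo:
# UEs observing worse QoS report more often, well-served UEs stay quiet.
# The threshold/rate table is a made-up example.

def pick_reporting_rate(observed_qos, policy):
    """Return the reporting rate (reports/sec) for a UE whose
    observed QoS metric falls in the lowest matching band.

    policy maps QoS thresholds to rates, e.g. {5.0: 1.0, 10.0: 0.1}:
    QoS <= 5 reports at 1/s, QoS <= 10 at 0.1/s, better UEs never report.
    """
    for threshold, rate in sorted(policy.items()):
        if observed_qos <= threshold:
            return rate
    return 0.0
```

Broadcasting only this small table over eMBMS lets the operator bound the total volume of uplink QoS reports while still hearing quickly from the worst-served UEs.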
In the second part of the thesis, we study datacenter networks which are key enablers of the end-user applications such as video streaming and storage. Datacenter applications such as distributed file systems, one-to-many virtual machine migrations, and large-scale data processing involve bulk multicast flows. We propose a hardware and software system for enabling physical layer optical multicast in datacenter networks using passive optical splitters. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Our evaluation shows that the optical multicast architecture can achieve higher throughput and lower latency than IP multicast and peer-to-peer multicast schemes with lower switching energy consumption.
Finally, we study the problem of congestion control in datacenter networks. Quantized Congestion Notification (QCN), a switch-supported standard, utilizes direct multi-bit feedback from the network for hardware rate limiting. Although QCN has been shown to be fast-reacting and effective, being a Layer-2 technology limits its adoption in IP-routed Layer-3 datacenters. We address several design challenges to overcome QCN feedback's Layer-2 limitation and use it to design window-based congestion control (QCN-CC) and load balancing (QCN-LB) schemes. Our extensive simulations, based on real-world workloads, demonstrate the advantages of explicit, multi-bit congestion feedback, especially in a typical environment where intra-datacenter traffic with short Round Trip Times (RTTs: tens of microseconds) runs in conjunction with web-facing traffic with long RTTs (tens of milliseconds).
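A QCN-style window update driven by multi-bit feedback might look as follows; the gain `gd` and the additive increase are illustrative constants, and this is a sketch of the general idea rather than the QCN-CC algorithm itself:

```python
# Sketch of a window-based controller driven by quantized multi-bit
# congestion feedback, in the spirit of QCN rate control.
# Constants (gd, additive increase, floor) are illustrative.

def qcn_cc_update(cwnd, fb, gd=1.0 / 128, min_cwnd=1.0, increase=1.0):
    """One window update from a multi-bit feedback value fb.

    fb > 0 signals congestion: multiplicative decrease proportional
    to fb, so stronger congestion signals cause sharper backoff.
    fb == 0: additive increase to probe for bandwidth.
    """
    if fb > 0:
        return max(cwnd * (1.0 - gd * fb), min_cwnd)
    return cwnd + increase
```

Because the decrease is proportional to the feedback magnitude, a single severe signal can halve the window at once, which is what makes multi-bit feedback faster-reacting than one-bit ECN-style marking.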
Multimedia over Wireless IP Networks: Distortion Estimation and Applications
2006/2007
This thesis deals with multimedia communication over unreliable and resource-constrained IP-based packet-switched networks. The focus is on estimating, evaluating, and enhancing the quality of streaming media services, with particular regard to video services. The original contributions of this study mainly involve the development of three video distortion estimation techniques and the subsequent definition of some application scenarios used to demonstrate the benefits obtained by applying such algorithms. The material presented in this dissertation is the result of the studies performed within the Telecommunication Group of the Department of Electronic Engineering at the University of Trieste during the Doctorate in Information Engineering.
In recent years multimedia communication over wired and wireless packet-based networks has exploded. Applications such as BitTorrent, music file sharing, and multimedia podcasting are a main source of traffic on the Internet. Internet radio, for example, is now evolving into peer-to-peer television such as CoolStreaming. Moreover, web sites such as YouTube have made publishing videos on demand available to anyone owning a home video camera. Another challenge in the multimedia evolution is inside the house, where videos are distributed over local WiFi networks to many end devices around the home. More generally, we are witnessing an all-media-over-IP revolution, with radio, television, telephony, and stored media all being delivered over IP wired and wireless networks. All these applications require extremely high bandwidth and often low delay, especially for interactive applications. Unfortunately, the Internet and wireless networks provide only limited support for multimedia applications. Variations in network conditions can have considerable consequences for real-time multimedia applications and can lead to an unsatisfactory user experience. In fact, multimedia applications are usually delay-sensitive, bandwidth-intense, and loss-tolerant. To overcome these limitations, efficient adaptation mechanisms must be devised to bridge the application requirements with the transport medium characteristics.
Several approaches have been proposed for the robust transmission of multimedia
packets; they range from source coding solutions to the addition of redundancy with forward error correction and retransmissions. Additionally, other techniques
are based on developing efficient QoS architectures at the network layer or at the
data link layer where routers or specialized devices apply different forwarding
behaviors to packets depending on the value of some field in the packet header.
Using such a network architecture, video packets are assigned to classes in order to obtain different treatment by the network; in particular, packets assigned to the most privileged class will be lost with a very small probability, while packets belonging to the lowest-priority class will experience the traditional best-effort service. The key problem in this solution is how to optimally assign video packets to the network classes. One way to perform the assignment is to proceed on a packet-by-packet basis, exploiting the highly non-uniform distortion impact of compressed video. Working on the distortion impact of each individual video
packet has been shown in recent years to deliver better performance than relying
on the average error sensitivity of each bitstream element. The distortion impact
of a video packet can be expressed as the distortion that would be introduced at
the receiver by its loss, taking into account the effects of both error concealment
and error propagation due to temporal prediction.
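This per-packet distortion impact can be illustrated with a toy model in which the concealment error propagates through temporally predicted frames with geometric attenuation until the next intra refresh; the attenuation factor and the additive accumulation are modeling assumptions for illustration:

```python
# Toy per-packet distortion impact model: the error left after
# concealment propagates through predicted frames, attenuating
# geometrically until the next intra (refresh) frame.
# The attenuation factor is an illustrative assumption.

def distortion_impact(initial_distortion, frames_until_intra, attenuation=0.85):
    """Total distortion contributed by losing one packet.

    initial_distortion: error at the receiver after concealment.
    frames_until_intra: predicted frames affected before refresh.
    """
    total, d = 0.0, initial_distortion
    for _ in range(frames_until_intra):
        total += d
        d *= attenuation  # error decays as prediction drifts/refreshes
    return total
```

A packet lost right after an intra frame thus carries a much larger impact than one lost just before the next refresh, which is exactly the non-uniformity that packet-level prioritization exploits.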
The estimation algorithms proposed in this dissertation are able to accurately reproduce the distortion envelope deriving from multiple losses on the network, and the computational complexity required is negligible with respect to that of the algorithms proposed in the literature. Several tests are run to validate the distortion estimation algorithms and
to measure the influence of the main encoder-decoder settings. Different application scenarios are described and compared to demonstrate the benefits obtained
using the developed algorithms. The packet distortion impact is inserted in each
video packet and transmitted over the network where specialized agents manage
the video packets using the distortion information. In particular, the internal structure of the agents is modified to allow video packet prioritization using primarily
the distortion impact estimated by the transmitter. The results obtained will show
that, in each scenario, a significant improvement may be obtained with respect to
traditional transmission policies.
The thesis is organized in two parts. The first provides the background material
and represents the basics of the following arguments, while the other is dedicated
to the original results obtained during the research activity.
In the first part, the first chapter gives an introduction to the principles and challenges of multimedia transmission over packet networks.
The most recent advances in video compression technologies are detailed
in the second chapter, focusing in particular on aspects that involve the resilience
to packet loss impairments. The third chapter deals with the main techniques
adopted to protect the multimedia flow for mitigating the packet loss corruption due to channel failures. The fourth chapter introduces the more recent advances in
network adaptive media transport detailing the techniques that prioritize the video
packet flow. The fifth chapter provides a literature review of the existing distortion estimation techniques, focusing mainly on their limitations.
The second part of the thesis describes the original results obtained in the modelling
of the video distortion deriving from the transmission over an error prone
network. In particular, the sixth chapter presents three new distortion estimation
algorithms able to estimate the video quality and shows the results of some validation
tests performed to measure the accuracy of the employed algorithms. The
seventh chapter proposes different application scenarios where the developed algorithms may be used to enhance quickly the video quality at the end user side.
Finally, the eighth chapter summarizes the thesis contributions and highlights the most important conclusions. It also outlines some directions for future improvements.
The intent of the entire work presented hereafter is to develop video distortion estimation algorithms able to predict the user quality deriving from losses on the network, as well as to provide the results of some useful applications able to enhance the user experience during a video streaming session.
Advanced Resource Management Techniques for Next Generation Wireless Networks
The increasing penetration of mobile devices in everyday life poses a broad range of research challenges to meet such a massive data demand. Mobile users seek connectivity "anywhere, at anytime". In addition, killer applications with multimedia content, like video transmissions, require larger amounts of resources to cope with tight quality constraints. Spectrum scarcity and interference issues represent the key aspects of next generation wireless networks. Consequently, designing proper resource management solutions is critical. To this aim, we first propose a model to better assess the performance of simulated Orthogonal Frequency-Division Multiple Access (OFDMA)-based cellular networks. A link abstraction of the downlink data transmission can provide an accurate performance metric at a low computational cost. Our model combines Mutual Information-based multi-carrier compression metrics with link-level performance profiles, thus expressing the dependency of the transmitted data Block Error Rate (BLER) on the SINR values and on the modulation and coding scheme (MCS) being assigned. In addition, we evaluate the impact of jumbo frame transmission in LTE networks, i.e., of packets exceeding the legacy 1500-byte size. A comparative evaluation is performed based on diverse network configuration criteria, thus highlighting specific limitations. In particular, we observed rapid buffer saturation under certain circumstances, due to the transmission of oversized packets with scarce radio resources. A novel cross-layer approach is proposed to prevent saturation and to tune the transmitted packet size to the instantaneous channel conditions, fed back through standard CQI-based procedures. Recent advances in wireless networking introduce the concept of resource sharing as one promising way to enhance the performance of radio communications.
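The mutual-information-based compression of per-subcarrier SINRs into a single link-quality value can be sketched as follows; using the Gaussian capacity log2(1 + SNR) in place of modulation-specific mutual information curves is a simplifying assumption:

```python
# Sketch of a mutual-information-based link abstraction (MIESM-style):
# compress many per-subcarrier SNRs into one effective SNR that can be
# looked up against a link-level BLER curve. Using log2(1 + SNR) as the
# MI curve is a simplification; real models use per-modulation curves.

import math

def per_carrier_mi(snr_linear):
    """Per-symbol mutual information approximation (Gaussian capacity)."""
    return math.log2(1.0 + snr_linear)

def effective_snr(snrs_linear):
    """Average the per-subcarrier mutual information, then invert the
    MI curve to obtain a single equivalent-AWGN effective SNR."""
    mean_mi = sum(per_carrier_mi(s) for s in snrs_linear) / len(snrs_linear)
    return 2.0 ** mean_mi - 1.0
```

Averaging in the mutual-information domain penalizes frequency-selective channels: a mix of good and bad subcarriers maps to a lower effective SNR than their arithmetic mean, which is what makes the resulting BLER lookup accurate at low cost.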
As the wireless spectrum is a scarce resource, and its usage is often found to be inefficient, it may be meaningful to design solutions where multiple operators join their efforts, so that wireless access takes place on shared frequency bands rather than on bands proprietary to a single operator. In spite of the conceptual simplicity of this idea, the resulting mathematical analysis may be very complex, since it involves the analytical representation of multiple wireless channels. Thus, we propose an evaluation tool for spectrum sharing techniques in OFDMA-based wireless networks, where multiple sharing policies can be easily integrated and, consequently, evaluated. On the other hand, with regard to contention-based broadband wireless access, we target an important issue in mobile ad hoc networks: the intrinsic inefficiency of the standard Transmission Control Protocol (TCP), which presents degraded performance mainly due to mechanisms such as congestion control and avoidance. In fact, TCP was originally designed for wired networks, where packet losses indicate congestion. Conversely, channels in wireless networks might vary rapidly, so most loss events are due to channel errors or link-layer contention. We aim at designing a light-weight cross-layer framework which, differently from many other works in the literature, is based on the cognitive network paradigm. It includes an observation phase, i.e., a training set in which the network parameters are collected; a learning phase, in which the information to be used is extracted from the data; a planning phase, in which we define the strategies to trigger; and an acting phase, which corresponds to dynamically applying such strategies during network simulations. The next generation mobile infrastructure frontier relies on the concept of heterogeneous networks. However, the existence of multiple types of access nodes poses new challenges, such as more stringent interference constraints due to node densification and self-deployed access. Here, we propose methods that aim at extending femtocell coverage range by enabling idle User Equipments (UEs) to serve as relays. This way, UEs otherwise connected to macro cells can be offloaded to femto cells through UE relays. A joint resource allocation and user association scheme based on the solutions of a convex optimization problem is proposed. Another challenging issue to be addressed in such scenarios is admission control, which is in charge of ensuring that, when a new resource reservation is accepted, previously connected users continue to have their QoS guarantees honored. Thus, we consider different approaches to compute the aggregate projected capacity in OFDMA-based networks, and propose the E-Diophantine solution, whose mathematical foundation is provided along with the performance improvements to be expected, both in accuracy and in computational terms.
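The admission-control principle described here reduces to a capacity check; this sketch ignores the E-Diophantine machinery entirely and simply compares aggregate demand against the projected capacity, with all quantities treated as abstract resource units:

```python
# Minimal admission-control check: accept a new reservation only if
# the aggregate demand of already-admitted users plus the new request
# still fits within the projected aggregate capacity.
# Units and the single-scalar capacity model are illustrative.

def admit(active_demands, new_demand, projected_capacity):
    """Return True if the new reservation can be accepted without
    violating the guarantees of previously admitted users."""
    return sum(active_demands) + new_demand <= projected_capacity
```

The hard part, which the abstract's E-Diophantine solution addresses, is computing `projected_capacity` accurately in an OFDMA system where per-user capacity depends on channel conditions and resource-block granularity.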