
    Network-coded NOMA with antenna selection for the support of two heterogeneous groups of users

    The combination of Non-Orthogonal Multiple Access (NOMA) and Transmit Antenna Selection (TAS) techniques has recently attracted significant attention due to its low cost, low complexity, and high diversity gains. Meanwhile, Random Linear Coding (RLC) is considered a promising technique for achieving high reliability and low latency in multicast communications. In this paper, we consider a downlink system with a multi-antenna base station and two multicast groups of single-antenna users, where one group can afford to be served opportunistically, while the other group consists of comparatively low-power devices with limited processing capabilities that have strict Quality of Service (QoS) requirements. In order to boost reliability and satisfy the QoS requirements of the multicast groups, we propose a cross-layer framework comprising NOMA-based TAS at the physical layer and RLC at the application layer. In particular, two low-complexity TAS protocols for NOMA are studied in order to exploit the diversity gain and meet the QoS requirements. In addition, the RLC analysis accommodates heterogeneous users: sliding-window-based sparse RLC is employed for computationally restricted users, while conventional RLC is used for the others. Theoretical expressions that characterize the performance of the proposed framework are derived and verified through simulation results.
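As a rough illustration of the two coding modes the abstract contrasts (a sketch, not the paper's actual scheme), the following Python snippet generates one coded packet as a random linear combination over GF(2), i.e. an XOR of a random subset of source packets. The `window` and `sparsity` parameters are hypothetical knobs that emulate sliding-window RLC and sparse RLC, respectively.

```python
import random

def rlc_encode(packets, window=None, sparsity=1.0):
    """Produce one coded packet as a random linear combination over
    GF(2) (an XOR of a random subset of the source packets).
    `window` restricts mixing to the last w packets (sliding-window
    RLC, cheaper to decode for constrained devices); `sparsity` is
    the probability that each candidate packet enters the
    combination (sparse RLC)."""
    pool = packets if window is None else packets[-window:]
    coeffs = [1 if random.random() < sparsity else 0 for _ in pool]
    coded = 0
    for c, p in zip(coeffs, pool):
        if c:
            coded ^= p
    return coeffs, coded
```

With `sparsity=1.0` the combination degenerates to the XOR of the whole window; lowering the sparsity or shrinking the window trades decoding probability for lighter decoding work, which is the reason the paper reserves sparse, windowed RLC for the low-power group.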

    Coding in 802.11 WLANs

    Forward error correction (FEC) coding is widely used in communication systems to correct transmission errors. In IEEE 802.11a/g transmitters, convolutional codes are used for FEC at the physical (PHY) layer. As is typical in wireless systems, only a limited choice of pre-specified coding rates is supported. These are implemented in hardware and thus difficult to change, and the coding rates are selected with point-to-point operation in mind. This thesis is concerned with using FEC coding in 802.11 WLANs in more interesting ways that are better aligned with application requirements: for example, coding to support multicast traffic rather than simple point-to-point traffic; coding that is cognisant of the multiuser nature of the wireless channel; and coding which takes account of delay requirements as well as losses. We consider layering additional coding on top of the existing 802.11 PHY layer coding, and investigate the tradeoff between higher-layer coding and PHY layer modulation and FEC coding as well as MAC layer scheduling. Firstly, we consider the joint multicast performance of higher-layer fountain coding concatenated with 802.11a/g OFDM PHY modulation/coding. A study on the optimal choice of PHY rates with and without fountain coding is carried out for standard 802.11 WLANs. We find that, in contrast to studies in cellular networks, in 802.11a/g WLANs the PHY rate that optimizes uncoded multicast performance is also close to optimal for fountain-coded multicast traffic. This indicates that in 802.11a/g WLANs cross-layer rate control for higher-layer fountain coding concatenated with physical layer modulation and FEC would bring few benefits. Secondly, using experimental measurements taken in an outdoor environment, we model the channel provided by outdoor 802.11 links as a hybrid binary symmetric/packet erasure channel.
This hybrid channel offers capacity increases of more than 100% compared to a conventional packet erasure channel (PEC) over a wide range of RSSIs. Based upon the established channel model, we further consider the potential performance gains of adopting a binary symmetric channel (BSC) paradigm for multi-destination aggregation in 802.11 WLANs. We consider two BSC-based higher-layer coding approaches, i.e. superposition coding and a simpler time-sharing coding, for multi-destination aggregated packets. The performance results for both unicast and multicast traffic, taking account of MAC layer overheads, demonstrate that increases in network throughput of more than 100% are possible over a wide range of channel conditions, and that the simpler time-sharing approach yields most of these gains with only a minor loss of performance. Finally, we consider the proportional fair allocation of higher-layer coding rates and airtimes in 802.11 WLANs, taking link losses and delay constraints into account. We find that a layered approach of separating MAC scheduling and higher-layer coding rate selection is optimal. The proportional fair coding rate and airtime allocation (i) assigns equal total airtime (i.e. airtime including both successful and failed transmissions) to every station in a WLAN, (ii) ensures the station airtimes sum to unity (so the network operates at the rate-region boundary), and (iii) selects the coding rate that maximises goodput (treating packets decoded after the delay deadline as losses).
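The capacity argument behind the BSC paradigm can be reproduced with a short calculation (a sketch with illustrative numbers, not the thesis's measured data): treating bit errors as correctable gives the Shannon capacity C = 1 - H(p) per bit, whereas under the PEC paradigm a single flipped bit erases the whole L-bit packet.

```python
import math

def bsc_capacity(p):
    """Per-bit capacity of a binary symmetric channel with
    crossover probability p: C = 1 - H(p)."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

def pec_capacity(p, packet_bits):
    """Per-bit capacity when any bit error erases the packet: the
    packet survives with probability (1-p)^L, so C = (1-p)^L."""
    return (1.0 - p) ** packet_bits

# Illustrative operating point: crossover 1e-4, 8000-bit packets.
gain = bsc_capacity(1e-4) / pec_capacity(1e-4, 8000)
```

At this (assumed) operating point the ratio comes out at roughly 2.2, i.e. more than a 100% capacity increase, which is consistent with the order of gain reported in the abstract.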

    Analysis and evaluation of in-home networks based on HomePlug-AV power line communications

    Not very long ago, in-home networks (also called domestic networks) were used only to interconnect the computers in a home so that they could share a printer. Nowadays, however, due to the huge number of devices at home with communication capabilities, this definition has become much wider.
In a current in-home network we can find anything from mobile phones with wireless connectivity to NAS (Network Attached Storage) devices sharing multimedia content with high-definition televisions or computers. When installing a communications network in a home, two objectives are mainly pursued: reducing the installation cost and achieving high flexibility for future expansion. A network based on Power Line Communications (PLC) technology fulfils these objectives: since it uses the low-voltage wiring already available at home, it is very easy to install and expand, providing a cost-effective solution for home environments. There are different PLC standards, HomePlug-AV (HomePlug Audio-Video, or simply HPAV) being the most widely used nowadays. This standard achieves transmission rates of up to 200 Mbps through the electrical wiring of a typical home. The main objective of this thesis is to provide new ideas to improve the performance of in-home networks based on PLC technology, using the HPAV standard as a starting point. A network based on this technology uses a centralized architecture, in which most of the network intelligence is concentrated in a single device, the Central Coordinator (CCo). Hence, most of the modifications proposed in this work aim to improve this particular device, which can even become a multi-technology central manager, able to combine interfaces of different technologies to improve network performance. Initially, a detailed analysis of HPAV performance is presented for some scenarios typically found in a home environment. It was carried out both through simulation and by experimentation with real devices. For the simulation results, an HPAV simulator was designed which implements the physical (PHY) and medium access control (MAC) layers of the standard, together with a traffic modelling module implementing the services most commonly found in a home network.
This simulation tool was used both for these initial measurements and to evaluate the modifications of the standard proposed later in this work. The analysis provided two main results. Firstly, it was found that when a realistic PHY model is used together with the CSMA/CA MAC protocol, the simulation results differ markedly from those obtained with previously published mathematical models of this protocol. Hence, a new model was proposed that incorporates these effects. Secondly, several areas of the technology that could be improved were identified. The rest of the thesis then centred on proposing solutions to these weaknesses. The first weakness addressed is related to unicast data transmission. The PLC medium is frequency selective and time variant, and it presents remarkable variation between locations and depending on the connected loads. Even on a single link, the channel capacities in the two directions between transmitter and receiver can be very asymmetric. In such environments, the use of TCP as the transport protocol presents serious problems, since it defines some of its parameters according to the Round Trip Time (RTT). Alternatively, the use of Fountain codes for reliable data transmission in these environments was proposed. These codes allow information to be transmitted without a feedback channel, thereby overcoming the problems related to the variability of the channel. Several experiments were performed comparing both solutions, concluding that in PLC-based networks Fountain codes outperform a TCP-based application when transferring files reliably. In addition, Fountain codes were also used for another application. In home environments, it is very common to find more than one available technology for deploying a network (Wi-Fi, Ethernet, PLC, etc.). Therefore, an application that enables the aggregation of different interfaces would be very useful, as it would provide higher bandwidth, fault tolerance, and load balancing.
The Linux kernel contains a driver (Bonding) which allows Ethernet interfaces to be aggregated. However, it is not prepared for aggregating heterogeneous interfaces, much less variable-capacity technologies like PLC or Wi-Fi. In this work, a modification of this driver is presented which uses Fountain codes to solve the problems that may arise when asymmetric interfaces are aggregated. Furthermore, multicast communications in the current HPAV standard perform poorly. This is because, although the PLC medium is broadcast by nature, the Orthogonal Frequency Division Multiplexing (OFDM) modulation used at the PHY layer is always point to point. Therefore, multicast communications are carried out as successive point-to-point transmissions to the different members of the group. This technique clearly degrades the performance of multicast services as the number of receivers increases. In this work, two alternative algorithms are proposed. The first consists of using a common tone map for all the multicast group members; this tone map corresponds to the modulation parameters obtained for the client with the worst channel conditions. This algorithm has traditionally been discarded in OFDM systems because of its poor performance. However, in contrast to other technologies (such as wireless), the channel responses in a given PLC network exhibit significant mutual correlation. This reduces the differences among the users, improving the performance of the algorithm. In addition, a second technique which uses an optimization algorithm to maximize the multicast bit rate is also evaluated, showing that it is suitable when the number of multicast clients is high. Finally, due to the properties of the PLC medium, cross-layer techniques are attracting great interest. These algorithms are based on sharing information between layers of the OSI model to improve system behaviour.
In this work, an extension of the HPAV CSMA/CA algorithm is proposed which modifies the protocol parameters using PHY layer information and the QoS requirements of the upper-layer services. In this way, priority access to the channel can be given to the nodes with QoS problems, improving overall network performance. This algorithm has been evaluated through simulation in a typical home environment, with very promising results. (Universidad Politécnica de Cartagena)
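The worst-user tone-map idea from the multicast part of the abstract can be sketched numerically (the per-subcarrier bit-loadings below are hypothetical, not HPAV's actual values): a common tone map carries, on each subcarrier, the minimum bit-loading among the receivers, while the baseline serial-unicast scheme repeats the payload once per receiver, so its effective rate is the harmonic combination of the per-user rates.

```python
def common_tonemap_rate(bitloads):
    """Multicast rate (bits/OFDM symbol) with a single common tone
    map: each subcarrier is modulated with the minimum bit-loading
    among the receivers, so every receiver can demodulate it."""
    return sum(min(col) for col in zip(*bitloads))

def serial_unicast_rate(bitloads):
    """Effective multicast rate when the payload is sent as
    successive point-to-point transmissions: total airtime is the
    sum of the per-user airtimes (payload / per-user rate)."""
    rates = [sum(row) for row in bitloads]
    return 1.0 / sum(1.0 / r for r in rates)

# Two hypothetical receivers, three subcarriers (bits per symbol):
loads = [[4, 2, 6],
         [2, 4, 4]]
common = common_tonemap_rate(loads)   # worst-user tone map
serial = serial_unicast_rate(loads)   # repeated unicast baseline
```

Even with only two receivers the common tone map wins here (8 vs about 5.45 bits per symbol), and its advantage grows with group size, since the serial-unicast rate shrinks roughly as 1/N while correlated PLC channels keep the per-carrier minima high.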

    Multi-user video streaming using unequal error protection network coding in wireless networks

    In this paper, we investigate a multi-user video streaming system applying unequal error protection (UEP) network coding (NC) for the simultaneous real-time exchange of scalable video streams among multiple users. We focus on a simple wireless scenario, intended to capture the fundamental system behaviour, in which users exchange encoded data packets over a common central network node (e.g., a base station or an access point). Our goal is to present analytical tools that provide both the decoding probability analysis and the expected delay guarantees for the different importance layers of scalable video streams. Using the proposed tools, we offer a simple framework for the design and analysis of UEP NC based multi-user video streaming systems, and provide examples of system design for a video conferencing scenario in broadband wireless cellular networks.
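The layer decoding probabilities mentioned above follow a standard random-matrix argument; as a sketch (field size and packet counts here are illustrative, not the paper's), the chance that the k source packets of one importance layer are recoverable from n received random linear combinations over GF(q) is the probability that a random n × k coefficient matrix has rank k.

```python
def rlc_decoding_prob(k, n, q=256):
    """Probability that k source packets of one importance layer
    are decodable from n received random linear combinations over
    GF(q): the chance that an n x k matrix with uniform random
    entries has full column rank,
        prod_{i=0}^{k-1} (1 - q**(-(n - i))).
    Returns 0 when n < k (rank k is then impossible)."""
    if n < k:
        return 0.0
    p = 1.0
    for i in range(k):
        p *= 1.0 - q ** (-(n - i))
    return p
```

For large q the probability is close to 1 as soon as n reaches k, which is why UEP schemes mainly tune how often each importance window is sampled rather than the field size.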

    Error and Congestion Resilient Video Streaming over Broadband Wireless

    In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect H.264/Advanced Video Coding (AVC) data-partitioned videos. Packetization strategy is an effective tool for controlling error rates and, in this paper, source-coded data partitioning serves to allocate smaller packets to more important compressed video data. The scheme is applied to real-time streaming across a broadband wireless link, and the advantages of rateless code rate adaptivity are then demonstrated. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, this paper recommends packet-size-dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size-dependent scheduling results in a robust streaming scheme specialized for broadband wireless and real-time streaming applications such as video conferencing, video telephony, and telemedicine.

    Opportunistic error correction for OFDM-based DVB systems

    DVB-T2 (second-generation terrestrial digital video broadcasting) employs LDPC (Low Density Parity Check) codes combined with BCH (Bose-Chaudhuri-Hocquenghem) codes, which perform better than the convolutional and Reed-Solomon codes used in other OFDM-based DVB systems. However, the current FEC layer in the DVB-T2 standard is still not optimal. In this paper, we propose a novel error correction scheme based on fountain codes for OFDM-based DVB systems. The key element of this new scheme is that the receiver processes only the packets that have encountered high-energy channels; the others are discarded. To achieve a data rate of 9.5 Mbit/s, this new approach has an SNR gain of at least 10 dB with perfect channel knowledge, and 11 dB with imperfect channel knowledge, compared to the current FEC layer in the DVB-T2 standard. With a low-complexity interpolation-based channel estimation algorithm, opportunistic error correction offers QEF (Quasi Error Free) quality at a maximum Doppler frequency (DF) of 40 Hz, whereas the current DVB-T2 FEC layer can only provide a BER of 10⁻⁷ after BCH decoding at a maximum DF of 20 Hz.
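The opportunistic selection step can be sketched as a simple threshold rule (the threshold and overhead figures below are illustrative assumptions, not the paper's): packets received over high-energy subchannels are kept, and the fountain code repairs the discarded ones provided slightly more than the k source packets survive selection.

```python
def opportunistic_select(packet_snrs_db, threshold_db):
    """Indices of received packets that experienced a 'high-energy'
    channel (SNR at or above the threshold); only these are decoded,
    the rest are treated as erasures for the fountain code."""
    return [i for i, snr in enumerate(packet_snrs_db)
            if snr >= threshold_db]

def fountain_recovery_possible(n_kept, k_source, overhead=0.05):
    """A rateless (fountain) code recovers the k source packets as
    long as slightly more than k coded packets survive; `overhead`
    is an assumed 5% reception overhead."""
    return n_kept >= k_source * (1 + overhead)
```

The design choice this captures is that the receiver never wastes decoding effort on low-energy packets; the rateless property guarantees that any sufficiently large surviving subset is enough, regardless of which packets were discarded.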