
    Impulsive noise cancellation and channel estimation in power line communication systems

    Power line communication (PLC) is considered the most viable enabler of the smart grid. PLC exploits the power line infrastructure for data transmission and provides an economical communication backbone to support the requirements of smart grid applications. Although PLC brings many benefits to smart grid implementation, impairments such as frequency-selective attenuation of the high-frequency communication signal, the presence of impulsive noise (IN) and narrowband interference (NBI) from nearby wireless communication systems make the power line a hostile environment for reliable data transmission. Hence, the main objective of this dissertation is to design signal processing algorithms that are specifically tailored to overcome the inevitable impairments of the power line environment. First, we propose a novel IN mitigation scheme for PLC systems. The proposed scheme actively estimates the locations of the IN samples and eliminates the effect of IN only from the contaminated samples of the received signal. This avoids the typical problem of passive IN power suppression algorithms, in which samples besides the ones containing IN are also affected, creating additional distortion in the received signal. Apart from IN, PLC transmission is also impaired by NBI. Exploiting the duality of the problem, where IN is impulsive in the time domain and NBI is impulsive in the frequency domain, an extended IN mitigation algorithm is proposed to accurately estimate and effectively cancel both impairments from the received signal. Numerical validation of the proposed schemes shows improved BER performance of PLC systems in the presence of IN and NBI. Second, we turn to the problem of channel estimation in the power line environment. The presence of IN makes channel estimation challenging for PLC systems.
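The time-frequency duality described above (IN is impulsive in the time domain, NBI in the frequency domain) can be illustrated with a toy robust-threshold detector applied in both domains. This is a minimal sketch under assumed signal parameters, not the detection scheme of the thesis:

```python
import numpy as np

def detect_impulsive(samples, factor=5.0):
    """Flag samples whose magnitude exceeds a robust threshold.

    The threshold is a multiple of the median absolute value, so a few
    large outliers do not inflate it the way a mean-based estimate would.
    """
    thresh = factor * np.median(np.abs(samples))
    return np.abs(samples) > thresh

rng = np.random.default_rng(0)
n = 256
y = rng.normal(0.0, 1.0, n)                      # broadband background signal
y[40] += 50.0                                    # IN: a single time-domain spike
# NBI is a sinusoid on bin 30: flat in time, a spike in the frequency domain
y += 10.0 * np.cos(2 * np.pi * 30 * np.arange(n) / n)

in_support = detect_impulsive(y)                 # time-domain test locates the IN
nbi_support = detect_impulsive(np.fft.rfft(y))   # same test on the spectrum locates the NBI
```

The same detector finds the spike at sample 40 in the time domain and the interference at bin 30 in the frequency domain, which is exactly the duality the extended mitigation algorithm exploits.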
To accurately estimate the channel, two maximum-likelihood (ML) channel estimators for PLC systems are proposed in this thesis. Both ML estimators exploit the estimated IN samples to determine the channel coefficients. Of the two, one treats the estimated IN as a deterministic quantity, while the other assumes that the estimated IN is a random quantity. The performance of both estimators is analyzed and numerically evaluated, showing their superiority over conventional channel estimation strategies in the presence of IN. Furthermore, between the two proposed estimators, the one based on the random approach outperforms the deterministic one in all typical PLC scenarios. However, the estimator based on the deterministic approach can perform consistent channel estimation regardless of the IN behavior with less computational effort, and so becomes an efficient channel estimation strategy in situations where high computational complexity cannot be afforded. Finally, we propose two ML algorithms to perform precise IN support detection. The proposed algorithms perform a greedy search for the samples in the received signal that are contaminated by IN. To design these algorithms, statistics defined for the deterministic and random ML channel estimators are exploited, and two multiple hypothesis tests are built according to the Bonferroni and the Benjamini-Hochberg design criteria. Among the proposed detectors, the random ML-based approach outperforms the deterministic ML-based approach when detecting the IN support in typical power line environments. Hence, this thesis studies the power line environment for reliable data transmission in support of the smart grid.
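The Benjamini-Hochberg criterion mentioned above is a standard multiple-testing procedure for controlling the false discovery rate. A minimal sketch follows; the per-sample p-values are hypothetical and the statistic producing them is left abstract, since the thesis derives its own test statistics:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under BH FDR control.

    Sort the p-values, find the largest k with p_(k) <= (k/m) * alpha,
    and reject every hypothesis whose p-value is at most p_(k).
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    ranked = np.sort(p)
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    if not below.any():
        return np.zeros(m, dtype=bool)
    k = int(np.max(np.nonzero(below)[0]))   # largest rank satisfying the bound
    return p <= ranked[k]

# Hypothetical p-values, one per received sample; small values suggest IN.
p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
mask = benjamini_hochberg(p_vals, alpha=0.05)   # rejects the first two samples
```

For comparison, the Bonferroni criterion would reject only p-values below alpha/m = 0.005 (just the first sample here), which is why BH typically detects more of the true IN support at the same nominal error level.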
The proposed signal processing schemes are robust and allow PLC systems to effectively overcome the major impairments of an active electrical network. The efficient mitigation of IN and NBI and the accurate estimation of the channel enhance the applicability of PLC to support the critical applications envisioned for the future electrical power grid.

    AN EFFICIENT INTERFERENCE AVOIDANCE SCHEME FOR DEVICE-TO-DEVICE ENABLED FIFTH GENERATION NARROWBAND INTERNET OF THINGS NETWORKS

    Narrowband Internet of Things (NB-IoT) is a low-power wide-area (LPWA) technology built on long-term evolution (LTE) functionalities and standardized by the 3rd Generation Partnership Project (3GPP). Owing to its support for massive machine-type communication (mMTC) and for diverse IoT use cases with rigorous requirements in terms of connectivity, energy efficiency, reachability, reliability, and latency, NB-IoT has attracted the research community. However, as the capacity needs of the various IoT use cases expand, the numerous functionalities of the LTE evolved packet core (EPC) may become overburdened and suboptimal, and several research efforts are in progress to address these challenges. This thesis therefore gives an overview of these efforts, focusing on the optimized architecture of the LTE EPC functionalities, the 5G architectural design for NB-IoT integration, the enabling technologies necessary for 5G NB-IoT, the coexistence of 5G new radio (NR) with NB-IoT, and feasible schemes for deploying NB-IoT with cellular networks. The thesis also presents cloud-assisted relaying with backscatter communication as part of a detailed study of the technical performance attributes and channel communication characteristics of the physical (PHY) and medium access control (MAC) layers of NB-IoT, with a focus on 5G, and explores the numerous drawbacks that come with simulating these systems. The enabling market for NB-IoT, the benefits for a few use cases, and the potential critical challenges associated with their deployment are all highlighted. Fortunately, the cyclic-prefix orthogonal frequency division multiplexing (CP-OFDM) waveform adopted by 3GPP NR for enhanced mobile broadband (eMBB) services does not prohibit the use of other waveforms in other services, such as the NB-IoT service for mMTC.
Consequently, the coexistence of 5G NR and NB-IoT must be kept orthogonal (or quasi-orthogonal) to minimize the mutual interference that limits the degrees of freedom in the overall waveform design. Coexistence with 5G will thus introduce a new interference challenge, distinct from that of the legacy network, even though the NR's coexistence with NB-IoT is expected to improve network capacity, expand the coverage of the user data rate, and provide more robust communication through frequency reuse. Interference may make channel estimation difficult for NB-IoT devices, limiting user performance and spectral efficiency. Existing interference mitigation solutions either add to the network's overhead, computational complexity and delay, or are hampered by low data rate and coverage; such algorithms are unsuitable for an NB-IoT network, given the low-complexity nature of its devices. A device-to-device (D2D) communication based interference-control technique therefore becomes an effective strategy for addressing this problem. This thesis uses D2D communication to relieve the network bottleneck in dense, interference-prone 5G NB-IoT networks. For D2D-enabled 5G NB-IoT systems, the thesis presents an interference-avoidance resource allocation scheme that considers the less favourably placed cell-edge NB-IoT user equipments (NUEs). To reduce the algorithm's computational complexity and the interference power, the scheme divides the optimization problem into three sub-problems. First, in an orthogonal deployment using channel state information (CSI), the channel gain factor is leveraged by selecting a probable reuse channel with higher QoS control. Second, a bisection search is used to find the power control that maximizes the network sum rate. Third, the Hungarian algorithm is used to build a maximum bipartite matching that chooses the optimal pairing between the set of NUEs and the D2D pairs.
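The second and third sub-problems rest on two standard tools, bisection search over a monotone feasibility condition and the Hungarian algorithm for maximum-weight bipartite matching. The sketch below is purely illustrative: the rate matrix, the toy interference constraint, and the use of SciPy's Hungarian implementation are assumptions, not the thesis's actual formulation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bisect_max_power(feasible, lo=0.0, hi=1.0, iters=50):
    """Largest p in [lo, hi] with feasible(p) True, assuming monotonicity."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid            # constraint still met: push power up
        else:
            hi = mid            # constraint violated: back off
    return lo

# Toy power control: interference grows linearly, budget is 2.0 units.
p_star = bisect_max_power(lambda p: 10.0 * p <= 2.0)   # converges to 0.2

# Hypothetical sum-rate matrix: rate[i, j] = network sum rate if D2D pair i
# reuses the channel of NUE j. The values are made up for illustration.
rate = np.array([
    [4.1, 2.7, 3.3],
    [3.9, 4.8, 1.2],
    [2.5, 3.1, 4.4],
])

# The Hungarian method finds the one-to-one pairing maximizing total rate.
rows, cols = linear_sum_assignment(rate, maximize=True)
best_total = rate[rows, cols].sum()
```

Splitting the problem this way keeps each stage cheap: bisection needs only a feasibility check per step, and the Hungarian matching runs in polynomial time in the number of NUE/D2D pairs.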
According to the numerical results, the proposed approach improves the D2D sum rate and the overall network SINR of the 5G NB-IoT system. The maximum power constraint of the D2D pair, the D2D pair's location, the pico base station (PBS) cell radius, the number of potential reuse channels, and the cluster distance all affect the D2D pair's performance. In the simulations, the scheme achieves SINR performance 28.35%, 31.33%, and 39% higher than the ARSAD, DCORA, and RRA algorithms when the number of NUEs is twice the number of D2D pairs, and 2.52%, 14.80%, and 39.89% higher than ARSAD, RRA, and DCORA when the numbers of NUEs and D2D pairs are equal. Likewise, the D2D sum rate is 9.23%, 11.26%, and 13.92% higher than ARSAD, DCORA, and RRA when the number of NUEs is twice the number of D2D pairs, and 1.18%, 4.64%, and 15.93% higher than ARSAD, RRA, and DCORA, respectively, with equal numbers of NUEs and D2D pairs. These results demonstrate the efficacy of the proposed scheme. The thesis also addresses the case in which the cell-edge NUE's QoS is critical, facing challenges such as long-distance transmission, delay, low bandwidth utilization, and high system overhead that affect 5G NB-IoT network performance. In this case, most cell-edge NUEs boost their transmit power to maximize network throughput; however, integrating a cooperative D2D relaying technique into 5G NB-IoT heterogeneous network (HetNet) uplink spectrum sharing increases both the system's spectral efficiency and its interference power, further degrading the network. Using a max-max SINR (Max-SINR) approach, this thesis proposes an interference-aware D2D relaying strategy that improves the QoS of a cell-edge NUE in 5G NB-IoT and achieves optimum system performance. The Lagrangian dual technique is used to optimize the transmit power from the cell-edge NUE to the relay under an average interference power constraint, while the relay transmits to the NB-IoT base station (NBS) at fixed power.
To choose the optimal D2D relay node, the channel-to-interference-plus-noise ratio (CINR) of all available D2D relays is used to maximize the minimum cell-edge NUE data rate while ensuring that the cellular NUEs' QoS requirements are satisfied. Best-harmonic-mean, best-worst, and half-duplex relay selection, as well as a direct D2D communication scheme, were among the other relaying strategies studied. The simulation results reveal that, apart from the direct D2D communication scheme, the Max-SINR selection scheme outperforms all other selection schemes owing to the high channel gain between the two communicating devices. The proposed algorithm achieves 21.27% SINR performance, nearly identical to the half-duplex scheme, but outperforms the best-worst and harmonic selection techniques by 81.27% and 40.29%, respectively. As the number of D2D relays increases, the capacity increases by 14.10% and 47.19% over the harmonic and half-duplex techniques, respectively. Finally, the thesis presents future research on interference control, together with open research directions on PHY and MAC properties and a SWOT (strengths, weaknesses, opportunities, and threats) analysis, presented in Chapter 2, to encourage further study of 5G NB-IoT.
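The max-min relay selection idea above can be sketched in a few lines: a two-hop link is limited by its weaker hop, so the relay whose worse hop CINR is largest is chosen. The relay set and CINR values below are hypothetical, and the Shannon-rate mapping is a common simplification rather than the thesis's exact rate model:

```python
import math

# Hypothetical linear CINRs per candidate relay: (NUE -> relay, relay -> NBS).
relays = {
    "r1": (12.0, 3.0),
    "r2": (6.0, 5.0),
    "r3": (9.0, 2.5),
}

def pick_relay(candidates):
    """Max-min selection: the end-to-end link is bottlenecked by the weaker
    hop, so pick the relay whose worse hop CINR is largest."""
    return max(candidates, key=lambda r: min(candidates[r]))

best = pick_relay(relays)                     # -> "r2" (bottleneck CINR 5.0)
rate = math.log2(1.0 + min(relays[best]))     # Shannon rate of the bottleneck hop
```

Note that r1 has the single strongest hop (12.0) but loses to r2 because its second hop (3.0) throttles the end-to-end rate, which is exactly why a max-min criterion beats picking the best individual hop.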

    Flexible cross layer optimization for fixed and mobile broadband telecommunication networks and beyond

    As the telecommunication world moves towards a data-only network environment, signaling, voice and other data are all transported as Internet Protocol packets. New requirements, challenges and opportunities accompany this transition and shape telecommunication architectures accordingly. At a time when the Internet in general, and telecommunication networks in particular, have become critical infrastructure, guaranteeing efficient and flexible data transport is of high importance. A certain level of Quality of Service (QoS) for critical services is crucial even during overload situations in the access and core networks, as these two are the bottlenecks of the network. However, the current telecommunication architecture is rigid and static, offering very limited flexibility and adaptability.
Several clean-slate as well as evolutionary approaches have been proposed and defined to cope with these new challenges and requirements. One of them is the Cross Layer Optimization paradigm. This concept omits the strict separation and isolation of the application, control and network layers, enabling interaction and fostering optimization across them. One indicator underlying this trend is the programmability of network functions, which emerges clearly in the evolution of telecommunication networks towards the Future Internet. The concept is regarded as one solution for service control in future mobile core networks. However, at the time this thesis was written, no approach to Cross Layer signaling or optimization between the individual layers had been standardized. The main objective of this thesis is the design, implementation and evaluation of a Cross Layer Optimization concept for telecommunication networks. A major emphasis is placed on the definition of a theoretical model and its practical realization through the implementation of a Cross Layer network resource optimization system for telecommunication systems. The key questions answered in this thesis are: in which way can the Cross Layer Optimization paradigm be applied to telecommunication networks; which new requirements arise; which of the required functionalities cannot be covered by existing solutions; what other conceptual approaches already exist; and, finally, whether such a new concept is viable. The work presented in this thesis and its contributions can be summarized in four parts: First, a review of related work, a requirement analysis and a gap analysis were performed. Second, the challenges, limitations, opportunities and design aspects of an optimization model between the application and network layers were formulated.
Third, a conceptual model, Generic Adaptive Resource Control (GARC), was specified and realized as a prototype. Fourth, the theoretical and practical contributions of the thesis were validated and evaluated.

    Data Communications and Network Technologies

    This open access book is written according to the examination outline for the Huawei HCIA-Routing & Switching V2.5 certification, aiming to help readers master the basics of network communications and use Huawei network devices to set up enterprise LANs and WANs, wired and wireless networks, ensure enterprise network security, and grasp cutting-edge computer network technologies. The book covers: network communication fundamentals, the TCP/IP protocol suite, the Huawei VRP operating system, IP addresses and subnetting, static and dynamic routing, Ethernet networking technology, ACL and AAA, network address translation, DHCP servers, WLAN, IPv6, the WAN PPP and PPPoE protocols, typical networking architectures and design cases for campus networks, the SNMP protocol used in network management, operation and maintenance, the network time protocol (NTP), SDN and NFV, programming, and automation. As the world's leading provider of ICT (information and communications technology) infrastructure and smart terminals, Huawei offers products ranging from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.
