13 research outputs found

    FFT size optimization for LTE RoF in nonlinear fibre propagation

    This paper investigates the performance of fast Fourier transform (FFT) sizes 64, 128, 256, 512, and 1024 for the orthogonal frequency division multiplexing (OFDM) scheme in 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) and LTE-Advanced (LTE-A). It aims to optimize the FFT size with respect to quadrature phase shift keying (QPSK) and 16-, 64-, and 256-quadrature amplitude modulation (QAM). The optimization targets the transmission of LTE signals between an eNodeB (eNB) and a relay node (RN) to extend mobile coverage using radio-over-fibre (RoF). The study takes into account the positive frequency chirp (PFC) induced by the directly modulated distributed feedback (DFB) laser, together with chromatic dispersion (CD) and self-phase modulation (SPM) impairments. We present the optimum optical launch power (OLP) region, termed the intermixing region, between linear and nonlinear optical fibre propagation. In this investigation, the optimum OLP is -4 dBm, which falls within the intermixing region. At transmission rates of 200, 400, 600, and 800 Mb/s for QPSK and 16-, 64-, and 256-QAM, FFT size 128 provides the optimum power penalty, with an average system efficiency of 54% relative to FFT size 64 and 65% relative to FFT size 256.

    Network Formation Games Among Relay Stations in Next Generation Wireless Networks

    The introduction of relay station (RS) nodes is a key feature in next generation wireless networks such as 3GPP's Long Term Evolution-Advanced (LTE-Advanced) or the forthcoming IEEE 802.16j WiMAX standard. This paper presents, using game theory, a novel approach for forming the tree architecture that connects the RSs and their serving base station in the uplink of next generation wireless multi-hop systems. Unlike the existing literature, which mainly focused on performance analysis, we propose a distributed algorithm for studying the structure and dynamics of the network. We formulate a network formation game among the RSs whereby each RS aims to maximize a cross-layer utility function that takes into account the benefit from cooperative transmission, in terms of reduced bit error rate, and the costs in terms of the delay due to multi-hop transmission. For forming the tree structure, a distributed myopic algorithm is devised. Using the proposed algorithm, each RS can individually select the path that connects it to the BS through other RSs while optimizing its utility. We show the convergence of the algorithm to a Nash tree network, and we study how the RSs can adapt the network's topology to environmental changes such as mobility or the deployment of new mobile stations. Simulation results show that the proposed algorithm yields significant gains in average utility per mobile station: at least 17.1% higher than in the case with no RSs, and up to 40.3% higher than with a nearest-neighbor algorithm (for a network with 10 RSs). The results also show that the average number of hops does not exceed 3 even for a network with up to 25 RSs.
    Comment: IEEE Transactions on Communications, vol. 59, no. 9, pp. 2528-2542, September 201
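
    The myopic best-response dynamics described above can be sketched as follows. The utility form (a direct-link benefit minus a per-hop delay cost) and the example gain matrix are illustrative assumptions, not the paper's exact cross-layer utility; the loop repeats best responses until no RS wants to switch parents, i.e. a Nash tree.

```python
def in_subtree(parent, node, root, bs=0):
    """True if `node` lies in the subtree rooted at `root` (used to avoid cycles)."""
    while node != bs:
        if node == root:
            return True
        node = parent[node]
    return node == root

def myopic_tree(link_gain, delay_cost=1.0, bs=0):
    """link_gain[i][p]: benefit of a direct link i->p; returns each RS's parent."""
    n = len(link_gain)
    parent = {i: bs for i in range(1, n)}           # start: every RS attaches to the BS

    def hops(i):                                     # hop count from node i to the BS
        h = 0
        while i != bs:
            i, h = parent[i], h + 1
        return h

    def utility(i, p):                               # link benefit minus multi-hop delay
        return link_gain[i][p] - delay_cost * (hops(p) + 1)

    changed = True
    while changed:                                   # best-response until a Nash tree
        changed = False
        for i in range(1, n):
            # candidate parents: any node not in i's own subtree (keeps a tree)
            cands = [p for p in range(n) if p != i and not in_subtree(parent, p, i)]
            best = max(cands, key=lambda p: utility(i, p))
            if best != parent[i]:
                parent[i], changed = best, True
    return parent

# Toy 1-BS, 3-RS example with made-up link gains:
gains = [[0, 0, 0, 0],
         [5, 0, 1, 1],
         [1, 6, 0, 1],
         [2, 1, 5, 0]]
print(myopic_tree(gains))  # -> {1: 0, 2: 1, 3: 2}: a 3-hop chain forms
```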

    Tree formation with physical layer security considerations in wireless multi-hop networks

    Physical layer security has emerged as a promising technique that complements existing cryptographic approaches and enables securing wireless transmissions against eavesdropping. In this paper, the impact of optimizing physical layer security metrics on the architecture and interactions of the nodes in multi-hop wireless networks is studied. In particular, a game-theoretic framework is proposed in which a number of nodes interact and choose their optimal and secure communication paths in the uplink of a wireless multi-hop network, in the presence of eavesdroppers. To this end, a tree formation game is formulated in which the players are the wireless nodes that seek to form a network graph among themselves while optimizing their multi-hop secrecy rates or their path qualification probabilities, depending on their knowledge of the eavesdroppers' channels. To solve this game, a distributed tree formation algorithm is proposed and is shown to converge to a stable Nash network. Simulation results show that the proposed approach yields significant performance gains in terms of both the average bottleneck secrecy rate per node and the average path qualification probability per node, relative to classical best-channel algorithms and the single-hop star network. The results also assess the properties and characteristics of the resulting Nash networks.
    This work was supported in part by the Australian Research Council's Discovery Projects funding scheme (project no. DP110102548), and in part by an AFOSR MURI Grant (FA9550-10-1-0573).
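
    A minimal sketch of the secrecy metric underlying this game: the per-hop wiretap secrecy rate (legitimate-link rate minus eavesdropper rate, floored at zero) and the path's bottleneck (weakest-hop) secrecy rate. The SNR values are made up for illustration.

```python
import math

def hop_secrecy_rate(snr_main: float, snr_eve: float) -> float:
    """Wiretap-channel secrecy rate of one hop (bits/s/Hz), floored at zero."""
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

def bottleneck_secrecy_rate(path):
    """Multi-hop secrecy rate = the weakest hop along the chosen uplink path."""
    return min(hop_secrecy_rate(m, e) for m, e in path)

# Two candidate paths to the BS; the tree formation game would prefer the
# one with the higher bottleneck secrecy rate.
path_a = [(15.0, 1.0), (31.0, 3.0)]   # (main SNR, eavesdropper SNR) per hop
path_b = [(63.0, 1.0), (7.0, 7.0)]    # second hop fully overheard -> rate 0
print(bottleneck_secrecy_rate(path_a), bottleneck_secrecy_rate(path_b))
```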

    Investigation of Optical Modulators in Optimized Nonlinear Compensated LTE RoF System


    Long Term Evolution-Advanced and Future Machine-to-Machine Communication

    Long Term Evolution (LTE) has adopted Orthogonal Frequency Division Multiple Access (OFDMA) and Single Carrier Frequency Division Multiple Access (SC-FDMA) as the downlink and uplink transmission schemes, respectively. Quality of Service (QoS) provisioning is one of the primary objectives of wireless network operators. In LTE-Advanced (LTE-A), several new features such as Carrier Aggregation (CA) and Relay Nodes (RNs) have been introduced by the 3rd Generation Partnership Project (3GPP). These features have been designed to deal with the ever-increasing demands for higher data rates and spectral efficiency. The RN is a low-power, low-cost device designed for extending coverage and enhancing spectral efficiency, especially at the cell edge. Wireless networks are facing a new challenge emerging on the horizon: the expected surge of Machine-to-Machine (M2M) traffic in cellular and mobile networks. The costs and sizes of M2M devices with integrated sensors, network interfaces, and enhanced power capabilities have decreased significantly in recent years. Therefore, it is anticipated that M2M devices might outnumber conventional mobile devices in the near future. 3GPP standards like LTE-A have primarily been developed for broadband data services with mobility support. However, M2M applications are mostly based on narrowband traffic. These standards may not achieve overall spectrum and cost efficiency if they are utilized for serving M2M applications. The main goal of this thesis is to take advantage of the low cost, low power, and small size of RNs to integrate M2M traffic into LTE-A networks. A new RN design is presented for aggregating and multiplexing M2M traffic at the RN before transmission over the air interface (the Un interface) to the base station, called the eNodeB. The data packets of the M2M devices are sent to the RN over the Uu interface.
    Packets from different devices are aggregated at the Packet Data Convergence Protocol (PDCP) layer of the Donor eNodeB (DeNB) into a single large IP packet instead of several small IP packets. Therefore, the amount of overhead data can be significantly reduced. The proposed concept has been developed in the LTE-A network simulator to illustrate the benefits and advantages of M2M traffic aggregation and multiplexing at the RN. The potential gains of RNs, such as coverage enhancement, multiplexing gain, and end-to-end delay performance, are illustrated with the help of simulation results. The results indicate that the proposed concept improves the performance of the LTE-A network with M2M traffic. The adverse impact of M2M traffic on regular LTE-A traffic, such as voice and file transfer, is minimized. Furthermore, the cell edge throughput and QoS performance are enhanced. Moreover, the results are validated with the help of an analytical model.
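
    The multiplexing gain motivated above can be illustrated with a back-of-envelope sketch: N small M2M packets sent individually each pay the full per-packet overhead, while aggregation into one large IP packet pays it once plus a small per-packet sub-header. The header sizes below are illustrative assumptions, not 3GPP figures.

```python
def airtime_bytes(n_packets, payload=40, header=60, subheader=2, aggregated=False):
    """Bytes sent over the air for n small M2M packets, with/without aggregation."""
    if aggregated:
        return header + n_packets * (subheader + payload)  # one big IP packet
    return n_packets * (header + payload)                   # n small IP packets

plain = airtime_bytes(50)
aggr = airtime_bytes(50, aggregated=True)
print(plain, aggr, round(100 * (1 - aggr / plain)))  # percent of airtime saved
```

    With these assumed sizes, aggregating 50 packets cuts the transmitted volume by more than half; the saving grows as individual payloads shrink relative to the header.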

    Optimizing handover performance in LTE networks containing relays

    The purpose of relays in Long Term Evolution (LTE) networks is to provide coverage extension and higher bit rates for cell edge users. Like any other new node in the network, relays bring new challenges. One of these challenges concerns mobility; more specifically, back-and-forth data transmission between the Donor Evolved NodeB (DeNB) and the Relay Node (RN) can occur during handover. For services that are sensitive to packet loss, receiving all packets at the destination is crucial. In cellular networks, when the User Equipment (UE) detaches from the old cell and attaches to the new cell, it experiences a short disruption. During this disruption, while the UE is not connected anywhere, packets can easily be lost. To avoid the consequences of these packet losses, the data forwarding concept was developed: packets lost during handover are identified and forwarded to the destination, and the forwarded packets are transmitted to the UE once it has attached to the new cell. In networks using relays, all packets must still transit via the DeNB. If a UE connected to an RN is handed over to a new cell, the unacknowledged packets between the RN and the UE that are still in the RN buffer must be transmitted back to the DeNB and onwards to the target. Furthermore, packets in flight on the S1 interface are transmitted along the old path until the path switch occurs. This data transfer from the DeNB to the RN and back to the DeNB again increases latency and occupies resources on the Un interface. This thesis studies the problem of back-and-forth forwarding. Different solutions to overcome this challenge are proposed, and simulations are performed to evaluate the proposals. The evaluated approaches showed considerable performance improvement over previous solutions.
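
    The cost of back-and-forth forwarding can be illustrated with a schematic count of transmissions: in the baseline, each packet buffered at the RN crosses the Un interface an extra time on its way back to the DeNB, whereas a (hypothetical) scheme in which the DeNB forwards its own copies directly to the target avoids that extra hop. The counts and the optimization are illustrative assumptions, not the thesis's specific proposals.

```python
def forwarding_transmissions(buffered_at_rn, in_flight_s1, direct_from_denb=False):
    """Schematic transmission count to move handover packets to the target cell."""
    if direct_from_denb:
        # hypothetical optimization: DeNB forwards its own copies straight to the target
        return buffered_at_rn + in_flight_s1
    # baseline: RN sends buffered packets back over Un, then the DeNB forwards all
    return 2 * buffered_at_rn + in_flight_s1

print(forwarding_transmissions(20, 5), forwarding_transmissions(20, 5, True))
```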

    Cooperative Communications: Network Design and Incremental Relaying


    Towards addressing training data scarcity challenge in emerging radio access networks: a survey and framework

    The future of cellular networks is contingent on artificial intelligence (AI)-based automation, particularly for radio access network (RAN) operation, optimization, and troubleshooting. To achieve such zero-touch automation, a myriad of AI-based solutions are being proposed in the literature that leverage AI to model and optimize network behavior. However, to work reliably, AI-based automation requires a deluge of training data. Consequently, the success of the proposed AI solutions is limited by a fundamental challenge faced by the cellular network research community: scarcity of training data. In this paper, we present an extensive review of classic and emerging techniques to address this challenge. We first identify the common data types in the RAN and their known use cases. We then present a taxonomized survey of techniques used in the literature to address training data scarcity for the various data types. This is followed by a framework to address training data scarcity. The proposed framework builds on available information and a combination of techniques including interpolation, domain-knowledge-based methods, generative adversarial networks, transfer learning, autoencoders, few-shot learning, simulators, and testbeds. Potential new techniques to enrich scarce data in cellular networks are also proposed, such as matrix completion and domain-knowledge-based techniques leveraging different types of network geometries and network parameters. In addition, an overview of state-of-the-art simulators and testbeds is presented to make readers aware of current and emerging platforms for accessing real data to overcome the data scarcity challenge.
    This extensive survey of techniques for addressing training data scarcity, combined with the proposed framework for selecting a suitable technique for a given type of data, can assist researchers and network operators in choosing appropriate methods to overcome the data scarcity challenge when applying AI to radio access network automation.
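
    As a minimal sketch of one technique from the framework above, sparse measurements (e.g. drive-test RSRP samples) can be enriched by spatial interpolation; here, inverse-distance weighting. The coordinates and RSRP values are made-up illustrative data.

```python
def idw_interpolate(samples, point, power=2.0):
    """Estimate a value at `point` from (x, y, value) samples via inverse-distance weighting."""
    num = den = 0.0
    for x, y, v in samples:
        d2 = (x - point[0]) ** 2 + (y - point[1]) ** 2
        if d2 == 0:
            return v                     # exact hit on a measured location
        w = 1.0 / d2 ** (power / 2)      # weight decays with distance**power
        num += w * v
        den += w
    return num / den

# Sparse RSRP measurements (dBm) and a coverage-map gap to fill:
measured = [(0, 0, -80.0), (10, 0, -90.0), (0, 10, -85.0)]
print(round(idw_interpolate(measured, (5, 0)), 1))
```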