
    Energy Efficiency in Communications and Networks

    The topic of "Energy Efficiency in Communications and Networks" attracts growing attention for economic and environmental reasons. The amount of power consumed by information and communication technologies (ICT) is rapidly increasing, as is the energy bill of service providers. According to a number of studies, ICT alone is responsible for between 2% and 10% of world power consumption, driving rising cost and sustainability concerns about the energy footprint of the IT infrastructure. Energy efficiency is an aspect that until recently was considered only for battery-driven devices. Today, energy efficiency is becoming a pervasive issue that will need to be considered in all technology areas, from device technology to systems management. This book seeks to provide a compilation of novel research contributions on hardware design, architectures, protocols and algorithms that will improve the energy efficiency of communication devices and networks and lead to a more energy-proportional technology infrastructure.

    Physical and Link Layer Implications in Vehicle Ad Hoc Networks

    Vehicle Ad hoc Networks (VANETs) have been proposed to provide safety on the road and to deliver road traffic information and route guidance to drivers, along with commercial applications. However, the challenges facing VANETs are numerous: nodes move at high speeds, road-side units and base stations are scarce, the topology is constrained by the road geometry and changes rapidly, and the number of nodes peaks suddenly in traffic jams. In this thesis we investigate the physical and link layers of VANETs and propose methods to achieve high data rates and high throughput. For the physical layer, we examine the use of Vertical BLAST (VBLAST) systems, as they provide higher capacities than single-antenna systems in rich fading environments. To study the applicability of VBLAST to VANETs, a channel model was developed and verified using measurement data available in the literature. For no to medium line of sight, VBLAST systems provide high data rates. However, the performance drops as the line-of-sight strength increases, due to the correlation between the antennas. Moreover, the performance of VBLAST with training-based channel estimation drops as the speed increases, since the channel response changes rapidly. To update the channel state information matrix at the receiver, a channel tracking algorithm for flat fading channels was developed. The algorithm updates the channel matrix, thus reducing the mean square error of the estimation and improving the bit error rate (BER). The analysis of VBLAST-OFDM systems showed that they experience an error floor due to inter-carrier interference (ICI), which increases with speed, the number of transmitting antennas and the number of subcarriers used. The update algorithm was extended to VBLAST-OFDM systems and showed improvements in BER performance, but still experienced an error floor. An algorithm to equalise the ICI contribution of adjacent subcarriers was then developed and evaluated. The ICI equalisation algorithm reduces the error floor in BER as more subcarriers are equalised, at the expense of more hardware complexity. The connectivity of VANETs was investigated, and it was found that for single-lane roads, densities of 7 cars per communication range are sufficient to achieve high connectivity within the city, whereas 12 cars per communication range are required for highways. Multi-lane roads require higher densities, since cars tend to cluster in groups. Junctions and turns have lower connectivity than straight roads due to disconnections at the turns. Although higher density improves the connectivity and, hence, the performance of the network layer, it leads to poor performance at the link layer. The IEEE 802.11p MAC layer standard under development for VANETs uses a variant of Carrier Sense Multiple Access (CSMA). The 802.11 protocols were analysed mathematically and via simulations, and the results show that the saturation throughput of the basic access method drops as the number of nodes increases, yielding very low throughput in congested areas. RTS/CTS access provides higher throughput but applies only to unicast transmissions. To overcome the limitations of the 802.11 protocols, we designed a protocol known as SOFT MAC, which combines Space, Orthogonal Frequency and Time multiple access techniques. In SOFT MAC the road is divided into cells and each cell is allocated a unique group of subcarriers. Within a cell, nodes share the available subcarriers using a combination of TDMA and CSMA. The throughput analysis of SOFT MAC showed that it has superior throughput compared to the basic access method and similar throughput to the RTS/CTS access of 802.11.
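To make the SOFT MAC idea above concrete, the following minimal Python sketch maps a vehicle's road position to a cell and to that cell's subcarrier group; within a cell, nodes would then contend for those subcarriers via TDMA/CSMA. The cell length, subcarrier count and reuse factor are illustrative assumptions, not values taken from the thesis.

# Minimal sketch of the spatial subcarrier-group assignment idea behind
# SOFT MAC: the road is divided into cells and each cell is mapped to a
# distinct group of OFDM subcarriers. All parameter values are assumptions.

NUM_SUBCARRIERS = 48      # assumed total data subcarriers
REUSE_FACTOR = 3          # assumed number of distinct subcarrier groups
CELL_LENGTH_M = 150.0     # assumed cell length along the road, in metres

# Partition subcarrier indices into REUSE_FACTOR disjoint groups.
GROUPS = [list(range(g, NUM_SUBCARRIERS, REUSE_FACTOR))
          for g in range(REUSE_FACTOR)]

def cell_index(position_m: float) -> int:
    """Cell that a vehicle at the given road position belongs to."""
    return int(position_m // CELL_LENGTH_M)

def subcarrier_group(position_m: float) -> list[int]:
    """Subcarrier group a vehicle may contend for (via TDMA/CSMA in-cell)."""
    return GROUPS[cell_index(position_m) % REUSE_FACTOR]

if __name__ == "__main__":
    for pos in (40.0, 200.0, 390.0):
        print(f"vehicle at {pos:6.1f} m -> cell {cell_index(pos)}, "
              f"subcarriers {subcarrier_group(pos)[:4]}...")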

    Cognitive Radio Systems

    Cognitive radio has been a hot research area for future wireless communications in recent years. To increase spectrum utilization, cognitive radio makes it possible for unlicensed users to access spectrum left unoccupied by licensed users. Cognitive radio makes equipment more intelligent, enabling devices to communicate with each other in a spectrum-aware manner and providing a new approach to the co-existence of multiple wireless systems. The goal of this book is to provide highlights of the current research topics in the field of cognitive radio systems. The book consists of 17 chapters, addressing various problems in cognitive radio systems.
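As a concrete illustration of the "access spectrum unoccupied by licensed users" step that cognitive radio relies on, the following Python sketch shows a simple energy detector, one common baseline sensing technique. The noise power, threshold and signal model are assumptions for illustration and are not taken from the book.

# Illustrative energy-detection spectrum sensing: the band is declared
# occupied if the measured power exceeds a threshold above the noise floor.
import numpy as np

def energy_detect(samples: np.ndarray, noise_power: float,
                  threshold_db: float = 3.0) -> bool:
    """Return True if the band is judged occupied by a licensed user."""
    measured_power = np.mean(np.abs(samples) ** 2)
    threshold = noise_power * 10 ** (threshold_db / 10)
    return measured_power > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1024
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    tone = 1.5 * np.exp(2j * np.pi * 0.05 * np.arange(n))   # assumed primary-user signal
    print("idle band occupied? ", energy_detect(noise, noise_power=1.0))
    print("busy band occupied? ", energy_detect(noise + tone, noise_power=1.0))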

    An Innovative RAN Architecture for Emerging Heterogeneous Networks: The Road to the 5G Era

    The global demand for mobile-broadband data services has experienced phenomenal growth over the last few years, driven by the rapid proliferation of smart devices such as smartphones and tablets. This growth is expected to continue unabated, as mobile data traffic is predicted to grow anywhere from 20 to 50 times over the next 5 years. Exacerbating the problem, this unprecedented surge in smartphone usage, characterized by frequent short on/off connections and mobility, generates heavy signaling traffic load in the network (signaling storms). This consumes a disproportionate amount of network resources, compromising network throughput and efficiency, and in extreme cases can cause Third-Generation (3G) or 4G (Long-Term Evolution (LTE) and LTE-Advanced (LTE-A)) cellular networks to crash. As the conventional approaches of improving spectral efficiency and/or allocating additional spectrum are fast approaching their theoretical limits, there is a growing consensus that current 3G and 4G (LTE/LTE-A) cellular radio access technologies (RATs) won't be able to meet the anticipated growth in mobile traffic demand. To address these challenges, the wireless industry and standardization bodies have initiated a roadmap for the transition from 4G to 5G cellular technology, with a key objective of increasing capacity by 1000x by 2020. Even though the technology hasn't been invented yet, the hype around 5G networks has begun to bubble. The emerging consensus is that 5G is not a single technology, but rather a synergistic collection of interworking technical innovations and solutions that collectively address the challenge of traffic growth. The core emerging ingredients that are widely considered the key enabling technologies to realize the envisioned 5G era, listed in order of importance, are: 1) heterogeneous networks (HetNets); 2) flexible backhauling; 3) efficient traffic offload techniques; and 4) self-organizing networks (SONs). The anticipated solutions delivered by efficient interworking/integration of these enabling technologies are not simply about throwing more resources and/or spectrum at the challenge. The envisioned solution, rather, requires radically different cellular RAN and mobile core architectures that efficiently and cost-effectively deploy and manage radio resources as well as offload mobile traffic from the overloaded core network. The main objective of this thesis is to address the key techno-economic challenges facing the transition from current Fourth-Generation (4G) cellular technology to the 5G era by proposing a novel high-risk, revolutionary direction for the design and implementation of the envisioned 5G cellular networks. The ultimate goal is to explore the potential and viability of cost-effectively implementing the 1000x capacity challenge while continuing to provide an adequate mobile broadband experience to users. Specifically, this work proposes and devises a novel PON-based HetNet mobile backhaul RAN architecture that: 1) holistically addresses the key techno-economic hurdles facing the implementation of the envisioned 5G cellular technology, specifically the backhauling and signaling challenges; and 2) enables, for the first time to the best of our knowledge, the support of efficient ground-breaking mobile data and signaling offload techniques, which significantly enhance the performance of both the HetNet-based RAN and LTE-A's core network (Evolved Packet Core (EPC) per the 3GPP standard), ensure that core network equipment is used more productively, and moderate 5G's evolving signaling growth and optimize its impact. To address the backhauling challenge, we propose a cost-effective fiber-based small cell backhaul infrastructure, which leverages existing fibered and powered facilities associated with a PON-based fiber-to-the-node/home (FTTN/FTTH) residential access network. Due to the sharing of existing valuable fiber assets, the proposed PON-based backhaul architecture, in which the small cells are collocated with existing FTTN remote terminals (optical network units (ONUs)), is much more economical than conventional point-to-point (PTP) fiber backhaul designs. A fully distributed ring-based EPON architecture is utilized here as the fiber-based HetNet backhaul. The techno-economic merits of the proposed PON-based FTTx access HetNet RAN architecture versus those of a traditional 4G LTE-A RAN are thoroughly examined and quantified. Specifically, we quantify the techno-economic merits of the proposed PON-based HetNet backhaul by comparing its performance against a conventional fiber-based PTP backhaul architecture as a benchmark. It is shown that the purposely selected ring-based PON architecture, along with the supporting distributed control plane, enables the proposed PON-based FTTx RAN architecture to support several salient networking features that collectively and significantly enhance the overall performance of both the HetNet-based RAN and 4G LTE-A's core (EPC) compared with the typical fiber-based PTP backhaul architecture, in terms of handoff capability, signaling overhead, overall network throughput and latency, and QoS support. It is also shown that the proposed HetNet-based RAN architecture is not only capable of providing the typical macro-cell offloading gain (RAN gain) but can also provide ground-breaking EPC offloading gain. The simulation results indicate that the overall capacity of the proposed HetNet scales with the number of deployed small cells, thanks to LTE-A's advanced interference management techniques. For example, if 10 outdoor small cells are deployed for every macrocell in the network, the overall capacity gain is approximately 10-11x over a macro-only network. To reach the 1000x capacity goal, numerous small cells including 3G, 4G, and WiFi (femtos, picos, metros, relays, remote radio heads, distributed antenna systems) need to be deployed indoors and outdoors, at all possible venues (residences and enterprises).
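For context, the following back-of-the-envelope Python sketch shows how a 1000x capacity target can be decomposed into multiplicative gains, consistent with the roughly 10-11x densification gain reported above; the other two factor values are illustrative assumptions, not results from the thesis.

# Back-of-the-envelope decomposition of a 1000x capacity target.
# Only the densification gain is taken from the abstract; the spectrum and
# spectral-efficiency factors are assumed for illustration.

densification_gain  = 10.5   # ~10 small cells per macro -> ~10-11x (from the abstract)
extra_spectrum_gain = 10.0   # assumed: new bands / carrier aggregation
spectral_eff_gain   = 10.0   # assumed: MIMO, interference management, offload

total_gain = densification_gain * extra_spectrum_gain * spectral_eff_gain
print(f"aggregate capacity gain ~ {total_gain:.0f}x")   # ~1050x, on the order of 1000x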

    A Cross Layer Routing Protocol for OFDMA Based Mobile Ad Hoc Networks.

    Mobile ad hoc networks are of growing interest because of their unique characteristics and advantages in many practical applications. QoS provisioning is a major challenge in routing protocol design for real-world mobile ad hoc networks, especially for real-time services. OFDM is a relatively new technology with many advantages over other modulation schemes. Because of its prominent features, many popular wireless standards, such as the IEEE 802.11 series, WiMAX and 3GPP LTE, have adopted it as the physical layer modulation, and it has been extended to the multiuser environment known as OFDMA. So far, none of the existing ad hoc routing protocols fully accounts for OFDMA-based mobile ad hoc networks. In this thesis, a QoS routing protocol is proposed for OFDMA-based mobile ad hoc networks. A signal-strength-based sub-channel allocation scheme is proposed within the routing protocol, aiming to reduce the signalling overhead and co-channel interference. The performance of the proposed routing protocol is compared with other alternative proposals through simulations using the OPNET simulator. Moreover, a partial time synchronization algorithm and a null-subcarrier-based frequency synchronization algorithm are also proposed for OFDMA-based ad hoc networks to further support and facilitate the proposed sub-channel allocation scheme and routing protocol.
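A minimal Python sketch of a signal-strength-based sub-channel selection rule in the spirit of the scheme described above: a node measures the received signal strength on each OFDMA sub-channel and contends on the least-interfered one. The measurement model and RSSI values are hypothetical, not taken from the thesis.

# Pick the OFDMA sub-channel with the lowest measured interference (RSSI).
def pick_subchannel(rssi_dbm_per_subchannel: list[float]) -> int:
    """Return the index of the sub-channel with the lowest measured RSSI."""
    return min(range(len(rssi_dbm_per_subchannel)),
               key=lambda k: rssi_dbm_per_subchannel[k])

if __name__ == "__main__":
    # Hypothetical per-sub-channel interference measurements at one node.
    rssi = [-72.0, -95.5, -81.3, -99.2, -88.7]
    best = pick_subchannel(rssi)
    print(f"allocate sub-channel {best} (RSSI {rssi[best]} dBm)")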

    Protocol for Extreme Low Latency M2M Communication Networks

    As technology evolves, Machine-to-Machine (M2M) deployments and mission-critical services are expected to grow massively, generating new and diverse forms of data traffic and posing unprecedented challenges in requirements such as delay, reliability, energy consumption and scalability. This new paradigm demands a set of stringent requirements that current mobile networks do not support. A new generation of mobile networks is needed to serve these innovative services and requirements: the fifth generation of mobile networks (5G). Specifically, achieving ultra-reliable low latency communication (URLLC) for machine-to-machine networks represents a major challenge that requires a new approach to the design of the Physical (PHY) and Medium Access Control (MAC) layers, in order to provide these novel services and handle the new heterogeneous environment in 5G. The orthogonality and synchronization requirements of the current LTE Advanced (LTE-A) radio access network are obstacles for this new 5G architecture, since M2M devices generate bursty and sporadic traffic and therefore should not be obliged to follow the synchronization of the LTE-A PHY layer. A non-orthogonal access scheme is required that enables asynchronous access and does not degrade the spectrum. This dissertation addresses the requirements of URLLC M2M traffic at the MAC layer. It proposes an extension of the M2M H-NDMA protocol for a multi-base-station scenario and a power control scheme to adapt the protocol to the requirements of URLLC. The performance of the system and of the power control schemes, as well as the effect of introducing more base stations, is analyzed in a system-level simulator developed in MATLAB, which implements the MAC protocol and applies the power control algorithm. Results showed that, as the number of base stations increases, delay can be significantly reduced and the protocol supports more devices without compromising the URLLC delay or reliability bounds, while also increasing the throughput. The extension of the protocol will enable the study of different power control algorithms for more complex scenarios and of access schemes that combine asynchronous and synchronous access.
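As a rough illustration of how a power control scheme can interact with base-station density, the following Python sketch implements a simple open-loop rule that targets a receive SNR at the closest base station; the path-loss model, target SNR and power cap are assumptions and are not the scheme evaluated in the dissertation.

# Open-loop power-control sketch: transmit just enough power for the
# strongest (nearest) base station to see a target SNR, capped at a maximum.
import math

def tx_power_dbm(distances_m: list[float], target_snr_db: float = 10.0,
                 noise_dbm: float = -100.0, max_power_dbm: float = 23.0,
                 pl_exponent: float = 3.5, pl_ref_db: float = 40.0) -> float:
    """Transmit power (dBm) needed to reach the target SNR at the closest BS."""
    d = min(distances_m)                               # serve via the strongest BS
    path_loss_db = pl_ref_db + 10 * pl_exponent * math.log10(d)
    required = noise_dbm + target_snr_db + path_loss_db
    return min(required, max_power_dbm)

if __name__ == "__main__":
    # With more base stations deployed, the closest one is nearer on average,
    # so the required transmit power (and interference footprint) drops.
    print("1 BS :", round(tx_power_dbm([200.0]), 1), "dBm")
    print("3 BSs:", round(tx_power_dbm([200.0, 60.0, 350.0]), 1), "dBm")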

    Learning and identification of wireless network internode dynamics using software defined radio

    The paradigm of cognitive radio wireless devices has recently been developed with the goal of achieving more customizable and efficient spectrum utilization of commonly used wireless frequency bands. The primary focus of such spectrum utilization approaches has been to discern occupancies and vacancies over portions of the wireless spectrum, without necessarily identifying how specific radio frequency (RF) devices contribute to the temporal dynamics of these occupancy patterns within the spectrum. The aim of this thesis is to use hidden semi-Markov model (HSMM) statistical analysis to infer the individual occupancy patterns of specific users from wireless RF observation traces. It is proposed that the HSMM approach to characterizing RF devices over time may act as a first step towards a more complete characterization of the RF spectrum, in which the inferred traffic patterns may reveal the coexistence of multiple networks, the specific devices comprising each distinct network, and the level of mutual interference between the component networks resulting from such coexistence. The first main portion of this thesis develops a Bayesian learning framework for HSMM characterization of the wireless RF observations, with occupancy periods and each individual RF device classified as distinct states in the HSMM. The traditional HSMM approach is supplemented with the hierarchical Dirichlet process to achieve the minimal number of states needed to effectively capture each distinct device, without strong a priori assumptions about the number of devices present in the RF trace prior to computational analysis. The second portion of the thesis uses user-programmed cognitive radios to construct a real-time software-defined RF network environment emulation testbed to assess the accuracy of the HSMM characterization. Finally, the HSMM algorithm is tested on wireless devices operating under an actual implementation of the ubiquitous IEEE 802.11 wireless standard.
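The following Python sketch generates the kind of spectrum-occupancy trace an HSMM models: each hidden state (idle, or a specific transmitting device) persists for an explicitly sampled duration before transitioning. The state set, duration distributions and transition matrix are toy assumptions, not parameters inferred in the thesis.

# Toy semi-Markov occupancy-trace generator: explicit dwell times per state.
import numpy as np

STATES = ["idle", "device_A", "device_B"]
TRANS = np.array([[0.0, 0.6, 0.4],     # from idle
                  [0.8, 0.0, 0.2],     # from device_A
                  [0.7, 0.3, 0.0]])    # from device_B
MEAN_DURATION = [20.0, 8.0, 15.0]      # assumed mean dwell time per state (samples)

def sample_trace(n_segments: int, seed: int = 0) -> list[tuple[str, int]]:
    """Sample (state, duration) segments from the toy semi-Markov chain."""
    rng = np.random.default_rng(seed)
    trace, state = [], 0
    for _ in range(n_segments):
        duration = 1 + rng.poisson(MEAN_DURATION[state])   # explicit duration
        trace.append((STATES[state], int(duration)))
        state = rng.choice(len(STATES), p=TRANS[state])
    return trace

if __name__ == "__main__":
    for name, dur in sample_trace(6):
        print(f"{name:8s} occupies the band for {dur:3d} samples")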

    Characterization, Avoidance and Repair of Packet Collisions in Inter-Vehicle Communication Networks

    This work proposes a combined and accurate simulation of wireless channel, physical layer and networking aspects in order to bridge the gaps between the corresponding research communities. The resulting high-fidelity simulations enable performance optimizations across multiple layers, and are used in the second part of this thesis to evaluate the impact of fast-fading channel characteristics on Carrier Sense Multiple Access and to quantify the benefit of successive interference cancellation.
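As a toy illustration of the successive interference cancellation mentioned above, the following Python sketch decodes two colliding BPSK packets by detecting the stronger one first, subtracting its reconstructed contribution, and then decoding the weaker one; the powers, packet length and channel model are assumptions, not the simulation setup of the thesis.

# Successive interference cancellation (SIC) on two colliding BPSK packets.
import numpy as np

rng = np.random.default_rng(1)
n = 64
bits_strong = rng.integers(0, 2, n)
bits_weak = rng.integers(0, 2, n)

amp_strong, amp_weak, noise_std = 2.0, 1.0, 0.2      # assumed powers and noise
x_strong = amp_strong * (2 * bits_strong - 1)
x_weak = amp_weak * (2 * bits_weak - 1)
rx = x_strong + x_weak + noise_std * rng.standard_normal(n)   # packet collision

# Stage 1: decode the stronger packet, treating the weaker one as noise.
dec_strong = (rx > 0).astype(int)
# Stage 2: subtract the reconstructed strong signal, then decode the weak one.
residual = rx - amp_strong * (2 * dec_strong - 1)
dec_weak = (residual > 0).astype(int)

print("strong-packet bit errors:", int(np.sum(dec_strong != bits_strong)))
print("weak-packet bit errors:  ", int(np.sum(dec_weak != bits_weak)))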