Timing and Carrier Synchronization in Wireless Communication Systems: A Survey and Classification of Research in the Last 5 Years
Timing and carrier synchronization is a fundamental requirement for any wireless communication system to work properly. Timing synchronization is the process by which a receiver node determines the correct instants of time at which to sample the incoming signal. Carrier synchronization is the process by which a receiver adapts the frequency and phase of its local carrier oscillator to those of the received signal. In this paper, we survey the literature over the last 5 years (2010–2014) and present a comprehensive literature review and classification of the recent research progress in achieving timing and carrier synchronization in single-input single-output (SISO), multiple-input multiple-output (MIMO), cooperative relaying, and multiuser/multicell interference networks. Considering both single-carrier and multi-carrier communication systems, we survey and categorize the timing and carrier synchronization techniques proposed for the different communication systems, focusing on the system model assumptions for synchronization, the synchronization challenges, and the state-of-the-art synchronization solutions and their limitations. Finally, we envision some future research directions.
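As a concrete illustration of the carrier synchronization task defined above (not tied to any specific technique surveyed in the paper), a receiver can estimate its carrier frequency offset (CFO) from a preamble whose two halves are identical at the transmitter, in the style of the classical Moose estimator; the preamble length, sample rate, and offset below are illustrative assumptions.

```python
import numpy as np

def estimate_cfo(rx, half_len, fs):
    """Estimate carrier frequency offset (Hz) from a preamble whose two
    halves are identical at the transmitter (Moose-style estimator).
    rx: received complex baseband samples covering the preamble.
    half_len: samples per preamble half. fs: sample rate in Hz."""
    first, second = rx[:half_len], rx[half_len:2 * half_len]
    # A CFO of f0 rotates the second half by exp(j*2*pi*f0*half_len/fs)
    # relative to the first; the angle of their correlation recovers it.
    corr = np.vdot(first, second)            # sum(conj(first) * second)
    return np.angle(corr) * fs / (2 * np.pi * half_len)

# Illustrative check: synthesize a repeated preamble with a known CFO.
rng = np.random.default_rng(0)
fs, half_len, f0 = 1e6, 64, 1234.0           # 1 MHz sampling, 1.234 kHz CFO
half = rng.standard_normal(half_len) + 1j * rng.standard_normal(half_len)
tx = np.concatenate([half, half])
n = np.arange(2 * half_len)
rx = tx * np.exp(2j * np.pi * f0 * n / fs)
print(round(estimate_cfo(rx, half_len, fs), 1))  # prints 1234.0
```

The estimator is unambiguous only while the per-half phase rotation stays below pi, i.e. for offsets up to fs / (2 * half_len); larger offsets alias.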
Hybrid solutions to instantaneous MIMO blind separation and decoding: narrowband, QAM and square cases
Future wireless communication systems are expected to support high data rates and high-quality transmission for growing multimedia applications. The push for higher channel throughput has led in recent years to multiple-input multiple-output (MIMO) and blind equalization techniques, and blind MIMO equalization has therefore attracted great interest. Both system performance and computational complexity play important roles in real-time communications; reducing the computational load while maintaining accurate performance is the main challenge in present systems. In this thesis, a hybrid method that offers affordable complexity with good performance for blind equalization in large-constellation MIMO systems is proposed first. Computational cost is saved both in the signal separation part and in the signal detection part. First, based on the characteristics of quadrature amplitude modulation (QAM) signals, an efficient and simple nonlinear function for Independent Component Analysis (ICA) is introduced. Second, using the idea of sphere decoding, we restrict the soft channel information to candidates within a sphere, overcoming the so-called curse of dimensionality of the Expectation Maximization (EM) algorithm while simultaneously enhancing the final results. Mathematically, we demonstrate that in digital communication cases the EM algorithm exhibits Newton-like convergence. Despite the widespread use of forward-error coding (FEC), most MIMO blind channel estimation techniques ignore its presence and instead make the simplifying assumption that the transmitted symbols are uncoded. However, FEC induces code structure in the transmitted sequence that can be exploited to improve blind MIMO channel estimates. In the final part of this work, we exploit iterative channel estimation and decoding for blind MIMO equalization.
Experiments show the improvements achievable by exploiting the existing coding structure, and that the method can approach the performance of a BCJR equalizer with perfect channel information in a reasonable SNR range. All results are confirmed experimentally for the example of blind equalization in block-fading MIMO systems.
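The sphere-restriction idea described in this abstract — limiting the candidate set over which EM soft information is computed to symbol vectors inside a sphere around the received point — can be sketched as follows. The brute-force enumeration, 16-QAM alphabet, 2x2 channel, and radius are illustrative assumptions, not the thesis's actual decoder (a real sphere decoder prunes a search tree instead of enumerating everything).

```python
import itertools
import numpy as np

def qam_alphabet(m=16):
    """Square m-QAM constellation points (unnormalized)."""
    side = int(np.sqrt(m))
    levels = np.arange(-(side - 1), side, 2)   # e.g. [-3,-1,1,3] for 16-QAM
    return np.array([complex(i, q) for i in levels for q in levels])

def sphere_candidates(H, y, radius):
    """Enumerate symbol vectors s with ||y - H s|| <= radius."""
    alphabet = qam_alphabet()
    n_tx = H.shape[1]
    out = []
    for s in itertools.product(alphabet, repeat=n_tx):
        s = np.array(s)
        if np.linalg.norm(y - H @ s) <= radius:
            out.append(s)
    return out

# 2x2 channel, one transmitted 16-QAM vector, small noise.
rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
s_true = np.array([3 + 1j, -1 - 3j])
y = H @ s_true + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
cands = sphere_candidates(H, y, radius=1.0)
# EM soft information would then be computed over `cands` only,
# instead of all 16**2 = 256 candidate vectors.
```

Shrinking the radius trades a smaller candidate set (cheaper EM updates) against the risk of excluding the transmitted vector at low SNR.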
Hybrid Satellite-Terrestrial Communication Networks for the Maritime Internet of Things: Key Technologies, Opportunities, and Challenges
With the rapid development of marine activities, there has been an increasing
number of maritime mobile terminals, as well as a growing demand for high-speed
and ultra-reliable maritime communications to keep them connected.
Traditionally, the maritime Internet of Things (IoT) is enabled by maritime
satellites. However, satellites are seriously restricted by their high latency
and relatively low data rate. As an alternative, shore- and island-based base
stations (BSs) can be built to extend the coverage of terrestrial networks
using fourth-generation (4G), fifth-generation (5G), and beyond 5G services.
Unmanned aerial vehicles can also be exploited to serve as aerial maritime BSs.
Despite all these approaches, there are still open issues for an efficient
maritime communication network (MCN). For example, due to the complicated
electromagnetic propagation environment, the limited geometrically available BS
sites, and rigorous service demands from mission-critical applications,
conventional communication and networking theories and methods should be
tailored for maritime scenarios. Towards this end, we provide a survey on the
demand for maritime communications, the state-of-the-art MCNs, and key
technologies for enhancing transmission efficiency, extending network coverage,
and provisioning maritime-specific services. Future challenges in developing an
environment-aware, service-driven, and integrated satellite-air-ground MCN that is
smart enough to utilize external auxiliary information, e.g., sea state and
atmospheric conditions, are also discussed.
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range of
complex, compelling applications in both military and civilian
fields, where users can enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services as well as scenarios of future wireless
networks.
Protocol for Extreme Low Latency M2M Communication Networks
As technology evolves, more Machine to Machine (M2M) deployments and mission critical
services are expected to grow massively, generating new and diverse forms of data
traffic, posing unprecedented challenges in requirements such as delay, reliability, energy
consumption and scalability. This new paradigm demands a set of stringent requirements
that current mobile networks do not support. A new generation of mobile
networks is needed to serve these innovative services and requirements: the fifth
generation of mobile networks (5G). Specifically, achieving ultra-reliable low
latency communication for machine to machine networks represents a major challenge,
that requires a new approach to the design of the Physical (PHY) and Medium Access
Control (MAC) layer to provide these novel services and handle the new heterogeneous
environment in 5G. The orthogonality and synchronization requirements of the current
LTE Advanced (LTE-A) radio access network are obstacles to this new 5G architecture, since
devices in M2M generate bursty and sporadic traffic, and therefore should not be obliged
to follow the synchronization of the LTE-A PHY layer. A non-orthogonal access
scheme that enables asynchronous access without degrading the spectrum is required.
This dissertation addresses the requirements of URLLC M2M traffic at the MAC layer.
It proposes an extension of the M2M H-NDMA protocol for a multi-base-station scenario
and a power control scheme to adapt the protocol to the requirements of URLLC. The
performance of the system and the power control scheme, together with the effect of
introducing more base stations, is analyzed in a system-level simulator developed in
MATLAB, which implements the MAC protocol and applies the power control algorithm.
Results showed that increasing the number of base stations significantly reduces
delay, and that the protocol supports more devices without compromising
delay or reliability bounds for Ultra-Reliable and Low Latency Communication (URLLC),
while also increasing the throughput. The extension of the protocol will enable the study
of different power control algorithms for more complex scenarios and access schemes that
combine asynchronous and synchronous access.
Anti-Collision Adaptations of BLE Active Scanning for Dense IoT Tracking Applications
Bluetooth low energy (BLE) is one of the most promising technologies for enabling the Internet-of-Things (IoT) paradigm. The BLE neighbor discovery process (NDP), based on active scanning, may be the core of multiple IoT applications in which a large and varying number of users/devices/tags must be detected in a short period of time. Minimizing the discovery latency and maximizing the number of devices that can be discovered in a limited time are challenging issues due to collisions between frames sent by advertisers and scanners. The mechanism for resolving collisions between scanners has a great impact on the achieved performance, but backoff in the NDP has been poorly studied so far. This paper includes a detailed analysis of backoff in the NDP, identifies and studies the factors involved in the process, reveals the limitations and problems of the algorithm suggested by the specification, and proposes simple, practical adaptations of the scanner functionality. These adaptations are compatible with the current definitions of the standard and, together with a new proposal for the backoff scheme, may significantly improve discovery latencies and, thus, the probability of discovering a large number of devices in high-density scenarios.
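The contention problem this abstract studies can be illustrated with a toy slotted model: scanners that pick the same backoff slot collide, and widening the backoff window lowers the collision rate at the price of longer discovery latency. This is a deliberately simplified abstraction, not the BLE-specified backoff algorithm or the paper's proposal.

```python
import random

def discovery_collisions(n_scanners, n_slots, trials=2000, seed=7):
    """Average fraction of scanners that collide when each independently
    picks a uniform random backoff slot in [0, n_slots). Toy model only:
    real BLE backoff adapts its window based on success/failure history."""
    rng = random.Random(seed)
    collided = 0
    for _ in range(trials):
        slots = [rng.randrange(n_slots) for _ in range(n_scanners)]
        counts = {}
        for s in slots:
            counts[s] = counts.get(s, 0) + 1
        # Every scanner sharing a slot with another scanner collides.
        collided += sum(c for c in counts.values() if c > 1)
    return collided / (trials * n_scanners)

# Widening the backoff window reduces the collision fraction.
print(discovery_collisions(8, 8) > discovery_collisions(8, 64))  # True
```

In dense scenarios the model makes the trade-off concrete: a small window saturates with collisions as the scanner count grows, which is why adaptive backoff matters for discovery latency.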
Lightly synchronized Multipacket Reception in Machine-Type Communications Networks
Machine Type Communication (MTC) applications were designed to monitor and control
elements of our surroundings and environment. MTC applications have a different
set of requirements from traditional communication devices, with Machine to
Machine (M2M) data being mostly short, asynchronous, bursty and sometimes requiring end-to-end delays below 1 ms. With the growth of MTC, the new generation of mobile communications has to be able to offer different types of services with very different requirements: the same network has to be capable of supplying a connection to a user who just wants to download a video or use social media, while at the same time allowing MTC with completely different requirements, without deteriorating either experience.
The challenges associated with the implementation of MTC require disruptive changes at
the Physical (PHY) and Medium Access Control (MAC) layers that lead to better use of the available spectrum. The orthogonality and synchronization requirements of the PHY layer of the current Long Term Evolution Advanced (LTE-A) radio access network (based on Orthogonal Frequency Division Multiplexing (OFDM) and Single Carrier Frequency Domain Equalization (SC-FDE)) are obstacles to this new 5th Generation (5G) architecture. Generalized Frequency Division Multiplexing (GFDM) and other modulation techniques were proposed as candidates for the 5G PHY layer; however, they also suffer from visible degradation when the transmitter and receiver are not synchronized, leading to poor performance when collisions occur in an asynchronous MAC layer. This dissertation addresses the requirements of M2M traffic at the MAC layer, applying multipacket reception (MPR) techniques to handle the bursty nature of the traffic, and synchronization tones with optimized back-off approaches to reduce the delay. It proposes a new MAC protocol and analyses its performance analytically considering an SC-FDE modulation. The models are validated using a system-level cross-layer simulator developed in MATLAB, which implements the MAC protocol and applies PHY-layer performance models. The results show that the MAC's latency depends mainly on the number of users and the load of each user, and can be controlled using these two parameters.
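The benefit of multipacket reception (MPR) for bursty M2M traffic can be illustrated with an idealized slotted model; the reception rule below (decode everything when at most k packets overlap, nothing otherwise) and the parameter values are illustrative assumptions, not the dissertation's SC-FDE PHY analysis.

```python
from math import comb

def mpr_throughput(n_users, p, k):
    """Expected packets delivered per slot when each of n_users transmits
    with probability p and the receiver decodes all packets if at most k
    overlap, none otherwise (an idealized MPR channel abstraction)."""
    total = 0.0
    for m in range(1, n_users + 1):
        prob = comb(n_users, m) * p**m * (1 - p)**(n_users - m)
        total += prob * (m if m <= k else 0)
    return total

# Raising the MPR capability k lifts throughput for the same user count
# and per-user load; sweeping n_users and p shows the load dependence
# that the abstract describes for the MAC's latency.
for k in (1, 2, 4):
    print(k, round(mpr_throughput(20, 0.1, k), 3))
```

With k = 1 the model collapses to classical single-packet slotted access, where any overlap destroys all packets involved; the gap to k > 1 is the headroom an MPR-capable MAC can exploit.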