Reconfigurable Intelligent Surfaces for Smart Cities: Research Challenges and Opportunities
The concept of Smart Cities has been introduced as a way to benefit from the
digitization of various ecosystems at a city level. To support this concept,
future communication networks need to be carefully designed with respect to the
city infrastructure and utilization of resources. Recently, the idea of a 'smart'
environment, which takes advantage of the infrastructure for better performance
of wireless networks, has been proposed. This idea is aligned with recent
advances in design of reconfigurable intelligent surfaces (RISs), which are
planar structures with the capability to reflect impinging electromagnetic
waves toward preferred directions. Thus, RISs are expected to provide the
necessary flexibility for the design of the 'smart' communication environment,
which can be optimally shaped to enable cost- and energy-efficient signal
transmissions where needed. Upon deployment of RISs, the ecosystem of the Smart
Cities would become even more controllable and adaptable, which would
subsequently ease the implementation of future communication networks in urban
areas and boost the interconnection among private households and public
services. In this paper, we describe our vision of the application of RISs in
future Smart Cities. In particular, the research challenges and opportunities
are addressed. The contribution paves the road to a systematic design of
RIS-assisted communication networks for Smart Cities in the years to come.
Comment: Submitted for possible publication in the IEEE Open Journal of the
Communications Society
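The reflection capability described above can be illustrated with a small numerical sketch. The code below models a RIS as a uniform linear array and configures a linear phase gradient across its elements so that an impinging plane wave is redirected toward a preferred direction. All parameters (64 elements, half-wavelength spacing, a 30 GHz carrier, the two angles) are illustrative assumptions, not values from the paper; this is a toy far-field model only.

```python
import numpy as np

# Toy far-field model of a reflective RIS as a uniform linear array.
# All parameters are illustrative, not taken from the paper.
wavelength = 0.01            # hypothetical 30 GHz carrier -> 1 cm wavelength
d = wavelength / 2           # half-wavelength element spacing
N = 64                       # number of RIS elements
theta_inc = np.deg2rad(30)   # angle of the impinging wave
theta_des = np.deg2rad(-20)  # preferred reflection direction

n = np.arange(N)
# Linear phase gradient that cancels the incident phase and steers
# the reflected beam toward theta_des
phases = -2 * np.pi * d / wavelength * n * (np.sin(theta_inc) + np.sin(theta_des))

# Reflected-field magnitude versus observation angle (array factor)
angles = np.deg2rad(np.linspace(-90, 90, 1801))
propagation = np.exp(1j * 2 * np.pi * d / wavelength
                     * np.outer(np.sin(theta_inc) + np.sin(angles), n))
af = np.abs(propagation @ np.exp(1j * phases))

peak_deg = np.rad2deg(angles[np.argmax(af)])  # beam peak lands near theta_des
```

With the phase gradient applied, the array factor peaks at the preferred direction rather than at the specular angle, which is the basic mechanism the paper builds on.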
Polar-Coded OFDM with Index Modulation
Polar codes, the first error-correcting codes with an explicit construction to provably achieve the symmetric capacity of memoryless channels, are constructed based on channel polarization and have recently become a primary contender in communication networks for meeting tighter requirements with relatively low complexity. As one of the contributions of this thesis, three modified polar decoding schemes are proposed: enhanced versions of successive cancellation-flip (SC-F), belief propagation (BP), and sphere decoding (SD). The proposed SC-F decoder utilizes novel selection criteria for potentially incorrect bits, together with a stack, to improve its error-correction performance. Next, to improve the decoding performance of BP, a permutation and feedback structure is utilized. Then, in order to reduce complexity without compromising performance, an SD scheme is proposed that uses novel decoding strategies based on a modified path metric (PM) and radius extension. Additionally, to address the problem of redundant BP iterations, a new stopping criterion based on the bit different ratio (BDR) is proposed. Simulation results and mathematical proofs show that all proposed schemes achieve performance improvements or complexity reductions compared with existing works. Besides polar coding, to achieve reliable and flexible transmission in a wireless communication system, a modified version of orthogonal frequency division multiplexing (OFDM) based on index modulation, called OFDM-in-phase/quadrature-IM (OFDM-I/Q-IM), is applied. This modulation scheme can simultaneously improve spectral efficiency and bit-error rate (BER) performance with great flexibility in design and implementation. Hence, OFDM-I/Q-IM is considered a potential candidate for the new generation of cellular networks. As the main contribution of this work, a polar-coded OFDM-I/Q-IM system is proposed.
The general design guidelines for overcoming the difficulties associated with the application of polar codes in OFDM-I/Q-IM are presented. In the proposed system, at the transmitter, we employ a random frozen-bits appending scheme which not only makes the polar code compatible with OFDM-I/Q-IM but also improves the BER performance of the system. Furthermore, at the receiver, it is shown that the a posteriori information for each index provided by the index detector is essential for the iterative decoding of polar codes by the BP algorithm. Simulation results show that the proposed polar-coded OFDM-I/Q-IM system outperforms its OFDM counterpart in terms of BER performance.
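As background to the channel-polarization construction mentioned above, the sketch below tracks Bhattacharyya parameters through the polar transform for a binary erasure channel, where the recursion is exact, and picks the most reliable synthetic channels as the information set. The block length, rate, and erasure probability are illustrative assumptions; the index ordering used here is natural order, which matches the usual construction up to a bit-reversal permutation.

```python
import numpy as np

def bec_bhattacharyya(n_log2, eps):
    """Bhattacharyya parameters of the 2**n_log2 synthetic channels
    obtained by polarizing a BEC(eps); the recursion is exact for the
    binary erasure channel."""
    z = np.array([eps])
    for _ in range(n_log2):
        # 'minus' channels get worse, 'plus' channels get better
        z = np.concatenate([2 * z - z**2, z**2])
    return z

# Illustrative code: N = 256, K = 128 (rate 1/2), design erasure prob 0.5
n_log2, K, eps = 8, 128, 0.5
z = bec_bhattacharyya(n_log2, eps)
info_set = np.sort(np.argsort(z)[:K])    # most reliable channels carry data
frozen_set = np.sort(np.argsort(z)[K:])  # the rest are frozen to known bits
```

The polarization effect is visible in the spread of `z`: most values drift toward 0 (nearly perfect channels) or 1 (useless ones), which is what makes the frozen/information split work.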
FPGA-based DOCSIS upstream demodulation
In recent years, the state-of-the-art in field programmable gate array (FPGA) technology has been advancing rapidly. Consequently, the use of FPGAs is being considered in many applications which have traditionally relied upon application-specific integrated circuits (ASICs). FPGA-based designs have a number of advantages over ASIC-based designs, including lower up-front engineering design costs, shorter time-to-market, and the ability to reconfigure devices in the field. However, ASICs have a major advantage in terms of computational resources. As a result, computationally expensive, high-performance ASIC algorithms must be redesigned to fit within the limited resources available in an FPGA.
Concurrently, coaxial cable television and internet networks have been undergoing significant upgrades that have largely been driven by a sharp increase in the use of interactive applications. This has intensified demand for the so-called upstream channels, which allow customers to transmit data into the network. The format and protocol of the upstream channels are defined by a set of standards, known as DOCSIS 3.0, which govern the flow of data through the network.
Critical to DOCSIS 3.0 compliance is the upstream demodulator, which is responsible for the physical layer reception from all customers. Although upstream demodulators have typically been implemented as ASICs, the design of an FPGA-based upstream demodulator is an intriguing possibility, as FPGA-based demodulators could potentially be upgraded in the field to support future DOCSIS standards. Furthermore, the lower non-recurring engineering costs associated with FPGA-based designs could provide an opportunity for smaller companies to compete in this market.
The upstream demodulator must contain complicated synchronization circuitry to detect, measure, and correct for channel distortions. Unfortunately, many of the synchronization algorithms described in the open literature are not suitable for either upstream cable channels or FPGA implementation. In this thesis, computationally inexpensive and robust synchronization algorithms are explored. In particular, algorithms for frequency recovery and equalization are developed.
The many data-aided feedforward frequency offset estimators analyzed in the literature have not considered intersymbol interference (ISI) caused by micro-reflections in the channel. It is shown in this thesis that many prominent frequency offset estimation algorithms become biased in the presence of ISI. A novel high-performance frequency offset estimator which is suitable for implementation in an FPGA is derived from first principles. Additionally, a rule is developed for predicting whether a frequency offset estimator will become biased in the presence of ISI. This rule is used to establish a channel excitation sequence which ensures the proposed frequency offset estimator is unbiased.
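To make the frequency-recovery problem concrete, here is a minimal data-aided feedforward estimator of the standard textbook kind: strip the known preamble symbols, then read the offset from the phase of the lag-1 autocorrelation. This is not the thesis's proposed estimator, and the sample rate, offset, and preamble below are made-up values for the demonstration (noise-free, no ISI).

```python
import numpy as np

def freq_offset_estimate(r, preamble, fs):
    """Data-aided feedforward frequency estimate: strip the known
    preamble symbols, then read the offset from the phase of the
    lag-1 autocorrelation (a standard textbook estimator)."""
    z = r * np.conj(preamble)               # remove modulation
    r1 = np.mean(z[1:] * np.conj(z[:-1]))   # average lag-1 phase increment
    return np.angle(r1) * fs / (2 * np.pi)

# Synthetic noise-free check with hypothetical numbers
rng = np.random.default_rng(0)
fs, f_off, n = 5.12e6, 30e3, 256
preamble = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n))  # unit-energy QPSK
t = np.arange(n) / fs
r = preamble * np.exp(2j * np.pi * f_off * t)  # received: offset, no noise
f_hat = freq_offset_estimate(r, preamble, fs)
```

In the ideal conditions above the estimate is exact; the thesis's point is precisely that estimators of this family acquire a bias once micro-reflection ISI distorts the modulation-stripped samples.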
Adaptive equalizers that compensate for the ISI take a relatively long time to converge, necessitating a lengthy training sequence. The convergence time is reduced using a two step technique to seed the equalizer. First, the ISI equivalent model of the channel is estimated in response to a specific short excitation sequence. Then, the estimated channel response is inverted with a novel algorithm to initialize the equalizer. It is shown that the proposed technique, while inexpensive to implement in an FPGA, can decrease the length of the required equalizer training sequence by up to 70 symbols.
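The idea of seeding an equalizer by inverting an estimated channel can be sketched as follows. This toy version inverts a short minimum-phase channel estimate in the frequency domain and keeps the leading taps as initial equalizer coefficients; it is a generic stand-in, not the novel inversion algorithm of the thesis, and the channel values are hypothetical.

```python
import numpy as np

def seed_equalizer(h_est, n_taps):
    """Zero-forcing seed for an adaptive equalizer: invert the estimated
    channel in the frequency domain and keep the leading taps. A generic
    stand-in for the thesis's inversion algorithm; adequate when the
    channel estimate is minimum phase."""
    nfft = 4 * n_taps                      # oversize FFT to limit aliasing
    H = np.fft.fft(h_est, nfft)
    return np.fft.ifft(1.0 / H)[:n_taps]   # leading taps of the inverse

# Hypothetical channel: direct path plus weak micro-reflections
h = np.array([1.0, 0.3, 0.1])
w = seed_equalizer(h, 16)
combined = np.convolve(h, w)   # channel * equalizer: close to a unit impulse
```

Starting the adaptive loop from `w` instead of from zero taps is what shortens the training sequence: the adaptation only has to correct the residual error of the seed.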
It is shown that a preamble segment consisting of repeated 11-symbol Barker sequences, which is well-suited to timing recovery, can also be used effectively for frequency recovery and channel estimation. By performing these three functions sequentially using a single set of preamble symbols, the overall length of the preamble may be further reduced.
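The autocorrelation property that makes the 11-symbol Barker sequence attractive for timing recovery is easy to verify numerically:

```python
import numpy as np

# The 11-chip Barker sequence
barker11 = np.array([1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1])

# Aperiodic autocorrelation: a peak of 11 with every sidelobe magnitude
# at most 1, which is why the sequence suits timing (and, as argued in
# the thesis, frequency and channel) estimation
acf = np.correlate(barker11, barker11, mode='full')
peak = acf[len(barker11) - 1]                 # zero-lag term
sidelobes = np.delete(acf, len(barker11) - 1)
```

The sharp, nearly sidelobe-free correlation peak is what lets a single repeated preamble segment serve several synchronization functions in sequence.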
Study of the best linear approximation of nonlinear systems with arbitrary inputs
System identification is the art of modelling a process (physical, biological,
etc.) in order to predict its behaviour or output when an environment condition
or parameter changes. One models the input-output relationship of a system,
for example, linking the temperature of a greenhouse (output) to the sunlight
intensity (input), or the power of a car engine (output) to the fuel injection
rate (input). In linear systems, scaling the input results in a proportional
scaling of the system output. This is not the case in a nonlinear system. Linear
system identification has been studied far more extensively than nonlinear system
identification. Since most systems are nonlinear to some extent, there is significant
interest in this topic as industrial processes become more and more complex.
In a linear dynamical system, knowing the impulse response function of a
system will allow one to predict the output given any input. For nonlinear systems
this is not the case. If advanced theory is not available, it is possible to approximate
a nonlinear system by a linear one. One tool is the Best Linear Approximation
(Bla), the impulse response function of the linear system that minimises the
output difference from its nonlinear counterpart for a given class of inputs. The
Bla is often the starting point for modelling a nonlinear system. There is extensive
literature on the Bla obtained from input signals with a Gaussian probability
density function (p.d.f.), but there has been very little for other kinds of inputs.
A Bla estimated from Gaussian inputs is useful in decoupling the linear dynamics
from the nonlinearity, and in initialisation of parameterised models. As Gaussian
inputs are not always practical to introduce as excitations, it is important to
investigate the dependence of the Bla on the amplitude distribution in more detail.
This thesis studies the behaviour of the Bla with regards to other types of signals,
and in particular, binary sequences where a signal takes only two levels. Such an
input is valuable in many practical situations, for example where the input actuator
is a switch or a valve and hence can only be turned either on or off.
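For a white input, the Bla impulse response can be estimated directly as the input-output cross-correlation scaled by the input variance. The sketch below does this for a toy Wiener system (an FIR filter followed by a cubic nonlinearity) under white Gaussian excitation; in this Gaussian case the estimate comes out proportional to the underlying linear dynamics, consistent with the decoupling role of the Bla noted above. The system, nonlinearity, and sample size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bla_estimate(u, y, n_taps):
    """For a white zero-mean input, the Bla impulse response equals the
    input-output cross-correlation divided by the input variance."""
    g = np.array([np.mean(y[k:] * u[:len(u) - k]) for k in range(n_taps)])
    return g / np.var(u)

# Toy Wiener system: FIR linear dynamics followed by a cubic nonlinearity
h = np.array([1.0, 0.5, 0.25])          # hypothetical linear part
u = rng.standard_normal(200_000)        # white Gaussian excitation
x = np.convolve(u, h)[:len(u)]
y = x + 0.1 * x**3                      # static nonlinearity

g = bla_estimate(u, y, 8)
# For Gaussian input, Bussgang's theorem makes g proportional to h,
# so the tap ratios of g recover the shape of the linear dynamics
```

Repeating the experiment with a binary or multilevel white input changes the proportionality through the input's higher-order moments, which is exactly the dependence this thesis studies.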
While it is known in the literature that the Bla depends on the amplitude
distribution of the input, as far as the author is aware, there is a lack of comprehensive
theoretical study on this topic. In this thesis, the Blas of discrete-time
time-invariant nonlinear systems are studied theoretically for white inputs with an arbitrary amplitude distribution, including Gaussian and binary sequences. In doing
so, the thesis offers answers to fundamental questions of interest to system engineers,
for example: 1) How the amplitude distribution of the input and the system
dynamics affect the Bla? 2) How does one quantify the difference between the
Bla obtained from a Gaussian input and that obtained from an arbitrary input?
3) Is the difference (if any) negligible? 4) What can be done in terms of experiment
design to minimise such difference?
To answer these questions, the theoretical expressions for the Bla have been
developed for both Wiener-Hammerstein (Wh) systems and the more general Volterra
systems. The theory for the Wh case has been verified by simulation and physical
experiments in Chapter 3 and Chapter 6 respectively. It is shown in Chapter 3
that the difference between the Gaussian and non-Gaussian Bla’s depends on the
system memory as well as the higher order moments of the non-Gaussian input.
To quantify this difference, a measure of relative error called the Discrepancy
Factor was developed. It has been shown that when the system memory is
short, the discrepancy can be as high as 44.4%, which is not negligible. This justifies
the need for a method to decrease such discrepancy. One method is to design a random
multilevel sequence for Gaussianity with respect to its higher order moments,
and this is discussed in Chapter 5.
When estimating the Bla, even in the absence of environment and measurement
noise, the nonlinearity inevitably introduces nonlinear distortions: deviations
from the Bla specific to the realisation of the input used. This also explains why
more than one realisation of the input, together with averaging, is required to obtain a good estimate of
the Bla. It is observed that with a specific class of pseudorandom binary sequence
(Prbs), called the maximum length binary sequence (Mlbs or the m-sequence), the
nonlinear distortions appear structured in the time domain. Chapter 4 illustrates
a simple and computationally inexpensive method to take advantage of this structure
to obtain better estimates of the Bla—by replacing mean averaging by median
averaging.
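The benefit of replacing mean averaging with median averaging can be shown with a deliberately simple toy: when a few realisations carry large, one-sided distortions, the median of the per-realisation estimates stays near the true value while the mean is pulled away. The numbers below are hypothetical and illustrate only the robustness argument, not the Mlbs distortion structure itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-realisation estimates of one quantity: most carry
# small distortions, a few carry large one-sided (structured) ones
true_value = 1.0
n_real = 31
distortion = rng.standard_normal(n_real) * 0.05
distortion[:3] += 2.0                    # three outlying realisations

estimates = true_value + distortion
mean_est = np.mean(estimates)            # dragged upward by the outliers
median_est = np.median(estimates)        # robust to them
```

Because the median discards the tails rather than averaging them in, it exploits exactly the kind of structured, non-symmetric distortion pattern the m-sequence produces.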
Lastly, Chapters 7 and 8 document two independent benchmark studies separate
from the main theoretical work of the thesis. The benchmark in Chapter 7 is
concerned with the modelling of an electrical Wh system proposed in a special session
of the 15th International Federation of Automatic Control (Ifac) Symposium on
System Identification (Sysid) 2009 (Schoukens, Suykens & Ljung, 2009). Chapter 8
is concerned with the modelling of a ‘hyperfast’ Peltier cooling system first proposed
in the U.K. Automatic Control Council (Ukacc) International Conference
on Control, 2010 (Control 2010).
Efficient complementary sequences-based architectures and their application to ranging measurements
Awarded the Extraordinary Doctorate Prize of the UAH in 2015. In recent decades, ranging systems have benefited from advances in the area of wireless communications. In systems based on CDMA (Code-Division Multiple Access), the correlation properties of the sequences employed play a fundamental role in the development of high-performance measurement devices. Owing to their ideal sums of aperiodic correlations, Complementary Sets of Sequences (CSS) are widely used in CDMA systems. In such systems, it is desirable to have efficient architectures that can generate and correlate CSS with the largest possible numbers of sequences and lengths. The term 'efficient' refers to architectures that require fewer operations per input sample than a direct architecture. This thesis contributes to the development of efficient generation/correlation architectures for CSS and derived sequences, such as LS (Loosely Synchronized) and GPC (Generalized Pairwise Complementary) sequences, which increase the number of available lengths and/or sequences. The contributions of the thesis can be divided into two blocks. First, the efficient generation/correlation architectures for binary CSS derived in previous works are generalized to the multilevel alphabet (sequences with real values) through the use of multilevel Hadamard matrices. This approach has two advantages: on the one hand, it increases the number of lengths that can be generated/correlated and removes the limitations that previous architectures placed on the number of sequences in a set. On the other hand, under certain conditions, the parameters of the generalized architectures can be tuned to efficiently generate/correlate binary CSS with a greater number of lengths than previous efficient architectures allow.
Second, the proposed architectures are used to develop new generation/correlation algorithms for sequences derived from CSS that reduce the number of operations per input sample. Finally, the application of the studied sequences is presented in a new local positioning system based on Ultra-Wideband and in a local positioning system based on ultrasound.
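The "ideal sums of aperiodic correlations" that CSS provide can be checked numerically for the smallest non-trivial case, a binary Golay complementary pair: the two sequences' aperiodic autocorrelations sum to a single spike of height 2N at zero lag. The length-8 pair below comes from the standard recursive construction; it is an illustration of the property, not one of the architectures proposed in the thesis.

```python
import numpy as np

def aperiodic_acf(s):
    """Full aperiodic autocorrelation of a real sequence."""
    return np.correlate(s, s, mode='full')

# Length-8 binary Golay complementary pair from the standard recursive
# construction (a|b, a|-b) starting from ([1], [1])
a = np.array([1, 1, 1, -1, 1, 1, -1, 1])
b = np.array([1, 1, 1, -1, -1, -1, 1, -1])

# Complementarity: the two autocorrelations sum to a single spike of
# height 2N = 16 at zero lag and cancel exactly at every other lag
total = aperiodic_acf(a) + aperiodic_acf(b)
```

This zero-sidelobe sum is what makes CSS-based correlators attractive for precise time-of-flight ranging, since the correlation peak is not smeared by sidelobes.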
A Critical Review of Physical Layer Security in Wireless Networking
Wireless networking has kept evolving with additional features and increasing capacity. Meanwhile, the inherent characteristics of wireless networking make it more vulnerable than wired networks. In this thesis we present an extensive and comprehensive review of physical layer security in wireless networking. Different from cryptography, physical layer security, emerging from the information-theoretic assessment of secrecy, can leverage the properties of the wireless channel for security purposes, either by enabling secret communication without the need for keys, or by facilitating the key agreement process. Hence we categorize the existing literature into two main branches, namely keyless security and key-based security. We elaborate the evolution of this area from the early theoretical works on the wiretap channel to its generalizations to more complicated scenarios, including multiple-user, multiple-access, and multiple-antenna systems, and introduce not only theoretical results but also practical implementations. We critically and systematically examine the existing knowledge by analyzing the fundamental mechanics of each approach. Hence we are able to highlight the advantages and limitations of the proposed techniques, as well as their interrelations, and bring insights into future developments of this area.
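A concrete instance of the information-theoretic notion of secrecy discussed above is the degraded Gaussian wiretap channel, whose secrecy capacity is the difference between the capacities of the legitimate link and the eavesdropper's link, floored at zero. A minimal calculation, with illustrative SNR values:

```python
import numpy as np

def secrecy_capacity(snr_main_db, snr_eve_db):
    """Secrecy capacity (bits per channel use) of the degraded Gaussian
    wiretap channel: legitimate-link capacity minus eavesdropper-link
    capacity, floored at zero."""
    snr_m = 10 ** (snr_main_db / 10)
    snr_e = 10 ** (snr_eve_db / 10)
    cs = 0.5 * np.log2(1 + snr_m) - 0.5 * np.log2(1 + snr_e)
    return max(0.0, cs)

cs = secrecy_capacity(20, 5)       # illustrative: main link 20 dB, Eve 5 dB
# When the eavesdropper's channel is the stronger one, no secret rate
# is achievable and the capacity is zero
cs_zero = secrecy_capacity(5, 20)
```

This channel-advantage condition is the basic mechanic behind the keyless-security branch of the literature surveyed in the thesis.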
Design, Technologies and Applications of High Power Vacuum Electronic Devices from Microwave to THz Band
The last decade has contributed to the rapid progress in developing high-power microwave sources. This Special Issue aims to bring together information about the most striking theoretical and experimental results, new trends in development, remarkable modern applications, new demands in parameter enhancement, and future goals. Although only a tiny part of the achievements of recent years is included in this Issue, we hope that the presented articles will be useful for experts and students focusing on modern vacuum electronics
Entropy in Image Analysis II
Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In the process of reading the present volume, the reader will appreciate the richness of their methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas
Causal Sampling, Compressing, and Channel Coding of Streaming Data
With the emergence of the Internet of Things, communication systems, such as those employed in distributed control and tracking scenarios, are becoming increasingly dynamic, interactive, and delay-sensitive. The data in such real-time systems arrive at the encoder progressively in a streaming fashion. An intriguing question is: what codes can transmit streaming data with both high reliability and low latency? Classical non-causal (block) encoding schemes can transmit data reliably but under the assumption that the encoder knows the entire data block before the transmission. While this is a realistic assumption in delay-tolerant systems, it is ill-suited to real-time systems due to the delay introduced by collecting data into a block. This thesis studies causal encoding: the encoder transmits information based on the causally received data while the data is still streaming in and immediately incorporates the newly received data into a continuing transmission on the fly.
This thesis investigates causal encoding of streaming data in three scenarios: causal sampling, causal lossy compressing, and causal joint source-channel coding (JSCC). In the causal sampling scenario, a sampler observes a continuous-time source process and causally decides when to transmit real-valued samples of it under a constraint on the average number of samples per second; an estimator uses the causally received samples to approximate the source process in real time. We propose a causal sampling policy that achieves the best tradeoff between the sampling frequency and the end-to-end real-time estimation distortion for a class of continuous Markov processes. In the causal lossy compressing scenario, the sampling frequency constraint in the causal sampling scenario is replaced by a rate constraint on the average number of bits per second. We propose a causal code that achieves the best causal distortion-rate tradeoff for the same class of processes. In the causal JSCC scenario, the noiseless channel and the continuous-time process in the previous scenarios are replaced by a discrete memoryless channel with feedback and a sequence of streaming symbols, respectively. We propose a causal joint source-channel code that achieves the maximum exponentially decaying rate of the error probability compatible with a given rate. Remarkably, the fundamental limits in the causal lossy compressing and the causal JSCC scenarios achieved by our causal codes are no worse than those achieved by the best non-causal codes. In addition to deriving the fundamental limits and presenting the causal codes that achieve the limits, we also show that our codes apply to control systems, are resilient to system deficiencies such as channel delay and noise, and have low complexities.
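The flavour of an event-triggered causal sampling policy can be conveyed with a toy simulation: transmit a fresh sample of a random-walk source whenever the estimator's held value has drifted too far from the truth. This is only a sketch of the policy class, with hypothetical threshold and timing parameters, not the optimal policy derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def threshold_sampler(path, thresh):
    """Event-triggered causal sampling: send a new sample whenever the
    source has drifted by more than `thresh` from the last value the
    estimator holds (a sketch of the policy class, not the thesis's
    optimal policy)."""
    last = path[0]
    sample_idx = [0]
    for i, x in enumerate(path):
        if abs(x - last) >= thresh:
            last = x              # estimator now holds this sample
            sample_idx.append(i)
    return sample_idx

# Discretised random-walk (Wiener-like) source, hypothetical parameters
dt, n = 1e-3, 100_000            # 100 seconds of signal
path = np.cumsum(rng.standard_normal(n) * np.sqrt(dt))
idx = threshold_sampler(path, thresh=0.5)
rate = len(idx) / (n * dt)       # average samples per second consumed
```

Unlike uniform-in-time sampling, the sampler here spends its sample budget only when the real-time estimation error demands it, which is the intuition behind causal sampling outperforming non-causal block designs in delay-sensitive settings.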