
    Statistical characterization of correlation-based time/frequency synchronizers for OFDM

    Orthogonal Frequency Division Multiplexing (OFDM) has been widely adopted as a modulation format for reliable digital communication over multipath fading channels, e.g. IEEE 802.11g WiFi networks, as well as broadband wireline channels, e.g. DSL modems. However, its robustness to channel impairments comes at the cost of increased sensitivity to symbol timing and carrier frequency offset errors, and thus requires more complex synchronization methods than conventional single-carrier modulation formats. In this thesis, a class of synchronization methods based upon the intrinsic autocorrelation structure of the OFDM signal is studied from a statistical perspective. In particular, the reasons for the existence of irreducible time and frequency offset estimation errors in the limit of increasing signal-to-noise ratio (SNR) are investigated for correlator-based synchronizers for the non-fading channel case and several fading channel models. It is demonstrated that the primary source of irreducible synchronization errors at high SNR is the natural random distribution of signal energy in the cyclic prefix of the OFDM symbol. Comparisons of the distribution of correlator output magnitude between the non-fading and fading channel cases demonstrate that fading skews the distribution with respect to the non-fading case. A potential mechanism for reducing the effect of innate signal energy variability, correlator output windowed averaging, is studied from the perspective of its influence on the distribution of interpeak intervals in the temporal correlator output signal. While improved performance is realized through averaging for the non-fading channel case, the technique is less effective for fading channels. In either case, the windowed averaging method increases the latency of the synchronization process and thus introduces delay in the overall demodulation process.
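    As a concrete illustration of the class of synchronizers studied here, the sketch below implements a generic cyclic-prefix correlation metric (a baseline, not the thesis's exact estimator): the received samples are correlated with a copy delayed by one FFT length, the peak of the correlation magnitude gives the symbol timing, and the phase at the peak gives the fractional carrier frequency offset. The random signal energy falling in the prefix is precisely what makes the peak location variable even when channel noise vanishes.

```python
import numpy as np

def cp_correlator(r, n_fft, n_cp):
    """Cyclic-prefix correlation metric over one OFDM symbol search window.

    r      : complex baseband samples (longer than n_fft + n_cp)
    n_fft  : useful symbol length in samples (FFT size)
    n_cp   : cyclic prefix length in samples
    Returns the correlator magnitude for every candidate start index, the
    timing estimate, and the fractional CFO (in subcarrier spacings) at the peak.
    """
    n_search = len(r) - n_fft - n_cp
    gamma = np.empty(n_search, dtype=complex)
    for d in range(n_search):
        head = r[d : d + n_cp]                   # cyclic prefix candidate
        tail = r[d + n_fft : d + n_fft + n_cp]   # its copy one FFT length later
        gamma[d] = np.vdot(tail, head)           # sum head * conj(tail)
    d_hat = int(np.argmax(np.abs(gamma)))        # timing estimate: correlation peak
    # The CP and its copy differ only by the phase 2*pi*eps accumulated over
    # n_fft samples, where eps is the CFO normalised to the subcarrier spacing.
    eps_hat = -np.angle(gamma[d_hat]) / (2 * np.pi)
    return np.abs(gamma), d_hat, eps_hat
```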

    Doctor of Philosophy

    Cross-layer system design represents a paradigm shift that breaks the traditional layer boundaries in a network stack to enhance a wireless network in a number of different ways. Existing work has used the cross-layer approach to optimize a wireless network in terms of packet scheduling, error correction, multimedia quality, power consumption, selection of modulation/coding, user experience, etc. We explore the use of new cross-layer opportunities to achieve secrecy and efficiency of data transmission in wireless networks. In the first part of this dissertation, we build secret key establishment methods for private communication between wireless devices using the spatio-temporal variations of symmetric wireless channel measurements. We evaluate our methods on a variety of wireless devices, including laptops, TelosB sensor nodes, and Android smartphones, with diverse wireless capabilities. We perform extensive measurements in real-world environments and show that our methods generate high-entropy secret bits at a significantly faster rate than existing approaches. While the first part of this dissertation focuses on achieving secrecy in wireless networks, the second part examines the use of the special pulse-shaping filters of the filterbank multicarrier (FBMC) physical layer in reliably transmitting data packets at a very high rate. We first analyze the mutual interference power across subcarriers used by different transmitters. Next, to understand the impact of FBMC beyond the physical layer, we devise a distributed and adaptive medium access control protocol that coordinates data packet traffic among the different nodes in the network in a best-effort manner. Using extensive simulations, we show that FBMC consistently achieves an order-of-magnitude performance improvement over orthogonal frequency division multiplexing (OFDM) in several aspects, including packet transmission delays, channel access delays, and the effective data transmission rate available to each node, in static indoor settings as well as in vehicular networks.
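    A minimal sketch of the channel-reciprocity idea behind such key establishment methods is shown below. It is a generic two-threshold RSSI quantizer, not the dissertation's specific protocol; the threshold parameter and guard-band rule are illustrative assumptions.

```python
import numpy as np

def quantize_rssi(rssi, alpha=0.5):
    """Toy two-threshold quantizer for reciprocal channel measurements.

    Each party applies this to its own RSSI trace; because the wireless channel
    is approximately symmetric, both sides obtain highly correlated bit strings,
    which would then be reconciled and privacy-amplified. Samples inside the
    guard band between the thresholds are discarded as too noise-sensitive.
    """
    rssi = np.asarray(rssi, dtype=float)
    upper = rssi.mean() + alpha * rssi.std()
    lower = rssi.mean() - alpha * rssi.std()
    bits, kept = [], []
    for i, x in enumerate(rssi):
        if x > upper:
            bits.append(1); kept.append(i)
        elif x < lower:
            bits.append(0); kept.append(i)
    return np.array(bits, dtype=np.uint8), np.array(kept)
```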

    Opportunistic Angle of Arrival Estimation in Impaired Scenarios

    This work is focused on the analysis and development of Angle of Arrival (AoA) radio localization methods. The radio positioning system considered consists of a radio source and a receiving array of antennas. The positioning algorithms treated in this work are designed to have a passive and opportunistic approach. The opportunistic attribute implies that the algorithms provide the AoA estimate with nearly zero information about the transmitted signals: no training sequences or waveforms custom-designed for localization are assumed. The localization is termed passive since there is no collaboration between the transmitter and the receiver during the localization process. The algorithms treated in this work are therefore designed to eavesdrop on existing communication signals and to locate their radio source with nearly zero knowledge of the signal and without the collaboration of the transmitting node. First, AoA radio localization algorithms are classified in terms of the signals involved (narrowband or broadband), the antenna array geometry (L-shaped, circular, etc.), the signal structure (sinusoidal, training sequences, etc.), the measurement type (Differential Time of Arrival (D-ToA) or Differential Phase of Arrival (D-PoA)), and whether they are collaborative or non-collaborative. Then, the most detrimental effects for radio communications are treated: multipath (MP) channels and hardware impairments. A geometric MP model is analysed and implemented to test the robustness of the proposed methods, and the effects of MP on the received-signal statistics are discussed from the AoA estimation point of view. Hardware impairments of the most common components are introduced and their effects on the AoA estimation process are analysed. Two novel algorithms that estimate the AoA from signal snapshots acquired sequentially with a time-division approach are presented. The acquired signals are QAM waveforms eavesdropped from a pre-existing communication. The proposed methods, namely Constellation Statistical Pattern IDentification and Overlap (CSP-IDO) and Bidimensional CSP-IDO (BCID), exploit the probability density function (pdf) of the received signals to obtain the D-PoA. Both CSP-IDO and BCID use the statistical pattern of the received signals, exploiting the transmitter's statistical signature. Since hardware impairments modify the statistical pattern of the received signals, CSP-IDO and BCID are able to exploit it to improve performance with respect to (w.r.t.) the ideal case. Because the proposed methods can be used with a switched-antenna architecture, they can be implemented with reduced hardware, unlike synchronous methods such as MUltiple SIgnal Classification (MUSIC), which are not applicable in this setting. Then, two iterative AoA estimation algorithms for the dynamic tracking of moving radio sources are implemented. Statistical methods, namely particle filters (PF), are used to implement the iterative tracking of the AoA from D-PoA measurements in two different scenarios: automotive and Unmanned Aerial Vehicle (UAV). The AoA tracking of an electric car signalling with an IEEE 802.11p-like standard is implemented using a test-bed, with real measurements processed by the proposed Particle Swarm Adaptive Scattering (PSAS) algorithm. The tracking of a UAV moving in 3D space is investigated by emulating the UAV trajectory with the proposed Confined Area Random Aerial Trajectory Emulator (CARATE) algorithm.
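    The D-PoA to AoA relationship underlying these methods can be summarised in a short sketch for a generic two-element geometry (an illustration of the measurement model only, not the CSP-IDO/BCID processing itself):

```python
import numpy as np

def aoa_from_phase_difference(delta_phi, spacing, wavelength):
    """Estimate the angle of arrival from the differential phase of arrival.

    For a plane wave impinging on two antennas separated by `spacing`, the
    phase difference is delta_phi = 2*pi*spacing*sin(theta)/wavelength, so
    theta = arcsin(wavelength*delta_phi / (2*pi*spacing)). Unambiguous only
    when spacing <= wavelength/2.
    """
    s = wavelength * delta_phi / (2 * np.pi * spacing)
    s = np.clip(s, -1.0, 1.0)   # guard against noise pushing |s| beyond 1
    return np.arcsin(s)

# Example: half-wavelength spacing at 2.4 GHz, measured phase difference 1.0 rad
theta = aoa_from_phase_difference(1.0, spacing=0.0625, wavelength=0.125)
print(np.degrees(theta))        # roughly 18.6 degrees
```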

    Development and performance evaluation of a multistatic radar system

    Multistatic radar systems are of emerging interest as they can exploit spatial diversity, enabling improved performance and new applications. Their development is being fuelled by advances in enabling technologies in fields such as communications and Digital Signal Processing (DSP). Such systems differ from typical modern active radar systems in that they consist of multiple spatially diverse transmitter and receiver sites. Due to this spatial diversity, these systems present challenges in managing their operation as well as in usefully combining the multiple sources of information to give an output to the radar operator. In this work, a novel digital Commercial Off-The-Shelf (COTS) based coherent multistatic radar system designed at University College London, named ‘NetRad’, has been developed to produce some of the first published experimental results, investigating the challenges of operating such a system and determining what level of performance might be achievable. The various stages involved in combining data from the component transmitter-receiver pairs within a multistatic system are investigated in detail, and many of the inherent practical issues are discussed. Simulation and subsequent experimental verification of several centralised and decentralised detection algorithms, in terms of localisation (resolution and parameter estimation) of targets, were undertaken. The computational cost of the DSP involved in multistatic data fusion is also considered. This gave a clear demonstration of several of the benefits of multistatic radar. Resolution of multiple targets that would have been unresolvable in a conventional monostatic system was shown. Targets were also shown to be plotted as two-dimensional vector positions and velocities using time delay and Doppler shift information only. A range of targets was used, including some, such as walking people, that were particularly challenging due to the variability of their Radar Cross Section (RCS). Performance improvements were found to be dependent on the type of multistatic radar, the method of data fusion, and the target characteristics in question. It is likely that future work will look to further explore the optimisation of multistatic radar for the various measures of performance identified and discussed in this work.
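    One way to picture centralised fusion in such a system is the toy sketch below, which grid-searches for the target position that best fits the bistatic range measured by every transmitter-receiver pair. It illustrates the geometric principle only and is not the NetRad processing chain; the measurement format is an assumption.

```python
import numpy as np

def locate_target(pairs, grid_x, grid_y):
    """Brute-force 2-D localisation from multistatic bistatic-range measurements.

    pairs  : list of (tx_xy, rx_xy, bistatic_range) tuples, where the bistatic
             range is c * delay = |target - tx| + |target - rx|
    grid_x, grid_y : 1-D arrays of candidate target coordinates
    Returns the grid point minimising the summed squared range residuals,
    i.e. a centralised fusion of all transmitter-receiver pairs.
    """
    X, Y = np.meshgrid(grid_x, grid_y)
    cost = np.zeros(X.shape)
    for (tx, rx, rb) in pairs:
        d_tx = np.hypot(X - tx[0], Y - tx[1])
        d_rx = np.hypot(X - rx[0], Y - rx[1])
        cost += (d_tx + d_rx - rb) ** 2
    i = np.unravel_index(np.argmin(cost), cost.shape)
    return X[i], Y[i]
```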

    FPGA-based DOCSIS upstream demodulation

    In recent years, the state of the art in field programmable gate array (FPGA) technology has been advancing rapidly. Consequently, the use of FPGAs is being considered in many applications that have traditionally relied upon application-specific integrated circuits (ASICs). FPGA-based designs have a number of advantages over ASIC-based designs, including lower up-front engineering design costs, shorter time-to-market, and the ability to reconfigure devices in the field. However, ASICs have a major advantage in terms of computational resources. As a result, algorithms designed for high-performance ASICs must be redesigned to fit the limited resources available in an FPGA. Concurrently, coaxial cable television and internet networks have been undergoing significant upgrades, largely driven by a sharp increase in the use of interactive applications. This has intensified demand for the so-called upstream channels, which allow customers to transmit data into the network. The format and protocol of the upstream channels are defined by a set of standards, known as DOCSIS 3.0, which govern the flow of data through the network. Critical to DOCSIS 3.0 compliance is the upstream demodulator, which is responsible for physical layer reception from all customers. Although upstream demodulators have typically been implemented as ASICs, the design of an FPGA-based upstream demodulator is an intriguing possibility, as FPGA-based demodulators could potentially be upgraded in the field to support future DOCSIS standards. Furthermore, the lower non-recurring engineering costs associated with FPGA-based designs could provide an opportunity for smaller companies to compete in this market. The upstream demodulator must contain complicated synchronization circuitry to detect, measure, and correct for channel distortions. Unfortunately, many of the synchronization algorithms described in the open literature are not suitable for either upstream cable channels or FPGA implementation. In this thesis, computationally inexpensive and robust synchronization algorithms are explored. In particular, algorithms for frequency recovery and equalization are developed. The many data-aided feedforward frequency offset estimators analyzed in the literature have not considered intersymbol interference (ISI) caused by micro-reflections in the channel. It is shown in this thesis that many prominent frequency offset estimation algorithms become biased in the presence of ISI. A novel high-performance frequency offset estimator which is suitable for implementation in an FPGA is derived from first principles. Additionally, a rule is developed for predicting whether a frequency offset estimator will become biased in the presence of ISI. This rule is used to establish a channel excitation sequence which ensures the proposed frequency offset estimator is unbiased. Adaptive equalizers that compensate for the ISI take a relatively long time to converge, necessitating a lengthy training sequence. The convergence time is reduced using a two-step technique to seed the equalizer. First, the ISI-equivalent model of the channel is estimated in response to a specific short excitation sequence. Then, the estimated channel response is inverted with a novel algorithm to initialize the equalizer. It is shown that the proposed technique, while inexpensive to implement in an FPGA, can decrease the length of the required equalizer training sequence by up to 70 symbols.
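    The two-step equalizer seeding idea can be illustrated with a generic least-squares sketch; the excitation sequence, channel length, and regularised frequency-domain inversion used here are placeholder choices, not the specific inversion algorithm proposed in the thesis.

```python
import numpy as np

def seed_equalizer(tx, rx, chan_len, eq_len, reg=1e-3):
    """Generic two-step equalizer initialisation (illustrative only).

    Step 1: least-squares estimate of the channel impulse response from a known
            excitation sequence `tx` and received samples `rx` (len(tx) >= len(rx)).
    Step 2: regularised frequency-domain inversion of the estimated channel to
            obtain `eq_len` initial equalizer taps.
    """
    n = len(rx)
    # Convolution matrix A such that rx ~= A @ h for a channel h of chan_len taps
    A = np.zeros((n, chan_len), dtype=complex)
    for m in range(chan_len):
        A[m:, m] = tx[: n - m]
    h_hat, *_ = np.linalg.lstsq(A, rx, rcond=None)

    # Regularised (MMSE-flavoured) inverse; plain zero-forcing would blow up
    # near spectral nulls of the estimated channel response
    H = np.fft.fft(h_hat, eq_len)
    w0 = np.fft.ifft(np.conj(H) / (np.abs(H) ** 2 + reg))
    return h_hat, w0
```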
It is shown that a preamble segment consisting of repeated 11-symbol Barker sequences, which is well suited to timing recovery, can also be used effectively for frequency recovery and channel estimation. By performing these three functions sequentially using a single set of preamble symbols, the overall length of the preamble may be further reduced.
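    For a repeated preamble of this kind, a standard baseline for frequency recovery is to correlate consecutive repetitions and read the offset from the phase of the result. The sketch below shows that baseline estimator; it does not include the ISI-robustness refinements developed in the thesis.

```python
import numpy as np

def cfo_from_repeated_preamble(y, rep_len):
    """Frequency-offset estimate from a preamble built of identical repetitions.

    y       : received preamble samples (complex baseband, at least 2*rep_len long)
    rep_len : length of one repetition in samples (e.g. one 11-symbol Barker
              sequence times the oversampling factor)
    With a frequency offset f0, consecutive repetitions differ only by the phase
    2*pi*f0*rep_len (f0 normalised to the sample rate), so the angle of their
    correlation gives f0, unambiguous for |f0| < 1/(2*rep_len).
    """
    n_rep = len(y) // rep_len
    segs = y[: n_rep * rep_len].reshape(n_rep, rep_len)
    corr = np.sum(segs[1:] * np.conj(segs[:-1]))   # correlate adjacent repetitions
    return np.angle(corr) / (2 * np.pi * rep_len)  # normalised frequency offset
```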

    On detection of OFDM signals for cognitive radio applications

    As the requirement for wireless telecommunications services continues to grow, it has become increasingly important to ensure that the Radio Frequency (RF) spectrum is managed efficiently. As a result of the current spectrum allocation policy, it has been found that portions of RF spectrum belonging to licensed users are often severely underutilised at particular times and geographical locations. Awareness of this problem has led to the development of Dynamic Spectrum Access (DSA) and Cognitive Radio (CR) as possible solutions. In one variation of the shared-use model for DSA, it is proposed that the inefficient use of licensed spectrum could be overcome by enabling unlicensed users to opportunistically access the spectrum when the licensed user is not transmitting. In order for an unlicensed device to make such decisions, it must be aware of its own RF environment, and it has therefore been proposed that DSA could be enabled using CR. One approach that has been identified to allow the CR to gain information about its operating environment is spectrum sensing. An interesting solution that has been identified for spectrum sensing is cyclostationary detection. This property refers to the inherent periodic nature of the second-order statistics of many communications signals. One of the most common modulation formats in use today is Orthogonal Frequency Division Multiplexing (OFDM), which exhibits cyclostationarity due to the addition of a Cyclic Prefix (CP). This thesis examines several statistical tests for cyclostationarity in OFDM signals that may be used for spectrum sensing in DSA and CR. In particular, focus is placed on statistical tests that rely on estimation of the Cyclic Autocorrelation Function (CAF). Based on splitting the CAF into two complex component functions, several new statistical tests are introduced and are shown to lead to an improvement in detection performance when compared to existing algorithms. The performance of each new algorithm is assessed in Additive White Gaussian Noise (AWGN), in impulsive noise, and when subjected to impairments such as multipath fading and Carrier Frequency Offset (CFO). Finally, each algorithm is targeted for Field Programmable Gate Array (FPGA) implementation using a Xilinx 7 series device. In order to keep resource costs to a minimum, it is suggested that the new algorithms are implemented on the FPGA using hardware sharing, and a simple mathematical rearrangement of certain test statistics is proposed to circumvent a costly division operation.
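    The detectors described here build on point estimates of the cyclic autocorrelation function. A minimal numpy sketch of such an estimate is given below; the choice of cycle frequency, lag, and any subsequent thresholding are assumptions for illustration, not the specific test statistics proposed in the thesis.

```python
import numpy as np

def caf_estimate(x, alpha, tau):
    """Estimate the cyclic autocorrelation function (CAF) of samples x at one
    cycle frequency alpha (normalised to the sample rate) and integer lag tau.

    For a CP-OFDM signal, the CAF is non-zero at tau equal to the useful symbol
    length in samples and alpha equal to multiples of 1/(N_fft + N_cp), which
    is the cyclostationary feature the detectors test for.
    """
    n = np.arange(len(x) - tau)
    prod = x[: len(x) - tau] * np.conj(x[tau:])
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n))

# A detector would compare a statistic formed from the real and imaginary
# parts of such estimates against a threshold set for a target false-alarm rate.
```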