61 research outputs found

    Mathematical optimisation and signal processing techniques in wireless relay networks

    Get PDF
    With the growth of wireless networks such as sensor networks and mesh networks, the challenges of sustaining higher data rates and coverage, coupled with the requirement for high quality of service, need to be addressed. The use of spatial diversity proves to be an attractive option due to its ability to significantly enhance network performance without additional bandwidth or transmission power. This thesis proposes the use of cooperative wireless relays to provide spatial diversity in wireless sensor networks and wireless mesh networks. Cooperation in this context implies that signals are exchanged between relays for optimal performance. The network gains realised using the proposed cooperative relays for signal forwarding are significant, supporting the use of cooperation amongst relays. The work begins by proposing a minimum mean square error (MMSE) based relaying strategy that provides an improvement in bit error rate. A simplified algorithm has been developed to calculate the roots of a polynomial equation. Following this work, a novel signal forwarding technique based on convex optimisation is proposed which attains specific quality-of-service targets for end users with minimal transmission power at the relays. Quantisation of the signals passed between relays has been considered in the optimisation framework. Finally, a reduced-complexity scheme together with a more realistic algorithm incorporating per-relay-node power constraints is proposed. This optimisation framework is extended to a cognitive radio environment where relays in a secondary network forward signals without causing harmful interference to primary network users. EThOS - Electronic Theses Online Service, United Kingdom.
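
    As an illustration of the kind of QoS-constrained power minimisation described in this abstract, the following is a minimal sketch assuming a toy model in which each user's received SNR is a linear function of the per-relay transmit powers; the channel gains, noise power, SNR target and the use of cvxpy are illustrative assumptions, not the thesis's actual formulation (which also covers quantisation and per-relay power constraints).

    import numpy as np
    import cvxpy as cp

    # Toy model (illustrative only): each user's received SNR is an affine
    # function of the per-relay transmit powers.
    rng = np.random.default_rng(0)
    n_relays, n_users = 4, 2
    G = rng.uniform(0.5, 2.0, size=(n_users, n_relays))  # assumed effective channel gains
    noise_power = 1e-3                                    # assumed receiver noise power
    snr_target = 10.0                                     # assumed per-user SNR floor (linear)

    p = cp.Variable(n_relays, nonneg=True)                # per-relay transmit power
    qos = [G @ p >= snr_target * noise_power]             # per-user QoS (SNR) constraints
    problem = cp.Problem(cp.Minimize(cp.sum(p)), qos)
    problem.solve()

    print("per-relay powers:", p.value)
    print("total relay power:", problem.value)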

    Edge Intelligence : Empowering Intelligence to the Edge of Network

    Get PDF
    Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in proximity to where the data are captured, based on artificial intelligence. Edge intelligence aims at enhancing data processing and protecting the privacy and security of the data and users. Although it emerged only recently, spanning the period from 2011 to now, this field of research has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of these solutions by examining research results and observations for each of the four components and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This article provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of the emerging research fields and the current state of the art and discuss the important open issues and possible theoretical and technical directions. Peer reviewed.

    Quantifying, generating and mitigating radio interference in Low-Power Wireless Networks

    Get PDF
    Doctoral Programme in Telecommunication (MAP-tele). Radio interference affects the performance of low-power wireless networks (LPWN), leading to packet loss and reduced energy-efficiency, among other problems. Reliability of communications is key to expanding the application domains of LPWN. Since most LPWN operate in the licence-free Industrial, Scientific and Medical (ISM) bands and hence share the spectrum with other wireless technologies, addressing interference is an important challenge. In this context, we present JamLab: a low-cost infrastructure that augments existing LPWN testbeds with accurate interference generation, useful for experimentally investigating the impact of interference on LPWN protocols. We investigate how interference in a shared wireless medium can be mitigated by performing wireless channel energy sensing on low-cost and low-power hardware. For this purpose, we introduce a novel channel quality metric, dubbed CQ, based on the availability of the channel over time, which meaningfully quantifies interference. Using data collected from a number of Wi-Fi networks operating in a library building, we show that our metric has a strong correlation with the Packet Reception Rate (PRR). We then explore dynamic radio resource adaptation techniques, namely packet size and error correction code overhead optimisations, based on instantaneous spectrum usage as quantified by our CQ metric. To conclude, we study the fast fading that emerges in the composite channel under constructive baseband interference, a technique recently introduced in low-power wireless networks as a promising approach. We show that the resulting composite signal becomes vulnerable in the presence of noise, leading to significant deterioration of the link whenever the carriers have similar amplitudes. Overall, our results suggest that the proposed tools and techniques have the potential to improve the performance of LPWN operating in the unlicensed spectrum, improving coexistence while maintaining energy-efficiency. Future work includes implementation on next-generation platforms, which provide superior computational capacity and more flexible radio chip designs.
    Radio interference affects the performance of low-power wireless networks (LPWN), causing packet loss and reduced energy efficiency, among other problems. The reliability of communications is important for the expansion and adoption of LPWN across their many potential application domains. Since the vast majority of LPWN share the radio spectrum with other wireless networks, interference becomes an important challenge. In this context, we present JamLab: a low-cost infrastructure that extends laboratory testbeds for the experimental study of LPWN performance under interference, thereby providing an essential tool for the proper verification of LPWN communication protocols. In addition, the thesis introduces a new technique for assessing the radio environment and demonstrates its use in managing the resources available in the radio transceiver, which improves communication reliability, particularly on low-power platforms, while ensuring energy efficiency. We thus present a new metric, called CQ, designed specifically to quantify radio channel quality based on its availability over time. Using data acquired from several Wi-Fi networks installed in a university library building, we show that this metric performs very well, exhibiting a high correlation with the packet reception rate. We also investigate the potential of our CQ metric to dynamically manage radio resources, such as packet size and error correction code rate, based on instantaneous measurements of radio channel quality. We then study a composite channel model under constructive baseband interference, a technique recently introduced in LPWN that has shown promise with respect to low latency and communication reliability. The thesis investigates the critical case in which the composite signal becomes vulnerable in the presence of noise, degrading link quality when the amplitudes of the different carriers present at the receiver are similar. Finally, the results obtained suggest that the proposed tools and techniques have the potential to improve LPWN performance in scenarios where the radio spectrum is shared with other networks, improving coexistence while maintaining energy efficiency. Future work includes implementing the proposed techniques on next-generation platforms, with greater flexibility and computational power for radio signal processing.
    This work was supported by FCT (Portuguese Foundation for Science and Technology) and by ESF (European Social Fund) through POPH (Portuguese Human Potential Operational Programme), under PhD grant SFRH/BD/62198/2009; also by FCT under project ref. FCOMP-01-0124-FEDER-014922 (MASQOTS), and by the EU through the FP7 programme, under grant FP7-ICT-224053 (CONET).
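
    The abstract does not define the CQ metric precisely; as a hedged illustration of an availability-over-time channel quality indicator, the sketch below computes the fraction of RSSI samples observed below a clear-channel threshold and feeds it into a simple packet-size adaptation rule. The threshold, trace model and adaptation table are hypothetical, not the design evaluated in the thesis.

    import numpy as np

    def channel_availability(rssi_dbm, clear_threshold_dbm=-85.0):
        """Fraction of samples in which the channel is observed idle
        (a crude availability-over-time indicator; the threshold is an assumption)."""
        rssi_dbm = np.asarray(rssi_dbm, dtype=float)
        return float(np.mean(rssi_dbm < clear_threshold_dbm))

    def choose_payload_bytes(availability):
        """Hypothetical adaptation rule: longer packets when the channel is cleaner."""
        if availability > 0.9:
            return 112
        if availability > 0.6:
            return 64
        return 28

    # Simulated RSSI trace: noise floor around -95 dBm with occasional Wi-Fi bursts.
    rng = np.random.default_rng(1)
    bursts = rng.random(1000) < 0.2
    trace = np.where(bursts, rng.normal(-60, 3, 1000), rng.normal(-95, 2, 1000))

    cq = channel_availability(trace)
    print(f"availability = {cq:.2f}, payload = {choose_payload_bytes(cq)} bytes")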

    Metrics to evaluate compression algorithms for RAW SAR data

    Get PDF
    Modern synthetic aperture radar (SAR) systems have size, weight, power and cost (SWAP-C) limitations since platforms are becoming smaller, while SAR operating modes are becoming more complex. Due to the computational complexity of the SAR processing required for modern SAR systems, performing the processing on board the platform is not a feasible option. Thus, SAR systems are producing an ever-increasing volume of data that needs to be transmitted to a ground station for processing. Compression algorithms are utilised to reduce the data volume of the raw data. However, these algorithms can cause losses that may degrade the effectiveness of the SAR mission. This study addresses the lack of standardised quantitative performance metrics to objectively quantify the performance of SAR data-compression algorithms. Therefore, metrics were established in two different domains, namely the data domain and the image domain. The data-domain metrics are used to determine the performance of the quantisation and the associated losses or errors it induces in the raw data samples. The image-domain metrics evaluate the quality of the SAR image after SAR processing has been performed. In this study, three well-known SAR compression algorithms were implemented and applied to three real SAR data sets that were obtained from a prototype airborne SAR system. The performance of these algorithms was evaluated using the proposed metrics. Important metrics in the data domain were found to be the compression ratio, the entropy, statistical parameters such as the skewness and kurtosis to measure the deviation from the original distributions of the uncompressed data, and the dynamic range. The data histograms are an important visual representation of the effects of the compression algorithm on the data. An important error measure in the data domain is the signal-to-quantisation-noise ratio (SQNR), along with the phase error for applications where phase information is required to produce the output. Important metrics in the image domain include the dynamic range, the impulse response function, the image contrast, as well as the error measure, the signal-to-distortion-noise ratio (SDNR). The metrics suggested that all three algorithms performed well and are thus well suited to the compression of raw SAR data. The fast Fourier transform block adaptive quantiser (FFT-BAQ) algorithm had the best overall performance, but the analysis of the computational complexity of its compression steps indicated that it has the highest level of complexity compared to the other two algorithms. Since different levels of degradation are acceptable for different SAR applications, a trade-off can be made between the data reduction and the degradation caused by the algorithm. Due to SWAP-C limitations, there also remains a trade-off between the performance and the computational complexity of the compression algorithm. Dissertation (MEng), University of Pretoria, 2019. Electrical, Electronic and Computer Engineering. Unrestricted.
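
    As a small worked example of two of the data-domain metrics listed above, the sketch below computes the compression ratio and the signal-to-quantisation-noise ratio (SQNR) for a plain uniform quantiser applied to synthetic complex raw samples; the bit depths, the Gaussian raw-data model and the quantiser itself are assumptions and do not correspond to the three algorithms evaluated in the dissertation.

    import numpy as np

    def uniform_quantise(x, n_bits, full_scale):
        """Mid-rise uniform quantiser with clipping at +/- full_scale (illustrative only)."""
        levels = 2 ** n_bits
        step = 2.0 * full_scale / levels
        idx = np.clip(np.floor(x / step), -levels // 2, levels // 2 - 1)
        return (idx + 0.5) * step

    def sqnr_db(original, reconstructed):
        """Signal-to-quantisation-noise ratio in dB."""
        noise = original - reconstructed
        return 10.0 * np.log10(np.sum(np.abs(original) ** 2) / np.sum(np.abs(noise) ** 2))

    # Synthetic raw samples: zero-mean complex Gaussian (an assumption; real raw SAR
    # data is only approximately Gaussian and has scene-dependent structure).
    rng = np.random.default_rng(2)
    raw = rng.normal(0, 1, 4096) + 1j * rng.normal(0, 1, 4096)

    original_bits, quantised_bits = 16, 4   # assumed ADC and output bit depths
    rec = (uniform_quantise(raw.real, quantised_bits, 4.0)
           + 1j * uniform_quantise(raw.imag, quantised_bits, 4.0))

    print("compression ratio:", original_bits / quantised_bits)
    print(f"SQNR: {sqnr_db(raw, rec):.2f} dB")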

    Spectrum measurement, sensing, analysis and simulation in the context of cognitive radio

    Get PDF
    The radio frequency (RF) spectrum is a scarce natural resource, currently regulated locally by national agencies. Spectrum has been assigned to different services, and it is very difficult for emerging wireless technologies to gain access due to rigid spectrum policy and heavy opportunity cost. Current spectrum management by licensing causes artificial spectrum scarcity. Spectrum monitoring shows that many frequencies and times are unused. Dynamic spectrum access (DSA) is a potential solution to low spectrum efficiency. In DSA, an unlicensed user opportunistically uses vacant licensed spectrum with the help of cognitive radio. Cognitive radio is a key enabling technology for DSA. In a cognitive radio system, an unlicensed Secondary User (SU) identifies vacant licensed spectrum allocated to a Primary User (PU) and uses it without harmful interference to the PU. Cognitive radio increases spectrum usage efficiency while protecting legacy licensed systems. The purpose of this thesis is to bring together a group of CR concepts and explore how we can make the transition from conventional radio to cognitive radio. Specific goals of the thesis are, firstly, the measurement of the radio spectrum to understand the current spectrum usage in the Humber region, UK, in the context of cognitive radio. Secondly, to characterise the performance of cyclostationary feature detectors through theoretical analysis, hardware implementation, and real-time performance measurements. Thirdly, to mitigate the effect of degradation due to multipath fading and shadowing, the use of wideband cooperative sensing based on an adaptive sensing technique and multi-bit soft decisions is proposed, which it is believed will introduce more spectral opportunities over wider frequency ranges and achieve higher opportunistic aggregate throughput. Understanding spectrum usage is the first step toward the future deployment of cognitive radio systems. Several spectrum usage measurement campaigns have been performed, mainly in the USA and Europe. These studies show locality and time dependence. In the first part of this thesis, a spectrum usage measurement campaign in the Humber region is reported. Spectrum usage patterns are identified and noise is characterised. A significant amount of spectrum was shown to be underutilized and available for secondary use. The second part addresses the question: how can you tell if a spectrum channel is being used? Two spectrum sensing techniques are evaluated: Energy Detection and Cyclostationary Feature Detection. The performance of these techniques is compared using the measurements performed in the second part of the thesis. Cyclostationary feature detection is shown to be more robust to noise. The final part of the thesis considers the identification of vacant channels by combining spectrum measurements from multiple locations, known as cooperative sensing. Wideband cooperative sensing is proposed using multi-resolution spectrum sensing (MRSS) with a multi-bit decision technique. Next, a two-stage adaptive system with cooperative wideband sensing is proposed based on the combination of energy detection and cyclostationary feature detection. Simulations of this system indicate that two-stage adaptive wideband cooperative sensing outperforms single-site detection in terms of detection success and mean detection time.
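
    As a minimal illustration of the first of the two sensing techniques compared above, the sketch below implements a basic energy detector: the test statistic is the average received power over N samples, and the threshold is set for a target false-alarm probability assuming complex Gaussian noise of known power. All parameters are hypothetical; cyclostationary feature detection and the cooperative multi-bit fusion schemes of the thesis are not shown.

    import numpy as np
    from scipy.stats import norm

    def energy_detector_threshold(n_samples, noise_power, p_false_alarm):
        """Threshold on the average-power statistic for a target false-alarm rate,
        using the Gaussian approximation valid for large n_samples."""
        return noise_power * (1.0 + norm.ppf(1.0 - p_false_alarm) / np.sqrt(n_samples))

    def energy_detect(samples, threshold):
        """Declare the channel occupied if the average received power exceeds the threshold."""
        return np.mean(np.abs(samples) ** 2) > threshold

    # Hypothetical scenario: complex Gaussian noise, plus an assumed primary-user
    # tone at -3 dB SNR in the occupied case.
    rng = np.random.default_rng(3)
    n, noise_power = 1024, 1.0
    noise = np.sqrt(noise_power / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    pu_signal = np.sqrt(0.5) * np.exp(1j * 2 * np.pi * 0.1 * np.arange(n))

    thr = energy_detector_threshold(n, noise_power, p_false_alarm=0.01)
    print("noise only   ->", energy_detect(noise, thr))
    print("signal+noise ->", energy_detect(pu_signal + noise, thr))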

    Cooperative Radio Communications for Green Smart Environments

    Get PDF
    The demand for mobile connectivity is continuously increasing, and by 2020 Mobile and Wireless Communications will serve not only very dense populations of mobile phones and nomadic computers, but also the expected multiplicity of devices and sensors located in machines, vehicles, health systems and city infrastructures. Future Mobile Networks are then faced with many new scenarios and use cases, which will load the networks with different data traffic patterns, in new or shared spectrum bands, creating new specific requirements. This book addresses both the techniques to model, analyse and optimise the radio links and transmission systems in such scenarios, and the most advanced radio access, resource management and mobile networking technologies. This text summarises the work performed by more than 500 researchers from more than 120 institutions in Europe, America and Asia, from both academia and industry, within the framework of the COST IC1004 Action on "Cooperative Radio Communications for Green and Smart Environments". The book will appeal to graduates and researchers in the Radio Communications area, and also to engineers working in the Wireless industry. Topics discussed in this book include:
    • Radio wave propagation phenomena in diverse urban, indoor, vehicular and body environments
    • Measurements, characterization, and modelling of radio channels beyond 4G networks
    • Key issues in Vehicle (V2X) communication
    • Wireless Body Area Networks, including specific Radio Channel Models for WBANs
    • Energy efficiency and resource management enhancements in Radio Access Networks
    • Definitions and models for the virtualised and cloud RAN architectures
    • Advances on feasible indoor localization and tracking techniques
    • Recent findings and innovations in antenna systems for communications
    • Physical Layer Network Coding for next generation wireless systems
    • Methods and techniques for MIMO Over the Air (OTA) testing
    • …