
    Combined Soft Hard Cooperative Spectrum Sensing in Cognitive Radio Networks

    The main objective of this thesis is to provide techniques that enhance the performance of spectrum sensing in cognitive radio systems while accounting for the cost and bandwidth limitations of practical scenarios. We focus on an essential element of cooperative spectrum sensing (CSS): the data fusion that combines the sensing results to make the final decision. The superior detection performance of soft schemes and the low bandwidth of hard schemes are exploited by incorporating both into cluster-based CSS networks in two different ways. First, a soft-hard combination is employed to propose a hierarchical cluster-based spectrum sensing algorithm. The proposed algorithm maximizes detection performance while satisfying a probability-of-false-alarm constraint. Simulation results for the proposed algorithm are presented and compared with existing algorithms over the Nakagami fading channel, showing that the proposed algorithm outperforms them. In the second part, a low-complexity soft-hard combination scheme is suggested that utilizes both one-bit and two-bit schemes to balance the required bandwidth against detection performance, taking into account that different clusters undergo different conditions. The scheme allocates to each cluster a reliability factor proportional to its detection rate, and the fusion center (FC) combines the results by extracting those of the reliable clusters. Numerical results show that superior detection performance and minimum overhead can be achieved simultaneously by combining one-bit and two-bit schemes at the intra-cluster level while assigning a reliability factor at the inter-cluster level.
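As an illustration of inter-cluster combining with reliability factors, the following is a minimal weighted-vote sketch; the normalization and the 0.5 threshold are assumptions for illustration, not the thesis's exact rule.

```python
import numpy as np

def cluster_fusion(cluster_decisions, detection_rates, threshold=0.5):
    """Reliability-weighted fusion at the FC (illustrative sketch).

    cluster_decisions: binary decision from each cluster head (1 = PU present)
    detection_rates:   historical detection rate of each cluster, used as a
                       reliability factor proportional to its detection rate
    """
    w = np.asarray(detection_rates, dtype=float)
    w = w / w.sum()                       # normalise the reliability factors
    score = np.dot(w, cluster_decisions)  # weighted vote over cluster decisions
    return int(score >= threshold)

# Reliable clusters dominate the global decision:
print(cluster_fusion([1, 1, 0], [0.95, 0.90, 0.40]))  # -> 1
```

Because the weights are proportional to past detection rates, a "busy" report from historically unreliable clusters alone cannot flip the global decision.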

    MAC-PHY Frameworks For LTE And WiFi Networks' Coexistence Over The Unlicensed Band

    The main focus of this dissertation is to address these issues and to analyze the interactions between LTE and WiFi coexisting on the unlicensed spectrum. This is done by improving the first two communication layers of both technologies. At the physical (PHY) layer, efficient spectrum sensing and data fusion techniques are needed that account for correlated spectrum sensing readings at the LTE/WiFi users (sensors). Failure to consider such correlation has been a major shortcoming of the literature, resulting in spectrum sensing systems that perform poorly in correlated-measurement environments.

    Optimal Cooperative Spectrum Sensing for Cognitive Radio

    The rapidly increasing interest in wireless communication has led to the continuous development of wireless devices and technologies. The modern convergence and interoperability of wireless technologies have further increased the number of services that can be provided, leading to substantial demand for efficient access to the radio frequency spectrum. Cognitive radio (CR), an innovative concept of reusing licensed spectrum in an opportunistic manner, promises to overcome the evident spectrum underutilization caused by inflexible spectrum allocation. Reliable and proficient spectrum sensing is essential to CR, and cooperation amongst spectrum sensing devices is vital when CR systems experience deep shadowing and fading. In this thesis, cooperative spectrum sensing (CSS) schemes have been designed to optimize detection performance in an efficient and implementable manner, taking into consideration diversity performance, detection accuracy, low complexity, and reporting channel bandwidth reduction. The thesis first investigates state-of-the-art spectrum sensing algorithms in CR. Comparative analysis and simulation results highlight the pros, cons, and performance criteria of a practical CSS scheme, leading to the problem formulation of the thesis. Motivated by the problem of diversity performance in a CR network, the thesis then focuses on designing a novel relay-based CSS architecture for CR. A major cooperative transmission protocol with low complexity and overhead, the Amplify-and-Forward (AF) cooperative protocol, and an improved double energy detection scheme in single-relay and multiple-cognitive-relay networks are designed. Simulation results demonstrate that the developed algorithm is capable of reducing the probability of missed detection and improving the detection probability of a primary user (PU).
To improve spectrum sensing reliability while increasing agility, a CSS scheme based on evidence theory is next considered, focusing on the data fusion combination rule. Combining conflicting evidence from secondary users (SUs) with the classical Dempster-Shafer (DS) rule may produce counter-intuitive results, leading to poor CSS performance. To overcome and minimise the effect of these counter-intuitive results, and to enhance the performance of the CSS system, a novel evidence-based decision fusion scheme is developed. The proposed approach is based on the credibility of evidence and a dissociability degree measure of the SUs' sensing data evidence. Simulation results illustrate that the proposed scheme improves detection performance and reduces error probability when compared to other related evidence-based schemes under robust practical scenarios. Finally, motivated by the need for low complexity and minimum reporting channel bandwidth, which can be significant in high data rate applications, novel CSS quantization schemes are proposed. Quantization methods are considered for a maximum likelihood estimation (MLE) based and an evidence-based CSS scheme. For the MLE-based CSS, a novel uniform and optimal output entropy quantization scheme is proposed to lower overhead and complexity and improve throughput. For the evidence-based CSS scheme, a scheme is designed that quantizes the basic probability assignment (BPA) data at each SU before it is sent to the fusion center (FC). The proposed scheme takes into consideration the characteristics of the hypothesis distribution under diverse signal-to-noise ratios (SNRs) of the PU signal, based on the optimal output entropy. Simulation results demonstrate that the proposed quantization CSS scheme improves sensing performance with a minimum number of quantized bits when compared to other related approaches.
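Dempster's rule of combination, whose renormalization of conflicting mass is what produces the counter-intuitive results mentioned above, can be sketched for a two-hypothesis frame. The BPA values in the usage below are illustrative, not from the thesis.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for BPAs over the frame {H0, H1} (sketch).

    m1, m2: dicts with masses for 'H0', 'H1' and 'T' (Theta, total ignorance).
    The conflict K = m1(H0)m2(H1) + m1(H1)m2(H0) is renormalised away; with
    highly conflicting SU evidence this renormalisation is exactly the step
    that can yield counter-intuitive fused beliefs.
    """
    K = m1['H0'] * m2['H1'] + m1['H1'] * m2['H0']   # conflicting mass
    out = {}
    for h in ('H0', 'H1'):
        # mass supporting h: agreement on h, or h combined with ignorance
        out[h] = (m1[h] * m2[h] + m1[h] * m2['T'] + m1['T'] * m2[h]) / (1 - K)
    out['T'] = m1['T'] * m2['T'] / (1 - K)          # residual ignorance
    return out

# Two SUs leaning towards 'channel idle' (H0), with some ignorance:
fused = dempster_combine({'H0': 0.6, 'H1': 0.3, 'T': 0.1},
                         {'H0': 0.5, 'H1': 0.4, 'T': 0.1})
```

Credibility-weighted variants, such as the one the thesis proposes, discount each SU's BPA before this combination step so that conflicting evidence from unreliable SUs carries less mass.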

    Spectrum sensing for cognitive radios: Algorithms, performance, and limitations

    Inefficient use of radio spectrum is becoming a serious problem as more and more wireless systems are being developed to operate in crowded spectrum bands. Cognitive radio offers a novel solution to overcome the underutilization problem by allowing secondary usage of the spectrum resources along with highly reliable communication. Spectrum sensing is a key enabler for cognitive radios. It identifies idle spectrum and provides awareness regarding the radio environment, which are essential for the efficient secondary use of the spectrum and coexistence of different wireless systems. The focus of this thesis is on local and cooperative spectrum sensing algorithms. Local sensing algorithms are proposed for detecting orthogonal frequency division multiplexing (OFDM) based primary user (PU) transmissions using their autocorrelation property. The proposed autocorrelation detectors are simple and computationally efficient. Later, the algorithms are extended to the case of cooperative sensing, where multiple secondary users (SUs) collaborate to detect a PU transmission. For cooperation, each SU sends a local decision statistic such as a log-likelihood ratio (LLR) to the fusion center (FC), which makes the final decision. Cooperative sensing algorithms are also proposed using sequential and censoring methods. Sequential detection minimizes the average detection time, while the censoring scheme improves energy efficiency. The performance of the proposed algorithms is studied through rigorous theoretical analyses and extensive simulations. The distributions of the decision statistics at the SU and the test statistic at the FC are established conditioned on either hypothesis. Later, the effects of quantization and reporting channel errors are considered.
The main aim in studying the effects of quantization and channel errors on cooperative sensing is to provide a framework for designers to choose the operating values of the number of quantization bits and the target bit error probability (BEP) for the reporting channel such that the performance loss caused by these non-idealities is negligible. Later, a performance limitation in the form of a BEP wall is established for the cooperative sensing schemes in the presence of reporting channel errors. The BEP wall phenomenon is important as it provides the feasible values of the reporting channel BEP used for designing communication schemes between the SUs and the FC.
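The autocorrelation property that the local detectors above exploit comes from the OFDM cyclic prefix, which repeats the tail of each symbol and so creates a correlation peak at a lag equal to the FFT length. A minimal sketch of such a statistic follows; the normalization and parameter names are assumptions for illustration, not the thesis's exact detector.

```python
import numpy as np

def cp_autocorrelation_stat(r, n_fft):
    """Normalised autocorrelation test statistic at lag n_fft (sketch).

    An OFDM signal with a cyclic prefix shows correlation at lag n_fft,
    while white noise does not. Dividing by the received power makes the
    statistic insensitive to the absolute noise level, so a fixed threshold
    can be used.
    """
    r = np.asarray(r)
    num = np.abs(np.sum(r[:-n_fft] * np.conj(r[n_fft:])))  # lag-n_fft correlation
    den = np.sum(np.abs(r) ** 2)                           # received energy
    return num / den   # larger under the PU-present hypothesis
```

In a cooperative setting, each SU would map such a statistic to an LLR and forward it to the FC, which sums the LLRs and compares against a global threshold.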

    Doctor of Philosophy

    Cross-layer system design represents a paradigm shift that breaks the traditional layer boundaries in a network stack to enhance a wireless network in a number of different ways. Existing work has used the cross-layer approach to optimize a wireless network in terms of packet scheduling, error correction, multimedia quality, power consumption, selection of modulation/coding, user experience, etc. We explore the use of new cross-layer opportunities to achieve secrecy and efficiency of data transmission in wireless networks. In the first part of this dissertation, we build secret key establishment methods for private communication between wireless devices using the spatio-temporal variations of symmetric wireless channel measurements. We evaluate our methods on a variety of wireless devices, including laptops, TelosB sensor nodes, and Android smartphones, with diverse wireless capabilities. We perform extensive measurements in real-world environments and show that our methods generate high-entropy secret bits at a significantly faster rate in comparison to existing approaches. While the first part of this dissertation focuses on achieving secrecy in wireless networks, the second part examines the use of the special pulse-shaping filters of the filterbank multicarrier (FBMC) physical layer to reliably transmit data packets at a very high rate. We first analyze the mutual interference power across subcarriers used by different transmitters. Next, to understand the impact of FBMC beyond the physical layer, we devise a distributed and adaptive medium access control protocol that coordinates data packet traffic among the different nodes in the network in a best-effort manner.
Using extensive simulations, we show that FBMC consistently achieves an order-of-magnitude performance improvement over orthogonal frequency division multiplexing (OFDM) in several aspects, including packet transmission delays, channel access delays, and the effective data transmission rate available to each node, in static indoor settings as well as in vehicular networks.
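A common way to turn reciprocal channel measurements into secret bits, in the spirit of the key establishment methods above, is guard-band quantization of RSS samples. This sketch is illustrative of the general technique, not the dissertation's exact extractor; the parameter `alpha` and the function name are assumptions.

```python
import numpy as np

def rss_to_bits(rss, alpha=0.5):
    """Guard-band quantiser for channel-based secret key bits (sketch).

    Samples above mean + alpha*std become 1, samples below mean - alpha*std
    become 0, and samples inside the guard band are dropped. Dropping the
    ambiguous middle region is a common way to reduce bit disagreement
    between the two endpoints measuring the same reciprocal channel.
    """
    rss = np.asarray(rss, dtype=float)
    hi = rss.mean() + alpha * rss.std()
    lo = rss.mean() - alpha * rss.std()
    # keep only samples outside the guard band, quantised to one bit each
    return [1 if v > hi else 0 for v in rss if v > hi or v < lo]

# Alternating deep fades and peaks yield an alternating bit pattern:
print(rss_to_bits([-50, -80, -48, -82, -51, -79]))  # -> [1, 0, 1, 0, 1, 0]
```

In practice both endpoints run the same quantiser on their own measurements and then reconcile the small number of disagreeing bits before privacy amplification.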

    Algorithm Development and VLSI Implementation of Energy Efficient Decoders of Polar Codes

    Thanks to their low error-floor performance, polar codes have attracted significant attention as a potential standard error correction code (ECC) for future communication and data storage. However, the VLSI implementation complexity of polar code decoders is largely driven by their inherently serial decoding nature. This dissertation is dedicated to presenting optimized decoder architectures for polar codes, addressing several structural properties of polar codes and key properties of the decoding algorithms that prior research has not dealt with. The underlying concept of the proposed architectures is a paradigm that simplifies and schedules the computations such that hardware is simplified, latency is minimized, and bandwidth is maximized. In pursuit of the above, throughput-centric successive cancellation (TCSC) and overlapping path list successive cancellation (OPLSC) VLSI architectures and an express journey BP (XJBP) decoder for polar codes are presented. An arbitrary polar code can be decomposed into a set of shorter polar codes with special characteristics; these shorter polar codes are referred to as constituent polar codes. By exploiting the homogeneity between the decoding processes of different constituent polar codes, TCSC reduces the decoding latency of the SC decoder by 60% for codes of length n = 1024. The error correction performance of SC decoding is inferior to that of list successive cancellation (LSC) decoding. The LSC decoding algorithm delivers the most reliable decoding results; however, it consumes the most hardware resources and decoding cycles. Instead of using multiple instances of decoding cores as in LSC decoders, a single SC decoder is used in the OPLSC architecture. The computations of each path in the LSC are arranged to occupy the decoder hardware stages serially in a streamlined fashion, yielding a significant reduction in hardware complexity.
The OPLSC decoder achieves about a 1.4× improvement in hardware efficiency compared with traditional LSC decoders. Hardware-efficient VLSI architectures for the TCSC and OPLSC polar code decoders are also introduced. Decoders based on the SC or LSC algorithms suffer from high latency and limited throughput due to their serial decoding nature. An alternative approach to decoding polar codes is the belief propagation (BP) algorithm, in which a graph, usually referred to as a factor graph, is set up to guide how beliefs are propagated and refined. BP decoding allows decoding in parallel to achieve much higher throughput. The XJBP decoder facilitates belief propagation by utilizing the specific constituent codes that exist in the conventional factor graph, which results in an express journey (XJ) decoder. Compared with the conventional BP decoding algorithm for polar codes, the proposed decoder reduces the computational complexity by about 40.6%, enabling an energy-efficient hardware implementation. To further explore the hardware consumption of the proposed XJBP decoder, the computation scheduling is modeled and analyzed in this dissertation. With discussions of different hardware scenarios, optimal scheduling plans are developed. A novel memory-distributed micro-architecture of the XJBP decoder is proposed and analyzed to solve the potential memory access problems of the proposed scheduling strategy. Register-transfer level (RTL) models of the XJBP decoder are set up for comparison with other state-of-the-art BP decoders. The results show that the power efficiency of BP decoders is improved by about 3 times.
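The serial nature of SC decoding that motivates these architectures is visible even in a minimal software sketch: each bit decision feeds the LLR computation for the next. The min-sum f/g recursion below is the standard SC algorithm; the frozen set in the usage is an illustrative choice for n = 8, not the dissertation's code construction.

```python
import numpy as np

def encode(u):
    """Polar encoding x = u * F^{(x)n} in natural order: (enc(a^b), enc(b))."""
    if len(u) == 1:
        return u[:]
    h = len(u) // 2
    a, b = u[:h], u[h:]
    return encode([x ^ y for x, y in zip(a, b)]) + encode(b)

def sc_decode(llr, frozen):
    """Successive cancellation decoding (min-sum). Returns (u_hat, x_hat)."""
    if len(llr) == 1:
        u = 0 if frozen[0] else (0 if llr[0] >= 0 else 1)
        return [u], [u]
    h = len(llr) // 2
    L, R = llr[:h], llr[h:]
    # f-step: LLR of x_left ^ x_right (min-sum approximation)
    f = [np.sign(l) * np.sign(r) * min(abs(l), abs(r)) for l, r in zip(L, R)]
    u1, x1 = sc_decode(f, frozen[:h])
    # g-step: combine both observations of x_right, given the decoded x1
    g = [r + (1 - 2 * x) * l for l, r, x in zip(L, R, x1)]
    u2, x2 = sc_decode(g, frozen[h:])
    return u1 + u2, [a ^ b for a, b in zip(x1, x2)] + x2

# n = 8 with frozen bits at indices {0, 1, 2, 4} (illustrative construction):
frozen = [1, 1, 1, 0, 1, 0, 0, 0]
u = [0, 0, 0, 1, 0, 0, 1, 1]           # message 1,0,1,1 in the info positions
llr = [10.0 * (1 - 2 * xi) for xi in encode(u)]   # noiseless BPSK LLRs
u_hat, _ = sc_decode(llr, frozen)      # recovers u
```

The g-step cannot run until the f-step's sub-block is fully decoded, which is exactly the in-series dependency the TCSC and XJBP architectures attack.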

    Techniques to improve the efficiency of collaborative spectrum sensing in 5G networks for remote areas

    Dissertação (mestrado)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2020. The smartphone revolution of 2007 triggered exponential growth in the demand for mobile services. Growing demand without a matching growth in supply, which depends on the available spectrum, causes a drop in the quality of the services provided. Techniques using cognitive radios and dynamic spectrum access are considered fundamental to optimizing spectrum utilization and increasing the bandwidth available to 5G networks, by allowing opportunistic access to idle licensed spectrum. Several studies point to the underutilization of bands, especially far from large cities, where there is lower demand and less economic incentive for operators to install infrastructure. This behavior is encouraged by the block-based band licensing process and static spectrum allocation, in which an operator licenses a band and becomes responsible for covering the area tied to the license, while small local operators are left entirely out of the auctions and prevented from competing in this market. Dynamic spectrum access depends on information that guarantees the identification of transmissions in the candidate channel, in order to reduce interference to the channel licensee. Some of the most common techniques for detecting channel occupancy via spectrum sensing are carrier sensing and energy detection, depending on the channel width. Collaborative sensing improves the ability to detect channel use compared with individual sensing, since it geographically diversifies the information available for analysis. The quality of collaborative sensing depends not only on the individual sensing results received, but also on the technique that consolidates, or fuses, those results.
There are several fusion algorithms, each with advantages and disadvantages. Some classic fusion techniques are based on k-out-of-n voting, in which k sensing reports indicating channel occupancy result in a fused decision of channel occupancy. The 1-out-of-N fusion (logical OR) yields a high number of false positives, detecting channel occupancy even when the channel is idle, while minimizing false negatives, i.e., failures to detect a channel that is in fact occupied. Finally, part of the collaborative sensing cycle is filtering out reports from malicious users who wish to disturb not only the collaborative sensing result but also the operation of the network. With a simple fusion such as logical OR, a single malicious node can completely prevent opportunistic use of the spectrum by transmitting false results indicating that the channel is busy when it is in fact free. To address this problem, this work proposes two techniques to improve collaborative sensing results: (1) a technique based on Markov chains that, applied to individual sensing results, reduces false positives and false negatives and also reduces the number of control messages sent; (2) a technique based on the harmonic mean for filtering the individual sensing reports received, discarding reports from nodes farther from the interference sources and thereby protecting against Byzantine attacks. Both techniques are evaluated in rural 5G scenarios, where the largest portion of underutilized spectrum bands, candidates for opportunistic access, is found. To enable the evaluation of the proposed techniques, several changes were made to the LTE network stack model implemented in the ns-3 system-level network simulator.
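The false-alarm inflation of k-out-of-n voting described above can be computed in closed form from the binomial distribution; a small sketch, with an illustrative function name:

```python
from math import comb

def coop_prob(p, n, k):
    """P(at least k of n independent sensors report 'busy'),
    given each sensor reports 'busy' with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# OR fusion (k = 1) inflates false alarms: 10 sensors, each with a 5%
# individual false-alarm rate, give a cooperative rate of about 40%.
print(round(coop_prob(0.05, 10, 1), 3))  # -> 0.401
```

The same function with the individual detection probability shows why OR fusion minimizes false negatives: a single correct "busy" report suffices.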
The changes include the individual spectrum sensing procedure performed by the user equipment (UEs), the transmission of the results to the access point (eNodeB), the fusion of the received results, and the use of the fused result when scheduling resources for the devices. Individual sensing results are drawn from curves of detection probability and false-positive probability obtained through experimental measurements or through physical/link-layer simulations. The curves are loaded during the initial configuration of the simulation and interpolated as needed, and can be based either on Euclidean distance or on signal-to-interference-plus-noise ratio (SINR). Individual sensing consists of using the detection probability associated with a given SNR or Euclidean distance value to draw a random sample from a Bernoulli-distributed generator. The procedure repeats every 1 millisecond, following the standard LTE subframe indication cycle. The Markov chain technique is based on a central-limit argument, in which the mean of a certain number of uniformly distributed samples tends to approach either the true value of the source probability distribution or the central value of the distribution. In other words, by uniformly sampling an unknown distribution with enough samples, one finds a good approximation of the true value being sought. This principle is applied to individual spectrum sensing: the value of the last sensing result is compared with the current one, and when they are identical the degree of certainty that this result is the true one increases. When the results differ, that certainty is lost. When a given certainty threshold is exceeded, the sensing result actually transmitted to the eNodeB is replaced by the value of the last sensing result.
This binomial stochastic process is modeled on the tossing of N coins, in which only N consecutive identical results lead to a change in the transmitted value; it is easily modeled as a Markov chain with N − 1 states. The harmonic mean technique, in turn, builds on the fact that stations close to the interference sources are more reliable than distant stations, as shown by the detection probability curves. It is therefore necessary to eliminate sensing results reported by malicious users using additional information that serves as evidence that a reported result is false. One way to mitigate false information is to use the harmonic mean of the reported CQIs, which makes it possible to identify the UEs most affected by the interference source and discard all results from UEs that are only slightly affected, farther from the source. To be able to trust the CQI reported by the UEs, the number of retransmissions to each of them must be measured: a retransmission rate close to 10% indicates an adequate CQI, rates close to 0% indicate a reported CQI below the real one, and rates above 10% indicate a reported CQI above the real one. The retransmission threshold is defined in the 3GPP standards. The evaluation of the proposals was carried out in two parts: the first validates the proposed collaborative sensing model within the simulator's LTE model, and the second evaluates the performance of the proposed techniques. During validation, the expected behavior of collaborative sensing (individual sensing, result transmission, and fusion) was confirmed in terms of false-positive and false-negative rates against the mathematical models. The performance evaluation of the proposed techniques considered accuracy, false-positive rates, and false-negative rates.
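The N-coin-toss smoothing described above can be sketched as a small state machine; the class name and the default number of states are illustrative, not from the dissertation.

```python
class SensingHysteresis:
    """Markov-chain smoothing of individual sensing results (sketch).

    The reported value only switches after n_states consecutive raw results
    disagreeing with it, mirroring the (N-1)-state chain described above.
    Suppressing repeated identical reports is what saves control-channel
    bandwidth.
    """
    def __init__(self, n_states=3, initial=0):
        self.n_states = n_states
        self.reported = initial
        self.streak = 0          # consecutive disagreements seen so far

    def update(self, raw):
        if raw == self.reported:
            self.streak = 0      # agreement: certainty in the value regained
        else:
            self.streak += 1     # disagreement: certainty eroding
            if self.streak >= self.n_states:
                self.reported = raw   # threshold crossed: switch the value
                self.streak = 0
        return self.reported

h = SensingHysteresis(n_states=3, initial=0)
# A single spurious '1' (or an isolated '0') never flips the report:
print([h.update(r) for r in [1, 1, 0, 1, 1, 1]])  # -> [0, 0, 0, 0, 0, 1]
```

Raising `n_states` trades detection responsiveness for certainty, which matches the dissertation's observation that the number of chain states tunes detection range against detection certainty.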
In both cases, scenarios inspired by rural areas were used, with: a small number of nodes (10, 20, 50, 100); a cell with a 50-kilometer radius; a 20 MHz channel in band 5 with the carrier at 870 MHz; the eNodeB transmitting at 53 dBm; UEs transmitting at 23 dBm; eNodeB and UEs with 9 dBi antenna gain; the channel licensee (PU) transmitting at 40 dBm or 35 dBm; one PU per 5 MHz subchannel; and simple fusion algorithms. The validation scenario was not very realistic, with UEs dispersed along a fixed radius from the PU, guaranteeing the same detection probability for all UEs. The technique-evaluation scenarios were split into two sets: a less realistic one with random dispersion of UEs across the cell, and a more realistic one with random dispersion of PUs across the cell and random dispersion of groups of UEs, forming clusters. The results show that the proposed techniques increase accuracy relative to the classic fusion technique for collaborative sensing results (logical OR, or 1-out-of-N fusion), reducing false positives by up to 790 times, from 63.23% to 0.08%, in the scenario with random UE dispersion and no attackers. In the same scenario, false negatives rose from 0% to 0.47%, without severely impacting the channel licensee. In scenarios with attackers, all simple fusions perform poorly, with or without the Markov chain technique, reaching up to 100% false positives and making opportunistic access unfeasible. The harmonic mean technique, however, offers a good degree of protection against attackers, especially in scenarios with more devices. Without the Markov-based technique, in the scenario with 100 UEs of which 10 were attackers, it reduced the false positives of the OR fusion from 100% to 60% without significantly increasing false negatives. When the two techniques are combined, false positives fall to 5% while false negatives rise to 18%.
In the scenarios with fewer UEs and with clusters, false negatives are consistently higher, but the results remain superior to the 2-out-of-N, 3-out-of-N, and AND fusions using the Markov technique in the attacker-free scenario. In all scenarios, the Markov chain technique also reduced the average frame notification rate by two orders of magnitude, saving bandwidth on the licensed control channel. These results support the conclusion that both techniques are effective for the rural scenario for which they were proposed. They also suggest that the number of states of the Markov chain and the parameters of the harmonic mean technique can be adjusted to trade detection range for detection certainty, and protection against attackers for false negatives, respectively. Future work includes adapting the techniques to denser, urban scenarios using sampling techniques; using localization techniques (e.g., Time-of-Arrival, Angle-of-Arrival) to segment the cell into sectors; and improving the harmonic mean technique to reduce false negatives while keeping the same level of protection against attackers. The smartphone revolution of 2007 started exponential growth in the demand for mobile connectivity. The ever-increasing demand requires an increase in supply, which depends on the amount of available spectrum. The amount of available spectrum, however, is limited, curbing supply growth and reducing the quality of service perceived by users. Cognitive radio and dynamic spectrum access are essential to increase spectrum utilization and the amount of available bandwidth in 5G networks by opportunistically accessing unused licensed spectrum. Dynamic spectrum access depends on channel information that guarantees the detection of transmissions in the candidate channel, as a means of reducing interference to the channel licensee.
Collaborative spectrum sensing increases the channel-usage detection capacity compared with individual spectrum sensing, as there is more geographically diverse information for analysis and decision-making. The quality of collaborative sensing depends not only on the individual sensing that feeds information into it, but also on the technique that fuses those results into the final sensing result. Two techniques to improve collaborative spectrum sensing results are proposed in this dissertation: (1) a technique based on Markov chains to smooth consecutive individual spectrum sensing results, reducing both false positives and false negatives, while also reducing the number of sensing reports by skipping reports with the same result; (2) a technique based on the harmonic mean of the channel quality indicator, used to filter the received individual spectrum sensing reports, discarding nodes far from the source of interference and mitigating Byzantine attacks. Both techniques are evaluated in rural 5G scenarios, which are the best place to use opportunistic access due to the amount of unutilized and underused spectrum bands. In order to evaluate the proposed techniques, a set of modifications to the LTE network stack model of the ns-3 system-level simulator is proposed. The modifications cover a complete collaborative sensing cycle, including: the individual spectrum sensing procedure performed by the user equipment (UEs); the transmission of control messages to the access point (eNodeB); the fusion of the received results; and the utilization of the free spectrum by the UEs. The individual spectrum sensing is performed by interpolating probability-of-detection and false-positive-probability curves, which are produced either by experimental measurements or by link-layer simulations.
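A minimal sketch of the harmonic-mean CQI filter described above, assuming a simple mapping from UE id to a (CQI, decision) pair; the data layout and function name are illustrative, not the dissertation's implementation.

```python
from statistics import harmonic_mean

def filter_reports(reports):
    """Keep sensing reports only from UEs at or below the harmonic mean CQI.

    UEs close to the interference source see a degraded channel and so
    report a low CQI; they are the trustworthy witnesses. Far-away UEs,
    whose 'busy' reports may be Byzantine, report a high CQI and are
    discarded before fusion. reports: {ue_id: (cqi, decision)}.
    """
    hm = harmonic_mean([cqi for cqi, _ in reports.values()])
    return {ue: d for ue, (cqi, d) in reports.items() if cqi <= hm}

# The distant UE (high CQI) is dropped; the two nearby UEs survive:
kept = filter_reports({'ue1': (2, 1), 'ue2': (3, 1), 'ue3': (12, 1)})
```

The harmonic mean is dominated by the small CQI values, so a few attackers reporting inflated CQIs barely move the cutoff, which is what gives the filter its Byzantine resilience.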
The evaluation of the proposals was made in two parts: the first focused on validating the collaborative spectrum sensing cycle implementation and its integration into the LTE model; the second focused on the performance of the proposed techniques. The collaborative spectrum sensing cycle (individual sensing, sensing report, and fusion) was validated and closely follows the mathematical model. The evaluation of the techniques covered the accuracy of the fused result and the false positive and false negative rates. The results show the techniques are effective in increasing the accuracy of collaborative sensing when compared to the standalone classic fusion techniques (OR fusion, or 1-out-of-N). False positive rates were reduced by up to 790 times, from 63.23% to 0.08%, in the scenario with randomized dispersion of UEs across the cell and without attackers. In the same scenario, false negatives increased from 0% to 0.47%, which does not severely impact the licensee with interference. All classic fusions behave very poorly in scenarios with attackers, with or without the Markov chain technique: false positive rates soar to as high as 100%, making opportunistic access impossible. The harmonic mean-based technique reduces false positives, providing good protection against attackers, especially in scenarios with more UEs. The harmonic mean alone reduced false positives for the OR fusion from 100% to 60% without significantly impacting false negatives in the scenario with 100 UEs and 10 attackers. When both techniques are used together, the false positive rate falls to 5% while false negatives increase to 18%. Scenarios with fewer UEs distributed in clusters tend to have higher false negative rates when both techniques are used, but false positives remain consistently lower than those of the other classic fusions (e.g. 2-out-of-N, 3-out-of-N, and AND).
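The classic hard fusion rules compared above all reduce to a vote count: OR is 1-out-of-N, AND is N-out-of-N, and the intermediate rules require at least k positive decisions. A minimal sketch (function name and variable names are illustrative):

```python
def k_out_of_n_fusion(decisions, k):
    """Classic k-out-of-N hard fusion: declare the channel occupied
    when at least k of the N binary sensing decisions report occupancy.
    k=1 is the OR fusion; k=len(decisions) is the AND fusion."""
    return int(sum(decisions) >= k)

decisions = [1, 0, 1, 0, 0]  # five UEs, two of which detected the licensee
or_result = k_out_of_n_fusion(decisions, 1)    # -> 1: OR declares occupied
two_of_n = k_out_of_n_fusion(decisions, 2)     # -> 1: 2-out-of-N also does
and_result = k_out_of_n_fusion(decisions, 5)   # -> 0: AND does not
```

The trade-off the abstract reports follows directly from k: a low k (OR) is sensitive to a single false or Byzantine "occupied" vote, inflating false positives, while a high k (AND) suppresses them at the cost of missed detections.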
The Markov chain technique effectively reduced the sensing report rate by two orders of magnitude, saving scarce control channel bandwidth. These results allow us to conclude that both techniques are effective for the rural scenario for which they were proposed.
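One way to picture the Markov chain smoothing and the report-rate reduction is as a small state machine: each raw sensing sample moves the chain one state up or down, the smoothed decision is read from a threshold state, and a report is sent only when that decision changes. The class name, state count, and threshold below are illustrative assumptions, not the dissertation's design.

```python
class MarkovSmoother:
    """Sketch of a state-machine smoother over consecutive binary
    sensing results. More states trade responsiveness for certainty."""

    def __init__(self, m=4, threshold=2):
        self.m = m                  # number of chain states
        self.threshold = threshold  # state at which the channel is declared busy
        self.state = 0
        self.last_report = None

    def update(self, raw_decision):
        # One state up on a 'busy' sample, one state down on 'idle'.
        if raw_decision:
            self.state = min(self.m - 1, self.state + 1)
        else:
            self.state = max(0, self.state - 1)
        smoothed = int(self.state >= self.threshold)
        # Report only on a change in the smoothed decision, skipping
        # repeated results to save control channel bandwidth.
        if smoothed != self.last_report:
            self.last_report = smoothed
            return smoothed, smoothed  # report sent
        return smoothed, None          # report skipped

s = MarkovSmoother()
samples = [1, 1, 0, 1, 1, 0, 0, 0, 0]
sent = [s.update(x)[1] for x in samples]  # None marks a skipped report
```

With these parameters, 9 raw samples yield only 5 transmitted reports; longer stable runs, as in a real channel, skip proportionally more.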

    2022 roadmap on neuromorphic computing and engineering

    Full text link
    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource by providing a concise yet comprehensive introduction to readers outside this field, for those who are just entering the field, as well as providing future perspectives for those who are well established in the neuromorphic computing community