24 research outputs found
Low Power Analog Processing for Ultra-High-Speed Receivers with RF Correlation
Ultra-high-speed data communication receivers (Rxs) conventionally require analog-to-digital converters (ADCs) with high sampling rates, which pose design challenges in terms of adequate resolution and power. As a result, ultra-high-speed Rxs resort to expensive and bulky high-speed oscilloscopes, which are extremely inefficient for demodulation in terms of power and size. Designing energy-efficient mixed-signal and baseband units for ultra-high-speed Rxs requires a paradigm shift, detailed in this paper, that circumvents power-hungry ADCs by employing low-power analog processing. The low-power analog Rx performs direct demodulation with RF correlation using low-power comparators. The Rx supports multiple modulations, with 16-QAM being the highest modulation reported so far for direct demodulation with RF correlation. Simulations in MATLAB/Simulink R2020a indicate sufficient symbol-error-rate (SER) performance at a symbol rate of 8 GS/s for the 71 GHz Urban Micro Cell and 140 GHz indoor channels. A power analysis against current analog, hybrid and digital beamforming approaches that require ADCs indicates considerable power savings. This novel approach can be adopted for the ultra-high-speed Rxs envisaged for beyond-fifth-generation (B5G)/sixth-generation (6G)/terahertz (THz) communication without power-hungry ADCs, leading to low-power integrated design solutions.
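The correlate-and-compare principle described in this abstract can be illustrated with a short sketch: each symbol interval is multiplied by quadrature reference carriers and integrated (the RF correlation), and the result is quantized with comparator-style thresholds. This is a toy model, not the paper's circuit; the carrier frequency, sampling rate and 16-QAM level set are illustrative assumptions.

```python
import numpy as np

def rf_correlate_demod_16qam(rx, fc, fs, sym_rate):
    """Demodulate a passband 16-QAM waveform by correlating each symbol
    interval with quadrature reference carriers, then snapping the two
    correlator outputs to the nearest amplitude level (comparator bank)."""
    spsym = int(fs // sym_rate)            # samples per symbol
    t = np.arange(spsym) / fs
    ref_i = np.cos(2 * np.pi * fc * t)     # in-phase reference carrier
    ref_q = -np.sin(2 * np.pi * fc * t)    # quadrature reference carrier
    levels = np.array([-3, -1, 1, 3])      # 16-QAM amplitude levels (assumed)
    out = []
    for k in range(len(rx) // spsym):
        seg = rx[k * spsym:(k + 1) * spsym]
        # correlation = multiply-and-integrate over one symbol interval
        ci = 2 * np.dot(seg, ref_i) / spsym
        cq = 2 * np.dot(seg, ref_q) / spsym
        # comparator bank: pick the nearest level on each rail
        out.append((levels[np.argmin(np.abs(levels - ci))],
                    levels[np.argmin(np.abs(levels - cq))]))
    return out
```

With an integer number of carrier cycles per symbol, the cross terms integrate to zero and the correlators recover the I and Q amplitudes directly, which is what lets the comparators replace a high-rate ADC in this scheme.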
Broadband Channel Based on Polar Codes At 2.3 GHz Frequency for 5G Networks in Digitalization Era
This research studies a broadband channel, with and without polar codes, that is affected by human blockage, using one of the 5G cellular network frequencies: 2.3 GHz with 99 MHz bandwidth, a 128-block Fast Fourier Transform (FFT), Cyclic-Prefix Orthogonal Frequency Division Multiplexing (CP-OFDM) and Binary Phase Shift Keying (BPSK) modulation. The use of high frequencies makes the technology sensitive to the surrounding environment and to attenuation such as human blockage. The purpose of this research is to determine and analyze the BER performance with and without polar codes on 5G broadband channels affected by human blockage. Broadband channel modeling on a 5G network is presented as a representative Power Delay Profile (PDP) under the influence of human blockage, yielding 41 paths with delays in multiples of 10 ns. Because the FFT uses 128 blocks, a scaling method is applied to the representative PDP, resulting in 9 paths with delays in multiples of 50 ns. The results of this study approach an average Bit Error Rate (BER) of 10^-4. Without polar codes, the BER performance under human blockage requires a signal-to-noise ratio (SNR) of 30 dB (the theoretical BER for BPSK modulation requires an SNR of 34.5 dB), while with polar codes an SNR of only 23 dB is required. These results indicate that polar codes reduce the required power by 7 dB relative to the system without them. Polar codes can minimize errors in the 5G network system because they are among the strongest codes and are one of the channel coding schemes recommended by the ITU for 5G network systems.
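The 7 dB figure above is a coding gain: the SNR difference between the coded and uncoded systems at the same target BER. The theoretical BPSK-over-AWGN side of such a comparison can be sketched in a few lines (the human-blockage channel itself is not modelled here; the bisection bounds are arbitrary assumptions):

```python
import math

def bpsk_ber(snr_db):
    """Theoretical BPSK bit error rate over AWGN: Q(sqrt(2*Eb/N0)),
    written via the complementary error function."""
    ebn0 = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def required_snr_db(target_ber, lo=-10.0, hi=60.0, iters=60):
    """Bisect for the SNR (dB) at which the BPSK BER falls to target_ber;
    bpsk_ber is monotonically decreasing in SNR, so bisection applies."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if bpsk_ber(mid) > target_ber:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Coding gain as reported in the study: SNR saved at the same BER
coding_gain_db = 30.0 - 23.0  # uncoded (30 dB) vs polar-coded (23 dB)
```

On a pure AWGN channel the 10^-4 target needs only about 8.4 dB; the much larger 30 dB (measured) and 34.5 dB (theoretical) figures in the abstract reflect the additional penalty of the blockage-affected multipath channel.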
Millimeter waves and massive MIMO for capacity and coverage optimization of 5G heterogeneous networks
Today's Long Term Evolution Advanced (LTE-A) networks cannot support
the exponential growth in mobile traffic forecast for the next decade. By
2020, according to Ericsson, 6 billion mobile subscribers worldwide are projected
to generate 46 exabytes of mobile data traffic monthly from 24 billion
connected devices, smartphones and short-range Internet of Things (IoT)
devices being the key prosumers. In response, 5G networks are foreseen
to markedly outperform legacy 4G systems. Triggered by the International
Telecommunication Union (ITU) under the IMT-2020 network initiative, 5G
will support three broad categories of use cases: enhanced mobile broadband
(eMBB) for multi-Gbps data rate applications; ultra-reliable and low latency
communications (URLLC) for critical scenarios; and massive machine
type communications (mMTC) for massive connectivity. Among the several
technology enablers being explored for 5G, millimeter-wave (mmWave)
communication, massive MIMO antenna arrays and ultra-dense small cell
networks (UDNs) feature as the dominant technologies. These technologies
in synergy are anticipated to provide the 1000x capacity increase for 5G
networks (relative to 4G) through the combined impact of large additional
bandwidth, spectral efficiency (SE) enhancement and high frequency reuse,
respectively. However, although these technologies can pave the way towards
gigabit wireless, there are still several challenges to solve in terms of
how we can fully harness the available bandwidth efficiently through appropriate
beamforming and channel modeling approaches. In this thesis, we
investigate the system performance enhancements realizable with mmWave
massive MIMO in 5G UDN and cellular infrastructure-to-everything (C-I2X)
application scenarios involving pedestrian and vehicular users. As a critical
component of the system-level simulation approach adopted in this thesis,
we implemented 3D channel models for the accurate characterization of the
wireless channels in these scenarios and for realistic performance evaluation.
To address the hardware cost, complexity and power consumption of the
massive MIMO architectures, we propose a novel generalized framework for
hybrid beamforming (HBF) array structures. The generalized model reveals
the opportunities that can be harnessed with the overlapped subarray structures
for a balanced trade-off between SE and energy efficiency (EE) of 5G
networks. The key results in this investigation show that mmWave massive
MIMO can deliver multi-Gbps rates for 5G whilst maintaining energy-efficient operation of the network.
Programa Doutoral em Telecomunicações
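The overlapped-subarray idea behind the thesis's generalized hybrid beamforming (HBF) framework can be sketched as a connection mask between RF chains and antennas. This is an illustrative simplification under assumed parameters, not the thesis's full model:

```python
import numpy as np

def overlapped_subarray_mask(n_rf, block, overlap):
    """Analog-stage connection mask F (n_ant x n_rf) for an
    overlapped-subarray hybrid beamformer: RF chain m drives `block`
    contiguous antennas, and neighbouring subarrays share `overlap`
    antennas. overlap=0 is the classic partially connected structure."""
    assert 0 <= overlap < block
    step = block - overlap
    n_ant = step * (n_rf - 1) + block     # total antennas in the array
    mask = np.zeros((n_ant, n_rf))
    for m in range(n_rf):
        mask[m * step:m * step + block, m] = 1.0
    return mask

# The count of nonzero entries (= phase shifters) interpolates between
# n_ant (partially connected) and n_ant * n_rf (fully connected) as
# overlap grows: the knob behind the SE/EE trade-off discussed above.
```

Increasing `overlap` buys beamforming flexibility, and hence spectral efficiency, at the cost of more phase shifters and combiner power, which is the balanced SE/EE trade-off the generalized framework exposes.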
End-to-End Simulation of 5G mmWave Networks
Due to its potential for multi-gigabit and low latency wireless links,
millimeter wave (mmWave) technology is expected to play a central role in 5th
generation cellular systems. While there has been considerable progress in
understanding the mmWave physical layer, innovations will be required at all
layers of the protocol stack, in both the access and the core network.
Discrete-event network simulation is essential for end-to-end, cross-layer
research and development. This paper provides a tutorial on a recently
developed full-stack mmWave module integrated into the widely used open-source
ns-3 simulator. The module includes a number of detailed statistical channel
models as well as the ability to incorporate real measurements or ray-tracing
data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and
highly customizable, making it easy to integrate algorithms or compare
Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example.
The module is interfaced with the core network of the ns-3 Long Term Evolution
(LTE) module for full-stack simulations of end-to-end connectivity, and
advanced architectural features, such as dual-connectivity, are also available.
To facilitate the understanding of the module, and verify its correct
functioning, we provide several examples that show the performance of the
custom mmWave stack as well as custom congestion control algorithms designed
specifically for efficient utilization of the mmWave channel.
Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018).
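ns-3 is a discrete-event simulator, and that event-driven style is what makes the cross-layer, end-to-end studies described above tractable. The core loop can be sketched generically; this is a toy illustration of the simulation paradigm, not ns-3's API:

```python
import heapq

def run_discrete_event_sim(initial_events, handlers, t_end):
    """Minimal discrete-event loop: pop the earliest event, dispatch it
    to its handler, and schedule any follow-on events it returns.
    Events are (time, seq, kind, payload); seq breaks timestamp ties."""
    queue = list(initial_events)
    heapq.heapify(queue)
    seq = len(queue)
    trace = []
    while queue:
        t, _, kind, payload = heapq.heappop(queue)
        if t > t_end:
            break
        trace.append((t, kind))
        # a handler returns (delay, next_kind, next_payload) tuples
        for dt, nkind, npayload in handlers[kind](t, payload):
            heapq.heappush(queue, (t + dt, seq, nkind, npayload))
            seq += 1
    return trace
```

A handler for, say, a MAC-layer "tx" event can schedule a PHY-layer "rx" event after a propagation delay, and so on up and down the stack, which is how a full-stack simulation chains the layers together in simulated time.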
FPGA Acceleration of 3GPP Channel Model Emulator for 5G New Radio
The channel model is by far the most computationally intensive part of link-level simulations of multiple-input multiple-output (MIMO) fifth-generation new radio (5G NR) communication systems. Simulation effort increases further when using more realistic geometry-based channel models, such as the three-dimensional spatial channel model (3D-SCM). Channel emulation is used for functional and performance verification of such models in the network planning phase. These models use multiple finite impulse response (FIR) filters and have a very high degree of parallelism, which can be exploited for accelerated execution on Field Programmable Gate Array (FPGA) and Graphics Processing Unit (GPU) platforms. This paper proposes an efficient reconfigurable implementation of the 3rd Generation Partnership Project (3GPP) 3D-SCM on FPGAs using a design flow based on high-level synthesis (HLS). It studies the effect of various HLS optimization techniques on total latency and hardware resource utilization on Xilinx Alveo U280 and Intel Arria 10 GX 1150 high-performance FPGAs, in both cases using the vendor's commercial HLS tools. Channel model accuracy is preserved by using double-precision floating-point arithmetic. This work analyzes in detail the effort needed to target the FPGA platforms with HLS tools, both in terms of the parallelization effort common to both FPGAs and in terms of platform-specific effort, which differs between Xilinx and Intel FPGAs. Compared to the baseline general-purpose central processing unit (CPU) implementation, the achieved speedups are 65X and 95X on the Xilinx UltraScale+ and Intel Arria FPGA platforms respectively, when using a Double Data Rate (DDR) memory interface. The FPGA-based designs also achieved ~3X better performance than a similar-technology-node NVIDIA GeForce GTX 1070 GPU, while consuming ~4X less energy.
The FPGA implementation speedup improves up to 173X over the CPU baseline when using the Xilinx UltraRAM (URAM) and High-Bandwidth Memory (HBM) resources, also achieving 6X lower latency and 12X lower energy consumption than the GPU implementation.
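The FIR-filter-bank structure whose parallelism the FPGA design exploits can be sketched as a functional model: each multipath cluster of the channel is one FIR filter over the same input, and the per-cluster outputs are superposed at the receiver. The per-cluster view and the tap values here are illustrative assumptions, not the 3GPP model's coefficients:

```python
import numpy as np

def fir_bank(x, taps):
    """Run a bank of independent FIR filters over the same input, one per
    multipath cluster. The convolutions are mutually independent, which
    is the data parallelism an FPGA maps onto its DSP slices (or a GPU
    onto its cores)."""
    return [np.convolve(x, h)[:len(x)] for h in taps]

def channel_output(x, taps):
    """Superpose the per-cluster contributions at the receiver."""
    return np.sum(fir_bank(x, taps), axis=0)
```

Because every filter sees the same input and writes an independent output, the bank scales to as many parallel hardware units as the device offers, which is why the abstract's reported speedups track the memory interface (DDR vs URAM/HBM) rather than compute alone.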