110 research outputs found

    Studio e realizzazione di un'architettura VLSI di un processore per l'implementazione dell'algoritmo FFT

    Get PDF
    Since the 5G connectivity standard is used by a growing number of devices and is evolving to meet new needs and requirements, it has become crucial to study and design new, faster, and more efficient transmitters and receivers. A fundamental role in 5G connectivity is played by orthogonal frequency-division multiplexing (OFDM), a modulation scheme. Since demodulation is based on the Fourier transform, the purpose of this thesis is to realize a processor capable of implementing FFT and DFT algorithms on variable-length sequences while complying with the criteria of the 5G standard. To this end, an analysis of the International Telecommunication Union report ITU-R M.2410-0 was first conducted to define the minimum requirements for the processor. A study of the state of the art for similar devices then led to the development of a VLSI architecture suited to the application. An RTL version of the architecture was implemented in VHDL and tested.
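
    As background for the role of the FFT in OFDM demodulation described above, here is a minimal numerical sketch in Python/numpy, not the thesis's VHDL design: it builds an OFDM symbol by placing QPSK symbols on subcarriers with an IFFT and a cyclic prefix, then demodulates it with an FFT. The parameter values (64 subcarriers, 16-sample cyclic prefix) are illustrative assumptions.

        import numpy as np

        N_FFT = 64        # number of subcarriers (illustrative)
        CP_LEN = 16       # cyclic-prefix length (illustrative)

        rng = np.random.default_rng(0)

        # Transmitter: random QPSK symbols on each subcarrier, IFFT, prepend cyclic prefix.
        qpsk = (2 * rng.integers(0, 2, N_FFT) - 1
                + 1j * (2 * rng.integers(0, 2, N_FFT) - 1)) / np.sqrt(2)
        time_symbol = np.fft.ifft(qpsk, N_FFT)
        tx = np.concatenate([time_symbol[-CP_LEN:], time_symbol])

        # Receiver: drop the cyclic prefix and demodulate with the FFT,
        # which recovers the per-subcarrier QPSK symbols.
        rx_symbols = np.fft.fft(tx[CP_LEN:], N_FFT)

        assert np.allclose(rx_symbols, qpsk)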

    Will SDN be part of 5G?

    Get PDF
    For many, this is no longer a valid question, and the case is considered settled, with SDN/NFV (Software-Defined Networking / Network Function Virtualization) providing the inevitable innovation enablers that solve many outstanding management issues in 5G. However, given the monumental task of softwarizing the radio access network (RAN) while 5G is just around the corner and some companies have already started unveiling their 5G equipment, the concern is very realistic that we may only see point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies the important obstacles in the way and reviews the state of the art of the relevant solutions. It differs from previous surveys on SDN-based RAN in that it focuses on the salient problems and discusses solutions proposed both within and outside the SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, the latency of general-purpose processors (GPPs), and the additional security vulnerabilities that softwarization brings to the RAN. We also provide a summary of the architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focused issues. This paper provides a comprehensive survey of the state of the art of SDN-based RAN and clearly points out the gaps in the technology. Comment: 33 pages, 10 figures

    Real-Time Waveform Prototyping

    Get PDF
    Fifth-generation mobile networks are characterized by diverse requirements and deployment scenarios. Three use cases are particularly relevant: 1) industrial applications demand real-time radio transmission with extremely low outage rates; 2) Internet-of-Things applications require connecting a large number of distributed sensors; and 3) data rates for applications such as video delivery have grown massively. These partly conflicting expectations, together with the demand for higher data rates in the Enhanced Mobile Broadband scenario and novel fifth-generation use cases such as Ultra-Reliable Low-Latency and Massive Machine-Type Communications, drive researchers and engineers to consider new concepts and technologies for future wireless communication systems. The goal is to identify promising candidate technologies among a vast number of new ideas and to decide which are suitable for implementation in future products. However, the challenges of meeting these demands are beyond the capabilities a single processing layer in a wireless network can offer, so several research domains have to exploit research ideas collaboratively. This thesis therefore presents a platform as a base for future applied, experimental research on wireless networks under realistic conditions, addressing three aspects. First, it gives an overview of state-of-the-art prototypes and testbed solutions, which attract great interest, demand, and funding opportunities; because the development effort depends strongly on the chosen platform properties and the selection process is complex due to the number of available options and their (hidden) implications, a guideline is presented, illustrated with several examples, for weighing expectations against the effort required for a prototype. Second, it introduces a flexible yet real-time physical-layer signal processor running on a software-defined radio platform. The processor allows important physical-layer parameters to be reconfigured at run time in order to generate a multitude of modern waveforms; four parameter settings, 'LLC', 'WiFi', 'eMBB', and 'IoT', are presented to reflect the requirements of different wireless applications and are used to evaluate the implementation presented in this work. Third, a generic test infrastructure is introduced that can be tailored to prototype diverse wireless technologies and is remotely accessible in order to involve external partners and invite new ideas from third parties. Using the test infrastructure, the performance of the flexible transceiver is evaluated with respect to latency, achievable throughput, and packet error rates. A public demonstration of a tactile-internet prototype, using robot arms in a multi-user environment, was carried out successfully and presented on several occasions.
    Contents: List of figures; List of tables; Abbreviations; Notations; 1 Introduction; 2 Background (system model, PHY layer structure, Generalized Frequency Division Multiplexing, wireless standards); 3 Wireless Prototyping (testbed examples, considerations, use cases and scenarios, requirements, methodology, hardware platform, software platform, deployment); 4 Flexible Transceiver (signal processing modules, analysis, discussion); 5 Testbed (infrastructure, automation, software-defined radio platform, radio-frequency front-ends, performance evaluation); 6 Experiments (single link, multi-user, 26 GHz mmWave experimentation); 7 Key Lessons; 8 Conclusions and Future Work; A Additional Resources (Fourier transform blocks, resource consumption, channel sounding using chirp sequences, hardware part list)
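
    The run-time-reconfigurable physical layer described above suggests a simple configuration abstraction. The sketch below is not taken from the thesis; it only illustrates how the four profiles ('LLC', 'WiFi', 'eMBB', 'IoT') might be modeled as selectable parameter sets covering subcarrier count, subcarrier spacing, modulation order, and cyclic-prefix length. All numeric values are illustrative assumptions, not the settings used in the work.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class PhyProfile:
            """One run-time-selectable physical-layer parameter set (values are illustrative)."""
            name: str
            num_subcarriers: int          # FFT size of the multicarrier modem
            subcarrier_spacing_hz: float
            modulation_order: int         # bits per QAM symbol
            cp_length: int                # cyclic-prefix length in samples

        # Hypothetical parameter sets reflecting the four application profiles.
        PROFILES = {
            "LLC":  PhyProfile("LLC",   64,    60e3, 2,   8),   # low latency: short symbols
            "WiFi": PhyProfile("WiFi",  64, 312.5e3, 4,  16),
            "eMBB": PhyProfile("eMBB", 2048,   15e3, 6, 144),   # high throughput: many subcarriers
            "IoT":  PhyProfile("IoT",  128,    15e3, 1,  32),   # robust, narrowband
        }

        def reconfigure(profile_name: str) -> PhyProfile:
            """Select a parameter set at run time, as a flexible transceiver would."""
            return PROFILES[profile_name]

        print(reconfigure("eMBB"))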

    Design of High Speed Memory-Based FFT Processor Using 90nm Technology

    Get PDF
    The Fast Fourier Transform (FFT) is an important operation in digital signal processing (DSP) systems and has been studied extensively in order to enhance performance. State-of-the-art transmission technology uses orthogonal frequency-division multiplexing (OFDM), whose primary operation is the FFT. This work presents the design of a high-speed memory-based FFT processor using 90 nm technology, employing a novel hybrid multiplier and hybrid adder. The main objective is to develop an efficient, memory-efficient FFT processor that requires less area. The proposed FFT processor was designed and implemented in a 90 nm CMOS (complementary metal-oxide-semiconductor) process. With its reduced processing time, the proposed FFT processor outperforms prior memory-based FFT processors and requires fewer LUTs, which reduces area and memory utilization.
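
    A memory-based FFT processor reuses a small number of butterfly units over a single shared data memory instead of instantiating a full pipeline. The Python sketch below is a software analogue only, not the processor described above: an iterative, in-place radix-2 FFT in which one butterfly "unit" repeatedly reads and writes the same array, mirroring how a memory-based architecture schedules butterflies over shared memory.

        import cmath

        def bit_reverse_permute(x):
            """Reorder the input into bit-reversed index order (in place)."""
            n = len(x)
            j = 0
            for i in range(1, n):
                bit = n >> 1
                while j & bit:
                    j ^= bit
                    bit >>= 1
                j |= bit
                if i < j:
                    x[i], x[j] = x[j], x[i]

        def fft_in_place(x):
            """Iterative radix-2 DIT FFT: one 'butterfly unit' sweeps a single shared array."""
            n = len(x)
            assert n and (n & (n - 1)) == 0, "length must be a power of two"
            bit_reverse_permute(x)
            m = 2
            while m <= n:
                w_m = cmath.exp(-2j * cmath.pi / m)      # stage twiddle factor
                for k in range(0, n, m):                 # each group of butterflies
                    w = 1.0
                    for j in range(m // 2):
                        t = w * x[k + j + m // 2]        # butterfly: read two memory words,
                        u = x[k + j]
                        x[k + j] = u + t                 # write both results back in place
                        x[k + j + m // 2] = u - t
                        w *= w_m
                m *= 2
            return x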

    Field Programmable Gate Arrays (FPGAs) II

    Get PDF
    This edited volume, Field Programmable Gate Arrays (FPGAs) II, is a collection of reviewed and relevant research chapters offering a comprehensive overview of recent developments in the field of computer and information science. The book comprises chapters authored by various researchers and edited by an expert active in the computer and information science research area. Each chapter is complete in itself, but all are united under a common research topic. This publication aims to provide a thorough overview of the latest research efforts by international authors on computer and information science and to open possible research paths for further novel developments.

    Realizing In-Memory Baseband Processing for Ultra-Fast and Energy-Efficient 6G

    Full text link
    To support emerging applications ranging from holographic communications to extended reality, next-generation mobile wireless communication systems require ultra-fast and energy-efficient baseband processors. Traditional complementary metal-oxide-semiconductor (CMOS)-based baseband processors face two challenges: transistor scaling and the von Neumann bottleneck. To address these challenges, in-memory computing-based baseband processors using resistive random-access memory (RRAM) present an attractive solution. In this paper, we propose and demonstrate RRAM-implemented in-memory baseband processing for the widely adopted multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) air interface. Its key feature is to execute the key operations, including the discrete Fourier transform (DFT) and MIMO detection using linear minimum mean square error (L-MMSE) and zero forcing (ZF), in one step. In addition, an RRAM-based channel estimation module is proposed and discussed. By prototyping and simulations, we demonstrate the feasibility of an RRAM-based full-fledged communication system in hardware, and large-scale simulations reveal that it can outperform state-of-the-art baseband processors with a gain of 91.2× in latency and 671× in energy efficiency. Our results pave a potential pathway for RRAM-based in-memory computing to be implemented in the era of sixth-generation (6G) mobile communications. Comment: arXiv admin note: text overlap with arXiv:2205.0356
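
    To make the "one-step" matrix operations mentioned above concrete, here is a minimal numpy sketch, not the paper's RRAM implementation, of zero-forcing and linear MMSE detection for a single MIMO-OFDM subcarrier; in an in-memory design, the matrix-vector products would be carried out by analog crossbar arrays rather than digital multipliers. The antenna dimensions and noise level are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n_tx, n_rx = 4, 8          # transmit / receive antennas (illustrative)
        noise_var = 0.01

        # Random Rayleigh channel, QPSK transmit vector, noisy observation y = H s + n.
        H = (rng.standard_normal((n_rx, n_tx))
             + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        s = (2 * rng.integers(0, 2, n_tx) - 1
             + 1j * (2 * rng.integers(0, 2, n_tx) - 1)) / np.sqrt(2)
        y = H @ s + np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx)
                                              + 1j * rng.standard_normal(n_rx))

        # Zero forcing: s_hat = (H^H H)^{-1} H^H y
        s_zf = np.linalg.solve(H.conj().T @ H, H.conj().T @ y)

        # Linear MMSE: s_hat = (H^H H + sigma^2 I)^{-1} H^H y
        s_mmse = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx), H.conj().T @ y)

        print(np.round(s_zf, 2), np.round(s_mmse, 2), np.round(s, 2))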

    The Four-C Framework for High Capacity Ultra-Low Latency in 5G Networks: A Review

    Get PDF
    Network latency will be a critical performance metric for fifth-generation (5G) networks, expected to be fully rolled out in 2020 through the IMT-2020 project. Multi-user multiple-input multiple-output (MU-MIMO) technology is a key enabler for the 5G massive-connectivity criterion, especially from the massive-densification perspective. Naturally, 5G MU-MIMO will face a daunting task in achieving an end-to-end 1 ms ultra-low latency budget if traditional network set-up criteria are strictly adhered to. Moreover, 5G latency will have the added dimensions of scalability and flexibility compared to previously deployed technologies: the scalability dimension caters for meeting rapid demand as new applications evolve, while the flexibility dimension complements it by investigating novel non-stacked protocol architectures. The goal of this review paper is to present an ultra-low-latency reduction framework for 5G communications that considers flexibility and scalability. The four-C framework, consisting of cost, complexity, cross-layer, and computing, is analyzed and discussed; it covers several emerging technologies, namely software-defined networking (SDN), network function virtualization (NFV), and fog networking. This review will contribute significantly towards the future implementation of flexible, high-capacity, ultra-low-latency 5G communications.

    Baseband Processing for 5G and Beyond: Algorithms, VLSI Architectures, and Co-design

    Get PDF
    In recent years, the number of connected devices and the demand for high data rates have increased significantly. This enormous growth is made more pronounced by the introduction of the Internet of Things (IoT), in which several devices are interconnected to exchange data for various applications such as smart homes and smart cities. Moreover, new applications such as eHealth, autonomous vehicles, and connected ambulances set new demands on the reliability, latency, and data rate of wireless communication systems, pushing technology development forward. Massive multiple-input multiple-output (MIMO) is a technology employed in the 5G standard that offers the means to fulfill these requirements. In massive MIMO systems, the base station (BS) is equipped with a very large number of antennas and serves several user equipments (UEs) simultaneously in the same time and frequency resource. The high spatial multiplexing in massive MIMO systems improves the data rate, energy and spectral efficiency, and link reliability of wireless communication systems. The link reliability can be further improved by employing channel coding techniques. Spatially coupled serially concatenated codes (SC-SCCs) are promising channel coding schemes that can meet the high-reliability demands of wireless communication systems beyond 5G (B5G). Given their close-to-capacity error-correction performance and the potential to implement a high-throughput decoder, this class of codes is a good candidate for wireless systems beyond 5G. To achieve the above-mentioned advantages, sophisticated algorithms are required, which impose challenges on baseband signal processing. In massive MIMO systems, the processing is much more computationally intensive, and the memory required to store channel data increases significantly compared to conventional MIMO systems, owing to the large size of the channel state information (CSI) matrix. In addition to the high computational complexity, meeting latency requirements is crucial. Similarly, the decoding-performance gain of SC-SCCs comes at the expense of increased implementation complexity. Moreover, selecting proper design parameters, a decoding algorithm, and an architecture is challenging, since spatial coupling provides new degrees of freedom in code design and the design space therefore becomes huge. The focus of this thesis is co-optimization at different design levels to address the aforementioned challenges and requirements. To this end, we employ system-level characteristics to develop efficient algorithms and architectures for the following functional blocks of digital baseband processing. First, we present a fast Fourier transform (FFT), an inverse FFT (IFFT), and the corresponding reordering scheme, which significantly reduce the latency of orthogonal frequency-division multiplexing (OFDM) demodulation and modulation as well as the size of the reordering memory. The corresponding VLSI architectures, along with application-specific integrated circuit (ASIC) implementation results in a 28 nm CMOS technology, are introduced. For a 2048-point FFT/IFFT, the proposed design leads to a 42% reduction in latency and in the size of the reordering memory. Second, we propose a low-complexity massive MIMO detection scheme.
    The key idea is to exploit channel sparsity to reduce the size of the CSI matrix and then perform linear detection followed by non-linear post-processing in the angular domain using the compressed CSI matrix. The VLSI architecture for a massive MIMO system with 128 BS antennas and 16 UEs, along with synthesis results in a 28 nm technology, is presented. The proposed scheme reduces the complexity and required memory by 35%–73% compared to traditional detectors while achieving better detection performance. Finally, we perform a comprehensive design-space exploration for SC-SCCs to investigate the effect of different design parameters on decoding performance, latency, complexity, and hardware cost. We then develop different decoding algorithms for SC-SCCs and discuss the associated decoding performance and complexity. Several high-level VLSI architectures, along with the corresponding synthesis results in a 12 nm process, are also presented, and various design tradeoffs are provided for these decoding schemes.
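
    As a rough illustration of the channel-sparsity idea summarized above (a simplified numpy sketch, not the thesis's detector or architecture), the following transforms the CSI matrix into the angular domain with a DFT across the BS antennas, keeps only the strongest beams to form a compressed CSI matrix, and then runs standard linear MMSE detection on the compressed model. The matrix sizes, beam-selection rule, and channel model are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        n_bs, n_ue, n_beams = 128, 16, 32   # BS antennas, users, retained beams (illustrative)
        noise_var = 0.01

        # Sparse angular channel: each user illuminates only a few beams of the BS array.
        F = np.fft.fft(np.eye(n_bs)) / np.sqrt(n_bs)            # unitary DFT (beamspace) basis
        H_ang = np.zeros((n_bs, n_ue), dtype=complex)
        for u in range(n_ue):
            beams = rng.choice(n_bs, size=4, replace=False)     # 4 active beams per user
            H_ang[beams, u] = rng.standard_normal(4) + 1j * rng.standard_normal(4)
        H = F.conj().T @ H_ang                                   # antenna-domain channel

        # Receive signal for random QPSK transmit symbols.
        s = (2 * rng.integers(0, 2, n_ue) - 1
             + 1j * (2 * rng.integers(0, 2, n_ue) - 1)) / np.sqrt(2)
        y = H @ s + np.sqrt(noise_var / 2) * (rng.standard_normal(n_bs)
                                              + 1j * rng.standard_normal(n_bs))

        # Angular-domain compression: keep the n_beams strongest beamspace rows of the CSI.
        Hb = F @ H                                               # beamspace CSI
        yb = F @ y
        keep = np.argsort(np.linalg.norm(Hb, axis=1))[-n_beams:]
        Hc, yc = Hb[keep], yb[keep]                              # compressed CSI and observation

        # Linear MMSE detection on the compressed model.
        s_hat = np.linalg.solve(Hc.conj().T @ Hc + noise_var * np.eye(n_ue), Hc.conj().T @ yc)
        print(np.mean(np.abs(np.sign(s_hat.real) - np.sign(s.real))))  # crude error indicator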