4,729 research outputs found

    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on the use of an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses all important requirements of future wireless standards: a large capacity increase, support for many simultaneous users, and improved energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains and computational operations on large matrices. The complexity of this digital processing was long viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in interconnecting many signals. For example, prototype ASIC implementations have demonstrated real-time zero-forcing precoding at a power consumption of 55 mW (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction in power consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating with low-complexity RF and analog hardware chains are clarified. It illustrates how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes is discussed. Open challenges and directions for future research are suggested.
    Comment: Submitted to IEEE Transactions on Signal Processing.
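    The zero-forcing precoder mentioned above reduces to a right pseudo-inverse of the downlink channel matrix. Below is a minimal NumPy sketch for the 128-antenna, 8-terminal configuration cited in the abstract; the i.i.d. Rayleigh channel and the power-normalization convention are illustrative assumptions, not details taken from the prototype ASIC.

```python
import numpy as np

M, K = 128, 8    # base-station antennas, single-antenna user terminals (as in the cited prototype)

rng = np.random.default_rng(0)
# Illustrative i.i.d. Rayleigh downlink channel (K x M); the real prototype processes measured channels.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of H, so the effective channel H @ W is diagonal.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W)                     # total-power normalization (assumed convention)

s = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)   # user data symbols
x = W @ s                                  # per-antenna transmit signal for the M RF chains

G = H @ W                                  # effective channel seen by the terminals
leak = np.abs(G - np.diag(np.diag(G))).max()
print(f"max inter-user leakage after zero-forcing: {leak:.2e}")   # ~1e-15, i.e. interference-free
```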

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the automated AI/ML approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design in the future, aiming for high-speed, highly intelligent, and efficient implementations.
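    As a concrete, hypothetical illustration of the kind of automated learning the survey covers, the sketch below trains a random-forest regressor as a surrogate model that predicts a stand-in "path delay" metric from simple design features; the feature set, the synthetic data, and the scikit-learn model choice are assumptions made for illustration, not methods attributed to any specific work in the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic "design database": each row is a timing path with illustrative features
# (fan-out, wire length in um, logic depth, supply voltage in V).
n = 2000
X = np.column_stack([
    rng.integers(1, 16, n),            # fan-out
    rng.uniform(5.0, 500.0, n),        # wire length
    rng.integers(2, 40, n),            # logic depth
    rng.uniform(0.6, 1.0, n),          # supply voltage
])

# Assumed ground-truth delay model with noise, standing in for signoff timing results.
delay = (0.02 * X[:, 0] + 0.001 * X[:, 1] + 0.05 * X[:, 2]) / X[:, 3]
delay += rng.normal(0, 0.02, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, delay, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# The trained surrogate can screen candidate paths far faster than a full timing run.
print(f"R^2 on held-out paths: {r2_score(y_te, model.predict(X_te)):.3f}")
```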

    Roadmap on semiconductor-cell biointerfaces

    This roadmap outlines the role semiconductor-based materials play in understanding the complex biophysical dynamics at multiple length scales, as well as the design and implementation of next-generation electronic, optoelectronic, and mechanical devices for biointerfaces. The roadmap emphasizes the advantages of semiconductor building blocks in interfacing, monitoring, and manipulating the activity of biological components, and discusses the possibility of using active semiconductor-cell interfaces for discovering new signaling processes in the biological world.

    A framework for fine-grain synthesis optimization of operational amplifiers

    This thesis presents a cell-level framework for Operational Amplifier Synthesis (OASYN) that couples circuit design and layout. For circuit design, the tool applies a corner-driven optimization that accounts for on-chip performance variations. By exploring the process, voltage, and temperature (PVT) variation space, the tool extracts the worst-case design solution. The tool applies sensitivity analysis along with Pareto optimality to achieve the required specifications. For the layout phase, OASYN generates a DRC-proven automated layout from a sized circuit-level description. Murata et al. (1996) introduced an elegant representation of block placement for general floorplans called the sequence pair (SP). Like TCG and BSG, but unlike O-tree, B*-tree, and CBL, SP is P-admissible. Unlike SP, TCG supports incremental update during operation and keeps the information of the boundary modules, as well as their relative positions, in the representation. Block placement algorithms based on SP rely on heuristic optimization, e.g., simulated annealing, in which a large number of sequence pairs must be generated. A fast algorithm is therefore needed to evaluate the sequence pair after each solution perturbation. The thesis presents a new, simple, and efficient O(n)-runtime algorithm for fast incremental update of the cost evaluation. The algorithm integrates the advantages of the sequence pair and the transitive closure graph into TCG-S*, a superior topology-update scheme that facilitates the search for the desired optimum floorplan. Experiments show that TCG-S* outperforms existing works in terms of area utilization and convergence speed. Routing-aware placement is implemented in OASYN, handling symmetry constraints (e.g., interdigitation, common centroid) along with congestion elimination and the enhancement of placement routability.
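    To make the sequence-pair machinery concrete, here is a minimal Python sketch that derives block coordinates and floorplan area from a sequence pair using the standard horizontal and vertical ordering rules. It is a plain O(n^2) evaluator for illustration only, not the O(n) incremental TCG-S* update scheme the thesis proposes, and the block dimensions are made up.

```python
def evaluate_sequence_pair(gamma_plus, gamma_minus, sizes):
    """Return (width, height, positions) of the packing implied by a sequence pair.

    gamma_plus, gamma_minus: the two block orderings (lists of block names).
    sizes: dict mapping block name -> (width, height).
    Rule: a is left of b  iff a precedes b in both sequences;
          a is below b    iff a follows b in gamma_plus and precedes b in gamma_minus.
    """
    p = {b: i for i, b in enumerate(gamma_plus)}
    q = {b: i for i, b in enumerate(gamma_minus)}
    x, y = {}, {}

    # x-coordinates: process in gamma_plus order so every "left of" predecessor is already placed.
    for b in gamma_plus:
        x[b] = max((x[a] + sizes[a][0] for a in x if p[a] < p[b] and q[a] < q[b]), default=0)

    # y-coordinates: process in gamma_minus order so every "below" predecessor is already placed.
    for b in gamma_minus:
        y[b] = max((y[a] + sizes[a][1] for a in y if p[a] > p[b] and q[a] < q[b]), default=0)

    width = max(x[b] + sizes[b][0] for b in sizes)
    height = max(y[b] + sizes[b][1] for b in sizes)
    return width, height, {b: (x[b], y[b]) for b in sizes}


# Toy example with made-up block dimensions.
sizes = {"A": (4, 6), "B": (3, 7), "C": (5, 3), "D": (6, 4)}
w, h, pos = evaluate_sequence_pair(["A", "B", "C", "D"], ["B", "A", "D", "C"], sizes)
print(f"area = {w * h} ({w} x {h}), positions = {pos}")
```

    In a simulated-annealing loop, each perturbation (swapping two blocks in one or both sequences) would be followed by such an evaluation, which is exactly why the thesis targets a faster incremental update.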

    Ultra-low power logic in memory with commercial grade memristors and FPGA-based smart-IMPLY architecture

    Reducing power consumption in today's computing technologies represents an increasingly difficult challenge. Conventional computing architectures suffer from the so-called von Neumann bottleneck (VNB), namely the continuous need to exchange data and instructions between the memory and the processing unit, which leads to significant and apparently unavoidable power consumption. Even the hardware typically employed to run Artificial Intelligence (AI) algorithms, such as Deep Neural Networks (DNNs), suffers from this limitation. A paradigm shift is therefore needed to comply with the ever-increasing demand for ultra-low-power, autonomous, and intelligent systems. From this perspective, emerging memristive non-volatile memories are considered a good candidate to lead this technological transition toward next-generation hardware platforms, since they make it possible to store and process information in the same place, thereby bypassing the VNB. To evaluate the state of currently available commercial devices, in this work commercial-grade packaged Self-Directed Channel (SDC) memristors are thoroughly studied to evaluate their performance in the framework of in-memory computing. Specifically, the operating conditions allowing both analog update of the synaptic weight and stable binary switching are identified, along with the associated issues. For this purpose, a dedicated yet prototypical system based on an FPGA control platform is designed and realized. It is then exploited to fully characterize the power consumption of an innovative Smart IMPLY (SIMPLY) Logic-in-Memory (LiM) computing framework that allows reliable in-memory computation of classical Boolean operations. Projecting these results to the nanosecond regime leads to an estimate of the real potential of this computing paradigm. Although not investigated in this work, the presented platform can also be exploited to test memristor-based spiking neural networks (SNNs) and binarized DNNs (BNNs), which can be combined with LiM to provide the heterogeneous, flexible architecture envisioned as the long-term goal for ubiquitous and pervasive AI.
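    For readers unfamiliar with IMPLY-based logic-in-memory, the sketch below models memristor states as bits and shows how material implication (IMPLY) plus a FALSE (reset) step compose a universal NAND gate. This is a purely behavioral illustration of the generic logic scheme, not a model of the SIMPLY framework, the SDC devices, or the FPGA platform described in the paper.

```python
# Behavioral model: each memristor is a named cell holding a logic state (0 = HRS, 1 = LRS).
cells = {}

def false_op(t):
    """FALSE: unconditionally reset memristor t to logic 0 (high-resistance state)."""
    cells[t] = 0

def imply(p, t):
    """IMPLY: t <- (NOT p) OR t, computed in place on memristor t with p as the condition."""
    cells[t] = int((not cells[p]) or cells[t])

def nand(p, q, work):
    """NAND(p, q) realized with one FALSE and two IMPLY steps on a work memristor."""
    false_op(work)          # work = 0
    imply(p, work)          # work = NOT p
    imply(q, work)          # work = (NOT q) OR (NOT p) = NAND(p, q)
    return cells[work]

# Exhaustive check of the truth table.
for a in (0, 1):
    for b in (0, 1):
        cells.update({"p": a, "q": b})
        print(f"NAND({a}, {b}) = {nand('p', 'q', 'w')}")
```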

    Circuits and Systems Advances in Near Threshold Computing

    Modern society is witnessing a sea change in ubiquitous computing, in which people have embraced computing systems as an indispensable part of day-to-day existence. The computation, storage, and communication abilities of smartphones, for example, have undergone monumental changes over the past decade. At the same time, the global emphasis on creating and sustaining green environments is leading to a rapid and ongoing proliferation of edge computing systems and applications. As a broad spectrum of healthcare, home, and transport applications shifts to the edge of the network, near-threshold computing (NTC) is emerging as one of the most promising low-power computing platforms. An NTC device sets its supply voltage close to the transistor threshold voltage, dramatically reducing energy consumption. Despite showing substantial promise in terms of energy efficiency, NTC has yet to see wide-scale commercial adoption. This is because circuits and systems operating at NTC suffer from several problems, including increased sensitivity to process variation, reliability issues, performance degradation, and security vulnerabilities, to name a few. To realize its potential, we need designs, techniques, and solutions that overcome these challenges associated with NTC circuits and systems. Readers of this book will be able to familiarize themselves with recent advances in electronic systems, with a focus on near-threshold computing.
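    To give a rough quantitative feel for why near-threshold operation saves energy at the cost of speed, the sketch below evaluates the textbook CMOS relations (dynamic energy roughly C * Vdd^2 per switching event, and an alpha-power-law gate delay) at a nominal and a near-threshold supply voltage. The capacitance, threshold voltage, and alpha values are illustrative assumptions, not figures taken from the book.

```python
# Illustrative first-order CMOS model (assumed parameters, not taken from the book).
C = 1e-12        # switched capacitance per operation [F]
V_TH = 0.45      # transistor threshold voltage [V]
ALPHA = 1.3      # velocity-saturation exponent in the alpha-power delay law
K_DELAY = 1e-10  # technology-dependent delay constant (arbitrary scale)

def dynamic_energy(vdd):
    """Dynamic energy per switching event: E ~ C * Vdd^2."""
    return C * vdd ** 2

def gate_delay(vdd):
    """Alpha-power-law gate delay: t ~ Vdd / (Vdd - Vth)^alpha."""
    return K_DELAY * vdd / (vdd - V_TH) ** ALPHA

nominal, ntc = 1.0, 0.55   # super-threshold vs. near-threshold supply [V]
for vdd in (nominal, ntc):
    e, d = dynamic_energy(vdd), gate_delay(vdd)
    print(f"Vdd = {vdd:.2f} V : energy/op = {e * 1e15:6.1f} fJ, "
          f"relative delay = {d / gate_delay(nominal):4.1f}x")
```

    With these assumed numbers, dropping the supply from 1.0 V to 0.55 V cuts dynamic energy per operation by roughly 3x while slowing the gate by roughly 5x, which is the basic trade-off that motivates the variation, reliability, and performance techniques surveyed in the book.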