
    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses all important requirements of future wireless standards: a great capacity increase, support for many simultaneous users, and improved energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains and computational operations on large matrices. In the past, the complexity of this digital processing was viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in interconnecting many signals. For example, prototype ASIC implementations have demonstrated zero-forcing precoding in real time at 55 mW power consumption (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction in consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating with low-complexity RF and analog hardware chains are clarified, and it is illustrated how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes is discussed, and open challenges and directions for future research are suggested. Comment: submitted to IEEE Transactions on Signal Processing.
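    As a rough illustration of the matrix operations involved, zero-forcing precoding for a 128-antenna, 8-terminal downlink (matching the prototype configuration cited above) can be sketched in a few lines of NumPy. The i.i.d. Rayleigh channel model and all dimensions here are illustrative assumptions, not the prototype's actual implementation:

```python
import numpy as np

# Zero-forcing (ZF) precoding sketch for a Massive MIMO downlink.
# Illustrative parameters; the 128x8 setup mirrors the ASIC prototype
# configuration mentioned in the abstract, but the channel is assumed.
M, K = 128, 8  # base-station antennas, single-antenna terminals

rng = np.random.default_rng(0)
# Assumed i.i.d. Rayleigh-fading channel: K terminals x M antennas
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# ZF precoder: right pseudo-inverse of H, so that H @ W = I_K
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # symbols for K users
x = W @ s          # transmitted vector across the M antennas
y = H @ x          # noiseless received signals at the terminals

assert np.allclose(y, s)  # inter-user interference is removed
```

    The K x K matrix inversion and the M x K matrix products are the operations whose cost the article's co-design techniques target.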

    Circuit designs for low-power and SEU-hardened systems

    The desire to have smaller and faster portable devices is one of the primary motivations for technology scaling. Though advancements in device physics are moving at a good pace, they might not be aggressive enough for present-day technology scaling trends. As a result, the MOS devices used in present-day integrated circuits are pushed to the limit in terms of performance, power consumption, and robustness, which are the most critical criteria for almost all applications. Secondly, technology advancements have led to the design of complex chips with increasing chip densities and higher operating speeds. The design of such high-performance complex chips (microprocessors, digital signal processors, etc.) has massively increased power dissipation and, as a result, the operating temperatures of these integrated circuits. In addition, aggressive technology scaling is reducing the heat-withstanding capability of circuits, thereby increasing the cost of packaging and heat-sink units. This has increased the prominence of smarter and more robust low-power circuit and system designs. Apart from power consumption, another criterion affected by technology scaling is the robustness of the design, particularly for critical applications (security, medical, finance, etc.); hence the need for error-free or error-immune designs. Until recently, radiation effects were a major concern in space applications only. With technology scaling reaching the nanometer level, terrestrial radiation has become a growing concern, and Single Event Upsets (SEUs) have become a major challenge to robust design. A single event upset is a temporary change in the state of a device due to a particle strike (usually from the radiation belts or from cosmic rays) which may manifest as an error at the output. This thesis proposes a novel method for adaptive digital designs to work efficiently with the lowest possible power consumption. This new technique improves options in performance, robustness, and power. The thesis also proposes a new dual data rate flip-flop, which halves the necessary clock speed, drastically reducing power consumption. This dual data rate flip-flop design culminates in a proposed unique radiation-hardened dual data rate flip-flop, 'Firebird'. Firebird offers a valuable addition to future circuit designs, especially given the increasing importance of Single Event Upsets (SEUs) and power dissipation with technology scaling.
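    The throughput argument behind a dual data rate flip-flop can be sketched behaviorally: capturing data on both clock edges delivers the same data rate at half the clock frequency, and dynamic power scales with that frequency. The model below is a hypothetical behavioral sketch, not the circuit proposed in the thesis:

```python
# Behavioral sketch of a dual data rate (DDR) flip-flop: data is latched
# on BOTH clock edges, so the same throughput needs only half the clock
# frequency of a single-edge design (dynamic power scales with f_clk).
def ddr_capture(clock, data):
    """Return values latched on every clock transition (rising or falling)."""
    captured = []
    prev = clock[0]
    for clk, d in zip(clock[1:], data[1:]):
        if clk != prev:  # any edge, not just the rising one
            captured.append(d)
        prev = clk
    return captured

clock = [0, 1, 0, 1, 0]            # two full clock cycles
data = ['a', 'b', 'c', 'd', 'e']
print(ddr_capture(clock, data))    # → ['b', 'c', 'd', 'e']
```

    Two clock cycles yield four captured samples, where a conventional rising-edge flip-flop would yield two; equivalently, the clock can run at half the frequency for the same data rate.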

    STUDY OF RADIATION EFFECTS IN GAN-BASED DEVICES

    Radiation tolerance of wide-bandgap Gallium Nitride (GaN) high-electron-mobility transistors (HEMTs) has been studied, including X-ray-induced TID effects, heavy-ion-induced single event effects, and neutron-induced single event effects. A threshold voltage shift is observed in X-ray irradiation experiments, which recovers over time, indicating that no permanent damage forms inside the device. Heavy-ion radiation effects in GaN HEMTs have been studied as a function of bias voltage, ion LET, radiation flux, and total fluence. A statistically significant amount of heavy-ion-induced gate dielectric degradation was observed, consisting of both hard breakdown and soft breakdown. Specific critical injection level experiments were designed and carried out to explore the gate dielectric degradation mechanism further. Transient device simulations determined the ion-induced peak transient electric field and its duration for a variety of ion LETs, ion injection locations, and applied drain voltages. Results demonstrate that the peak transient electric fields exceed the breakdown strength of the gate dielectric, leading to dielectric defect generation and breakdown. GaN power device lifetime degradation caused by neutron irradiation is reported. Hundreds of devices were stressed in the off-state at drain voltages from 75 V to 400 V while irradiated with a high-intensity neutron beam. Observing a statistically significant number of neutron-induced destructive single-event effects (DSEEs) enabled an accurate extrapolation of terrestrial field failure rates. Nuclear event and electronic simulations were performed to model the effect of terrestrial-neutron secondary-ion-induced gate dielectric breakdown. Combined with the TCAD simulation results, we believe that heavy-ion-induced and neutron-induced single-event gate rupture (SEGR) share common physical mechanisms behind the failures.
Overall, experimental data and simulation results provide evidence supporting the idea that both radiation-induced soft breakdown (SBD) and hard breakdown (HBD) are associated with defect-related conduction paths formed across the dielectric in response to radiation-induced charge injection. A percolation-theory-based dielectric degradation model is proposed, which explains the dielectric breakdown behaviors observed in heavy-ion irradiation experiments.

    Retention and application of Skylab experiences to future programs

    The problems encountered and the special techniques and procedures developed on the Skylab program are described, along with the experiences and practical benefits obtained, for dissemination and use on future programs. Three major topics are discussed: electrical problems, mechanical problems, and special techniques. Special techniques and procedures are identified that were either developed or refined during the Skylab program. These techniques and procedures came from all manufacturing and test phases of the Skylab program and include both flight and GSE items, from the component level to sophisticated spaceflight systems.

    Circuits and Systems Advances in Near Threshold Computing

    Modern society is witnessing a sea change in ubiquitous computing, in which people have embraced computing systems as an indispensable part of day-to-day existence. The computation, storage, and communication abilities of smartphones, for example, have undergone monumental changes over the past decade. At the same time, a global emphasis on creating and sustaining green environments is leading to a rapid and ongoing proliferation of edge computing systems and applications. As a broad spectrum of healthcare, home, and transport applications shifts to the edge of the network, near-threshold computing (NTC) is emerging as one of the most promising low-power computing platforms. An NTC device sets its supply voltage close to its threshold voltage, dramatically reducing energy consumption. Despite showing substantial promise in terms of energy efficiency, NTC is yet to see wide-scale commercial adoption, because circuits and systems operating with NTC suffer from several problems, including increased sensitivity to process variation, reliability problems, performance degradation, and security vulnerabilities. To realize its potential, we need designs, techniques, and solutions that overcome the challenges associated with NTC circuits and systems. The readers of this book will be able to familiarize themselves with recent advances in electronic systems, focusing on near-threshold computing.
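    The energy trade-off NTC exploits can be sketched with a first-order model: dynamic energy per operation scales roughly as CV², while gate delay grows sharply as the supply voltage approaches the threshold voltage. The alpha-power delay model and every parameter value below are illustrative assumptions, not measurements from any particular process:

```python
# First-order NTC trade-off sketch (all values are assumed/illustrative):
# dynamic energy per operation ~ C * V^2, gate delay ~ V / (V - Vth)^alpha
C = 1e-12      # switched capacitance per operation (F), assumed
Vth = 0.3      # threshold voltage (V), assumed
alpha = 1.3    # velocity-saturation exponent, assumed

def energy_per_op(V):
    return C * V * V

def relative_delay(V):
    return V / (V - Vth) ** alpha

# Compare nominal, moderate, and near-threshold supply voltages
for V in (1.0, 0.6, 0.4):
    e = energy_per_op(V) / energy_per_op(1.0)
    d = relative_delay(V) / relative_delay(1.0)
    print(f"V = {V:.1f} V  energy = {e:.2f}x  delay = {d:.1f}x")
```

    Under these assumptions, dropping from 1.0 V to 0.4 V cuts energy per operation by roughly 6x while delay grows by roughly 5x, which is why NTC suits throughput-oriented edge workloads but is sensitive to any variation in Vth.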

    Can Carbon Sinks be Operational? An RFF Workshop Summary

    An RFF Workshop brought together experts from around the world to assess the feasibility of using biological sinks to sequester carbon as part of a global atmospheric mitigation effort. The chapters of these proceedings are a result of that effort. Although the intent of the workshop was not to generate a consensus, a number of studies suggest that sinks could be a relatively inexpensive and effective carbon management tool. The chapters cover a variety of aspects and topics related to the monitoring and measurement of carbon in biological systems. They tend to support the view that carbon sequestration using biological systems is technically feasible with relatively good precision and at relatively low cost. Thus carbon sinks can be operational. Keywords: carbon, sinks, global warming, sequestration, forests.

    X-ray reverberation around accreting black holes

    Luminous accreting stellar-mass and supermassive black holes produce power-law continuum X-ray emission from a compact central corona. Reverberation time lags occur due to light-travel time delays between changes in the direct coronal emission and corresponding variations in its reflection from the accretion flow. Reverberation is detectable using light curves made in different X-ray energy bands, since the direct and reflected components have different spectral shapes. Larger, lower-frequency lags are also seen and are identified with the propagation of fluctuations through the accretion flow and associated corona. We review the evidence for X-ray reverberation in active galactic nuclei and black hole X-ray binaries, showing how it can best be measured and how it may be modelled. The timescales and energy dependence of the high-frequency reverberation lags show that in some objects much of the signal originates from very close to the black hole, within a few gravitational radii of the event horizon. We consider how these signals can be studied in the future to carry out X-ray reverberation mapping of the regions closest to black holes. Comment: 72 pages, 32 figures. Accepted for publication in The Astronomy and Astrophysics Review. Corrected for mostly minor typos; in particular, errors are corrected in the denominators of the covariance and rms spectrum error equations (Eqns. 14 and 15).
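    The lag-measurement idea can be sketched numerically: a reverberation lag appears as a frequency-dependent phase in the cross-spectrum between light curves in two energy bands. The synthetic light curves, the 5-sample delay, and the sign convention below are illustrative assumptions, not the review's actual analysis (which averages cross-spectra over segments and frequency bins):

```python
import numpy as np

# Minimal sketch: recover a time lag between two X-ray energy bands from
# the phase of their cross-spectrum. Signals and parameters are synthetic.
rng = np.random.default_rng(1)
n, dt, delay = 4096, 0.1, 5            # samples, time step (s), lag (samples)

direct = rng.standard_normal(n)        # stand-in for the coronal continuum band
reflected = np.roll(direct, delay)     # reflected band: delayed copy of direct

D = np.fft.rfft(direct)
R = np.fft.rfft(reflected)
cross = np.conj(R) * D                 # sign convention: positive lag means
freqs = np.fft.rfftfreq(n, d=dt)       # the reflected band lags the direct band

# time lag at each frequency = cross-spectrum phase / (2 * pi * f)
lags = np.angle(cross[1:]) / (2 * np.pi * freqs[1:])

low = freqs[1:] < 1.0 / (4 * delay * dt)   # stay well below phase wrapping
print(np.median(lags[low]))                # ≈ delay * dt = 0.5 s
```

    Above the frequency where the phase wraps past ±π, the measured lag aliases; real analyses restrict to (or model) the frequency range where the lag signal is coherent.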