
    Novel Multicarrier Memory Channel Architecture Using Microwave Interconnects: Alleviating the Memory Wall

    The increase in computing power has simultaneously increased the demand for input/output (I/O) bandwidth. Unfortunately, the speed of I/O and memory interconnects has not kept pace, so processor-based systems are I/O- and interconnect-limited. Aggregate memory bandwidth is not scaling fast enough to keep up with increasing demand, a phenomenon for which the term "memory wall" has been coined. A new memory bus concept that has the potential to push double data rate (DDR) memory speed to 30 Gbit/s is presented. We propose to map the conventional DDR bus onto a microwave link using a multicarrier frequency division multiplexing scheme. The memory bus is formed by a microwave signal carried within a waveguide; we call this approach the multicarrier memory channel architecture (MCMCA). In MCMCA, each memory signal is modulated onto an RF carrier using a 64-QAM format or higher. The carriers are then routed over substrate integrated waveguide (SIW) interconnects. At the receiver, the memory signals are demodulated and delivered to SDRAM devices. We pioneered the use of SIW as a memory channel interconnect and demonstrated that it alleviates the memory bandwidth bottleneck. We demonstrated the superiority of SIW over conventional transmission lines in immunity to crosstalk and electromagnetic interference. We developed a methodology based on design of experiments (DOE) and response surface method techniques that optimizes the design of SIW interconnects and minimizes their performance fluctuations under material and manufacturing variations. Along with SIW, we implemented a multicarrier architecture that enabled the aggregated DDR bandwidth to reach 30 Gbit/s. We developed an end-to-end system model in Simulink and demonstrated MCMCA performance for an ultra-high-throughput memory channel. Experimental characterization of the new channel shows that, with judicious frequency division multiplexing, a single SIW interconnect is sufficient to transmit the 64 DDR bits. The overall aggregated bus achieves a 240 GByte/s data transfer rate with an EVM not exceeding 2.26% and a phase error of 1.07 degrees or less.
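The channel concept above, DDR bits mapped to 64-QAM symbols on frequency-division-multiplexed carriers, can be shown in a toy form. The sketch below is an illustrative Python model, not the dissertation's Simulink implementation; the bit mapping and the padding of 64 bits to 66 are assumptions chosen only to show that one transfer fits on ceil(64/6) = 11 subcarriers at 6 bits per 64-QAM symbol.

```python
import numpy as np

def qam64_map(bits):
    """Map bits (length a multiple of 6) to 64-QAM: 3 bits -> I level, 3 bits -> Q level."""
    b = bits.reshape(-1, 6)
    i = b[:, :3] @ np.array([4, 2, 1])      # 0..7
    q = b[:, 3:] @ np.array([4, 2, 1])
    levels = 2 * np.arange(8) - 7           # -7, -5, ..., +7
    return levels[i] + 1j * levels[q]

def qam64_demap(symbols):
    """Recover bits by nearest-level decision on I and Q."""
    levels = 2 * np.arange(8) - 7
    i = np.argmin(np.abs(symbols.real[:, None] - levels), axis=1)
    q = np.argmin(np.abs(symbols.imag[:, None] - levels), axis=1)
    to_bits = lambda x: (x[:, None] >> np.array([2, 1, 0])) & 1
    return np.hstack([to_bits(i), to_bits(q)]).ravel()

# 64 DDR bits padded to 66 = 11 symbols x 6 bits: 11 subcarriers per transfer
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 66)
symbols = qam64_map(bits)            # one 64-QAM symbol per subcarrier
tx = np.fft.ifft(symbols)            # multicarrier (OFDM-style) time waveform
rx = np.fft.fft(tx)                  # ideal channel: demultiplex the carriers
assert np.array_equal(qam64_demap(rx), bits)
```

Over an ideal channel the bits survive the modulate/multiplex/demodulate round trip exactly, which is the property the experimental EVM and phase-error figures quantify for the real waveguide link.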

    Worst-Case Analysis of Electrical and Electronic Equipment via Affine Arithmetic

    In the design and fabrication of electronic equipment, there are many unknown parameters which significantly affect product performance. Some uncertainties are due to manufacturing process fluctuations, while others are due to the environment, such as operating temperature, voltage, and various ambient and aging stressors. It is desirable to account for these uncertainties to ensure product performance, improve yield, and reduce design cost. Since direct electromagnetic compatibility measurements impact both cost and time-to-market, there has been a growing demand for tools enabling the simulation of electrical and electronic equipment with the effects of system uncertainties included. In this framework, the device response is no longer regarded as deterministic but as a random process. It is traditionally analyzed using Monte Carlo or other sampling-based methods. The drawback of these methods is the large number of samples required for convergence, which is time-consuming in practical applications. As an alternative, inherently worst-case approaches such as interval analysis directly provide an estimate of the true bounds of the responses. However, such approaches may yield unnecessarily strict margins that are very unlikely to occur. A recent technique, affine arithmetic, advances interval-based methods by handling correlated intervals, but it still leads to over-conservatism because it cannot incorporate probability information. The objective of this thesis is to improve the accuracy of affine arithmetic and broaden its application to frequency-domain analysis. We first extend existing literature results to the efficient time-domain analysis of lumped circuits under uncertainty. We then extend basic affine arithmetic to the frequency-domain simulation of circuits. Classical circuit analysis tools are used within a modified affine framework that accounts for complex algebra and uncertainty interval partitioning, enabling accurate and efficient computation of the worst-case bounds of the responses of both lumped and distributed circuits. The performance of the proposed approach is investigated through extensive simulations in several case studies, and the results are compared with the Monte Carlo method in terms of both simulation time and accuracy.
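The key advantage of affine arithmetic over plain interval arithmetic, tracking correlations through shared noise symbols, can be shown with a minimal sketch. This `Affine` class is a generic textbook illustration (addition and subtraction only), not the thesis's modified complex-valued framework; the names and the 10% tolerance example are invented.

```python
class Affine:
    """Affine form x0 + sum_i c_i * eps_i with eps_i in [-1, 1].
    Shared noise symbols eps_i carry correlation between quantities."""
    def __init__(self, x0, terms=None):
        self.x0, self.terms = x0, dict(terms or {})
    def _combine(self, other, sign):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0.0) + sign * v
        return Affine(self.x0 + sign * other.x0, t)
    def __add__(self, other):
        return self._combine(other, +1.0)
    def __sub__(self, other):
        return self._combine(other, -1.0)
    def bounds(self):
        radius = sum(abs(v) for v in self.terms.values())
        return (self.x0 - radius, self.x0 + radius)

# A resistance with a 10% tolerance: R = 100 * (1 + 0.1 * eps_1)
R = Affine(100.0, {1: 10.0})
print((R - R).bounds())   # (0.0, 0.0): the shared eps_1 cancels exactly
# Plain interval arithmetic on [90, 110] - [90, 110] would give [-20, 20].
```

The cancellation in `R - R` is exactly the correlation handling that plain interval analysis lacks and that causes its over-conservative margins.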

    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses the key requirements of future wireless standards: a large capacity increase, support for many simultaneous users, and improved energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains and computational operations on large matrices. The complexity of this digital processing was long viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in interconnecting many signals. For example, prototype ASIC implementations have demonstrated zero-forcing precoding in real time at a power consumption of 55 mW (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction in consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO, clarifies the opportunities and constraints of operating with low-complexity RF and analog hardware chains, and illustrates how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes is discussed, and open challenges and directions for future research are suggested. Comment: submitted to IEEE Transactions on Signal Processing.
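Zero-forcing precoding, cited in the ASIC example above, is a compact linear-algebra computation. The sketch below is the generic textbook formulation W = H^H (H H^H)^{-1}, not the prototype's fixed-point implementation; the i.i.d. Rayleigh channel and the dimensions are illustrative (chosen to match the 128-antenna, 8-terminal example).

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 128, 8                      # base-station antennas, active terminals

# i.i.d. Rayleigh downlink channel, one row per terminal
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of H, so that H @ W = I_K
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # per-terminal symbols
y = H @ (W @ s)                    # noiseless receive signals at the K terminals
assert np.allclose(y, s)           # each terminal sees only its own stream
```

The dominant cost is forming and inverting the K x K Gram matrix H H^H, which is why efficient matrix hardware and distributed processing matter at this scale.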

    Performance assessment of multi-walled carbon nanotube interconnects using advanced polynomial chaos schemes

    Spring 2019. Includes bibliographical references. With the continuous miniaturization of the latest VLSI technologies, manufacturing uncertainties in nanoscale processes and operations are unpredictable at the chip, packaging, and board levels of integrated systems. To address this, simulation solvers are required that model the forward propagation of uncertainties, from random variations at the device level to the network response. Polynomial Chaos Expansion (PCE) of the random variables is the most common technique for modeling such unpredictability. Existing methods for uncertainty quantification have a major drawback: as the number of random variables in a system increases, the computational cost and time grow polynomially. To alleviate the poor scalability of standard PC approaches, a predictor-corrector polynomial chaos scheme and a hyperbolic polynomial chaos expansion (HPCE) scheme are proposed in this dissertation. In the predictor-corrector scheme, a low-fidelity meta-model is generated using an Equivalent Single Conductor (ESC) approximation, and its accuracy is then enhanced using a low-order multi-conductor circuit (MCC) corrector model. In HPCE, a sparser polynomial expansion is generated based on a hyperbolic truncation criterion. These schemes yield an immense reduction in CPU cost and time. This dissertation presents a novel approach to quantifying the uncertainties in multi-walled carbon nanotubes using these schemes; their accuracy is validated on various numerical examples.
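The sparsity gained from hyperbolic truncation can be illustrated with a small index-set computation. The sketch below uses the common hyperbolic-norm criterion (retain multi-indices alpha with (sum_i alpha_i^q)^(1/q) <= p for q < 1); the dissertation's exact criterion and parameter choices may differ.

```python
from itertools import product

def total_degree_set(n_vars, p):
    """Standard PCE basis: all multi-indices with total degree <= p."""
    return [a for a in product(range(p + 1), repeat=n_vars) if sum(a) <= p]

def hyperbolic_set(n_vars, p, q=0.5):
    """Hyperbolic truncation: keep (sum_i a_i**q)**(1/q) <= p, which for
    q < 1 favours single-variable terms and prunes interaction terms."""
    return [a for a in product(range(p + 1), repeat=n_vars)
            if sum(ai ** q for ai in a) ** (1.0 / q) <= p + 1e-9]

full = total_degree_set(4, 3)       # 4 random variables, order 3
sparse = hyperbolic_set(4, 3)
print(len(full), len(sparse))       # → 35 13: far fewer coefficients to fit
```

Since the number of PCE coefficients to estimate drives the number of solver runs, shrinking the basis from 35 to 13 terms directly cuts the CPU cost, and the gap widens rapidly with more random variables.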

    Fusion enhancement for tracking of respiratory rate through intrinsic mode functions in photoplethysmography

    A decline in respiratory regulation is a primary forewarning of the onset of physiological aberrations. In the clinical environment, the obtrusive nature and cost of instrumentation have slowed the adoption of continuous respiration monitoring in standard practice. Photoplethysmography (PPG) is a non-invasive optical method for assessing blood flow dynamics in the peripheral vasculature. Respiration couples into the PPG signal as a surrogate constituent, which justifies respiratory rate (RR) estimation from it. The physiological processes of respiration emerge as distinctive oscillations, that is, fluctuations in various parameters extracted from the PPG signal. We propose a novel algorithm designed to account for intermittent diminishment of the respiration-induced variabilities (RIV) by a fusion-based enhancement of wavelet synchrosqueezed spectra. We combine information from the intrinsic mode functions (IMF) of five RIVs to enhance mutually occurring instantaneous frequencies of the spectra. The respiration rate estimate is obtained by tracking the spectral ridges with a particle filter. We evaluated the method on a dataset recorded from 29 young adult subjects (mean: 24.17 y, SD: 4.19 y) containing diverse, voluntary, and periodically metronome-assisted respiratory patterns. Bayesian inference on the fusion-enhanced Respiration Induced Frequency Variability (RIFV) indicated an MAE and RMSE of 1.764 and 3.996 BPM, respectively. The fusion approach improved the MAE and RMSE of RIFV by 0.185 BPM (95% HDI: 0.0285-0.3488, effect size: 0.548) and 0.250 BPM (95% HDI: 0.0733-0.431, effect size: 0.653), respectively, with more pronounced improvements for the other RIVs. We conclude that the fusion of variability signals is important for IMF localization in the spectral estimation of RR.
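The underlying idea of reading a respiratory rate off the dominant spectral component of a variability signal can be shown in a toy form. The sketch below uses a plain FFT peak on a synthetic signal; it stands in for, and is far simpler than, the paper's synchrosqueezing, IMF fusion, and particle-filter ridge tracking. The sampling rate, window length, and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 4.0                            # variability signal resampled at 4 Hz
t = np.arange(0, 60, 1 / fs)        # one 60 s analysis window
rr_hz = 15 / 60                     # ground truth: 15 breaths per minute
riv = np.sin(2 * np.pi * rr_hz * t) + 0.3 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(riv * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
est_bpm = 60 * freqs[np.argmax(spec[1:]) + 1]   # dominant non-DC bin
print(round(est_bpm, 1))            # ≈ 15 breaths per minute
```

A static FFT peak breaks down exactly where the paper's method is aimed: intermittently diminished or drifting respiratory components, which motivate time-frequency ridge tracking and fusion across several RIVs.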

    Delay Measurements and Self Characterisation on FPGAs

    This thesis examines new timing measurement methods for self delay characterisation of Field-Programmable Gate Array (FPGA) components and delay measurement of complex circuits on FPGAs. Two novel measurement techniques, based on analysis of a circuit's output failure rate and of its transition probability, are proposed for accurate, precise, and efficient measurement of propagation delays. The transition-probability-based method is especially attractive, since it requires no modification of the circuit under test and uses little hardware resource, making it an ideal method for physical delay analysis of FPGA circuits. Relentless advances in process technology have led to smaller and denser transistors in integrated circuits. While FPGA users benefit from this through increased hardware resources for more complex designs, actual FPGA productivity in terms of timing performance (operating frequency, latency, and throughput) has lagged behind the potential of the improved technology, due to delay variability in FPGA components and the inaccuracy of the timing models used in FPGA timing analysis. The ability to measure the delay of any arbitrary circuit on an FPGA offers many opportunities for on-chip characterisation and physical timing analysis, allowing delay variability to be accurately tracked and variation-aware optimisations to be developed, reducing the productivity gap observed in today's FPGA designs. The measurement techniques are developed into complete self-measurement and characterisation platforms in this thesis, demonstrating their practical use in actual FPGA hardware for cross-chip delay characterisation and accurate delay measurement of both complex combinatorial and sequential circuits, further reinforcing their role in solving the delay variability problem in FPGAs.
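The failure-rate measurement idea, sweeping the capture clock period and locating where launched signals start to miss the capture edge, can be illustrated with a toy Monte Carlo model. The delay, jitter, and sweep values below are invented for the example and are not taken from the thesis.

```python
import random

random.seed(0)
true_delay_ns, jitter_ns = 2.5, 0.05   # invented path delay and Gaussian jitter

def failure_rate(period_ns, trials=2000):
    """Fraction of launches that arrive after the capture edge at this period."""
    late = sum(random.gauss(true_delay_ns, jitter_ns) > period_ns
               for _ in range(trials))
    return late / trials

# Sweep the clock period upward and find where the failure rate drops to 50%;
# for symmetric jitter this crossing sits at the nominal propagation delay.
periods = [2.30 + 0.01 * k for k in range(41)]   # 2.30 .. 2.70 ns
est = next(p for p in periods if failure_rate(p) <= 0.5)
print(f"estimated delay ≈ {est:.2f} ns")
```

The attraction of this family of methods on real hardware is that only a pass/fail observation per period step is needed, so sub-step resolution comes from the statistics rather than from any fine-grained on-chip timer.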

    Stability of microgrids and weak grids with high penetration of variable renewable energy

    Autonomous microgrids and weak grids with high penetrations of variable renewable energy (VRE) generation tend to share several common characteristics: i) low synchronous inertia, ii) sensitivity to active power imbalances, and iii) low system strength (as defined by the nodal short circuit ratio). As a result, there is a greater risk of system instability relative to larger grids, especially as the share of VRE increases. This thesis focuses on developing techniques and strategies to assess and improve the stability of microgrids and weak grids. In the first part, the small-signal stability of inertia-less, converter-dominated microgrids is analysed: a load-flow-based method for small-signal model initialisation is proposed and used to examine the effects of topology and network parameters on microgrid stability. In the second part, the use of a back-to-back dc link to interconnect neighbouring microgrids and provide dynamic frequency support is proposed to improve frequency stability by helping to alleviate active power imbalances. In the third part, a new technique to determine the optimal sizing of smoothing batteries in microgrids is proposed, based on the temporal variability of solar irradiance at the specific site location, in order to maximise PV penetration without causing grid instability. A technical framework for integrating solar PV plants into weak grids is then proposed, addressing the weaknesses of conventional Grid Codes, which fail to consider the unique characteristics of weak grids. Finally, a new technique is proposed for estimating the system load relief factors used in aggregate single-frequency stability models.
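The nodal short circuit ratio used above as the measure of system strength is a simple quotient. The sketch below computes it for invented numbers; the SCR < 3 weak-grid threshold shown is a commonly used rule of thumb, not necessarily the criterion adopted in this thesis.

```python
def short_circuit_ratio(fault_level_mva, plant_rating_mw):
    """Nodal SCR: short-circuit apparent power available at the connection
    bus divided by the rated power of the connecting plant."""
    return fault_level_mva / plant_rating_mw

# Invented example: a 100 MW PV plant at a bus with a 250 MVA fault level.
scr = short_circuit_ratio(250.0, 100.0)
print(scr, "-> weak" if scr < 3 else "-> strong")   # → 2.5 -> weak
```

The same plant at a stiffer bus (say 600 MVA fault level, SCR = 6) would be considered a strong connection, which is why the grid-integration framework in the thesis hinges on where and how much VRE is connected.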

    Damage Initiation and Evolution in Voided and Unvoided Lead Free Solder Joints Under Cyclic Thermo-Mechanical Loading

    The effect of process-induced voids on the durability of Sn-Pb and Pb-free solder interconnects in electronic products is not clearly understood, and researchers have reported conflicting findings. Studies have shown that, depending on their size and location, voids are not always detrimental to reliability and may in fact sometimes even increase the durability of joints. This debate is more intense for Pb-free solders, since voids are more common in Pb-free joints. Experimental results are presented in this study to empirically explore the influence of voids on the durability of Ball Grid Array (BGA) Pb-free solder joints. To quantify the detailed influence of the size, location, and volume fraction of voids, extensive modeling is conducted using a continuum damage model (the Energy Partitioning model) rather than existing approaches reported in the literature, such as fracture mechanics. The E-P approach is modified in this study by the use of a successive initiation method, since, depending on their location and size, voids may influence the time to initiate cyclic fatigue damage, the time to propagate fatigue damage, or both. Modeling results show competing interactions between void size and location, which result in a non-monotonic relationship between void size and durability. They also suggest that voids are generally not detrimental to reliability, except when a large portion of the damage propagation path is covered either by one large void or by many small voids. In addition, this dissertation addresses several fundamental issues in solder fatigue damage modeling. One objective is to use experimental data to identify the correct fatigue constants for explicitly modeling fatigue damage propagation in Pb-free solders. Explicit modeling of damage propagation improves modeling accuracy across solder joints of vastly different architectures, since joint geometry may strongly influence the ratio of initiation life to propagation life. Most solder fatigue models in the literature do not provide this capability, since they predict failure based only on the damage accumulation rates during the first few cycles in the undamaged joint. Another objective is to incorporate into cyclic damage propagation models the effect of material softening caused by cyclic microstructural damage accumulation in Pb-free solder materials. In other words, the constants of the solder viscoplastic constitutive model are continuously updated, with the help of experimental data, to include this cyclic softening effect as damage accumulates during the damage propagation phase. The ability to model this damage evolution process increases the accuracy of durability predictions and is not available in most current solder fatigue models reported in the literature. This mechanism-based microstructural damage evolution model, called the Energy Partitioning Damage Evolution (EPDE) model, is developed and implemented in finite element analysis of solder joints with the successive initiation technique, and the results are provided here. Experimental results are used as guidance to calibrate the Energy Partitioning fatigue model constants for use in successive initiation modeling with and without damage evolution. FEA results show a 15% difference between the life predicted by the averaging technique and by successive initiation. This difference could increase significantly in the case of long joints, such as thermal pads or die-attach, validating the use of successive initiation in those cases. The difference between using successive initiation with and without damage evolution is about 10%. Considering the small amount of effort needed to update the constitutive properties for progressive degradation, it is recommended that softening be included whenever damage propagation needs to be explicitly modeled. However, the damage evolution exponents and the corresponding E-P model constants obtained in this study, using successive initiation with damage evolution, are partially dependent on specimen geometry; hence, these constants may have to be re-calibrated for other geometries.
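The parallel accumulation of plastic and creep damage in an Energy-Partitioning style life model can be sketched as follows. All constants are illustrative placeholders, not the calibrated values from this dissertation, and the form shown (power-law lives for each work density, combined by Miner-style damage summation) is a common textbook variant of the approach rather than the EPDE formulation itself.

```python
def ep_cycles_to_failure(dW_plastic, dW_creep,
                         C_p=100.0, n_p=1.0, C_c=50.0, n_c=1.0):
    """Energy-Partitioning style life estimate: a power-law life for the
    plastic and for the creep work density per cycle, with the per-cycle
    damage fractions added.  All constants are invented for illustration."""
    N_p = C_p * dW_plastic ** (-n_p)   # cycles if plastic work acted alone
    N_c = C_c * dW_creep ** (-n_c)     # cycles if creep work acted alone
    return 1.0 / (1.0 / N_p + 1.0 / N_c)

life = ep_cycles_to_failure(0.5, 0.2)  # invented per-cycle work densities
print(round(life, 1))                  # → 111.1 cycles
```

In the damage-evolution variant described above, the constitutive constants feeding the per-cycle work densities would themselves be updated as damage accumulates, so the work densities, and hence the per-cycle damage, change over the propagation phase instead of staying fixed as in this static sketch.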

    Bridging Flows: Microfluidic End‐User Solutions


    Integrated out-of-hours care arrangements in England: observational study of progress towards single call access via NHS Direct and impact on the wider health system

    Objectives: To assess the extent of service integration achieved within general practice cooperatives and NHS Direct sites participating in the Department of Health’s national “Exemplar Programme” for single call access to out-of-hours care via NHS Direct, and to assess the impact of integrated out-of-hours care arrangements on general practice cooperatives and the wider health system (use of emergency departments, 999 ambulance services, and minor injuries units). Design: Observational before-and-after study of demand, activity, and trends in the use of other health services. Setting: Thirty-four English general practice cooperatives with NHS Direct partners (“exemplars”), of which four acted as “case exemplars”, plus 10 control cooperatives for comparison. Main Outcome Measures: Extent of integration achieved (defined as the proportion of hours and the proportion of general practice patients covered by integrated arrangements), patterns of general practice cooperative demand and activity, and trends in use of the wider health system in the first year. Results: Of 31 distinct exemplars, 21 (68%) integrated all out-of-hours call management by March 2004. Nine (29%) established single call access for all patients. In the only case exemplar where direct comparison was possible, cooperative nurse telephone triage before integration completed a higher proportion of calls with telephone advice than NHS Direct did afterwards (39% v 30%; p<0.0001). The proportion of calls completed by NHS Direct telephone advice at other sites was lower. There is evidence of transfer of demand from case exemplars to 999 ambulance services. A downturn in overall demand for care seen in two case exemplars was also seen in control sites. Conclusion: The new model of out-of-hours care was implemented in a variety of settings across England through new partnerships between general practice cooperatives and NHS Direct. Single call access was not widely implemented, and most patients needed to make at least two telephone calls to contact the service. In the first year, integration may have produced some reduction in total demand, but this may have been accompanied by shifts from one part of the local health system to another. NHS Direct demonstrated capability in handling calls but may not currently have sufficient capacity to support national implementation.