
    Design of Variation-Tolerant Circuits for Nanometer CMOS Technology: Circuits and Architecture Co-Design

    Aggressive scaling of CMOS technology in sub-90nm nodes has created huge challenges. Variations due to fundamental physical limits, such as random dopant fluctuation (RDF) and line edge roughness (LER), are increasing significantly with technology scaling. In addition, manufacturing tolerances in process technology are not scaling at the same pace as the transistor channel length due to process control limitations (e.g., sub-wavelength lithography). Therefore, within-die process variations worsen with successive technology generations. These variations have a strong impact on the maximum clock frequency and leakage power of any digital circuit, and can also result in functional yield losses in variation-sensitive digital circuits (such as SRAM). Moreover, in nanometer technologies, digital circuits show an increased sensitivity to process variations due to low-voltage operation requirements, which are aggravated by the strong demand for lower power consumption and cost while achieving higher performance and density. It is therefore not surprising that the International Technology Roadmap for Semiconductors (ITRS) lists variability as one of the most challenging obstacles for IC design in the nanometer regime. To facilitate variation-tolerant design, we study the impact of random variations on the delay variability of a logic gate and derive simple and scalable statistical models to evaluate delay variations in the presence of within-die variations. This work provides new design insight and highlights the importance of accounting for the effect of input slew on delay variations, especially at lower supply voltages. The derived models are simple, scalable, bias dependent and only require the knowledge of easily measurable parameters. This makes them useful in early design exploration, circuit/architecture optimization as well as technology prediction (especially in low-power and low-voltage operation).
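The effect of input slew on delay variability described above can be illustrated with a toy Monte Carlo experiment. The alpha-power-law delay model and every parameter value below are illustrative assumptions, not the thesis's derived models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative first-order parameters, not fitted to any technology.
VDD, VTH0, ALPHA = 1.0, 0.35, 1.3
SIGMA_VTH = 0.03                      # RDF-induced Vth spread (V)

def gate_delay(vth, slew):
    # Base delay rises as the (VDD - Vth) overdrive shrinks; a slow
    # input edge (larger normalized slew) adds a further Vth-dependent
    # delay component, so variability grows with slew.
    return VDD / (VDD - vth) ** ALPHA + 0.8 * slew * vth

vth = rng.normal(VTH0, SIGMA_VTH, 100_000)
ratios = {}
for slew in (0.0, 0.5, 1.0):
    d = gate_delay(vth, slew)
    ratios[slew] = d.std() / d.mean()
    print(f"slew={slew:.1f}: sigma/mu = {ratios[slew]:.4f}")
```

In this sketch the normalized delay spread (sigma/mu) increases with the input slew, mirroring the trend the models above are built to capture.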
The derived models are verified using Monte Carlo SPICE simulations in an industrial 90nm technology. Random variations in nanometer technologies are considered one of the largest design challenges. This is especially true for SRAM, due to the large variations in bitcell characteristics. Typically, SRAM bitcells have the smallest device sizes on a chip. Therefore, they show the largest sensitivity to different sources of variations. With the drastic increase in memory densities, lower supply voltages and higher variations, statistical simulation methodologies become imperative to estimate memory yield and optimize performance and power. In this research, we present a methodology for statistical simulation of SRAM read access yield, which is tightly related to SRAM performance and power consumption. The proposed flow accounts for the impact of bitcell read current variation, sense amplifier offset distribution, timing window variation and leakage variation on functional yield. The methodology overcomes the pessimism existing in conventional worst-case design techniques that are used in SRAM design. The proposed statistical yield estimation methodology allows early yield prediction in the design cycle, which can be used to trade off performance and power requirements for SRAM. The methodology is verified using measured silicon yield data from a 1Mb memory fabricated in an industrial 45nm technology. Embedded SRAM dominates modern SoCs and there is a strong demand for SRAM with lower power consumption while achieving high performance and high density. However, in the presence of large process variations, SRAMs are expected to consume larger power to ensure correct read operation and meet yield targets. We propose a new architecture that significantly reduces array switching power for SRAM.
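The yield flow described above can be caricatured in a few lines: sample the bitcell read current and sense-amplifier offset, develop a bitline differential over the timing window, and count passing reads. All distributions and constants below are illustrative assumptions, not the 45nm silicon data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000                        # Monte Carlo bitcell samples

# Illustrative variation sources (means/sigmas are made up):
I_READ = rng.normal(20e-6, 4e-6, N)    # bitcell read current (A)
V_OFFSET = rng.normal(0.0, 25e-3, N)   # sense-amp input offset (V)
T_SENSE = 1.0e-9                       # wordline-to-sense window (s)
C_BL = 100e-15                         # bitline capacitance (F)
V_LEAK = 10e-3                         # leakage-induced droop margin (V)

# Differential developed on the bitline before the sense amp fires.
dv = I_READ * T_SENSE / C_BL           # delta-V = I * t / C

# A read passes when the developed signal overcomes the sense-amp
# offset plus the leakage margin; yield is the passing fraction.
yield_frac = np.mean(dv > np.abs(V_OFFSET) + V_LEAK)
print(f"estimated read-access yield: {yield_frac:.4f}")
```

Sweeping T_SENSE in such a loop is how a statistical flow trades timing margin (and hence power) against a read-yield target.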
The proposed architecture combines built-in self-test (BIST) and digitally controlled delay elements to reduce the wordline pulse width for memories while ensuring correct read operation, hence reducing switching power. A new statistical simulation flow was developed to evaluate the power savings for the proposed architecture. Monte Carlo simulations using a 1Mb SRAM macro from an industrial 45nm technology were used to examine the power reduction achieved by the system. The proposed architecture can reduce the array switching power significantly and shows large power savings, especially as the chip-level memory density increases. For a 48Mb memory density, a 27% reduction in array switching power can be achieved for a read access yield target of 95%. In addition, the proposed system provides larger power savings as process variations increase, which makes it a very attractive solution for 45nm and below technologies. In addition to its impact on bitcell read current, the increase of local variations in nanometer technologies strongly affects SRAM cell stability. In this research, we propose a novel single-supply-voltage read assist technique to improve SRAM static noise margin (SNM). The proposed technique allows precharging different parts of the bitlines to VDD and GND and uses charge sharing to precisely control the bitline voltage, which improves bitcell stability. In addition to improving SNM, the proposed technique also reduces memory access time. Moreover, it requires only one supply voltage, eliminating the need for large-area voltage shifters. The proposed technique has been implemented in the design of a 512kb memory fabricated in 45nm technology. Results show improvements in SNM and the read operation window, which confirms the effectiveness and robustness of this technique.
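The charge-sharing idea behind the read assist lends itself to a one-line calculation: shorting a bitline segment precharged to VDD with one precharged to GND settles at a capacitance-weighted intermediate voltage, with no second supply needed. The capacitance split and function name below are illustrative assumptions:

```python
VDD = 1.0  # single supply (V); value illustrative

def shared_bitline_voltage(c_vdd, c_gnd, vdd=VDD):
    # Charge conservation: Q = c_vdd*VDD + c_gnd*0 redistributes over
    # the total capacitance, so V = c_vdd*VDD / (c_vdd + c_gnd).
    return c_vdd * vdd / (c_vdd + c_gnd)

# e.g. precharging 80% of the bitline capacitance to VDD and 20% to
# GND settles the bitline at 0.8*VDD, lowering the read disturb on
# the accessed cell and hence improving its static noise margin.
print(shared_bitline_voltage(80e-15, 20e-15))
```

Tuning the split capacitances gives fine control of the bitline level, which is the lever the technique uses in place of a dedicated lower supply.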

    Enhancement of Recombinant Protein Production in Transgenic Nicotiana benthamiana Plant Cell Suspension Cultures with Co-Cultivation of Agrobacterium Containing Silencing Suppressors.

    We have previously demonstrated that the inducible plant viral vector (CMViva) in transgenic plant cell cultures can significantly improve the productivity of extracellular functional recombinant human alpha-1-antitrypsin (rAAT) compared with either a common plant constitutive promoter (Cauliflower mosaic virus (CaMV) 35S) or a chemically inducible promoter (estrogen receptor-based XVE) system. For a transgenic plant host system, however, viral or transgene-induced post-transcriptional gene silencing (PTGS) has been identified as a host response mechanism that may dramatically reduce the expression of a foreign gene. Previous studies have suggested that viral gene silencing suppressors encoded by a virus can block or interfere with the pathways of transgene-induced PTGS in plant cells. In this study, the capabilities of nine different viral gene silencing suppressors were evaluated for improving the production of rAAT protein in transgenic plant cell cultures (CMViva, XVE or 35S system) using an Agrobacterium-mediated transient expression co-cultivation process in which transgenic plant cells and recombinant Agrobacterium carrying the viral gene silencing suppressor were grown together in suspension cultures. Through the co-cultivation process, the impacts of gene silencing suppressors on rAAT production were elucidated, and promising gene silencing suppressors were identified. Furthermore, combinations of gene silencing suppressors were optimized using design of experiments methodology. The results show that in transgenic CMViva cell cultures, the functional rAAT as a percentage of total soluble protein is increased 5.7-fold with the expression of P19, and 17.2-fold with the co-expression of CP, P19 and P24.
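The design-of-experiments step can be sketched as a two-level full-factorial screen over suppressor combinations. The suppressor names follow the abstract (CP, P19, P24); treating them as two-level presence/absence factors in this run list is an illustrative assumption, not the study's actual design:

```python
from itertools import product

# Two-level full-factorial design: every on/off combination of the
# three suppressors is one co-cultivation run, so main effects and
# interactions (e.g. CP x P19 x P24 synergy) can be estimated.
factors = ["CP", "P19", "P24"]

design = [dict(zip(factors, levels))
          for levels in product((0, 1), repeat=len(factors))]
for run in design:
    print(run)
print(f"{len(design)} runs in the 2^{len(factors)} full-factorial design")
```

Fitting the measured rAAT response against these runs is what identifies the winning combination reported above.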

    High signal-to-noise ratio observations and the ultimate limits of precision pulsar timing

    We demonstrate that the sensitivity of high-precision pulsar timing experiments will ultimately be limited by the broadband intensity modulation that is intrinsic to the pulsar's stochastic radio signal. That is, as the peak flux of the pulsar approaches that of the system equivalent flux density, neither greater antenna gain nor increased instrumental bandwidth will improve timing precision. These conclusions proceed from an analysis of the covariance matrix used to characterise residual pulse profile fluctuations following the template matching procedure for arrival time estimation. We perform such an analysis on 25 hours of high-precision timing observations of the closest and brightest millisecond pulsar, PSR J0437-4715. In these data, the standard deviation of the post-fit arrival time residuals is approximately four times greater than that predicted by considering the system equivalent flux density, mean pulsar flux and the effective width of the pulsed emission. We develop a technique based on principal component analysis to mitigate the effects of shape variations on arrival time estimation and demonstrate its validity using a number of illustrative simulations. When applied to our observations, the method reduces arrival time residual noise by approximately 20%. We conclude that, owing primarily to the intrinsic variability of the radio emission from PSR J0437-4715 at 20 cm, timing precision in this observing band better than 30 - 40 ns in one hour is highly unlikely, regardless of future improvements in antenna gain or instrumental bandwidth. We describe the intrinsic variability of the pulsar signal as stochastic wideband impulse modulated self-noise (SWIMS) and argue that SWIMS will likely limit the timing precision of every millisecond pulsar currently observed by Pulsar Timing Array projects as larger and more sensitive antennae are built in the coming decades.
Comment: 16 pages, 9 figures, accepted for publication in MNRAS. Updated version: added DOI and changed manuscript to reflect changes in the final published version.
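The PCA-based mitigation can be sketched on synthetic profiles: subtract the template, then project out the leading principal component of the residuals. The data model and all amplitudes below are illustrative assumptions, not PSR J0437-4715 measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_bins = 200, 64

# Toy data: a fixed Gaussian template plus one correlated
# shape-fluctuation mode plus white radiometer noise.
phase = np.linspace(0, 1, n_bins, endpoint=False)
template = np.exp(-0.5 * ((phase - 0.5) / 0.02) ** 2)
shape_mode = np.gradient(template)        # dominant fluctuation mode
profiles = (template
            + rng.normal(0, 1, (n_obs, 1)) * 0.2 * shape_mode
            + rng.normal(0, 0.01, (n_obs, n_bins)))

# PCA on the post-template residuals via SVD: removing the leading
# principal component strips out the correlated shape fluctuation
# that template matching alone cannot absorb.
resid = profiles - template
resid_c = resid - resid.mean(axis=0)
u, s, vt = np.linalg.svd(resid_c, full_matrices=False)
cleaned = resid_c - np.outer(u[:, 0] * s[0], vt[0])

print(f"residual rms before: {resid_c.std():.4f}, after: {cleaned.std():.4f}")
```

On real data the remaining residual floor corresponds to the SWIMS-limited precision discussed above.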

    Utilizing Astrometric Orbits to Obtain Coronagraphic Images

    We present an approach for utilizing astrometric orbit information to improve the yield of planetary images and spectra from a follow-on direct detection mission. This approach is based on the notion (strictly hypothetical) that if a particular star could be observed continuously, the instrument would in time observe all portions of the habitable zone, so that no planet residing therein could be missed. This strategy could not be implemented in any realistic mission scenario. But if an exoplanet's orbit is known from astrometric observation, then it may be possible to plan and schedule a sequence of imaging observations that is the equivalent of continuous observation. A series of images, optimally spaced in time, could be recorded to examine contiguous segments of the orbit. In time, all segments would be examined, leading to the inevitable detection of the planet. In this paper, we show how astrometric orbit information can be used to construct such a sequence. Using stars from astrometric and imaging target lists, we find that the number of observations in this sequence typically ranges from 2 to 7, representing the maximum number of observations required to find the planet. The probable number of observations ranges from 1.5 to 3.1. This is a dramatic improvement in efficiency over previous methods proposed for utilizing astrometric orbits. We examine how the implementation of this approach is complicated and limited by operational constraints. We find that it can be fully implemented for internal coronagraph and visual nuller missions, with a success rate approaching 100%. External occulter missions will also benefit, but to a lesser degree.
Comment: 28 pages, 14 figures, submitted to PAS
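The "2 to 7 observations" bound has a simple reading: if each optimally timed image can examine a fraction f of the astrometrically known orbit, then ceil(1/f) images cover every contiguous segment. The fractions below are illustrative assumptions, not values from the paper:

```python
import math

def max_observations(orbit_fraction_per_image):
    # Worst case: segments examined per image tile the orbit, so the
    # planet cannot be missed after ceil(1/f) optimally spaced images.
    return math.ceil(1.0 / orbit_fraction_per_image)

# Illustrative per-image coverage fractions for different geometries:
for f in (0.15, 0.3, 0.5):
    print(f"fraction per image {f:.2f} -> at most {max_observations(f)} images")
```

The probable number of observations is lower than this maximum because the planet is usually found before the last segment is examined.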

    Probing the neutron star interior and the Equation of State of cold dense matter with the SKA

    With an average density higher than the nuclear density, neutron stars (NS) provide a unique testing ground for nuclear physics, quantum chromodynamics (QCD), and nuclear superfluidity. Determination of the fundamental interactions that govern matter under such extreme conditions is one of the major unsolved problems of modern physics, and -- since it is impossible to replicate these conditions on Earth -- a major scientific motivation for the SKA. The most stringent observational constraints come from measurements of NS bulk properties: each model for the microscopic behaviour of matter predicts a specific density-pressure relation (its 'Equation of State', EOS). This generates a unique mass-radius relation, which predicts a characteristic radius for a large range of masses and a maximum mass above which NS collapse to black holes. It also uniquely predicts other bulk quantities, like the maximum spin frequency and the moment of inertia. The SKA, in Phase 1 and particularly in Phase 2, will, thanks to the exquisite timing precision enabled by its raw sensitivity, and surveys that dramatically increase the number of sources: 1) Provide many more precise NS mass measurements (high-mass NS measurements are particularly important for ruling out EOS models); 2) Allow the measurement of the NS moment of inertia in highly relativistic binaries such as the Double Pulsar; 3) Greatly increase the number of fast-spinning NS, with the potential discovery of spin frequencies above those allowed by some EOS models; 4) Improve our knowledge of new classes of binary pulsars such as black widows and redbacks (which may be massive as a class) through sensitive broad-band radio observations; and 5) Improve our understanding of dense matter superfluidity and the state of matter in the interior through the study of rotational glitches, provided that an ad-hoc campaign is developed.
Comment: 22 pages, 8 figures, to be published in: "Advancing Astrophysics with the Square Kilometre Array", Proceedings of Science, PoS(AASKA14)04
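Point 3 above (spin frequencies above those allowed by some EOS models) can be illustrated with a back-of-envelope Keplerian break-up estimate. The radii below are illustrative, and the Newtonian formula ignores relativistic corrections that change the numbers appreciably:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg

def keplerian_spin_limit_hz(mass_kg, radius_m):
    # A star spinning faster than the orbital frequency at its own
    # equator would shed mass, so each EOS's mass-radius relation
    # caps the spin frequency (Newtonian mass-shedding estimate).
    return math.sqrt(G * mass_kg / radius_m ** 3) / (2 * math.pi)

# A 1.4 solar-mass star: a compact 10 km radius vs a stiffer-EOS 14 km.
limits = {r_km: keplerian_spin_limit_hz(1.4 * M_SUN, r_km * 1e3)
          for r_km in (10.0, 14.0)}
for r_km, f in limits.items():
    print(f"R = {r_km:4.1f} km -> break-up spin ~ {f:6.0f} Hz")
```

In this toy calculation the larger-radius star breaks up at a lower spin frequency, so discovering a pulsar spinning above that limit would rule out the stiffer EOS at that mass.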