    Quality and Capacity Analysis of Molecular Communications in Bacterial Synthetic Logic Circuits

    Synthetic logic circuits have been proposed as potential theranostic solutions for biotechnological problems. One proposed model engineers bacterial cells to create logic gates, with communication between the bacterial populations enabling the circuit operation. In this paper, we analyse the quality of a bacteria-based synthetic logic circuit through molecular communications, representing communication along a bus between three gates. In the bacteria-based synthetic logic circuit, the system receives environmental signals as molecular inputs and processes this information through a cascade of synthetic logic gates and free-diffusion channels. We analyse the performance of this circuit by evaluating its quality and its relationship to the channel capacity of the molecular communications links that interconnect the bacteria populations. Our results show the effect of molecular environmental delay and molecular amplitude differences on both the channel capacity and the circuit quality. Furthermore, based on these metrics we also obtain an optimum region for circuit operation, resulting in an accuracy of 80% under specific conditions. These results show that the performance of synthetic biology circuits can be evaluated through molecular communications, and they lay the groundwork for combined systems that can contribute to future biomedical and biotechnology applications.
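
    The channel capacity referenced above is, in information-theoretic terms, the maximum mutual information between the molecular input and output of a link. As a minimal illustration only (the paper's actual diffusion channel model is not reproduced here), the sketch below treats a link as a binary asymmetric channel whose crossover probabilities would come from molecular delay and amplitude effects, and finds capacity by searching over input distributions:

```python
import numpy as np

def mutual_information(p1, e01, e10):
    """I(X;Y) in bits for a binary asymmetric channel.
    p1 = P(send 1); e01 = P(receive 1 | sent 0); e10 = P(receive 0 | sent 1).
    """
    joint = np.array([[(1 - p1) * (1 - e01), (1 - p1) * e01],
                      [p1 * e10,             p1 * (1 - e10)]])
    px = joint.sum(axis=1, keepdims=True)   # marginal of the input
    py = joint.sum(axis=0, keepdims=True)   # marginal of the output
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

def capacity(e01, e10, grid=1000):
    """Capacity = max over input distributions of I(X;Y), by grid search."""
    return max(mutual_information(p, e01, e10)
               for p in np.linspace(1e-6, 1 - 1e-6, grid))

# Hypothetical crossover rates for a noisy diffusion link:
print(capacity(e01=0.1, e10=0.25))  # bits per channel use
```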

    ENHANCEMENT OF MARKOV RANDOM FIELD MECHANISM TO ACHIEVE FAULT-TOLERANCE IN NANOSCALE CIRCUIT DESIGN

    As MOSFET dimensions scale down towards the nanoscale, the reliability of circuits based on these devices decreases. Hence, designing reliable systems using these nano-devices is becoming challenging, and a mechanism is needed that lets nanoscale systems perform reliably using unreliable circuit components. The solution is fault-tolerant circuit design. Markov Random Field (MRF) is an effective approach for achieving fault-tolerance in integrated circuit design, but previous research on this technique suffers from limitations at the design, simulation and implementation levels. As improvements, the MRF fault-tolerance rules have been validated for a practical circuit example. The simulation framework is extended from thermal noise alone to a combination of thermal and random telegraph signal (RTS) noise sources, providing a more rigorous noise environment for simulating circuits built on nanoscale technologies. Moreover, an architecture-level improvement is proposed in the design of previous MRF gates; the redesigned MRF is termed Improved-MRF. The CMOS, MRF and Improved-MRF designs were simulated under highly noisy inputs. On the basis of simulations conducted for several test circuits, Improved-MRF circuits are found to be 400 times, whereas MRF circuits are only 10 times, more noise-tolerant than their CMOS alternatives. The transistor count, on the other hand, increases by a factor of 9 for MRF and 15 for Improved-MRF compared to CMOS. Therefore, to capture the trade-off between reliability and the area overhead required for a fault-tolerant circuit, a novel parameter called the 'Reliable Area Index' (RAI) is introduced in this research work. The RAI exceeds that of the CMOS design by factors of around 1.3 and 40 for MRF and Improved-MRF respectively, which makes Improved-MRF still about 30 times more efficient than MRF in maintaining a suitable trade-off between reliability and area consumption of the circuit.
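
    The abstract does not give the RAI formula. Purely as an illustration of the kind of trade-off such an index captures, the sketch below computes a naive reliability-gain-per-area-overhead ratio from the reported factors; this simple ratio is an assumption and does not reproduce the thesis's 1.3 and 40 values:

```python
# Naive trade-off index: noise-tolerance gain divided by transistor overhead.
# NOTE: the actual RAI definition comes from the thesis and differs from this.
designs = {
    # name: (noise-tolerance factor vs CMOS, transistor-count factor vs CMOS)
    "MRF":          (10,  9),
    "Improved-MRF": (400, 15),
}
for name, (gain, area) in designs.items():
    print(f"{name}: gain/area = {gain / area:.1f}")  # 1.1 and 26.7 here
```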

    Efficient Evaluation of Probability and Reliability with Digital Integrated Circuits

    As complementary metal–oxide–semiconductor (CMOS) devices shrink to the nanoscale, digital integrated circuits (ICs) are more susceptible to various environmental parameters, such as temperature, supply voltage, wiring, noise, and fabrication process variations. This reduces circuit operation reliability (i.e., the probability that a circuit or component performs its intended logic function). Signal probability (the probability that a digital signal is producing logic 1) is another factor that measures a circuit's dynamic behavior and power dissipation. Research shows that signal probability and reliability within ICs may interact with each other in a complicated way: generally speaking, as signal probability changes due to input probability variations, so does the signal reliability, and vice versa. This motivates the simultaneous evaluation of both for digital ICs towards their performance improvement. However, this evaluation can be a challenge, especially for large-scale circuits, due to signal correlations caused by reconvergent fanouts within circuits. Of the two existing evaluation methods, numerical and analytical, the former gives a high accuracy level at the cost of expensive computation, while the latter does exactly the opposite. This thesis provides a hybrid solution that takes advantage of both numerical and analytical methods to achieve fast and accurate evaluation of signal probability and reliability for ICs (including both combinational and sequential circuits). First, we develop a categorization-based analytical model for combinational circuits to deal with a variety of signal correlations. For strongly correlated or independent cases, analytical solutions are applied for accurate results; for cases with moderate correlation strength, we use local bitstream simulations for fast estimation. Our simulation results show that the proposed method is hundreds of times faster than Monte-Carlo (MC) simulation, while keeping almost the same level of accuracy. We then extend the above method to sequential circuits (with a finite-state-machine model) for probability and reliability evaluation. Since sequential circuits can be viewed as an unfolded network of combinational logic, our focus is on how both probability and reliability converge to a final stable state over a certain number of cycles/iterations. To improve the efficiency of this convergence process, we propose a two-step-convergence (TSC) model instead of traditional step-size-based convergence. Simulation results show that the proposed method speeds up the process by around 30% on average compared to the traditional method while maintaining a high level of accuracy. Finally, we study the impact of device aging on circuit reliability. After years of operation, CMOS (especially PMOS) devices experience an increase in their threshold voltage, a phenomenon called Negative Bias Temperature Instability (NBTI). This aging effect leads to increased gate delay and late signal arrival times, making circuits temporally unreliable. Threshold voltage changes may also negatively affect the probability that transistors perform their intended logical operations, making them spatially less reliable. Our investigation focuses on evaluating overall reliability at the circuit level by considering both its spatial aspect (solely the correctness of signal logic values) and its temporal aspect (whether signal arrival times meet the sampling action). This would help circuit designers predict circuit lifetime. Simulations on benchmark circuits show that the reliability degradation rate due to the aging effect ranges from 1.5% to 8.2% over a one-year period, depending on the specific circuit.
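
    To make the correlation problem concrete: under independence, an AND gate's output probability is just the product of its input probabilities, but reconvergent fanout breaks that assumption, which is where the hybrid analytical/bitstream approach comes in. A minimal sketch (the gate model and numbers are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # bitstream length for the numerical fallback

# Analytical path: independent inputs give a closed-form output probability.
def and_prob(pa, pb):
    return pa * pb

# Numerical path: reconvergent fanout correlates signals, so sample instead.
a = rng.random(N) < 0.6        # P(a=1) = 0.6
b = rng.random(N) < 0.5        # P(b=1) = 0.5
x = a & b                      # first AND gate
y = a & x                      # second AND gate reuses `a` (reconvergence)

print("naive independent estimate:", and_prob(0.6, and_prob(0.6, 0.5)))  # 0.18
print("bitstream estimate        :", y.mean())  # ~0.30, since y = a & b
```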

    Single Electron Devices and Circuit Architectures: Modeling Techniques, Dynamic Characteristics, and Reliability Analysis

    Single Electron (SE) technology is an important approach to enabling further feature-size reduction and circuit performance improvement. However, new methods are required for device modeling, circuit behavior description, and reliability analysis with this technology due to its unique operation mechanism. In this thesis, a new macro-model of the SE turnstile is developed to describe its physical characteristics for large-scale circuit simulation and design. Based on this model, several novel circuit architectures are proposed and implemented to further demonstrate the advantages of the SE technique. The dynamic behavior of SE circuits, which differs from that of their CMOS counterparts, is also investigated using a statistical method. With the unreliable nature of SE devices in mind, a fast and recursive algorithm is developed to evaluate the reliability of SE logic circuits in a more efficient and effective manner.
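
    The abstract does not detail the recursive algorithm. A common formulation, sketched here under a simple serial-reliability assumption (a gate's output is correct only if the gate itself and its whole fan-in cone are, with independent failures), is a memoized recursion over the netlist DAG:

```python
from functools import lru_cache

# Hypothetical netlist: gate name -> (gate reliability, fan-in nodes).
# Primary inputs (absent from the dict) are modeled as perfectly reliable.
NETLIST = {
    "g1": (0.99, ["in1", "in2"]),
    "g2": (0.98, ["in2", "in3"]),
    "g3": (0.97, ["g1", "g2"]),
}

@lru_cache(maxsize=None)
def reliability(node):
    """P(node output is correct) under a serial, independent-failure model."""
    if node not in NETLIST:            # primary input
        return 1.0
    r, fanin = NETLIST[node]
    for src in fanin:
        r *= reliability(src)          # memoization avoids recomputing cones
    return r

print(reliability("g3"))  # 0.97 * 0.99 * 0.98 ≈ 0.941 for this toy netlist
```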

    Designing Accurate and Low-Cost Stochastic Circuits.

    Stochastic computing (SC) is an unconventional computing approach that processes data represented by pseudo-random bit-streams called stochastic numbers (SNs). It enables arithmetic functions to be implemented by tiny, low-power logic circuits, and is highly error-tolerant. These properties make SC practical for applications that need massive parallelism or operate in noisy environments where conventional binary designs are too costly or too unreliable. SC has recently come to be seen as an attractive choice for tasks such as biomedical image processing and decoding complex error-correcting codes. Despite its desirable properties, SC has features that limit its usefulness, including insufficient accuracy and an inadequate design theory. Accuracy is especially vulnerable to correlation among interacting SNs and to the random fluctuations inherent in SC's data representation. This dissertation examines the major factors affecting accuracy using analytical and experimental approaches based on probability theory and circuit simulation, respectively. We devise methods to quantify the error effects in stochastic circuits by means of probabilistic transfer matrices and Bernoulli processes. These methods make it possible to compare the impact of errors on conventional and stochastic circuits under various conditions. We then analyze correlation in detail and show that correlation-induced errors can be reduced by the careful insertion of delay elements, a de-correlation technique called isolation. Noting that different logic functions can have the same stochastic behavior when constant SNs are applied to their inputs, we show how to partition logic functions into stochastic equivalence classes (SECs). We derive a procedure for identifying SECs, and apply SEC concepts to the synthesis and optimization of stochastic circuits. While addition, subtraction and multiplication have well-known and simple SC implementations, this is not true for division. We study stochastic division methods and propose a new type of stochastic divider that combines low cost with high accuracy. Finally, we turn to the design of general stochastic circuits and investigate a desirable property of SNs called monotonic progressive precision (MPP), whereby accuracy increases steadily with bit-stream length. We develop an SC design technique which produces results that are accurate and have good MPP. The dissertation concludes with some ideas for future research.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/133255/1/tehsuan_1.pd
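
    As background for the operations mentioned above: in unipolar SC, multiplying two stochastic numbers needs only an AND gate, and scaled addition only a multiplexer, which is where the tiny-circuit claim comes from. A minimal sketch of these standard textbook constructions (not code from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # bit-stream length; accuracy improves steadily with N

def to_sn(value, n):
    """Encode a value in [0, 1] as a pseudo-random (Bernoulli) bit-stream."""
    return rng.random(n) < value

a, b, sel = to_sn(0.8, N), to_sn(0.4, N), to_sn(0.5, N)

product = a & b                   # AND gate: E[out] = 0.8 * 0.4 = 0.32
scaled_sum = np.where(sel, a, b)  # MUX: E[out] = 0.5*0.8 + 0.5*0.4 = 0.60

print("product    :", product.mean())     # ~0.32
print("scaled sum :", scaled_sum.mean())  # ~0.60
```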

    Soft Error Analysis and Mitigation at High Abstraction Levels

    Radiation-induced soft errors, one of the major reliability challenges in future technology nodes, have to be carefully taken into consideration in design space exploration. This thesis presents several novel and efficient techniques for soft error evaluation and mitigation at high abstraction levels, i.e. from the register transfer level up to the behavioral algorithmic level. The effectiveness of the proposed techniques is demonstrated with extensive synthesis experiments.
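
    One standard baseline for soft error evaluation, plausibly underlying techniques like those summarized above (the thesis's actual methods are not shown here), is Monte-Carlo fault injection: flip one bit per trial and check whether the fault is logically masked before reaching the output:

```python
import random

random.seed(42)

def golden(x):
    """Toy 8-bit combinational block; the AND provides logical masking."""
    return (x >> 4) & x & 0x0F

def faulty(x, bit):
    """Same block evaluated after a single-event upset on one input bit."""
    return golden(x ^ (1 << bit))

trials, masked = 10_000, 0
for _ in range(trials):
    x = random.randrange(256)
    if faulty(x, random.randrange(8)) == golden(x):
        masked += 1  # the flipped bit never propagated to the output

print(f"logically masked faults: {masked / trials:.2%}")  # ~50% for this block
```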