1,050 research outputs found

    True random number generator implemented in 130 nm CMOS nanotechnology

    Random number generator systems have the capacity to generate cryptographic keys which, when mixed with the information, hide it in an efficient and timely manner. There are two categories of RNG: truly random (TRNG) and pseudorandom (PRNG). The aim was to study an entropy source based on the noise of an oscillator; to achieve that, an RNG circuit was designed to have low power consumption, high randomness, and low cost and area usage. The chosen architecture for this paper is a hybrid RNG, which uses oscillators and a chaotic circuit to generate the random bits. Simulation of the circuit showed that it met its objectives, with a low power consumption of 1.19 mW, a high throughput of 25 Mbit/s and an energy per bit of 47.6 pJ/bit. However, due to limitations with the simulation, it was not possible to run all the statistical tests, although all the tests that were run passed.
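    The reported figures are internally consistent: the energy per bit equals the average power divided by the throughput. A minimal check using the values quoted in the abstract (the snippet itself is illustrative only):

        power_w = 1.19e-3          # 1.19 mW average power consumption
        throughput_bps = 25e6      # 25 Mbit/s throughput
        energy_per_bit_j = power_w / throughput_bps
        print(f"{energy_per_bit_j * 1e12:.1f} pJ/bit")  # prints 47.6 pJ/bit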

    A Ring Oscillator Based Truly Random Number Generator

    Communication security is a very important part of modern life. A crucial aspect of security is the ability to identify with near 100% certainty who is on the other side of a connection. This problem can be overcome through the use of random number generators, which create unique identities for each person in a network. The effectiveness of an identity is directly proportional to how random the generator is. The speed at which a random number can be delivered is a critical factor in the design of a random number generator. This thesis covers the design and fabrication of three ring oscillator based truly random number generators, the first two of which were fabricated in 0.13 µm CMOS technology. The randomness from this type of random number generator originates from phase noise in a ring oscillator. The second and third ring oscillators were designed to have a low slew rate at the inverter switching threshold. The outputs of these designs showed vast increases in timing jitter compared to the first design. The third design exhibited improved randomness with respect to the second design.
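    As a rough illustration of the jitter-based mechanism described above, the sketch below models a ring oscillator whose period has Gaussian cycle-to-cycle jitter and samples it with a much slower clock, taking the parity of the accumulated edge count as the raw bit. All parameters (period, jitter, sampling rate) are assumptions for illustration, not values from the thesis.

        import numpy as np

        rng = np.random.default_rng(0)
        nominal_period = 1.0e-9   # assumed 1 ns ring-oscillator period
        jitter_sigma = 20e-12     # assumed 20 ps RMS cycle-to-cycle jitter
        sample_period = 1.0e-6    # assumed 1 us sampling clock
        n_bits = 2000

        bits, t_osc, edges = [], 0.0, 0
        for i in range(1, n_bits + 1):
            t_sample = i * sample_period
            while t_osc < t_sample:          # advance the jittery oscillator
                t_osc += nominal_period + rng.normal(0.0, jitter_sigma)
                edges += 1
            bits.append(edges & 1)           # parity of edge count at the sample
        print("bias:", sum(bits) / len(bits))  # near 0.5 once enough jitter accumulates

    In this toy model, increasing jitter_sigma, loosely analogous to the low-slew-rate second and third designs, makes the sampled parity decorrelate from the nominal period more quickly.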

    A Tale of Twin Primitives: Single-chip Solution for PUFs and TRNGs

    Physically Unclonable Functions (PUFs) and True Random Number Generators (TRNGs) are two highly useful hardware primitives to build up the root-of-trust for an embedded device. PUFs are designed to offer repeatable and instance-specific randomness, whereas TRNGs are expected to be invariably random. In this paper, we present a dual-mode PUF-TRNG design that utilises two different hardware-intrinsic properties, i.e. the oscillation frequency of the Transition Effect Ring Oscillator (TERO) cell and the propagation delay of a buffer within the cell, to serve as either a PUF or a TRNG depending on the exact requirement of the application. The PUF design is also proposed to have built-in resistance to machine learning (ML) and deep learning (DL) attacks, whereas the TRNG exhibits sufficient randomness.
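    As a rough, hedged illustration of how a single TERO-style cell can serve both roles, the sketch below treats each activation's oscillation count before settling as a noisy sample around a process-dependent mean: comparing averaged counts of two cells gives a repeatable PUF bit, while the least significant bit of a single noisy count gives a TRNG bit. The Gaussian model and all numeric parameters are assumptions for illustration, not the paper's circuit or extraction method.

        import numpy as np

        rng = np.random.default_rng(1)

        def tero_count(mean_count, noise_sigma=4.0):
            # One activation of a TERO cell: oscillation count before the cell settles.
            return max(1, int(round(rng.normal(mean_count, noise_sigma))))

        cell_a_mean, cell_b_mean = 120.0, 133.0   # assumed process-dependent means

        # PUF mode: compare averaged counts of two cells -> repeatable, instance-specific bit.
        puf_bit = int(np.mean([tero_count(cell_a_mean) for _ in range(16)])
                      > np.mean([tero_count(cell_b_mean) for _ in range(16)]))

        # TRNG mode: the least significant bit of a single noisy count is unpredictable.
        trng_bits = [tero_count(cell_a_mean) & 1 for _ in range(1000)]

        print("PUF bit:", puf_bit, " TRNG bias:", sum(trng_bits) / len(trng_bits))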

    Chaos in a ring circuit

    A ring-shaped logic circuit is proposed here as a robust design for a True Random Number Generator (TRNG). Most existing TRNGs rely on physical noise as a source of randomness, where the underlying idealized deterministic system is simply oscillatory. The design proposed here is based on chaotic dynamics and therefore intrinsically displays random behavior, even in the ideal noise-free situation. The paper presents several mathematical models for the circuit having different levels of detail. They take the form of differential equations using steep sigmoid terms for the transfer functions of logic gates. A large part of the analysis is concerned with the hard step-function limit, leading to a model known in mathematical biology as a Glass network. In this framework, an underlying discrete structure (a state space diagram) is used to describe the likely structure of the global attractor for this system. The latter takes the form of intertwined periodic paths, along which trajectories alternate unpredictably. It is also invariant under the action of the cyclic group. A combination of analytical results and numerical investigations confirms the occurrence of symmetric chaos in this system, which, when implemented in (noisy) hardware, should therefore serve as a robust TRNG.
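    The following toy model only illustrates the form of the equations mentioned above: each node in a ring relaxes toward a steep sigmoid of its upstream neighbours (a soft NAND here), i.e. dx_i/dt = -x_i + sigmoid(k·g(x_{i-1}, x_{i-2})). The ring size, coupling, gain and time step are illustrative assumptions, and no claim is made that this particular toy ring is chaotic or matches the paper's circuit; it only shows how such a model is simulated and thresholded into bits.

        import numpy as np

        def sigmoid(z, k=50.0):
            # steep sigmoid standing in for a logic-gate transfer function
            return 1.0 / (1.0 + np.exp(-k * z))

        n, dt, steps = 7, 1e-3, 200000
        x = np.random.default_rng(2).uniform(0.0, 1.0, n)
        trace = np.empty(steps)
        for t in range(steps):
            # each node relaxes toward a soft NAND of its two upstream neighbours
            target = sigmoid(0.5 - np.roll(x, 1) * np.roll(x, 2))
            x = x + dt * (-x + target)
            trace[t] = x[0]

        bits = (trace[::500] > 0.5).astype(int)   # threshold one node into a bit stream
        print(bits[:40])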

    Within-Die Delay Variation Measurement And Analysis For Emerging Technologies Using An Embedded Test Structure

    Both random and systematic within-die process variations (PV) are growing more severe with shrinking geometries and increasing die size. Escalation in the variations in delay and power with reductions in feature size places higher demands on the accuracy of variation models. Their availability can be used to improve yield, and the corresponding profitability and product quality of the fabricated integrated circuits (ICs). Sources of within-die variations include optical source limitations and layout-based systematic effects (pitch, line-width variability, and microscopic etch loading). Unfortunately, accurate models of within-die PVs are becoming more difficult to derive because of their increasing sensitivity to design context. Embedded test structures (ETS) continue to play an important role in the development of models of PVs and as a mechanism to improve correlations between hardware and models. Variations in path delays are increasing with scaling, and are increasingly affected by 'neighborhood' interactions. In order to fully characterize within-die variations, delays must be measured in the context of actual core-logic macros. Doing so requires the use of an embedded test structure, as opposed to traditional scribe-line test structures such as ring oscillators (RO). Accurate measurements of within-die variations can be used, e.g., to better tune models to actual hardware (model-to-hardware correlations). In this research project, I propose an embedded test structure called REBEL (Regional dELay BEhavior) that is designed to measure path delays in a minimally invasive fashion, and whose architecture measures the path delays more accurately. Design-for-manufacturability (DFM) analysis is done on 90 nm ASIC chips and 28 nm Zynq 7000 series FPGA boards. I present ASIC results on within-die path delay variations in a floating-point unit (FPU) with 5 pipeline stages, fabricated in IBM's 90 nm technology and used as a test vehicle in chip experiments carried out at nine different temperature/voltage (TV) corners. Experimental data has also been analyzed for path delay variations in short vs. long paths. FPGA results on within-die variation and die-to-die variations on an Advanced Encryption Standard (AES) implementation using a single pipelined stage are also presented. Other analyses performed on the calibrated path delays include flip-flop propagation delays for both rising and falling edges (tpHL and tpLH), uncertainty analysis, path distribution analysis, short versus long path variations, and mid-length path within-die variation. I also analyze the impact on delay when the chips are subjected to industrial-level temperature and voltage variations. From the experimental results, it has been established that the proposed REBEL provides capabilities similar to an off-chip logic analyzer, i.e., it is able to capture the temporal behavior of the signal over time, including any static and dynamic hazards that may occur on the tested path. The ASIC results further show that path delays are correlated to the launch-capture (LC) interval used to time them. Therefore, calibration as proposed in this work must be carried out in order to obtain an accurate analysis of within-die variations. Results on ASIC chips show that short paths can vary up to 35% on average, while long paths vary up to 20% at nominal temperature and voltage. A similar trend occurs for within-die variations of mid-length paths, where the magnitudes reduce to 20% and 5%, respectively.
    The magnitude of the delay variations in both these analyses increases as temperature and voltage are changed to increase performance. High levels of within-die delay variation are undesirable from a design perspective, but they represent a rich source of entropy for applications that make use of 'secrets', such as authentication, hardware metering and encryption. Physical unclonable functions (PUFs) are a class of primitives that leverage within-die variations as a means of generating random bit strings for these types of applications, including hardware security and trust. The Zynq FPGA die-to-die and within-die variation study shows that on average there is 5% within-die variation, and the range of die-to-die variation can go up to 3 ns. The die-to-die variations can be explored in much further detail to study the variations' spatial dependence.
    Additionally, I carried out research in the area of data mining to cater for big data, focusing on decision tree classification (DTC) to speed up the classification step in a hardware implementation. For this purpose, I devised a pipelined architecture for axis-parallel binary decision tree classification that meets requirements on execution time and minimal resource usage in terms of area. The motivation for this work is that analyzing larger data sets has created abundant opportunities for algorithmic and architectural developments and data-mining innovations, thus creating a great demand for faster execution of these algorithms and leading towards improved execution time and resource utilization. Decision trees (DT) have long been implemented in software. Although the software implementation of DTC is highly accurate, the execution times and the resource utilization still require improvement to meet the computational demands of the ever-growing industry. On the other hand, hardware implementation of DT has not been thoroughly investigated or reported in detail. Therefore, I propose a hardware acceleration with a pipelined architecture that incorporates a parallel approach to acquiring the data, with parallel engines working independently on different partitions of the data. Each engine also processes the data in a pipelined fashion to utilize the resources more efficiently and reduce the time for processing all the data records/tuples. Experimental results show that the proposed hardware acceleration of classification algorithms increases throughput by reducing the number of clock cycles required to process the data and generate the results, and it requires minimal resources and is hence area-efficient. This architecture also enables algorithms to scale with increasingly large and complex data sets. We developed the DTC algorithm in detail and explored techniques for adapting it to a hardware implementation successfully. This system is 3.5 times faster than the existing hardware implementation of classification.
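    To make the classification step concrete, the sketch below is a plain software model of axis-parallel binary decision tree classification, the operation the proposed pipelined engines accelerate: each node compares one feature against a threshold and branches until a leaf label is reached. The flattened node array mirrors the kind of layout that maps onto one pipeline stage per tree level; the tree and its thresholds are made-up examples, not results from this work.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Node:
            feature: int                # index of the feature compared at this node
            threshold: float            # axis-parallel split value
            left: int                   # index of left child in the node array
            right: int                  # index of right child in the node array
            label: Optional[int] = None # set only on leaves

        # Flattened node array (illustrative tree with two internal nodes, three leaves).
        tree: List[Node] = [
            Node(feature=0, threshold=2.5, left=1, right=2),
            Node(feature=1, threshold=0.7, left=3, right=4),
            Node(feature=0, threshold=0.0, left=-1, right=-1, label=1),
            Node(feature=0, threshold=0.0, left=-1, right=-1, label=0),
            Node(feature=0, threshold=0.0, left=-1, right=-1, label=1),
        ]

        def classify(sample: List[float]) -> int:
            # Walk from the root, one comparison per level, until a leaf is reached.
            i = 0
            while tree[i].label is None:
                node = tree[i]
                i = node.left if sample[node.feature] <= node.threshold else node.right
            return tree[i].label

        print([classify(s) for s in ([1.0, 0.2], [1.0, 0.9], [3.0, 0.5])])  # [0, 1, 1]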

    Space Shuttle/TDRSS communication and tracking systems analysis

    In order to evaluate the technical and operational problem areas and provide recommendations, the enhancements to the Tracking and Data Relay Satellite System (TDRSS) and Shuttle must be evaluated through simulation and analysis. These enhancement techniques must first be characterized, then modeled mathematically, and finally incorporated as updates into LinCsim (an analytical simulation package). The LinCsim package can then be used as an evaluation tool. Three areas of potential enhancement were identified: Shuttle payload accommodations, TDRSS SSA and KSA services, and the Shuttle tracking system and navigation sensors. Recommendations for each area were discussed.

    Ultra Low-Power Frequency Synthesizers for Duty Cycled IoT radios

    Internet of Things (IoT), which is one of the main talking points in the electronics industry today, consists of a number of highly miniaturized sensors and actuators which sense the physical environment around us and communicate that information to a central information hub for further processing. This agglomeration of miniaturized sensors allows the system to be deployed in previously impossible arenas such as healthcare (Body Area Networks - BAN), industrial automation, real-time monitoring of environmental parameters and so on, thereby greatly improving the quality of life. Since IoT devices are usually untethered, their energy sources are limited (typically battery powered or energy scavenging) and hence they have to consume very low power. Today's IoT systems employ radios that use communication protocols like Bluetooth Smart, which means that they communicate at data rates of a few hundred kb/s to a few Mb/s while consuming around a few mW of power. Even though the power dissipation of these radios has been decreasing steadily over the years, it seems to have reached a lower limit in recent times. Hence, there is a need to explore other avenues to further reduce this dissipation so as to further improve the energy autonomy of the IoT node. Duty cycling has emerged as a promising alternative in this sense, since it involves radios transmitting very short bursts of data at high rates and sleeping the rest of the time. In addition, high data rates offer the added advantage of reducing network congestion, which has become a major problem in IoT owing to the increase in the number of sensor nodes as well as the volume of data they send. But, as the average power (energy) dissipated decreases due to duty cycling, the energy overhead associated with the start-up phase of the radio becomes comparable to it. Therefore, in order to take full advantage of duty cycling, the radio should be capable of being turned ON/OFF almost instantaneously. Furthermore, the radio of the future should also be able to support easy frequency hopping to improve the system efficiency from an interference point of view. In other words, in addition to high data rate capability, the next-generation radios must also be highly agile and have a low energy overhead. All these factors, viz. data rate, agility and overhead, are mainly dependent on the radio's frequency synthesizer, and therefore emphasis needs to be laid on developing new synthesizer architectures which are also amenable to technology scaling. This thesis deals with the evolution of one such all-digital frequency synthesizer, with each step addressing one of the aforementioned issues. In order to reduce the energy overhead of the synthesizer, FBAR resonators (which are a class of MEMS resonators) are used as the frequency reference instead of a traditional quartz crystal. The FBAR resonators aid the design of fast-startup oscillators, as opposed to the long latency associated with the start-up of the crystal oscillator. In addition, the frequency stability of the FBAR lends itself to an open-loop architecture which can support very high data rates. Another advantage of the open-loop architecture is frequency agility, which aids easy channel switching for multi-hop architectures, as demonstrated in this thesis.
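    The trade-off described above can be illustrated with a back-of-the-envelope calculation: under heavy duty cycling the energy of the start-up phase becomes comparable to, or larger than, the energy of the burst itself, which is why a fast-startup reference matters. All numbers below are assumed for illustration and are not taken from the thesis.

        p_active_w   = 5e-3      # assumed active TX/RX power
        data_rate    = 10e6      # assumed burst data rate, 10 Mbit/s
        payload_bits = 256       # assumed bits sent per wake-up
        p_startup_w  = 5e-3      # assumed power drawn during start-up

        burst_time = payload_bits / data_rate          # 25.6 us on air
        e_burst = p_active_w * burst_time              # energy spent sending the payload

        for t_startup in (1e-3, 100e-6, 10e-6):        # slow vs. fast frequency references
            e_startup = p_startup_w * t_startup
            print(f"start-up {t_startup * 1e6:7.1f} us -> "
                  f"overhead / burst energy = {e_startup / e_burst:5.1f}x")

    With these assumed numbers, a millisecond-class start-up dominates the per-wake-up energy, while a microsecond-class start-up is only a small fraction of the burst energy.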

    Advanced Syncom, volume 1 Summary report

    Synchronous communications satellite configuration, instrumentation, handling and test equipment, and systems design.