2,329 research outputs found

    Within-Die Delay Variation Measurement And Analysis For Emerging Technologies Using An Embedded Test Structure

    Both random and systematic within-die process variations (PV) are growing more severe with shrinking geometries and increasing die size. Escalation in delay and power variations with reductions in feature size places higher demands on the accuracy of variation models, whose availability can be used to improve yield, and thus the profitability and product quality of fabricated integrated circuits (ICs). Sources of within-die variations include optical source limitations and layout-based systematic effects (pitch, line-width variability, and microscopic etch loading). Unfortunately, accurate models of within-die PVs are becoming more difficult to derive because of their increasing sensitivity to design context. Embedded test structures (ETS) continue to play an important role in the development of PV models and as a mechanism to improve correlations between hardware and models. Variations in path delays are increasing with scaling, and are increasingly affected by 'neighborhood' interactions. In order to fully characterize within-die variations, delays must be measured in the context of actual core-logic macros. Doing so requires the use of an embedded test structure, as opposed to traditional scribe-line test structures such as ring oscillators (RO). Accurate measurements of within-die variations can be used, e.g., to better tune models to actual hardware (model-to-hardware correlation). In this research project, I propose an embedded test structure called REBEL (Regional dELay BEhavior) that is designed to measure path delays in a minimally invasive fashion, and whose architecture measures path delays more accurately. Design-for-manufacturability (DFM) analysis is performed on 90 nm ASIC chips and 28 nm Zynq 7000 series FPGA boards.
I present ASIC results on within-die path delay variations in a floating-point unit (FPU) with 5 pipeline stages, fabricated in IBM's 90 nm technology and used as a test vehicle in chip experiments carried out at nine different temperature/voltage (TV) corners. Experimental data have also been analyzed for path delay variations in short vs. long paths. FPGA results on within-die and die-to-die variations in an Advanced Encryption Standard (AES) implementation using a single pipeline stage are also presented. Other analyses performed on the calibrated path delays include flip-flop propagation delays for both rising and falling edges (tpHL and tpLH), uncertainty analysis, path distribution analysis, short versus long path variations, and mid-length path within-die variation. I also analyze the impact on delay when the chips are subjected to industrial-level temperature and voltage variations. The experimental results establish that the proposed REBEL provides capabilities similar to an off-chip logic analyzer, i.e., it is able to capture the temporal behavior of the signal over time, including any static and dynamic hazards that may occur on the tested path. The ASIC results further show that path delays are correlated with the launch-capture (LC) interval used to time them; therefore, calibration as proposed in this work must be carried out to obtain an accurate analysis of within-die variations. Results on ASIC chips show that short paths can vary up to 35% on average, while long paths vary up to 20% at nominal temperature and voltage. A similar trend occurs for within-die variations of mid-length paths, where the magnitudes are reduced to 20% and 5%, respectively. The magnitude of delay variations in both these analyses increases as temperature and voltage are changed to increase performance.
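The within-die variation percentages quoted above can be illustrated with a minimal sketch: the peak-to-peak spread of calibrated path delays, expressed as a percentage of the mean delay. The function name and the sample data are illustrative assumptions, not taken from the REBEL test structure itself.

```python
# Hypothetical within-die variation metric: peak-to-peak spread of calibrated
# path delays across die locations, as a percentage of the mean delay.

def within_die_variation(delays):
    """Return (max - min) / mean * 100 for a list of path delays."""
    mean = sum(delays) / len(delays)
    return (max(delays) - min(delays)) / mean * 100.0

# Example: calibrated delays (ns) for one short path at several die locations.
short_path_delays = [1.00, 1.12, 0.95, 1.30, 1.05]
print(round(within_die_variation(short_path_delays), 1))  # spread in percent
```

Under this definition, a short path whose delays spread over roughly a third of their mean would register near the 35% figure reported for short paths at nominal TV.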
The high level of within-die delay variation is undesirable from a design perspective, but it represents a rich source of entropy for applications that make use of 'secrets', such as authentication, hardware metering, and encryption. Physical unclonable functions (PUFs) are a class of primitives that leverage within-die variations as a means of generating random bit strings for these types of applications, including hardware security and trust. A study of die-to-die and within-die variation on Zynq FPGAs shows an average within-die variation of 5%, while die-to-die variation can range up to 3 ns. The die-to-die variations could be explored in further detail to study their spatial dependence. Additionally, I carried out research in the area of data mining for big data, focusing on decision tree classification (DTC) to speed up the classification step through a hardware implementation. For this purpose, I devised a pipelined architecture for axis-parallel binary decision tree classification that meets execution-time requirements with minimal resource usage in terms of area. The motivation for this work is that analyzing ever-larger data sets has created abundant opportunities for algorithmic, architectural, and data-mining innovations, and thus a great demand for faster execution of these algorithms. Decision trees (DT) have traditionally been implemented in software. Although software implementations of DTC are highly accurate, their execution times and resource utilization still require improvement to meet the computational demands of the ever-growing industry. On the other hand, hardware implementation of DT has not been thoroughly investigated or reported in detail.
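The idea of turning within-die variation into secret bits can be sketched in a few lines. This is an illustrative example of the general delay-comparison PUF principle, not the dissertation's specific design: each response bit comes from comparing the delays of two nominally identical paths, so random within-die variation decides the outcome.

```python
# Illustrative delay-comparison PUF (not the dissertation's design): each pair
# of nominally identical paths yields one response bit, decided by which path
# happens to be slower due to within-die process variation.

def puf_bits(delay_pairs):
    """Return one bit per pair: 1 if the first path is slower, else 0."""
    return [1 if a > b else 0 for a, b in delay_pairs]

# Hypothetical measured delays (ns) for four path pairs on one chip.
pairs = [(1.03, 1.01), (0.98, 1.00), (1.10, 1.07), (0.95, 0.97)]
print(puf_bits(pairs))  # -> [1, 0, 1, 0]
```

Because the delay differences stem from uncontrollable fabrication variation, the resulting bit string differs from chip to chip, which is what makes it usable for authentication and key generation.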
Therefore, I propose a pipelined hardware accelerator that acquires data in parallel, with multiple engines working independently on different partitions of the data. Each engine processes its data in a pipelined fashion to utilize resources more efficiently and reduce the time needed to process all the data records/tuples. Experimental results show that the proposed hardware acceleration of classification algorithms increases throughput, by reducing the number of clock cycles required to process the data and generate the results, while requiring minimal resources, making it area efficient. The architecture also enables the algorithm to scale to increasingly large and complex data sets. We developed the DTC algorithm in detail and successfully explored techniques for adapting it to a hardware implementation. The system is 3.5 times faster than the existing hardware implementation of classification.
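The classification step that each hardware engine performs can be sketched in software. This is a minimal sketch of generic axis-parallel binary decision tree traversal — each internal node compares one attribute against a threshold — with an illustrative node encoding, not the thesis's hardware record format.

```python
# Software sketch of axis-parallel binary decision tree classification, the
# operation the pipelined hardware engines implement. Node encoding is an
# illustrative assumption: internal node = (attribute index, threshold,
# left subtree, right subtree); leaf = class label.

def classify(node, record):
    """Walk the tree, branching on one attribute per node, to a leaf label."""
    while isinstance(node, tuple):
        attr, thresh, left, right = node
        node = left if record[attr] <= thresh else right
    return node

# Tiny example tree: split on attribute 0 at 5.0, then attribute 1 at 2.0.
tree = (0, 5.0, "A", (1, 2.0, "B", "C"))
print(classify(tree, [6.1, 1.5]))  # -> "B"
```

In the hardware version, partitions of the records stream through parallel engines, each evaluating one node comparison per pipeline stage rather than looping as above.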

    Development of a portable time-domain system for diffuse optical tomography of the newborn infant brain

    Conditions such as hypoxic-ischaemic encephalopathy (HIE) and perinatal arterial ischaemic stroke (PAIS) are causes of lifelong neurodisability in a few hundred infants born in the UK each year. Early diagnosis and treatment are key, but no effective bedside detection and monitoring technology is available. Non-invasive, near-infrared techniques have been explored for several decades, but progress has been inhibited by the lack of a portable technology and by the reliance on intensity measurements, which are strongly sensitive to uncertain and variable coupling of light sources and detectors to the scalp. A technique known as time-domain diffuse optical tomography (TD-DOT) uses measurements of photon flight times between sources and detectors placed on the scalp. Mean flight time is largely insensitive to the coupling, and variation in mean flight time can reveal spatial variation in blood volume and oxygenation in the regions of brain sampled by the measurements. While the cost, size, and high power consumption of such technology have hitherto prevented development of a portable imaging system, recent advances in silicon technology are enabling portable, low-power TD-DOT devices to be built. A prototype TD-DOT system is proposed and demonstrated, with the long-term aim of designing a portable system based on independent modules, each supporting a time-of-flight detector and a pulsed source. The operation of components that can be integrated in a portable system is demonstrated: silicon photodetectors, and integrated-circuit-based signal conditioning and time detection -- built using a combination of off-the-shelf components and reconfigurable hardware, standard computer interfaces, and data acquisition and calibration software. The only external elements are a PC and a pulsed laser source. This thesis describes the design process, and results are reported on the performance of a 2-channel system with online histogram generation, used for phantom imaging.
Possible future development of the hardware is also discussed.
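The coupling-insensitive quantity this system measures can be sketched directly: the mean photon flight time is the count-weighted average over the time-of-flight histogram bins. The function name and bin values below are illustrative assumptions, not taken from the thesis's acquisition software.

```python
# Hedged sketch of the key TD-DOT quantity described above: the mean photon
# flight time from a time-of-flight histogram. Scaling all counts by a coupling
# factor leaves this mean unchanged, which is why it is coupling-insensitive.

def mean_flight_time(bin_centers_ps, counts):
    """Count-weighted mean time of flight over histogram bins (picoseconds)."""
    total = sum(counts)
    return sum(t * n for t, n in zip(bin_centers_ps, counts)) / total

bins = [100, 200, 300, 400, 500]       # bin centers in picoseconds
counts = [5, 40, 30, 20, 5]            # photon counts per bin
print(mean_flight_time(bins, counts))  # -> 280.0
```

Note that multiplying every count by the same coupling loss cancels in the ratio, unlike a raw intensity measurement, which scales directly with the loss.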

    Development of high temperature containerless processing equipment and the design and evaluation of associated systems required for microgravity materials processing and property measurements

    The development of high temperature containerless processing equipment and the design and evaluation of associated systems required for microgravity materials processing and property measurements are discussed. Efforts were directed towards the following task areas: design and development of a High Temperature Acoustic Levitator (HAL) for containerless processing and property measurements at high temperatures; testing of the HAL module to establish this technology for use as a positioning device for microgravity uses; construction and evaluation of a brassboard hot-wall Acoustic Levitation Furnace; construction and evaluation of a noncontact temperature measurement (NCTM) system based on an AGEMA thermal imaging camera; construction of a prototype Division of Amplitude Polarimetric Pyrometer for NCTM of levitated specimens; evaluation of and recommendations for techniques to control contamination in containerless materials processing chambers; and evaluation of techniques for heating specimens to high temperatures for containerless materials experimentation.

    Non-invasive methods for testing the integrity of bulkheads and/or deckheads during a fire

    The project aimed to investigate and evaluate a possible non-invasive test method to assess the integrity of bulkheads and/or deckheads during a fire. Currently there is no accurate method for determining the integrity of bulkheads and/or deckheads during a fire on board a ship, which places a higher risk on firefighters. A literature review was conducted, following which it was determined that Air-Coupled Ultrasonics (ACU) was the most viable non-invasive test method for the project. The ACU test method was then optimised for use under fire conditions. Aluminium, GFRP, and CFRP plates were placed under fire conditions using an LPG bottle and burner, and underwent ACU testing at 10 °C increments. Results were evaluated in relation to the suitability of the chosen non-invasive test method, the impact of temperature on results and material properties, and structural integrity issues. The results showed that the Lamb wave velocity changed greatly as the elastic properties of the material changed under the thermal loading imposed on the plates by the fire. Nevertheless, before ACU is suitable for in-service implementation, further research and development is required into ultrasonic transducer bandwidth, waveform generator pulse, oscilloscope choice, depth of penetration, and portability, to ensure accuracy and reliability of results.
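The measurement behind these results can be sketched simply: a Lamb wave's group velocity is estimated from the propagation distance and its time of flight, and as heating softens the plate (lowering its elastic modulus) the arrival time lengthens and the computed velocity falls. The numbers below are made up for illustration, not taken from the project's test data.

```python
# Illustrative sketch of the ACU measurement above: estimating Lamb wave
# group velocity from propagation distance and measured time of flight.
# Distances and arrival times are hypothetical example values.

def lamb_wave_velocity(distance_m, time_of_flight_s):
    """Group velocity (m/s) = propagation distance / time of flight."""
    return distance_m / time_of_flight_s

# As the plate heats and its elastic modulus drops, the wave arrives later
# over the same transducer spacing, so the computed velocity falls.
v_cold = lamb_wave_velocity(0.30, 55e-6)  # cool plate, earlier arrival
v_hot = lamb_wave_velocity(0.30, 75e-6)   # heated plate, later arrival
print(v_cold > v_hot)  # -> True
```

Tracking this velocity drop over time is what would let a firefighting crew infer, without touching the hot side, that a bulkhead's stiffness is degrading.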

    Index to NASA Tech Briefs, January - June 1967

    Technological innovations for January-June 1967, abstracts and subject index.

    Research Reports: 1984 NASA/ASEE Summer Faculty Fellowship Program

    A NASA/ASEE Summer Faculty Fellowship Program was conducted at the Marshall Space Flight Center (MSFC). The basic objectives of the program are: (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA Centers. The Faculty Fellows spent ten weeks at MSFC engaged in a research project compatible with their interests and background, working in collaboration with a NASA/MSFC colleague. This document is a compilation of Fellows' reports on their research during the summer of 1984. Topics covered include: (1) data base management; (2) computational fluid dynamics; (3) space debris; (4) X-ray gratings; (5) atomic oxygen exposure; (6) protective coatings for SSME; (7) cryogenics; (8) thermal analysis measurements; (9) solar wind modelling; and (10) binary systems.

    NASA Tech Briefs, September 2008

    Topics covered include: Nanotip Carpets as Antireflection Surfaces; Nano-Engineered Catalysts for Direct Methanol Fuel Cells; Capillography of Mats of Nanofibers; Directed Growth of Carbon Nanotubes Across Gaps; High-Voltage, Asymmetric-Waveform Generator; Magic-T Junction Using Microstrip/Slotline Transitions; On-Wafer Measurement of a Silicon-Based CMOS VCO at 324 GHz; Group-III Nitride Field Emitters; HEMT Amplifiers and Equipment for their On-Wafer Testing; Thermal Spray Formation of Polymer Coatings; Improved Gas Filling and Sealing of an HC-PCF; Making More-Complex Molecules Using Superthermal Atom/Molecule Collisions; Nematic Cells for Digital Light Deflection; Improved Silica Aerogel Composite Materials; Microgravity, Mesh-Crawling Legged Robots; Advanced Active-Magnetic-Bearing Thrust-Measurement System; Thermally Actuated Hydraulic Pumps; A New, Highly Improved Two-Cycle Engine; Flexible Structural-Health-Monitoring Sheets; Alignment Pins for Assembling and Disassembling Structures; Purifying Nucleic Acids from Samples of Extremely Low Biomass; Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery; UV-Resistant Non-Spore-Forming Bacteria From Spacecraft-Assembly Facilities; Hard-X-Ray/Soft-Gamma-Ray Imaging Sensor Assembly for Astronomy; Simplified Modeling of Oxidation of Hydrocarbons; Near-Field Spectroscopy with Nanoparticles Deposited by AFM; Light Collimator and Monitor for a Spectroradiometer; Hyperspectral Fluorescence and Reflectance Imaging Instrument; Improving the Optical Quality Factor of the WGM Resonator; Ultra-Stable Beacon Source for Laboratory Testing of Optical Tracking; Transmissive Diffractive Optical Element Solar Concentrators; Delaying Trains of Short Light Pulses in WGM Resonators; Toward Better Modeling of Supercritical Turbulent Mixing; JPEG 2000 Encoding with Perceptual Distortion Control; Intelligent Integrated Health Management for a System of Systems; Delay Banking for Managing Air Traffic; and Spline-Based Smoothing of Airfoil Curvatures.

    Index to 1986 NASA Tech Briefs, volume 11, numbers 1-4

    Short announcements of new technology derived from the R&D activities of NASA are presented. These briefs emphasize information considered likely to be transferrable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This index for 1986 Tech Briefs contains abstracts and four indexes: subject, personal author, originating center, and Tech Brief number. The following areas are covered: electronic components and circuits, electronic systems, physical sciences, materials, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.