
    Hybrid computer Monte-Carlo techniques

    Hybrid analog-digital computer systems for Monte Carlo method application

    Analog, hybrid, and digital simulation

    Analog, hybrid, and digital computerized simulation technique

    FPGA Implementation of the Front-End of a DOCSIS 3.0 Receiver

    The introduction of cable television (CATV) in the 1940s and 1950s has significantly influenced communications technology. Originally supplying only one-way television programming, the CATV industry recognized the potential of two-way communications. Starting with the introduction of pay-per-view services in the 1980s, two-way communications over CATV networks eventually expanded into supplying internet access services. The increased demand for CATV services, and thus for CATV equipment, has led the CATV industry to develop interoperability standards. The primary standard now used by the CATV industry is the Data Over Cable Service Interface Specification (DOCSIS). DOCSIS defines both the upstream (data towards the CATV provider) and downstream (data towards the CATV customer) transmission channels, including specifications for the modulators and demodulators used in these channels. The number of manufacturers of CATV modulators and demodulators has greatly increased over the last twenty years and continues to grow. As the number of competing CATV equipment suppliers increases, manufacturers must look for ways to remain competitive by reducing the time-to-market and costs associated with equipment design, and by keeping their designs flexible enough to adapt to improvements in DOCSIS. In the past, manufacturers have primarily used Application Specific Integrated Circuits (ASICs) to implement digital hardware designs for CATV equipment. ASICs have a very high initial setup cost and do not allow system modifications without a complete redesign. More recently, Field Programmable Gate Array (FPGA) technology has allowed manufacturers both to modify their digital hardware designs without a complete physical redesign and to reduce the initial setup cost. Although ASICs are, in the long term, a cheaper alternative to FPGAs when produced in quantity, FPGAs provide quicker time-to-market in new product development and allow changes to be made after initial release. This ability to change designs after release, together with the quicker time-to-market, has led manufacturers to adopt FPGAs in new products. A critical component in the upstream channel of a DOCSIS-compliant system is the Quadrature Amplitude Modulation (QAM) receiver. The data received at the QAM receiver have undergone several impairments, including additive noise, timing offset, and frequency and phase mismatches between the transmitted modulated signal and the signal received at the demodulator. It is the function of the front-end of the receiver to correct for these impairments. This thesis presents methods for, and an example of, the design and implementation of a DOCSIS-compliant QAM receiver front-end that corrects for the timing, phase, and frequency impairments experienced in the upstream communication channel when additive noise is present. The circuits presented are designed and implemented to reduce hardware costs when using FPGA technology. In addition, the circuits do not use proprietary logic, which gives designers more flexibility when implementing their own demodulator front-end circuitry. The FPGA implementation presented in this thesis achieves an average MER of 54.3 dB in a no-noise channel and close to 31 dB MER in a 25 dBc AWGN channel. The overall design uses 65 dedicated 18-bit by 18-bit multipliers and 2,970 bytes of RAM to implement the digital front-end of the receiver.
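    As a concrete illustration of the MER (modulation error ratio) figures quoted above, the short Python sketch below computes MER by comparing received symbols against the ideal constellation. This is not the thesis's FPGA design; the 16-QAM constellation, stream length, and noise level are illustrative assumptions.

        # MER sketch: signal power over error-vector power, in dB.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical 16-QAM constellation (one of the upstream formats DOCSIS allows).
        levels = np.array([-3.0, -1.0, 1.0, 3.0])
        ideal = rng.choice(levels, 10_000) + 1j * rng.choice(levels, 10_000)

        # AWGN roughly 25 dB below the mean constellation power.
        sig_power = np.mean(np.abs(ideal) ** 2)
        noise_power = sig_power / 10 ** (25 / 10)
        noise = np.sqrt(noise_power / 2) * (rng.standard_normal(10_000)
                                            + 1j * rng.standard_normal(10_000))
        received = ideal + noise

        # MER (dB) = 10 log10( sum |ideal|^2 / sum |received - ideal|^2 )
        error = received - ideal
        mer_db = 10 * np.log10(np.sum(np.abs(ideal) ** 2) / np.sum(np.abs(error) ** 2))
        print(f"MER = {mer_db:.1f} dB")  # about 25 dB for this illustrative noise level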

    Design of a digital compression technique for shuttle television

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended that provides high-resolution, slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
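    To make the DPCM idea concrete, the Python sketch below shows a nonadaptive two-dimensional DPCM encoder: each pixel is predicted from its already-reconstructed left and upper neighbours, and only the quantized prediction error is coded. The averaging predictor and step size are illustrative assumptions, not the report's actual coder parameters.

        # 2-D DPCM sketch: predict from reconstructed neighbours, quantize the
        # residual, and track the decoder's reconstruction to avoid drift.
        import numpy as np

        def dpcm2d_encode(img, step=8):
            """Encode an 8-bit grayscale frame; return residual codes and reconstruction."""
            h, w = img.shape
            recon = np.zeros((h, w))            # what the decoder will reconstruct
            codes = np.zeros((h, w), dtype=int)
            for y in range(h):
                for x in range(w):
                    left = recon[y, x - 1] if x > 0 else 128.0
                    above = recon[y - 1, x] if y > 0 else 128.0
                    pred = 0.5 * (left + above)             # two-dimensional prediction
                    code = int(round((float(img[y, x]) - pred) / step))
                    codes[y, x] = code                      # only this is transmitted
                    recon[y, x] = min(255.0, max(0.0, pred + code * step))
            return codes, recon

        frame = np.arange(64, dtype=np.uint8).reshape(8, 8) * 3
        codes, recon = dpcm2d_encode(frame)
        print(np.abs(frame - recon).max())  # bounded by step/2 = 4 when no clipping occurs

    Because the decoder runs the same predictor, transmitting only the small residual codes is what yields the bandwidth reduction.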

    inSense: A Variation and Fault Tolerant Architecture for Nanoscale Devices

    Transistor technology scaling has been the driving force in improving the size, speed, and power consumption of digital systems. As devices approach atomic size, however, their reliability and performance are increasingly compromised due to reduced noise margins, difficulties in fabrication, and emergent nano-scale phenomena. Scaled CMOS devices, in particular, suffer from process variations such as random dopant fluctuation (RDF) and line edge roughness (LER), transistor degradation mechanisms such as negative-bias temperature instability (NBTI) and hot-carrier injection (HCI), and increased sensitivity to single event upsets (SEUs). Consequently, future devices may exhibit reduced performance, diminished lifetimes, and poor reliability. This research proposes a variation and fault tolerant architecture, the inSense architecture, as a circuit-level solution to the problems induced by the aforementioned phenomena. The inSense architecture entails augmenting circuits with introspective and sensory capabilities that dynamically detect and compensate for process variations, transistor degradation, and soft errors. This approach creates "smart" circuits able to function despite the use of unreliable devices and is applicable to current CMOS technology as well as next-generation devices using new materials and structures. Furthermore, this work presents an automated prototype implementation of the inSense architecture targeted to CMOS devices, which is evaluated via implementation in ISCAS '85 benchmark circuits. The automated prototype implementation is functionally verified and characterized: it is found that error detection capability (with error windows from ≈30–400 ps) can be added for less than 2% area overhead for circuits of non-trivial complexity. Single event transient (SET) detection capability (configurable with target set-points) is found to be functional, although it generally tracks the standard DMR implementation with respect to overheads.
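    Although the inSense sensors are circuit-level structures, the error-window idea can be sketched behaviourally in Python: sample a signal at the clock edge, sample it again after a short window, and flag a timing error if the two disagree. The window length and timings below are illustrative assumptions, not the paper's parameters.

        # Behavioural sketch of error-window detection: a shadow sample taken a
        # short window after the clock edge disagrees with the main sample only
        # if the data transitioned inside the window, i.e. arrived too late.
        def window_detect(waveform, clk_edge_ps, window_ps=200):
            """waveform maps time (ps) to a logic value (0 or 1)."""
            main_sample = waveform(clk_edge_ps)
            shadow_sample = waveform(clk_edge_ps + window_ps)
            return main_sample, main_sample != shadow_sample

        # Data settling at t = 1100 ps misses a capture edge at t = 1000 ps:
        late_data = lambda t: 1 if t >= 1100 else 0
        print(window_detect(late_data, clk_edge_ps=1000))  # (0, True): error flagged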

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and are thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches previously introduced for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
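    As a toy illustration of the kind of automation the survey covers (hypothetical data, not an example from the paper): a learned model can serve as a fast surrogate for a slow analysis step, for instance predicting a path delay from simple netlist features by least-squares regression.

        # Toy ML-for-VLSI sketch: fit a linear model mapping hypothetical
        # netlist features (gate count, fanout, wire length) to path delay,
        # then query it instead of re-running a slower timing analysis.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 500
        features = rng.uniform(1, 100, size=(n, 3))        # gates, fanout, wirelength
        true_w = np.array([2.0, 5.0, 0.8])                 # synthetic ground truth
        delay_ps = features @ true_w + rng.normal(0, 10, n)

        X = np.column_stack([features, np.ones(n)])        # add a bias column
        w, *_ = np.linalg.lstsq(X, delay_ps, rcond=None)   # least-squares fit

        new_path = np.array([40.0, 12.0, 55.0, 1.0])
        print(f"predicted delay ≈ {new_path @ w:.0f} ps")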

    Delay Measurements and Self Characterisation on FPGAs

    This thesis examines new timing measurement methods for the self delay characterisation of Field-Programmable Gate Array (FPGA) components and for the delay measurement of complex circuits on FPGAs. Two novel measurement techniques, based on analysis of a circuit's output failure rate and transition probability, are proposed for the accurate, precise, and efficient measurement of propagation delays. The transition probability based method is especially attractive, since it requires no modifications to the circuit-under-test and few hardware resources, making it an ideal method for the physical delay analysis of FPGA circuits. The relentless advancement of process technology has led to smaller and denser transistors in integrated circuits. While FPGA users benefit from this in terms of increased hardware resources for more complex designs, the actual productivity of FPGAs in terms of timing performance (operating frequency, latency, and throughput) has lagged behind the potential of the improved technology, due to delay variability in FPGA components and the inaccuracy of the timing models used in FPGA timing analysis. The ability to measure the delay of any arbitrary circuit on an FPGA offers many opportunities for on-chip characterisation and physical timing analysis, allowing delay variability to be accurately tracked and variation-aware optimisations to be developed, reducing the productivity gap observed in today's FPGA designs. The measurement techniques are developed into complete self measurement and characterisation platforms in this thesis, demonstrating their practical use in actual FPGA hardware for cross-chip delay characterisation and the accurate delay measurement of both complex combinational and sequential circuits, further reinforcing their position in solving the delay variability problem in FPGAs.
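    As a rough illustration of the failure-rate approach (a Monte Carlo sketch, not the thesis's on-chip circuitry): clock a path with progressively shorter periods, record how often the capture edge beats the data, and read the delay off the failure-rate curve. The Gaussian delay model and its parameters are assumptions for illustration.

        # Failure-rate delay measurement sketch: sweep the clock period and find
        # where the launched transition fails to arrive 50% of the time; that
        # period estimates the path's mean propagation delay.
        import numpy as np

        rng = np.random.default_rng(1)
        TRUE_DELAY_PS, JITTER_PS = 850.0, 15.0   # hypothetical path delay and jitter

        def failure_rate(period_ps, trials=20_000):
            arrivals = rng.normal(TRUE_DELAY_PS, JITTER_PS, trials)
            return np.mean(arrivals > period_ps)  # data missed the capture edge

        periods = np.arange(800, 901, 5)
        rates = np.array([failure_rate(p) for p in periods])
        estimate = periods[np.argmin(np.abs(rates - 0.5))]
        print(f"estimated propagation delay ≈ {estimate} ps")  # ≈ 850 ps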

    The Logic of Random Pulses: Stochastic Computing.

    Recent developments in the field of electronics have produced nano-scale devices whose operation can only be described in probabilistic terms. In contrast with the conventional deterministic computing that has dominated the digital world for decades, we investigate a fundamentally different technique that is probabilistic by nature, namely, stochastic computing (SC). In SC, numbers are represented by bit-streams of 0's and 1's, in which the probability of seeing a 1 denotes the value of the number. The main benefit of SC is that complicated arithmetic computation can be performed by simple logic circuits. For example, a single (logic) AND gate performs multiplication. The dissertation begins with a comprehensive survey of SC and its applications. We highlight its main challenges, which include long computation time and low accuracy, as well as the lack of general design methods. We then address some of the more important challenges. We introduce a new SC design method, called STRAUSS, that generates efficient SC circuits for arbitrary target functions. We then address the problems arising from correlation among stochastic numbers (SNs). In particular, we show that, contrary to general belief, correlation can sometimes serve as a resource in SC design. We also show that, unlike conventional circuits, SC circuits can tolerate high error rates and are hence useful in some new applications that involve nondeterministic behavior in the underlying circuitry. Finally, we show how SC's properties can be exploited in the design of an efficient vision chip that is suitable for retinal implants. In particular, we show that SC circuits can directly operate on signals with neural encoding, which eliminates the need for data conversion.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/113561/1/alaghi_1.pd
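    The AND-gate multiplication mentioned above takes only a few lines of Python to demonstrate (an illustrative sketch, not the STRAUSS method): two values in [0, 1] are encoded as independent random bit-streams, and the bitwise AND of the streams encodes their product.

        # Stochastic computing sketch: a number in [0, 1] becomes a bit-stream
        # whose probability of 1 equals the value; ANDing two *independent*
        # streams multiplies them. Correlated streams would break this, which
        # is one of the challenges the dissertation addresses.
        import numpy as np

        rng = np.random.default_rng(42)
        N = 10_000                        # stream length: accuracy vs. time trade-off

        def to_stream(value):
            """Encode value in [0, 1] as a stochastic bit-stream of length N."""
            return rng.random(N) < value

        a, b = to_stream(0.6), to_stream(0.5)
        product = a & b                   # one AND gate per clock cycle in hardware
        print(product.mean())             # ≈ 0.30 = 0.6 * 0.5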

    Cross-Layer Resiliency Modeling and Optimization: A Device to Circuit Approach

    The never-ending demand for higher performance and lower power consumption pushes the VLSI industry to scale technology further down. However, downscaling at the nano-scale leads to major challenges. Reduced reliability is one of them, arising from multiple sources, e.g., runtime variations, process variation, and transient errors. The objective of this thesis is to tackle unreliability with a cross-layer approach, from the device level up to the circuit level.