
    Intelligent editor/printer enhancements

    Microprocessor support hardware, software, and cross assemblers relating to the Motorola 6800 and 6809 processor systems were developed. Printer controller and intelligent CRT development are discussed. Provided are the user's manual, design specifications for the MC6809 version of the intelligent printer controller card, a 132-character by 64-line intelligent CRT display system using a Motorola 6809 MPU, and a one-line assembler and disassembler.

    Application-Based Analysis of Register File Criticality for Reliability Assessment in Embedded Microprocessors

    There is increasing concern to reduce cost and overheads during the development of reliable systems. Selective protection of the most critical parts of a system is a viable way to obtain a high level of reliability at a fraction of the cost. In particular, to design a selective fault mitigation strategy for processor-based systems, it is mandatory to identify and prioritize the most vulnerable registers in the register file as the best candidates for protection (hardening). This paper presents an application-based metric to estimate the criticality of each register in the microprocessor register file. The proposed metric combines three different criteria based on common features of the executed applications. The applicability and accuracy of our proposal have been evaluated on a set of applications running on different microprocessors. Results show a significant improvement in accuracy compared to previous approaches, regardless of the underlying architecture. This work was funded in part by the Spanish Ministry of Education, Culture and Sports through the project "Developing hybrid fault tolerance techniques for embedded microprocessors" (PHB2012-0158-PC).
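
    The abstract does not give the metric's formula, so the following is only a minimal sketch of the general shape of an application-based, multi-criteria register ranking: three per-register criteria are normalized and combined with fixed weights. The criteria names (access count, live time, fanout), the weights, and the example profile are all assumptions for illustration, not the authors' definitions.

        # Hypothetical sketch: rank register-file entries by a criticality
        # score combining three application-derived criteria (all names,
        # weights, and statistics below are assumed, not the paper's).

        def criticality(registers, weights=(0.5, 0.3, 0.2)):
            """registers: dict reg_name -> per-application statistics:
                 'accesses'  - reads/writes during execution
                 'live_time' - cycles the register holds a live value
                 'fanout'    - instructions consuming the register's value
            Returns register names sorted from most to least critical."""
            w_acc, w_live, w_fan = weights
            # Normalize each criterion to [0, 1] before combining.
            max_acc  = max(r['accesses']  for r in registers.values()) or 1
            max_live = max(r['live_time'] for r in registers.values()) or 1
            max_fan  = max(r['fanout']    for r in registers.values()) or 1
            score = {name: (w_acc  * r['accesses']  / max_acc
                            + w_live * r['live_time'] / max_live
                            + w_fan  * r['fanout']    / max_fan)
                     for name, r in registers.items()}
            return sorted(score, key=score.get, reverse=True)

        # Invented profile for three MIPS-like registers.
        profile = {
            '$s0': {'accesses': 420, 'live_time': 9000, 'fanout': 300},
            '$t1': {'accesses': 900, 'live_time': 1200, 'fanout': 150},
            '$ra': {'accesses':  60, 'live_time':  500, 'fanout':  40},
        }
        print(criticality(profile))  # head of the list -> harden first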

    Vertical Integration and Dis-integration of Computer Firms: A History Friendly Model of the Co-evolution of the Computer and Semiconductor Industries

    In this paper we present a history-friendly model of the changing vertical scope of computer firms during the evolution of the computer and semiconductor industries. The model is "history friendly" in that it attempts to replicate some basic, stylized qualitative features of the evolution of vertical integration on the basis of the causal mechanisms and processes which we believe explain the history. The specific question addressed in the model is set in the context of dynamic and uncertain technological and market environments, characterized by periods of technological revolution punctuating periods of relative technological stability and smooth technical progress. The model illustrates how the patterns of vertical integration and specialization in the computer industry change as a function of the evolving levels and distribution of firms' capabilities over time, and how they depend on the co-evolution of the upstream and downstream sectors. Specific conditions in each of these markets - the size of the external market, the magnitude of the technological discontinuities, the lock-in effects in demand - exert critical effects and feedbacks on market structure and on the vertical scope of firms over time.

    Cross-layer system reliability assessment framework for hardware faults

    System reliability estimation during early design phases facilitates informed decisions for the integration of effective protection mechanisms against different classes of hardware faults. When not all system abstraction layers (technology, circuit, microarchitecture, software) are factored into such an estimation model, the delivered reliability reports are necessarily over-pessimistic and thus lead to unacceptably expensive, over-designed systems. We propose a scalable, cross-layer methodology and a supporting suite of tools for accurate but fast estimation of computing system reliability. The backbone of the methodology is a component-based Bayesian model, which effectively calculates system reliability based on the masking probabilities of individual hardware and software components, considering their complex interactions. Our detailed experimental evaluation for different technologies, microarchitectures, and benchmarks demonstrates that the proposed model delivers very accurate reliability estimations (FIT rates) compared to statistically significant but slow fault injection campaigns at the microarchitecture level.
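
    As a rough illustration of the component-based idea (the abstract does not spell out the Bayesian formulation, so this is an assumed simplification, not the paper's model), the sketch below derates each component's raw fault rate by the probability that a fault escapes every masking layer; all component names, rates, and masking probabilities are invented, and the layers are treated as independent.

        # Assumed simplification of a cross-layer reliability estimate:
        # each component has a raw fault rate (FIT = failures per 1e9
        # device-hours) and masking probabilities at the circuit,
        # microarchitecture, and software layers. Only faults escaping
        # every layer contribute to the system FIT rate.

        components = {
            # name: (raw_FIT, P_mask_circuit, P_mask_uarch, P_mask_sw)
            'register_file': (120.0, 0.10, 0.55, 0.30),
            'L1_dcache':     (300.0, 0.05, 0.80, 0.40),
            'ALU':           ( 40.0, 0.20, 0.35, 0.25),
        }

        def system_fit(components):
            total = 0.0
            for raw_fit, m_ckt, m_uarch, m_sw in components.values():
                # Layers assumed independent: a fault causes a failure
                # only if it is masked by none of them.
                p_escape = (1 - m_ckt) * (1 - m_uarch) * (1 - m_sw)
                total += raw_fit * p_escape
            return total

        print(f"Estimated system FIT rate: {system_fit(components):.1f}")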

    Modernization of the Data Processing Device for the Boron Concentration Meter

    Currently, all nuclear power plants with VVER reactors use data measuring systems called Boron Concentration Meters (NAR), which are necessary for implementing boron control. An NAR consists of sensors and auxiliary devices, and includes a storage and data processing device (UNO). Both the rapid evolution of computer architectures and the associated microprocessor hardware updates on the electronics market create the need to develop a new device. The main design goals are both to optimize the hardware solutions and to exploit new software and algorithmic capabilities. A further goal of the development, beyond upgrading the circuitry and software, is to optimize the calibration and testing process of the NAR.

    Micro-threading and FPGA implementation of a RISC microprocessor : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University, Palmerston North, New Zealand

    This thesis is the outcome of research in two areas of computer technology: microprocessor and multi-processor architectures (specifically from the perspective of how differently they tolerate highly latent and non-deterministic events), and hardware design of complex digital systems containing both datapath and control (particularly microprocessors). The thesis starts by pointing out that, in order to achieve high processing speeds, current popular superscalar microprocessors (e.g. Intel Pentium, Digital Alpha) rely heavily on speculating the outcome of instruction flow in order to predict the behaviour of non-deterministic computing operations (as in loading operands from high-latency memory into the processor). This is fine only if the speculation is correct. But what if it isn't? If the speculation fails, the processor has to abandon its current decision (which has now proved to be the wrong one) for the instruction flow path taken and start all over again with the other, correct path. This wastes valuable processing time and hardware resources and reduces performance whenever speculation fails. Therefore, these processors can achieve high performance only when the majority of speculations are successful (being able to predict the right path). In an attempt to overcome these shortcomings, the first part of this thesis investigates the novel vector micro-threading architecture as an alternative approach to current superscalar-based speculative microprocessor designs. Micro-threading is based on the not-so-novel multithreading technique, which avoids speculation altogether and instead starts running a different thread of instructions while waiting for the non-determinism to be resolved. This utilizes the chip resources more efficiently, without wasting processing power. The rest of the thesis focuses on the baseline RISC processor platform, the MIPS R2000, which is reviewed first, then partially synthesized from its RTL (Register Transfer Level) description using VHDL, and then simulated and tested. This is conducted so that future research can build upon it and add the micro-threading architectural add-ons and modifications. Keywords: Micro-threading, Latency Tolerance, FPGA Synthesis, RISC Architecture, MIPS R2000 processor, VHDL.
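
    The latency-tolerance argument above can be made concrete with a toy scheduler (illustrative only; it is not the thesis's vector micro-threading design, and the opcodes, miss latency, and traces are invented): on a cache miss the core switches to another ready thread instead of speculating, and it stalls only when every thread is waiting on memory.

        from collections import deque

        MISS_LATENCY = 100   # assumed miss penalty in cycles

        def run_multithreaded(threads):
            """threads: list of op lists, 'C' = 1-cycle compute, 'M' = a
            load that misses and blocks its thread for MISS_LATENCY cycles.
            Round-robin over ready threads; returns total cycles. (A miss
            as a thread's final op is not modelled.)"""
            ready = deque(range(len(threads)))   # runnable thread ids
            pc = [0] * len(threads)              # next op per thread
            wake = []                            # (wake_cycle, tid)
            cycle = 0
            while ready or wake:
                for entry in [w for w in wake if w[0] <= cycle]:
                    wake.remove(entry)           # miss completed
                    ready.append(entry[1])
                if not ready:                    # all threads blocked
                    cycle += 1
                    continue
                tid = ready.popleft()
                op = threads[tid][pc[tid]]
                pc[tid] += 1
                cycle += 1                       # issuing costs one cycle
                if pc[tid] == len(threads[tid]):
                    continue                     # thread finished
                if op == 'M':
                    wake.append((cycle + MISS_LATENCY, tid))
                else:
                    ready.append(tid)
            return cycle

        trace = ['C'] * 10 + ['M'] + ['C'] * 10
        print(run_multithreaded([trace]))        # 1 thread: miss stalls it
        print(run_multithreaded([trace] * 4))    # 4 threads overlap misses

    With one thread the full miss latency shows up in the run time; with four, most of it is hidden behind the other threads' work, which is the effect multithreading exploits.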

    A user configurable data acquisition and signal processing system for high-rate, high channel count applications

    Real-time signal processing in plasma fusion experiments is required for control and for data reduction as plasma pulse times grow longer. The development time and cost for these high-rate, multichannel signal processing systems can be significant. This paper proposes a new digital signal processing (DSP) platform for the data acquisition system that will allow users to easily customize real-time signal processing systems to meet their individual requirements. The D-TACQ reconfigurable user in-line DSP (DRUID) system carries out the signal processing tasks in hardware co-processors (CPs) implemented in an FPGA, with an embedded microprocessor (μP) for control. In the fully developed platform, users will be able to choose co-processors from a library and configure programmable parameters through the μP to meet their requirements. The DRUID system is implemented on a Spartan 6 FPGA, on the new rear transition module (RTM-T), a field upgrade to existing D-TACQ digitizers. As proof of concept, a multiply-accumulate (MAC) co-processor has been developed, which can be configured as a digital chopper-integrator for long pulse magnetic fusion devices. The DRUID platform allows users to set options for the integrator, such as the number of masking samples. Results from the digital integrator are presented for a data acquisition system with 96 channels simultaneously acquiring data at 500 kSamples/s per channel.
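
    As an example of what such a configurable integrator might compute (the algorithm and parameter names below are assumptions for illustration, not D-TACQ's implementation), the sketch demodulates a chopped input and accumulates it, discarding a configurable number of masking samples after each chopper transition while the front end settles.

        # Hypothetical digital chopper-integrator built from a
        # multiply-accumulate step (not D-TACQ's implementation).

        def chopper_integrate(samples, chop_period, masking_samples, dt):
            """samples: ADC samples of a chopped signal.
            chop_period: samples per chopper half-cycle (polarity flips
                         at the start of each half-cycle).
            masking_samples: samples discarded after each flip.
            dt: sample interval in seconds. Returns the integral."""
            total = 0.0
            sign = 1.0
            for i, v in enumerate(samples):
                phase = i % chop_period
                if phase == 0 and i > 0:
                    sign = -sign             # demodulate: flip polarity
                if phase < masking_samples:
                    continue                 # mask the settling transient
                total += sign * v * dt       # multiply-accumulate step
            return total

        # A constant 1 V input chopped at the source arrives as an
        # alternating +/-1 V sequence; demodulation recovers roughly the
        # DC level times the integration time (less the masked samples).
        # dt = 2e-6 s matches the 500 kSamples/s rate in the abstract.
        sig = ([+1.0] * 50 + [-1.0] * 50) * 10
        print(chopper_integrate(sig, chop_period=50, masking_samples=4,
                                dt=2e-6))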

    A Fault Injection Environment for Microprocessor-based Boards

    Evaluating the faulty behaviour of low-cost microprocessor-based boards is an increasingly important issue, due to their usage in many safety critical systems. To address this issue, the paper describes a software-implemented fault injection system based on the trace exception mode available in most microprocessors. The architecture of the complete fault injection environment is proposed, integrating modules for generating a fault list, performing the injection, and gathering the results. Data gathered from some sample benchmark applications are presented. The main advantages of the approach are low cost, good portability, and high efficiency.
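
    The injection flow described here (single-step the target via the trace exception, corrupt state at a chosen point, compare against a golden run) can be illustrated with a toy model; the mini-ISA, register names, and fault model below are invented for the example and stand in for real single-stepped machine code.

        # Toy single-stepped fault injection: `run` plays the role of the
        # trace-exception handler, flipping one register bit just before a
        # chosen instruction executes. Mini-ISA and fault model invented.

        def step(regs, instr):
            op, dst, a, b = instr
            if op == 'li':    regs[dst] = a
            elif op == 'add': regs[dst] = regs[a] + regs[b]
            elif op == 'mul': regs[dst] = regs[a] * regs[b]

        def run(program, fault=None):
            """fault = (step_no, register, bit); None = golden run."""
            regs = {'r0': 0, 'r1': 0, 'r2': 0, 'r3': 0}
            for n, instr in enumerate(program):
                if fault and n == fault[0]:
                    _, reg, bit = fault
                    regs[reg] ^= (1 << bit)   # single-bit upset
                step(regs, instr)
            return regs['r3']                 # the program's result

        program = [('li', 'r1', 5, None), ('li', 'r2', 7, None),
                   ('add', 'r3', 'r1', 'r2'), ('mul', 'r3', 'r3', 'r3')]
        golden = run(program)
        # Campaign: one run per (step, register, bit) fault.
        faults = [(n, r, b) for n in range(len(program))
                  for r in ('r1', 'r2', 'r3') for b in range(4)]
        silent = sum(run(program, f) == golden for f in faults)
        print(f'{silent}/{len(faults)} faults were masked')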

    EXFI: a low cost Fault Injection System for embedded Microprocessor-based Boards

    Evaluating the faulty behavior of low-cost embedded microprocessor-based boards is an increasingly important issue, due to their adoption in many safety critical systems. To address this issue, the paper describes a software-implemented Fault Injection approach based on the Trace Exception Mode available in most microprocessors. The architecture of a complete Fault Injection environment is proposed, integrating a module for generating a collapsed list of faults and another for performing their injection and gathering the results. The authors describe EXFI, a prototypical system implementing the approach, and provide data about some sample benchmark applications. The main advantages of EXFI are its low cost, good portability, and high efficiency.
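
    The abstract mentions a collapsed fault list but not the collapsing rule. A common rule, assumed here purely for illustration, is that all single-bit faults injected into a register between one access and the next are equivalent, so one representative per interval suffices, and faults in intervals that end with a write are dropped as trivially masked.

        # Hypothetical fault-list collapsing over a register read/write
        # trace (the rule is assumed, not necessarily EXFI's).

        def collapse(trace, registers, bits):
            """trace: list of (step, op, reg), op in {'R', 'W'}.
            Returns one representative fault (step, reg, bit) per interval
            whose value is actually consumed by a read."""
            collapsed = []
            last_access = {r: 0 for r in registers}
            for step, op, reg in trace:
                if op == 'R':
                    # Any flip in [last_access[reg], step) reaches this
                    # read, so one injection per bit represents them all.
                    collapsed.extend((last_access[reg], reg, b)
                                     for b in range(bits))
                last_access[reg] = step   # a write kills pending faults
            return collapsed

        trace = [(0, 'W', 'r1'), (2, 'R', 'r1'), (3, 'W', 'r1'),
                 (5, 'W', 'r1'), (6, 'R', 'r1')]   # the write at 3 is dead
        faults = collapse(trace, registers=['r1'], bits=8)
        print(len(faults), 'representative faults instead of', 7 * 8)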