567 research outputs found

    Design of a microprocessor-based Control, Interface and Monitoring (CIM) unit for turbine engine controls research

    High-speed minicomputers were used in the past to implement advanced digital control algorithms for turbine engines. These minicomputers are typically large and expensive. It is desirable for a number of reasons to use microprocessor-based systems for future controls research: they are relatively compact, inexpensive, and representative of the hardware that would be used for actual engine-mounted controls. The Control, Interface, and Monitoring (CIM) unit contains a microprocessor-based controls computer, the necessary interface hardware, and a system for monitoring the controls computer while it is running an engine. It is presently being used to evaluate an advanced turbofan engine control algorithm.
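    The CIM abstract above outlines an architecture rather than an algorithm: a controls computer running a digital control law, interface hardware carrying sensor and actuator signals, and a monitor that observes the loop while the engine runs. The Python sketch below is only an illustration of that three-part structure; the names, gains, and values (EngineInterface, control_law, run_cycle, the fuel-flow and speed figures) are assumptions for the sketch, not the CIM design or the actual turbofan control algorithm.

    # Minimal sketch of a microprocessor-based control/monitoring loop.
    # All names and numbers are illustrative assumptions, not the CIM design.
    import time

    class EngineInterface:
        """Stands in for the unit's sensor/actuator interface hardware."""

        def read_fan_speed(self) -> float:
            return 9500.0  # rpm; a real interface would sample an engine sensor

        def command_fuel_flow(self, wf: float) -> None:
            print(f"fuel flow command: {wf:.1f} pph")  # would drive an actuator

    def control_law(speed_error: float, gain: float = 0.02) -> float:
        """Toy proportional law; research control algorithms are far more elaborate."""
        return 2000.0 + gain * speed_error  # baseline fuel flow plus correction

    def run_cycle(io: EngineInterface, setpoint: float, log: list) -> None:
        """One frame of the loop; the log plays the role of the monitoring system."""
        speed = io.read_fan_speed()
        wf = control_law(setpoint - speed)
        io.command_fuel_flow(wf)
        log.append((time.time(), speed, wf))  # data the monitor would display

    if __name__ == "__main__":
        monitor_log: list = []
        io = EngineInterface()
        for _ in range(3):  # a real unit repeats this at a fixed frame rate
            run_cycle(io, setpoint=10000.0, log=monitor_log)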

    Design description of a microprocessor-based Engine Monitoring and Control unit (EMAC) for small turboshaft engines

    Research programs have demonstrated that digital electronic controls are more suitable for advanced aircraft/rotorcraft turbine engine systems than hydromechanical controls. Commercially available microprocessors are believed to have the speed and computational capability required for implementing advanced digital control algorithms. Thus, it is desirable to demonstrate that off-the-shelf microprocessors are indeed capable of performing real-time control of advanced gas turbine engines. The Engine Monitoring and Control (EMAC) unit was designed and fabricated specifically to meet the requirements of an advanced gas turbine engine control system. The EMAC unit is fully operational in the Army/NASA small turboshaft engine digital research program.

    Micro-threading and FPGA implementation of a RISC microprocessor : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University, Palmerston North, New Zealand

    Appendix E removed due to copyright restrictions; articles are available in the print copy held in the library. This thesis is the outcome of research in two areas of computer technology: microprocessor and multi-processor architectures (specifically, how differently they tolerate highly latent and non-deterministic events), and the hardware design of complex digital systems containing both datapath and control (particularly microprocessors). The thesis starts by pointing out that, in order to achieve high processing speeds, current popular superscalar microprocessors (e.g., the Intel Pentium and Digital Alpha) rely heavily on speculating the outcome of the instruction flow in order to predict the behaviour of non-deterministic operations, such as loading operands from high-latency memory into the processor. This works only when the speculation is correct. When it fails, the processor must abandon the path it chose, discard the work done along it, and start over on the correct path, wasting processing time and hardware resources and reducing performance. These processors can therefore achieve high performance only when the majority of speculations succeed in predicting the right path. In an attempt to overcome these shortcomings, the first part of this thesis investigates the novel vector micro-threading architecture as an alternative to current superscalar, speculation-based microprocessor designs. Micro-threading is based on the not-so-novel multithreading technique, which avoids speculation altogether and instead starts running a different thread of instructions while waiting for the non-determinism to be resolved, utilizing the chip's resources more efficiently and wasting no processing power. The rest of the thesis focuses on the baseline RISC processor platform, the MIPS R2000, which is first reviewed, then partially synthesized from its RTL (Register Transfer Level) description in VHDL, and then simulated and tested, so that future research can build on it by adding the micro-threading architectural modifications. Keywords: Micro-threading, Latency Tolerance, FPGA Synthesis, RISC Architecture, MIPS R2000 processor, VHDL
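    The trade-off described in the abstract above (speculate and risk a rollback, or switch to another ready thread while a long-latency operation completes) can be made concrete with a back-of-the-envelope cycle count. The Python sketch below is a toy model rather than anything taken from the thesis; the latencies, penalties, and function names (speculative_cycles, microthreaded_cycles) are assumptions chosen only to show why the thread-switching approach is insensitive to prediction accuracy.

    # Toy cycle-count comparison of the two latency-tolerance strategies.
    # All parameters are illustrative assumptions, not figures from the thesis.
    MISS_LATENCY = 100     # cycles a load stalls on a miss to slow memory
    ROLLBACK_PENALTY = 15  # cycles to squash and refetch after a wrong guess
    SWITCH_PENALTY = 2     # cycles to swap in another ready micro-thread

    def speculative_cycles(n_loads: int, predict_accuracy: float) -> float:
        """Expected stall cycles when the core speculates past each long-latency load."""
        # A correct guess hides the miss entirely in this simplified model;
        # a wrong guess pays the full miss latency plus the rollback cost.
        wrong_guesses = n_loads * (1.0 - predict_accuracy)
        return wrong_guesses * (MISS_LATENCY + ROLLBACK_PENALTY)

    def microthreaded_cycles(n_loads: int, ready_threads: int) -> float:
        """Expected stall cycles when the core switches threads instead of guessing."""
        # With another ready thread, each miss costs only the switch overhead;
        # with none, the core simply waits out the miss.
        per_miss = SWITCH_PENALTY if ready_threads > 0 else MISS_LATENCY
        return n_loads * per_miss

    if __name__ == "__main__":
        for accuracy in (0.95, 0.80, 0.60):
            print(f"speculation at {accuracy:.0%} accuracy:",
                  speculative_cycles(1000, accuracy), "stall cycles")
        print("thread switching, 3 ready threads:",
              microthreaded_cycles(1000, 3), "stall cycles")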

    A multi-family multi-processor education and development system.


    Core component choices in single-user computer systems: a home office user's perspective

    The home office is a rapidly growing segment of the business environment. The trend toward two-income families and concerns over quality of life have made the home office an increasingly attractive alternative business style. The evolution of technology during the past ten years has opened up a broad array of choices. The introduction of the IBM personal computer in the fall of 1981 provided the technological nucleus; other office products aimed at the individual user, such as personal copiers, facsimile machines, smart typewriters, and multi-function telecommunications products, have grown up around it. The evolution of personal computer technology has been accelerating since its introduction, leaving the home office user with a broad and confusing array of choices at varying levels of technological development and intercompatibility.

    Autonomous Attitude Determination System (AADS). Volume 1: System description

    Information necessary to understand the Autonomous Attitude Determination System (AADS) is presented. Topics include AADS requirements, program structure, algorithms, and system generation and execution.

    Aerospace Applications of Microprocessors

    An assessment of the state of microprocessor applications is presented. Current and future requirements, and the associated technological advances that allow effective exploitation in aerospace applications, are discussed.

    The Application of Microprocessor Technology in Flight Simulation

    Historically, at roughly ten-year intervals, training simulator manufacturers have had to break completely with their past practices and evolve new simulator architectures in order to deal with the increasing capabilities required of the simulator. In each of the two noteworthy preceding cases, the decision to make a basic change in simulator architecture coincided with the availability of new technology with which to implement the change.