
    Daily weather direct readout microprocessor study

    The work completed included a study of the requirements, and of the hardware and software implementation techniques, for NIMBUS ESMR and TWERLE direct readout applications using microprocessors. Many microprocessors were studied for this application. Because of the available Interdata development capabilities, it was concluded that future implementations should be on an Interdata microprocessor, which was found adequate for the task.

    Integrated GHz silicon photonic interconnect with micrometer-scale modulators and detectors

    We report an optical link on silicon using micrometer-scale ring-resonator-enhanced silicon modulators and waveguide-integrated germanium photodetectors. We show 3 Gbps operation of the link with a 0.5 V modulator voltage swing and a 1.0 V detector bias. The total energy consumption for such a link is estimated to be ~120 fJ/bit. Such a compact, low-power monolithic link is an essential step towards large-scale on-chip optical interconnects for future microprocessors.
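
    As a rough arithmetic aside (not part of the abstract), the figures quoted above imply an average link power of only a few hundred microwatts: multiply the estimated energy per bit by the data rate. The sketch below does exactly that; the 120 fJ/bit and 3 Gbps values come from the abstract, everything else is illustrative.

```c
#include <stdio.h>

int main(void) {
    /* Figures quoted in the abstract; treat them as approximate. */
    double energy_per_bit_j = 120e-15;  /* ~120 fJ per bit        */
    double data_rate_bps    = 3e9;      /* 3 Gbps link operation  */

    /* Average power = energy per bit x bits per second.
       120e-15 J/bit * 3e9 bit/s = 3.6e-4 W = 360 microwatts. */
    double power_w = energy_per_bit_j * data_rate_bps;

    printf("Estimated average link power: %.0f uW\n", power_w * 1e6);
    return 0;
}
```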

    Micro-threading and FPGA implementation of a RISC microprocessor : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University, Palmerston North, New Zealand

    Appendix E removed due to copyright restrictions; articles are available in the print copy held in the library. This thesis is the outcome of research in two areas of computer technology: microprocessor and multi-processor architectures (specifically, how differently they tolerate high-latency and non-deterministic events), and the hardware design of complex digital systems containing both datapath and control (particularly microprocessors). The thesis starts by pointing out that, to achieve high processing speeds, current popular superscalar microprocessors (e.g. the Intel Pentium and Digital Alpha) rely heavily on speculating on the outcome of the instruction flow in order to predict the behaviour of non-deterministic operations, such as loading operands from high-latency memory into the processor. This works well only when the speculation is correct. When it fails, the processor must abandon the instruction path it chose, now shown to be the wrong one, and start over on the correct path, wasting processing time and hardware resources and reducing performance. These processors therefore achieve high performance only when the majority of speculations are successful. To overcome these shortcomings, the first part of this thesis investigates the novel vector micro-threading architecture as an alternative to current superscalar, speculation-based microprocessor designs. Micro-threading builds on the well-established multithreading technique, which avoids speculation altogether and instead starts running a different thread of instructions while waiting for the non-determinism to be resolved, using chip resources more efficiently and without wasting processing power. The rest of the thesis focuses on the baseline RISC processor platform, the MIPS R2000, which is first reviewed, then partially synthesized from its RTL (Register Transfer Level) description in VHDL, and then simulated and tested, providing a platform for future research to build on when adding the micro-threading architectural extensions. Keywords: Micro-threading, Latency Tolerance, FPGA Synthesis, RISC Architecture, MIPS R2000 processor, VHDL
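
    As an illustrative aside (not taken from the thesis), the toy calculation below contrasts the two latency-tolerance strategies described above: a speculative pipeline that pays a flush penalty whenever its prediction turns out wrong, versus a multithreaded core that simply switches to another ready thread while the long-latency event resolves. All parameters (event rate, misprediction rate, penalties) are invented for the example.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical machine parameters -- purely illustrative. */
    double base_cpi        = 1.0;   /* cycles per instruction with no stalls           */
    double event_rate      = 0.05;  /* fraction of instructions hitting a long-latency
                                       or non-deterministic event (e.g. a cache miss)  */
    double mispredict_rate = 0.20;  /* how often the speculative guess is wrong        */
    double flush_penalty   = 20.0;  /* cycles lost restarting down the correct path    */
    double switch_penalty  = 2.0;   /* cycles to hand over to another ready thread     */

    /* Speculative core: pays the flush penalty only when it guesses wrong. */
    double cpi_speculative = base_cpi + event_rate * mispredict_rate * flush_penalty;

    /* Multithreaded core: never speculates; every event costs a cheap thread switch. */
    double cpi_multithreaded = base_cpi + event_rate * switch_penalty;

    printf("speculative core   : %.2f cycles/instruction\n", cpi_speculative);
    printf("multithreaded core : %.2f cycles/instruction\n", cpi_multithreaded);
    return 0;
}
```

    With these made-up numbers the multithreaded core comes out slightly ahead; the point is only that its cost does not depend on prediction accuracy, which is the trade-off the thesis explores.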

    The AXIOM software layers

    The AXIOM project aims at developing a heterogeneous computing board (SMP-FPGA). The software layers developed in the AXIOM project are explained. OmpSs provides an easy way to execute heterogeneous codes on multiple cores. People and objects will soon share the same digital network for information exchange, in what has been called the age of cyber-physical systems. The general expectation is that people and systems will interact in real time. This puts pressure on system design to support increasing demands on computational power while keeping a low power envelope. Additionally, modular scaling and easy programmability are also important for these systems to become widespread. This whole set of expectations imposes scientific and technological challenges that need to be properly addressed. The AXIOM project (Agile, eXtensible, fast I/O Module) will research new hardware/software architectures for cyber-physical systems to meet these expectations. The technical approach aims at solving fundamental problems to enable easy programmability of heterogeneous multi-core, multi-board systems. AXIOM proposes the use of the task-based OmpSs programming model, leveraging low-level communication interfaces provided by the hardware. Modular scalability will be possible thanks to a fast interconnect embedded into each module. To this aim, an innovative ARM- and FPGA-based board will be designed, with enhanced capabilities for interfacing with the physical world. Its effectiveness will be demonstrated with key scenarios such as smart video-surveillance and smart living/home (domotics).
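
    The abstract above leans on the OmpSs task-based programming model. As a rough illustration only (this is not AXIOM project code), the sketch below expresses a few dependent tasks using standard OpenMP task-dependence pragmas, which follow the same dataflow style that OmpSs uses: each task declares what it reads and writes, and the runtime schedules independent tasks across cores. The kernel names and array size are invented.

```c
/* Dataflow-task sketch in the OmpSs/OpenMP task style (illustrative only). */
#include <stdio.h>

#define N 1024

static void fill(double *v, int n)            { for (int i = 0; i < n; i++) v[i] = i; }
static void scale(double *v, int n, double f) { for (int i = 0; i < n; i++) v[i] *= f; }
static double sum(const double *v, int n)     { double s = 0; for (int i = 0; i < n; i++) s += v[i]; return s; }

int main(void) {
    static double a[N], b[N];
    double total_a = 0.0, total_b = 0.0;

    #pragma omp parallel
    #pragma omp single
    {
        /* Each task declares the data it produces or consumes; the runtime
           builds a dependence graph and runs independent tasks in parallel. */
        #pragma omp task depend(out: a)
        fill(a, N);

        #pragma omp task depend(out: b)
        fill(b, N);

        #pragma omp task depend(inout: a)
        scale(a, N, 2.0);

        #pragma omp task depend(in: a) shared(total_a)
        total_a = sum(a, N);           /* waits for fill(a) and scale(a) */

        #pragma omp task depend(in: b) shared(total_b)
        total_b = sum(b, N);           /* only waits for fill(b)         */

        #pragma omp taskwait
        printf("sum(a) = %.0f, sum(b) = %.0f\n", total_a, total_b);
    }
    return 0;
}
```

    This compiles with any OpenMP-capable compiler (e.g. gcc -fopenmp); the OmpSs runtime relies on the same kind of dependence information when scheduling tasks, including on heterogeneous systems such as the AXIOM board.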

    A technique for incorporating the NASA spacelab payload dedicated experiment processor software into the simulation system for the payload crew training complex

    The feasibility of some off-the-shelf microprocessors and state-of-the-art software is assessed (1) as a development system for the principal investigator (PI) in the design of the experiment model, (2) as an example of available technology applicable to future PIs' experiments, (3) as a system capable of interacting with the PCTC's simulation of the dedicated experiment processor (DEP), preferably by bringing the PI's DEP software directly into the simulation model, (4) as a system having bus compatibility with the host VAX simulation computers, (5) as a system readily interfaced with mock-up panels and information displays, and (6) as a functional system for post-mission data analysis.

    The FTC's Challenge to Intel's Cross-Licensing Practices

    After an investigation lasting several months, in June 1998 the Federal Trade Commission brought an antitrust lawsuit against Intel Corporation based on Intel's conduct towards Intergraph, and similar conduct towards Digital Equipment Corporation and Compaq, all in the context of disputes where Intel was accused of patent infringement. The FTC charged that Intel's practices were an abuse of Intel's monopoly position in microprocessors. Is Intel's conduct anti-competitive and thus illegal under the antitrust laws? That is the central question explored in this paper. An introductory section provides some background for the case by discussing the tension between intellectual property rights and antitrust law, a tension that is evident in the FTC's dispute with Intel, and by describing the role of patents in the semiconductor industry. Section 3 provides a succinct summary of the facts surrounding Intel's conduct in each of the three patent disputes identified by the FTC. Section 4 explains the FTC's theory of how Intel's conduct was anti-competitive. Section 5 presents Intel's response. Section 6 describes the settlement reached between the FTC and Intel. The final section discusses legal and economic developments since the case was settled and remarks on the lasting implications of the Intel case.