    The Limitations on the Protection of Program Works Under Japanese Copyright Law

    This article examines these problems in the light of the program language, rule, and algorithm limitations on program protection under the Japanese Copyright Act. Section II sets forth the relevant statutory language, and Sections III and IV apply the program language and rule limitations to operating system software and microcode. Section V considers the scope of protection that Japanese law affords applications programs under the algorithm limitation on program protection. Finally, Section VI takes up the question of whether copying for the purposes of reverse engineering can be justified under the Act.

    A machine-independent microprogram development system

    The aims of this project are twofold: firstly, to implement a microprogram development system that allows the programmer to write microcode for any microprogrammable machine, and secondly, to build a microprogrammable machine incorporating the user-friendliness of a simulator while still providing the 'hands-on' experience obtained with actual hardware. Microprogram development involves a two-stage process. The first step is to describe the target machine, using format descriptions and mnemonic-based template definitions. The second stage involves using the defined mnemonics to write the microcode for the target machine; this includes an assembly phase to translate the mnemonics into binary microinstructions. Three main components constitute the microprogrammable machine: the Arithmetic and Logic Unit (ALU) is built using chips from Advanced Micro Devices' Am2900 bit-slice family, the action of the Microprogram Control Unit (MCU) is simulated by software running on an IBM Personal Computer, and a section of the IBM PC's main memory acts as the Control Store (CS) for the system. The ALU is built on a prototyping card that plugs into one of the slots on the IBM PC's motherboard. A hardware simulator program that reproduces the behaviour of the ALU has also been developed. A small assembly language has been developed using the system to test its various functions, and a mini-assembler has been written to facilitate assembly of this language. A group of honours students at Rhodes University tested the microprogram development system; their ideas and suggestions have been tabulated in this report, and some of them have been used to enhance the system's performance. The concept of allowing 'inline' microinstructions in the macroprogram is also investigated in this report, and a method of implementing this is shown.
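    The two-stage flow described in this abstract lends itself to a compact illustration: first define the target machine's microinstruction fields and mnemonic-based templates, then assemble mnemonics into binary microinstructions. The sketch below is a minimal, hypothetical Python example of that idea; the field names, widths and template contents are invented for illustration and do not reflect the thesis's actual format descriptions or the Am2900-based hardware.

        # Stage 1 (assumed layout): describe a hypothetical target machine as
        # microinstruction fields plus mnemonic templates (mnemonic -> field settings).
        FIELDS = [("alu_op", 4), ("src", 2), ("dst", 2), ("next", 8)]   # name, bit width
        TEMPLATES = {
            "ADD":  {"alu_op": 0b0011},
            "PASS": {"alu_op": 0b0000},
        }

        def assemble(mnemonic, **operands):
            """Stage 2: translate one mnemonic plus operand fields into a binary word."""
            settings = dict(TEMPLATES[mnemonic], **operands)
            word = 0
            for name, width in FIELDS:
                value = settings.get(name, 0)
                if value >= (1 << width):
                    raise ValueError(f"{name}={value} does not fit in {width} bits")
                word = (word << width) | value
            return word

        # Example: an ADD microinstruction routed from source register 1 to
        # destination register 2, continuing at control-store address 5.
        print(f"{assemble('ADD', src=1, dst=2, next=5):016b}")   # -> 0011011000000101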

    Design and Implementation Issues of Multiprocessor Graphics Systems with Distributed Intelligence: Candidate of Sciences Dissertation (in Hungarian)


    Parallel process placement

    This thesis investigates methods of automatically allocating processes to the available processors in a given network configuration. The research described covers the investigation of various algorithms for optimal process allocation, among them an algorithm using a branch-and-bound technique, an algorithm based on graph theory, and a heuristic algorithm involving cluster analysis. These have been implemented and tested in conjunction with the gathering of performance statistics during program execution, for use in improving subsequent allocations. The system has been implemented on a network of loosely coupled microcomputers using multi-port serial communication links to simulate a transputer network. The concurrent programming language occam has been implemented, replacing the explicit process allocation constructs with an automatic placement algorithm. This enables the source code to be completely separated from hardware considerations.
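    As a concrete illustration of the cluster-analysis style of heuristic mentioned above, the sketch below greedily merges the most heavily communicating processes into clusters and maps each cluster to a processor. It is a minimal, assumed Python formulation, not the thesis's actual algorithm or data structures.

        def place(processes, comm, n_procs):
            """processes: list of ids; comm: {(a, b): traffic}; n_procs: processor count."""
            cluster = {p: {p} for p in processes}               # each process starts alone

            def n_clusters():
                return len({id(s) for s in cluster.values()})

            # Greedily merge the pair of clusters with the heaviest communication
            # until there are no more clusters than processors.
            while n_clusters() > n_procs:
                candidates = [(w, pair) for pair, w in comm.items()
                              if cluster[pair[0]] is not cluster[pair[1]]]
                if not candidates:
                    break
                _, (a, b) = max(candidates)
                merged = cluster[a] | cluster[b]
                for p in merged:
                    cluster[p] = merged

            # Assign each distinct cluster to a processor, round-robin.
            placement, processor_of = {}, {}
            for p in processes:
                key = id(cluster[p])
                processor_of.setdefault(key, len(processor_of) % n_procs)
                placement[p] = processor_of[key]
            return placement

        # Four processes, heavy traffic a<->b and c<->d, two processors:
        print(place(["a", "b", "c", "d"], {("a", "b"): 5, ("c", "d"): 3, ("b", "c"): 1}, 2))
        # -> {'a': 0, 'b': 0, 'c': 1, 'd': 1}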

    Loop pipelining with resource and timing constraints

    Developing efficient programs for many current parallel computers is not easy, owing to the architectural complexity of those machines. The wide variety of machine organizations often makes it more difficult to port an existing program than to reprogram it completely. Therefore, powerful translators are necessary to generate effective code and free the programmer from concerns about the specific characteristics of the target machine. This work focuses on techniques to be used by an important class of translators whose objective is to transform sequential programs into equivalent, more parallel programs. The transformations are performed at the instruction level in order to exploit low-level parallelism and increase memory locality. Most current applications are programmed in languages that do not allow parallelism between high-level statements to be expressed (such as Pascal, C or Fortran). Furthermore, many applications written ten or more years ago are still used today, and it is not feasible to rewrite them, for reasons that are not only technical but also economic. Translators enable programmers to write an application in a familiar sequential programming language without concerning themselves with the architecture of the target machine. Current compilers for parallel architectures not only translate a program written in a high-level language into the appropriate machine language, but also perform transformations on the final code so that the program executes in a more parallel way. These transformations improve the performance of the program by making use of the compiler's knowledge of the machine architecture; the semantics of the program remain intact after any transformation. Experiments show that limiting parallelization to basic blocks not contained in loops limits the maximum speedup, because loops often comprise a large portion of the parallelism available to be exploited in a program. For this reason, much effort has been devoted in recent years to parallelizing loop execution. Several parallel computer architectures and compilation techniques have been proposed to exploit such parallelism at different granularities. Multiprocessors exploit coarse-grained parallelism by distributing entire loop iterations to different processors. Systems oriented to the high-level synthesis (HLS) of VLSI circuits, superscalar processors and very long instruction word (VLIW) processors exploit fine-grained parallelism at the instruction level. This work addresses fine-grained parallelization of loops targeted at the HLS of VLSI circuits. Two algorithms are proposed, one for resource constraints and one for timing constraints. An algorithm to reduce the number of registers required to execute a loop in a given architecture is also proposed.
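    One quantity that appears throughout work on loop pipelining under resource constraints is the lower bound that resources impose on the initiation interval, i.e. the number of cycles between the start of successive loop iterations. The Python sketch below computes that standard resource-constrained bound for an invented operation mix; it illustrates the general concept only and does not reproduce the algorithms proposed in this work.

        import math

        def resource_min_ii(op_counts, resource_counts):
            """Resource-constrained lower bound on the initiation interval.

            op_counts: uses of each resource per loop iteration.
            resource_counts: number of units of each resource available.
            """
            return max(math.ceil(op_counts[r] / resource_counts[r]) for r in op_counts)

        # A loop body with 6 additions and 2 memory accesses per iteration,
        # scheduled on a datapath with 2 adders and 1 memory port:
        print(resource_min_ii({"adder": 6, "mem_port": 2}, {"adder": 2, "mem_port": 1}))  # -> 3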

    Mascot: Microarchitecture Synthesis of Control Paths

    This paper presents MASCOT (MicroArchitecture Synthesis of ConTrol paths), a synthesis system that constructs the optimal microarchitecture for the control path of an instruction set processor. Input to the system is the behavioural specification of a control path. This specification is in finite state machine form and is initially mapped onto a single programmable logic array (PLA) microarchitecture. The synthesis strategy then applies a sequence of decompositions to this initial microarchitecture, following a decision scheme until all design objectives are met, and thereby transforms it into a complex microarchitecture of several PLAs and ROMs. Where it is impossible to meet the design objectives, the system constructs a microarchitecture that comes as close as possible to them. Design objectives may be specified for floorplan dimensions and delay. Our strategy integrates a number of known optimization methods for specific microarchitectures; therefore, this synthesis method explores a larger part of the design space than other control path synthesis methods, which are mostly bound to one microarchitecture that they optimize. Our system is not only very flexible in microarchitecture construction but also open to extension with further optimizations.
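    To make the initial mapping step concrete, the sketch below represents a tiny finite state machine as a single PLA-style table of product terms (inputs and present state on the AND plane, next state and outputs on the OR plane), together with a rough area estimate. The two-state machine and the cost formula are illustrative Python assumptions, not MASCOT's actual data structures or cost model.

        # Each row: (input pattern, present state, next state, output pattern);
        # '-' marks a don't-care, as in a PLA personality matrix.
        PLA_ROWS = [
            ("1-", "S0", "S1", "01"),
            ("0-", "S0", "S0", "00"),
            ("-1", "S1", "S0", "10"),
            ("-0", "S1", "S1", "00"),
        ]

        def pla_area(rows, n_inputs, n_state_bits, n_outputs):
            """Rough single-PLA area estimate: (AND-plane + OR-plane columns) x product terms."""
            and_cols = 2 * (n_inputs + n_state_bits)    # true and complemented input columns
            or_cols = n_state_bits + n_outputs
            return (and_cols + or_cols) * len(rows)

        print(pla_area(PLA_ROWS, n_inputs=2, n_state_bits=1, n_outputs=2))   # -> 36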