    Research on Logic Simulation and Hardware Description Languages (論理シミュレーションとハードウェア記述言語に関する研究)

    Kyoto University doctoral dissertation (Doctor of Engineering by dissertation; thesis no. 乙第7496号 / 論工博第2471号; library call nos. 新制||工||842 (University Library), UT51-91-E273). Examining committee: Prof. Shuzo Yajima (chair), Prof. Takao Tsuda, Prof. Keikichi Tamaru. Granted under Article 5, Paragraph 2 of the Degree Regulations.

    Minimal input support problem and algorithms to solve it

    Parallelism in declarative languages

    Imperative programming languages were initially built for uniprocessor systems that evolved out of the von Neumann machine model. This storage-oriented model of computation blocks parallelism and increases the cost of developing and porting parallel programs. Declarative languages, which are based on mathematical models of computation, seem more suitable for the development of parallel programs. In the first part of this thesis we examine the language families under the declarative paradigm: functional, logic, and constraint languages. Functional languages are based on the abstract model of functions and the lambda calculus. They were initially developed for symbolic computation, but today they are commonly used in numerical analysis and many other application areas. Pure Lisp is a widely known member of this class. Logic languages are based on first-order predicate calculus. Although they were initially developed for theorem proving, fifth-generation operating systems are written in them. Most logic languages are descendants or distant relatives of Prolog. Constraint languages are related to logic languages: in a constraint language, a program object is defined by placing constraints on its structure and its behavior. They were initially used in graphics applications, but today researchers are working on using them in parallel computation. We compare and contrast these language classes, identify their advantages and deficiencies, and explain the different choices made by language implementors. In the second part of the thesis we describe a front end for CONSUL, a prototype constraint language for programming multiprocessors. The most important features of the front end are compact representation of constraints, type definitions, functional use of relations, and the ability to split programs into multiple files.
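
    As a rough illustration of the contrast the abstract draws (a sketch in Python, not taken from the thesis, standing in for CONSUL or Pure Lisp), the imperative loop below threads its result through mutable storage and is therefore inherently sequential, while the declarative formulation states only what the result is, leaving evaluation order, and hence potential parallelism, to the implementation.

        from functools import reduce
        from multiprocessing import Pool

        def square(x):
            return x * x

        def main():
            data = list(range(1, 100_001))

            # Imperative, storage-oriented version: every iteration depends on
            # the value left in `total` by the previous one, so the loop is serial.
            total = 0
            for x in data:
                total += x * x

            # Declarative (functional) formulation: the result is a reduction over
            # independent applications of a pure function, so the map step may be
            # evaluated in any order -- here it is farmed out to a process pool.
            with Pool() as pool:
                declarative_total = reduce(lambda a, b: a + b, pool.map(square, data))

            assert total == declarative_total
            print(total)

        if __name__ == "__main__":
            main()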

    A foundation for formal reuse of hardware

    Probabilistic representation and manipulation of Boolean functions using free Boolean diagrams

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 145-149). By Amelia Huimin Shen. Ph.D.

    Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers

    A number of methodologies for verifying systems, and computer-based tools that assist users in verifying their systems, were developed. These tools were applied to verify, in part, the SIFT ultrareliable aircraft computer. Topics covered included: the STP theorem prover; design verification of SIFT; high-level language code verification; assembly-language-level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.

    Synchronous Programming of Reactive Systems

    Coarse-grained parallel genetic algorithms: Three implementations and their analysis

    Although solutions to many problems can be found using direct analytical methods such as those calculus provides, many problems are simply too large or too difficult to solve with traditional techniques. Genetic algorithms provide an indirect approach to solving such problems. A genetic algorithm applies biological genetic procedures and principles to a randomly generated collection of potential solutions; the result is the evolution of new and better solutions. Coarse-grained parallel genetic algorithms extend the basic genetic algorithm by introducing genetic isolation and distribution of the problem domain. This thesis compares the capabilities of a serial genetic algorithm and three coarse-grained parallel genetic algorithms: a standard parallel algorithm, a non-uniform parallel algorithm, and an adaptive parallel algorithm. The evaluation is done using an instance of the traveling salesman problem. It is shown that while the standard coarse-grained parallel algorithm provides more consistent results than the serial genetic algorithm, the adaptive distributed algorithm outperforms them both. To facilitate this analysis, an extensible object-oriented library for genetic algorithms, encompassing both serial and coarse-grained parallel genetic algorithms, was developed. The Java programming language was used throughout.
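
    A minimal sketch of the coarse-grained (island-model) scheme the abstract describes, written in Python rather than the thesis's Java library and using a toy bit-string fitness function in place of the traveling salesman instance: the population is split into isolated subpopulations that evolve independently and exchange their best individuals every few generations.

        import random

        GENES = 32                 # length of a bit-string individual (toy stand-in for a TSP tour)
        ISLANDS = 4                # number of isolated subpopulations
        ISLAND_SIZE = 20
        GENERATIONS = 60
        MIGRATION_INTERVAL = 10    # exchange individuals every N generations

        def fitness(ind):
            # Toy objective: maximize the number of 1-bits ("OneMax").
            return sum(ind)

        def random_individual():
            return [random.randint(0, 1) for _ in range(GENES)]

        def tournament(pop, k=3):
            return max(random.sample(pop, k), key=fitness)

        def crossover(a, b):
            cut = random.randrange(1, GENES)
            return a[:cut] + b[cut:]

        def mutate(ind, rate=0.02):
            return [1 - g if random.random() < rate else g for g in ind]

        def evolve(pop):
            # One generation of selection, crossover, and mutation within an island.
            return [mutate(crossover(tournament(pop), tournament(pop)))
                    for _ in range(len(pop))]

        def migrate(islands):
            # Ring migration: each island sends its best individual to the next island,
            # where it replaces that island's worst individual.
            bests = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                incoming = bests[(i - 1) % len(islands)]
                pop.sort(key=fitness)
                pop[0] = incoming

        def main():
            islands = [[random_individual() for _ in range(ISLAND_SIZE)]
                       for _ in range(ISLANDS)]
            for gen in range(1, GENERATIONS + 1):
                islands = [evolve(pop) for pop in islands]   # independent evolution per island
                if gen % MIGRATION_INTERVAL == 0:
                    migrate(islands)
            best = max((ind for pop in islands for ind in pop), key=fitness)
            print("best fitness:", fitness(best))

        if __name__ == "__main__":
            main()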

    Memory abstractions for parallel programming

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 156-163). A memory abstraction is an abstraction layer between the program execution and the memory that provides a different "view" of a memory location depending on the execution context in which the memory access is made. Properly designed memory abstractions help ease the task of parallel programming by mitigating the complexity of synchronization or admitting more efficient use of resources. This dissertation describes five memory abstractions for parallel programming: (i) cactus stacks that interoperate with linear stacks, (ii) efficient reducers, (iii) reducer arrays, (iv) ownership-aware transactions, and (v) location-based memory fences. To demonstrate the utility of memory abstractions, my collaborators and I developed Cilk-M, a dynamically multithreaded concurrency platform that embodies the first three memory abstractions. Many dynamically multithreaded concurrency platforms incorporate cactus stacks to support multiple stack views for all the active children simultaneously. The use of cactus stacks, albeit essential, forces concurrency platforms to trade off among performance, memory consumption, and interoperability with serial code because of their incompatibility with linear stacks. This dissertation proposes a new strategy to build a cactus stack using thread-local memory mapping (TLMM), which enables Cilk-M to satisfy all three criteria simultaneously. A reducer hyperobject allows different branches of a dynamically multithreaded program to maintain coordinated local views of the same nonlocal variable. With reducers, one can use nonlocal variables in a parallel computation without restructuring the code or introducing races. This dissertation introduces memory-mapped reducers, which admit much more efficient access than existing implementations. When used in large quantities, reducers incur unnecessarily high overhead in execution time and space consumption. This dissertation describes support for reducer arrays, which offer the same functionality as an array of reducers with significantly less overhead. Transactional memory is a high-level synchronization mechanism designed to be easier to use and more composable than fine-grained locking. This dissertation presents ownership-aware transactions, the first transactional memory design that provides provable safety guarantees for "open-nested" transactions. On architectures that implement memory models weaker than sequential consistency, programs communicating via shared memory must employ memory fences to ensure correct execution. This dissertation examines the concept of location-based memory fences, which, unlike traditional memory fences, incur latency only when synchronization is necessary. By I-Ting Angelina Lee. Ph.D.
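
    The reducer idea can be illustrated outside Cilk-M. The Python sketch below is not the thesis's memory-mapped implementation; it only mimics the interface: each worker thread gets a private local view of a shared accumulator, updates to that view need no synchronization on the hot path, and the views are combined at the end with the associative reduction operator, so parallel strands can update what looks like a nonlocal variable without introducing races.

        from concurrent.futures import ThreadPoolExecutor
        import threading

        class SumReducer:
            """Toy analogue of a reducer hyperobject for addition: each thread
            accumulates into its own local view; the views are combined later."""

            def __init__(self):
                self._local = threading.local()
                self._views = []
                self._registry_lock = threading.Lock()   # used only on a thread's first access

            def _view(self):
                if not hasattr(self._local, "cell"):
                    self._local.cell = [0]                # boxed so the view is shared by reference
                    with self._registry_lock:
                        self._views.append(self._local.cell)
                return self._local.cell

            def add(self, x):
                self._view()[0] += x                      # hot path: touches only this thread's view

            def reduce(self):
                # Combine all local views with the associative reduction operator.
                return sum(cell[0] for cell in self._views)

        def main():
            acc = SumReducer()
            data = list(range(1, 10_001))
            chunks = [data[i::4] for i in range(4)]       # split the work four ways

            def work(chunk):
                for x in chunk:
                    acc.add(x)                            # looks like updating a nonlocal variable

            with ThreadPoolExecutor(max_workers=4) as pool:
                list(pool.map(work, chunks))

            assert acc.reduce() == sum(data)
            print("total:", acc.reduce())

        if __name__ == "__main__":
            main()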