    Tarmo: A Framework for Parallelized Bounded Model Checking

    This paper investigates approaches to parallelizing Bounded Model Checking (BMC) for shared-memory environments as well as for clusters of workstations. We present a generic framework for parallelized BMC named Tarmo. Our framework can be used with any incremental SAT encoding for BMC, but for the results in this paper we use only the current state-of-the-art encoding for full PLTL. Using this encoding allows us to check both safety and liveness properties, in contrast to earlier work on distributing BMC that is limited to safety properties only. Despite our focus on BMC after it has been translated to SAT, existing distributed SAT solvers are not well suited to our application, because solving a BMC problem is not solving a set of independent SAT instances: it involves solving multiple related SAT instances, encoded incrementally, where the satisfiability of each instance corresponds to the existence of a counterexample of a specific length. Our framework includes a generic architecture for a shared clause database that allows easy clause sharing between SAT solver threads solving such instances. We present extensive experimental results obtained with multiple variants of our Tarmo implementation. Our shared-memory variants perform significantly better than conventional single-threaded approaches, a result many users can benefit from now that multi-core and multi-processor technology is widely available. Furthermore, we demonstrate that our framework can be deployed on a typical cluster of workstations, where several multi-core machines are connected by a network.
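    The core architectural idea, a clause database shared between solver threads that work on related incremental instances, can be sketched as follows. This is a minimal illustration of the clause-sharing pattern, not Tarmo's actual implementation; the class and function names are hypothetical and the SAT solving itself is stubbed out.

```python
# Minimal sketch of a shared clause database with per-thread cursors.
# SharedClauseDatabase and bmc_solver_thread are hypothetical names.
import threading

class SharedClauseDatabase:
    """Thread-safe append-only store of learned clauses."""
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = []  # list of (producer_id, clause)

    def publish(self, producer_id, clause):
        with self._lock:
            self._entries.append((producer_id, clause))

    def fetch_new(self, consumer_id, cursor):
        """Return clauses appended since `cursor` by other threads,
        together with the new cursor position."""
        with self._lock:
            fresh = [c for p, c in self._entries[cursor:] if p != consumer_id]
            return fresh, len(self._entries)

def bmc_solver_thread(tid, max_bound, db):
    """Stub for one solver thread stepping through bounds k = 0, 1, ...
    A real thread would extend the incremental SAT encoding at each bound
    and test satisfiability (SAT = a counterexample of length k exists)."""
    cursor = 0
    for k in range(max_bound):
        imported, cursor = db.fetch_new(tid, cursor)
        # ... feed `imported` clauses to the SAT solver, then solve bound k ...
        db.publish(tid, f"clause learned by thread {tid} at bound {k}")

db = SharedClauseDatabase()
threads = [threading.Thread(target=bmc_solver_thread, args=(i, 4, db))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(db._entries)} clauses shared")  # 3 threads x 4 bounds = 12
```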

    High performance computing of explicit schemes for electrofusion jointing process based on message-passing paradigm

    The research focused on heterogeneous clusters of workstations comprising a number of CPUs in single and shared architecture platforms. The problems under consideration involved one-dimensional parabolic equations, and the thermal process of electrofusion jointing was also discussed. Explicit numerical schemes such as the AGE, Brian's, and Charlie's methods were employed, and their parallelization was based on the domain decomposition technique. Parallel performance measurements for these methods were also addressed, and temperature profiles of the one-dimensional radial model of the electrofusion process were given.
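    As a concrete illustration of the explicit-update structure that domain decomposition exploits, the sketch below advances a one-dimensional parabolic (heat) equation with the simplest explicit scheme, FTCS. The AGE, Brian's, and Charlie's methods used in the paper are more elaborate, but share the same locality: each grid block needs only its neighbours' boundary values per time step. All parameter values here are illustrative.

```python
# FTCS sketch for u_t = alpha * u_xx on a 1D rod; values are illustrative.
import numpy as np

alpha, length, nx, nt = 1.0, 1.0, 101, 500
dx = length / (nx - 1)
dt = 0.4 * dx**2 / alpha   # obeys the stability bound dt <= dx^2 / (2 * alpha)
r = alpha * dt / dx**2

u = np.zeros(nx)
u[nx // 2] = 1.0           # hypothetical initial heat pulse at the joint

for _ in range(nt):
    # Explicit update: u_i <- u_i + r * (u_{i-1} - 2 u_i + u_{i+1}).
    # Under domain decomposition, each worker owns one block of the grid
    # and exchanges only its two edge values with neighbouring workers.
    u[1:-1] += r * (u[:-2] - 2 * u[1:-1] + u[2:])

print(f"peak temperature after {nt} steps: {u.max():.4f}")
```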

    Process Management in Distributed Operating Systems

    As part of designing and building the Amoeba distributed operating system, we have come up with a simple set of mechanisms for process management that allows downloading, process migration, checkpointing, remote debugging, and emulation of alien operating system interfaces. The basic process management facilities are realized by the Amoeba Kernel and can be augmented by user-space services: Debug Service, Load-Balancing Service, Unix-Emulation Service, Checkpoint Service, etc. The Amoeba Kernel can produce a representation of the state of a process which can be given to another Kernel, where it is accepted for continued execution. This state consists of the memory contents, in the form of a collection of segments, and a Process Descriptor which contains the additional state: program counters, stack pointers, system call state, etc. Careful separation of mechanism and policy has resulted in a compact set of Kernel operations for process creation and management. A collection of user-space services provides process management policies and a simple interface for application programs. In this paper we describe the mechanisms as they are being implemented in the Amoeba Distributed System at the Centre for Mathematics and Computer Science in Amsterdam. We believe that the mechanisms described here can also apply to other distributed systems.
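    A schematic of the migratable state described above, memory segments plus a Process Descriptor, might look like the following sketch. This is illustrative only, not the Amoeba sources; the field names are assumptions inferred from the state enumerated in the abstract.

```python
# Schematic of the state handed from one Kernel to another; field names
# are illustrative, inferred from the abstract, not from Amoeba's sources.
from dataclasses import dataclass, field

@dataclass
class Segment:
    base: int    # virtual address where the segment is mapped
    data: bytes  # the segment's memory contents

@dataclass
class ProcessDescriptor:
    program_counter: int
    stack_pointer: int
    syscall_state: bytes  # state of any system call in progress
    registers: dict = field(default_factory=dict)

@dataclass
class ProcessState:
    """Everything needed to continue execution on another Kernel."""
    segments: list[Segment]        # the memory contents
    descriptor: ProcessDescriptor  # all remaining execution state
```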

    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.

    Long range science scheduling for the Hubble Space Telescope

    Observations with NASA's Hubble Space Telescope (HST) are scheduled with the assistance of a long-range scheduling system (SPIKE) that was developed using artificial intelligence techniques. In earlier papers, the system architecture and the constraint representation and propagation mechanisms were described. Here we describe the development of high-level automated scheduling tools, including tools based on constraint satisfaction techniques and neural networks, and discuss the performance of these tools in scheduling HST observations.
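    To make the constraint-propagation idea concrete, the toy sketch below prunes feasible scheduling windows under a simple precedence constraint. It is a generic arc-consistency step, not SPIKE's actual algorithm; the function name and the data are hypothetical.

```python
# Toy arc-consistency step over feasible time windows; not SPIKE's algorithm.
def propagate_precedence(windows_a, windows_b, gap=0.0):
    """Enforce t_a + gap <= t_b where each event must fall inside one of
    its (start, end) windows; returns the pruned and tightened windows."""
    earliest_a = min(s for s, _ in windows_a)
    latest_b = max(e for _, e in windows_b)
    # b cannot occur before earliest_a + gap: clip or drop b's windows.
    new_b = [(max(s, earliest_a + gap), e)
             for s, e in windows_b if e >= earliest_a + gap]
    # a cannot occur after latest_b - gap: clip or drop a's windows.
    new_a = [(s, min(e, latest_b - gap))
             for s, e in windows_a if s <= latest_b - gap]
    return new_a, new_b

# Hypothetical visibility windows (in hours) for two observations:
obs_a = [(0, 2), (5, 7)]
obs_b = [(1, 3), (4, 6)]
print(propagate_precedence(obs_a, obs_b, gap=1.0))
# -> ([(0, 2), (5, 5)], [(1, 3), (4, 6)])
```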

    The California Cooperative Remote Sensing Project

    The USDA, the California Department of Water Resources (CDWR), the Remote Sensing Research Program of the University of California, Berkeley (UCB) and NASA have completed a four-year cooperative project on the use of remote sensing in monitoring California agriculture. This report is a summary of the project and the final report on NASA's contribution to it. The cooperators developed procedures that combine LANDSAT Multispectral Scanner imagery and digital data with good ground survey data for area estimation and mapping of the major crops in California. An inventory of the Central Valley was conducted as an operational test of the procedures. The satellite and survey data were acquired by USDA and UCB and processed by CDWR and NASA. The inventory was completed on schedule, demonstrating the feasibility of the approach, although further development of the data processing system is necessary before it can be used efficiently in an operational environment.
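    The abstract does not name the exact estimator used to combine classified imagery with ground survey data, but a standard technique for such crop area estimation is the regression estimator sketched below; treat the method choice, names, and numbers as illustrative assumptions rather than the project's documented procedure.

```python
# Regression estimator sketch; all names and numbers are hypothetical.
import numpy as np

def regression_estimate(x_sample, y_sample, x_mean_population):
    """x = classified (satellite) crop area per sampled segment,
    y = ground-surveyed area for the same segments,
    x_mean_population = mean classified area over all segments."""
    x = np.asarray(x_sample, dtype=float)
    y = np.asarray(y_sample, dtype=float)
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # slope of y on x
    return y.mean() + b * (x_mean_population - x.mean())

# Hypothetical acreages for five ground-surveyed sample segments:
classified = [120, 95, 140, 110, 100]
surveyed = [115, 100, 150, 105, 98]
print(f"estimated mean crop area: "
      f"{regression_estimate(classified, surveyed, 118.0):.1f}")
```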