1,198 research outputs found

    Tree-Searching Algorithms on Parallel Architectures

    Particle Swarm Optimization

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique influenced by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book presents the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary area.
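
    As a concrete illustration of the update rule sketched above (each particle's velocity is pulled toward its own best position and toward the swarm's best), here is a minimal canonical PSO in Python. The objective, the search bounds, and the coefficient values (inertia w, cognitive c1, social c2) are illustrative defaults, not taken from the book.

```python
import random

def pso(objective, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize `objective` over [lo, hi]^dim with canonical PSO."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:                # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # and, if improved, the global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the 3-dimensional sphere function.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

    Note that there is no crossover or mutation operator anywhere: the only search drivers are the two attraction terms, which is exactly the contrast with GA that the abstract draws.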

    Parameterized complexity of machine scheduling: 15 open problems

    Machine scheduling problems are a long-standing key domain of algorithms and complexity research. A novel approach to machine scheduling problems is fixed-parameter algorithms. To stimulate this thriving research direction, we propose 15 open questions in this area whose resolution we expect to lead to the discovery of new approaches and techniques in both scheduling and parameterized complexity theory. Comment: version accepted to Computers & Operations Research.

    Parallelized reliability estimation of reconfigurable computer networks

    A parallelized system, ASSURE, is described for computing the reliability of embedded avionics flight control systems that are able to reconfigure themselves in the event of failure. ASSURE accepts a grammar that describes a reliability semi-Markov state space. From this it creates a parallel program that simultaneously generates and analyzes the state space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860 and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
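
    The bounding idea, generating a probabilistic state space while maintaining upper and lower bounds on the probability of failure, can be sketched serially as follows. The transitions and is_failure callbacks and the pruning threshold eps are hypothetical stand-ins; ASSURE's semi-Markov semantics, grammar front end, and parallel decomposition are not reproduced here.

```python
import heapq
import itertools

def failure_bounds(initial, transitions, is_failure, eps=1e-12):
    """Best-first exploration of a probabilistic state space.

    Explored paths that reach a failure state contribute their mass to
    the lower bound; the mass of pruned (unexpanded) paths is conceded
    to the upper bound, since those paths might still fail. Assumes
    every cycle loses probability mass, so exploration terminates.
    """
    lower = pruned = 0.0
    tie = itertools.count()                  # tiebreak for equal-probability states
    frontier = [(-1.0, next(tie), initial)]  # max-heap on path probability
    while frontier:
        neg_p, _, state = heapq.heappop(frontier)
        p = -neg_p
        if is_failure(state):
            lower += p                       # this mass definitely fails
        elif p < eps:
            pruned += p                      # unexplored mass: might still fail
        else:
            for nxt, q in transitions(state):
                heapq.heappush(frontier, (-(p * q), next(tie), nxt))
    return lower, lower + pruned             # (lower, upper) bounds on failure
```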

    Processor allocation for partitionable multiprocessor systems

    The processor allocation problem in an n-dimensional hypercube multiprocessor is similar to the conventional memory allocation problem. The main objective is to maximize the utilization of available resources while minimizing the inherent system fragmentation. In this thesis, a new processor allocation strategy is proposed and compared with existing strategies such as the Buddy strategy, the Single Gray Code strategy (SGC), the Multiple Gray Code strategy (MGC), and the Maximal Set of Subcubes strategy (MSS). We show that the proposed strategy outperforms the existing ones, having the advantage of being able to allocate otherwise unused processors to other jobs or algorithms.
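
    For reference, the Buddy strategy used as a baseline here treats the 2^n nodes of the hypercube like a buddy memory allocator: a k-subcube is a block of 2^k nodes aligned on a 2^k address boundary. A minimal sketch, with illustrative free-list bookkeeping:

```python
class BuddyHypercube:
    """Buddy-style subcube allocation in an n-dimensional hypercube."""

    def __init__(self, n):
        self.n = n
        self.free = {k: set() for k in range(n + 1)}  # free k-subcube base addresses
        self.free[n].add(0)                  # initially the whole cube is free

    def allocate(self, k):
        """Return the base address of a free k-subcube, or None."""
        j = k
        while j <= self.n and not self.free[j]:
            j += 1                           # smallest free subcube that fits
        if j > self.n:
            return None                      # fragmentation: nothing fits
        base = self.free[j].pop()
        while j > k:                         # split down, freeing upper buddies
            j -= 1
            self.free[j].add(base + (1 << j))
        return base

    def release(self, base, k):
        """Free a k-subcube, merging with its buddy whenever possible."""
        while k < self.n:
            buddy = base ^ (1 << k)
            if buddy not in self.free[k]:
                break
            self.free[k].remove(buddy)
            base &= ~(1 << k)                # base of the merged (k+1)-subcube
            k += 1
        self.free[k].add(base)
```

    Because Buddy recognizes only address-aligned subcubes, free processors can go unused, which is the fragmentation weakness the compared strategies aim to reduce.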

    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up: as knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer; in this scheme the memory itself becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD butterfly-switch machine).

    A tabu search heuristic for the Equitable Coloring Problem

    The Equitable Coloring Problem is a variant of the Graph Coloring Problem in which the sizes of any two color classes differ by at most one. This additional condition, called the equity constraint, arises naturally in several applications. Due to the hardness of the problem, current exact algorithms cannot solve large instances, which must therefore be addressed via heuristic methods. In this paper we present a tabu search heuristic for the Equitable Coloring Problem. The algorithm is an adaptation of the dynamic TabuCol version of Galinier and Hao, with new local search criteria introduced to satisfy the equity constraint. Computational experiments are carried out to find the best combination of parameters involved in the dynamic tenure of the heuristic. Finally, we show the good performance of our heuristic on known benchmark instances.
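
    To make the move structure concrete, below is a minimal one-move tabu search under the equity constraint. This is not the dynamic TabuCol adaptation of the paper (its dynamic tenure and neighborhood are more refined); the fixed tenure, iteration budget, and starting coloring here are illustrative.

```python
import random
from collections import defaultdict

def equitable_tabu(n, edges, k, iters=10000, tenure=10):
    """One-move tabu search toward an equitable k-coloring.

    A move recolors a single conflicting vertex and is admitted only
    if all class sizes still differ by at most one (equity). The
    objective is the number of monochromatic (conflicting) edges.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    color = {v: v % k for v in range(n)}      # round-robin start is equitable
    size = [0] * k
    for v in range(n):
        size[color[v]] += 1

    def conflicts():
        return sum(1 for u, v in edges if color[u] == color[v])

    best, best_cost = dict(color), conflicts()
    tabu = {}                                 # (vertex, color) -> tabu until iteration
    for it in range(iters):
        bad = [v for v in range(n) if any(color[u] == color[v] for u in adj[v])]
        if not bad:
            return dict(color), 0             # proper equitable coloring found
        v = random.choice(bad)
        old = color[v]
        moves = []
        for c in range(k):
            if c == old or tabu.get((v, c), -1) >= it:
                continue
            size[old] -= 1; size[c] += 1      # tentatively apply to test equity
            ok = max(size) - min(size) <= 1
            size[old] += 1; size[c] -= 1
            if ok:
                delta = (sum(1 for u in adj[v] if color[u] == c)
                         - sum(1 for u in adj[v] if color[u] == old))
                moves.append((delta, c))
        if not moves:
            continue
        delta, c = min(moves)                 # greediest admissible move
        size[old] -= 1; size[c] += 1
        color[v] = c
        tabu[(v, old)] = it + tenure          # forbid undoing the move for a while
        cost = conflicts()
        if cost < best_cost:
            best, best_cost = dict(color), cost
    return best, best_cost
```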

    B-LOG: A branch and bound methodology for the parallel execution of logic programs

    We propose a computational methodology, "B-LOG", which offers the potential for an effective implementation of Logic Programming on a parallel computer. We propose a weighting scheme to guide the search process through the graph, and we apply the concepts of parallel "branch and bound" algorithms in order to perform a "best-first" search using an information-theoretic bound. The concept of a "session" is used to speed up the search process over a succession of similar queries: within a session, bounds kept in a local database are strongly modified, while bounds kept in a global database are weakly modified to provide a better initial condition for other sessions. We also propose an implementation scheme based on a database machine using "semantic paging", and a "B-LOG processor" based on a scoreboard-driven controller.
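
    Setting the parallel, session-based machinery aside, the underlying "best-first" branch-and-bound loop can be sketched as follows. The node interface (bound, is_complete, cost, children) is a hypothetical stand-in for B-LOG's weighted search graph, with bound() assumed to be an optimistic lower bound on the cost of any completion.

```python
import heapq
import itertools

def best_first_bnb(root):
    """Generic best-first branch and bound over a search graph.

    Each node exposes: bound(), a lower bound on any solution below it;
    is_complete() and cost() for finished solutions; children() for
    its successors.
    """
    incumbent, incumbent_cost = None, float("inf")
    tie = itertools.count()                  # tiebreak for non-comparable nodes
    frontier = [(root.bound(), next(tie), root)]
    while frontier:
        bound, _, node = heapq.heappop(frontier)
        if bound >= incumbent_cost:
            break                            # best-first: all remaining nodes are worse
        if node.is_complete():
            if node.cost() < incumbent_cost:
                incumbent, incumbent_cost = node, node.cost()
            continue
        for child in node.children():
            if child.bound() < incumbent_cost:   # prune dominated branches
                heapq.heappush(frontier, (child.bound(), next(tie), child))
    return incumbent, incumbent_cost
```

    In B-LOG's session scheme, the bounds driving this loop would be strongly updated in the local database and only weakly propagated to the global one, so later similar queries start from a tighter initial condition.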