
    DIB on the Xerox workstation

    DIB - A Distributed Implementation of Backtracking is a general-purpose package that allows applications using tree-traversal algorithms such as backtrack and branch-and-bound to be implemented easily on a multicomputer. The application program needs to specify only the root of the recursion tree, the computation to be performed at each node, and how to generate children at each node. In addition, the application program may optionally specify how to synthesize values of tree nodes from their children's values and how to disseminate information in the tree. DIB uses a distributed algorithm, transparent to the application programmer, that can divide the problem into subproblems and dynamically allocate them to any number of machines. It can also recover from failures of machines. DIB now runs on the Xerox workstation network at Rochester Institute of Technology. Speedup is achievable for exhaustive traversal and branch-and-bound, with only a small fraction of the time spent in communication
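
    As described above, a DIB application supplies only three callbacks (the root, the per-node computation, and child generation), plus an optional value-synthesis step. The following Python sketch illustrates that callback style with a sequential driver standing in for DIB's distributed engine; the function names are hypothetical and are not DIB's actual interface.

        # Hypothetical sketch of a DIB-style callback interface (not the real API).
        # A real DIB engine would split the recursion tree into subproblems and
        # allocate them to machines; here a sequential driver plays that role.
        def backtrack(root, evaluate, children, synthesize=None):
            def visit(node):
                value = evaluate(node)                  # computation at this node
                child_values = [visit(c) for c in children(node)]
                if synthesize and child_values:         # optional value synthesis
                    value = synthesize(value, child_values)
                return value
            return visit(root)

        # Example: count the leaves of a complete binary tree of depth 3.
        leaves = backtrack(
            root=0,
            evaluate=lambda depth: 1 if depth == 3 else 0,
            children=lambda depth: [] if depth == 3 else [depth + 1, depth + 1],
            synthesize=lambda v, child_vals: v + sum(child_vals),
        )
        print(leaves)  # -> 8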

    Towards an abstract parallel branch and bound machine

    Many (parallel) branch and bound algorithms look very different from each other at first glance. They exploit, however, the same underlying computational model. This phenomenon can be used to define branch and bound algorithms in terms of a set of basic rules that are applied in a specific (predefined) order. In the sequential case, the specification of Mitten's rules turns out to be sufficient for the development of branch and bound algorithms. In the parallel case, the situation is a bit more complicated: we have to consider extra parameters such as work distribution and knowledge sharing. Here, the implementation of parallel branch and bound algorithms can be seen as a tuning of these parameters combined with the specification of Mitten's rules. These observations lead to generic systems, where the user provides the specification of the problem to be solved, and the system generates a branch and bound algorithm running on a specific architecture. We discuss some proposals that appeared in the literature. Next, we raise the question of whether the proposed models are flexible enough. We analyze the design decisions to be taken when implementing a parallel branch and bound algorithm. This analysis results in a classification model, which is validated by checking whether it captures existing branch and bound implementations. Finally, we return to the issue of flexibility of existing systems, and propose to add an abstract machine model to the generic framework. The model defines a virtual parallel branch and bound machine, within which the design decisions can be expressed in terms of the abstract machine. We outline some ideas on which the machine may be based, and present directions for future work
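
    The "basic rules in a predefined order" view of branch and bound (branching, bounding, selection, and elimination, in the spirit of Mitten's rules) can be made concrete as a small generic solver that takes those rules as parameters. The sketch below is an illustrative sequential reading of that idea in Python, not the abstract parallel machine proposed in the paper; all function names are assumptions.

        import heapq

        def branch_and_bound(root, branch, lower_bound, is_solution, cost):
            """Generic minimizing B&B parameterized by user-supplied rules:
            branch(node) -> children, lower_bound(node) -> optimistic bound,
            is_solution(node) -> bool, cost(node) -> objective value."""
            best, best_cost = None, float("inf")
            heap = [(lower_bound(root), 0, root)]      # selection rule: best-first
            counter = 1                                # tie-breaker for the heap
            while heap:
                bound, _, node = heapq.heappop(heap)
                if bound >= best_cost:                 # elimination rule
                    continue
                if is_solution(node):
                    if cost(node) < best_cost:
                        best, best_cost = node, cost(node)
                    continue
                for child in branch(node):             # branching rule
                    child_bound = lower_bound(child)
                    if child_bound < best_cost:        # bounding rule
                        heapq.heappush(heap, (child_bound, counter, child))
                        counter += 1
            return best, best_cost

    In a parallel setting, the extra parameters the abstract mentions (work distribution and knowledge sharing) would govern how such a frontier is partitioned across processors and how new incumbent solutions are shared.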

    High performance subgraph mining in molecular compounds

    Structured data represented in the form of graphs arises in several fields of science, and the growing amount of available data makes distributed graph mining techniques particularly relevant. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The problem is characterized by a highly irregular search tree, whereby no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening dataset, where the approach attains close-to-linear speedup in a network of workstations
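
    One way to read "dynamic partitioning of the search space" is that each worker keeps a frontier of unexplored branches of its depth-first search and, when asked, splits off part of it for a peer. The Python sketch below illustrates that generic stack-splitting idea; it does not reproduce the paper's actual subgraph-extension search or its peer-to-peer framework.

        from collections import deque

        class Worker:
            """Generic DFS worker whose unexplored nodes can be split off on demand."""
            def __init__(self, roots):
                self.frontier = deque(roots)       # unexplored search-tree nodes

            def split(self):
                """Donate roughly half of the oldest (typically largest) subtrees."""
                return [self.frontier.popleft()
                        for _ in range(len(self.frontier) // 2)]

            def step(self, expand):
                """Expand one node depth-first; expand(node) -> child nodes."""
                if not self.frontier:
                    return False
                self.frontier.extend(expand(self.frontier.pop()))
                return True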

    Dynamic load balancing for the distributed mining of molecular structures

    In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms to find frequent graphs have received increasing attention in recent years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, whereby no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely, a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids
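
    "Receiver-initiated" means that an idle worker asks a peer for work rather than busy workers pushing it. A hypothetical message-level sketch of both sides of that exchange is given below; send, receive, and the message tags are placeholders standing in for the peer-to-peer communication framework, not its real API.

        import random

        def request_work(my_id, peers, send, receive):
            """Idle side: poll random peers until one donates part of its tree."""
            while True:
                donor = random.choice(peers)
                send(donor, ("work-request", my_id))
                kind, payload = receive()
                if kind == "work" and payload:     # donor split off some subtrees
                    return payload
                if kind == "terminate":            # global termination detected
                    return []

        def handle_request(frontier, requester, send):
            """Busy side: give away the oldest half of the local DFS frontier."""
            half = len(frontier) // 2
            donated, frontier[:] = frontier[:half], frontier[half:]
            send(requester, ("work", donated))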

    The Ciao clp(FD) library. A modular CLP extension for Prolog

    We present a new free library for Constraint Logic Programming over Finite Domains, included with the Ciao Prolog system. The library is entirely written in Prolog, leveraging Ciao's module system and code transformation capabilities in order to achieve a highly modular design without compromising performance. We describe the interface, implementation, and design rationale of each modular component. The library meets several design goals: a high level of modularity, allowing the individual components to be replaced by different versions; high efficiency, being competitive with other FD implementations; a glass-box approach, so the user can specify new constraints at different levels; and a Prolog implementation, in order to ease the integration with Ciao's code analysis components. The core is built upon two small libraries which implement integer ranges and closures. On top of that, a finite domain variable datatype is defined, taking care of constraint reexecution depending on range changes. These three libraries form what we call the FD kernel of the library. This FD kernel is used in turn to implement several higher-level finite domain constraints, specified using indexicals. Together with a labeling module, this layer forms what we name the FD solver. A final level integrates the CLP(FD) paradigm with our FD solver. This is achieved using attributed variables and a compiler from the CLP(FD) language to the set of constraints provided by the solver. It should be noted that the user of the library is encouraged to work at any of those levels as convenient: from writing a new range module to enriching the set of FD constraints by writing new indexicals
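
    The layering described (integer ranges, finite-domain variables that re-execute constraints when their ranges change, and indexical-style propagators on top) can be illustrated outside Prolog. The Python sketch below mimics that structure for interval domains only; it is an assumption-laden analogy, not Ciao's clp(FD) interface.

        class FDVar:
            """Finite-domain variable over an integer interval; propagators
            (closures) are re-executed whenever the range shrinks."""
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
                self.watchers = []                 # constraint closures to wake

            def narrow(self, lo, hi):
                lo, hi = max(self.lo, lo), min(self.hi, hi)
                if lo > hi:
                    raise ValueError("empty domain")
                if (lo, hi) != (self.lo, self.hi):
                    self.lo, self.hi = lo, hi
                    for wake in self.watchers:     # reexecution on range change
                        wake()

        def less_than(x, y):
            """Indexical-style propagator for X #< Y on interval bounds."""
            def propagate():
                x.narrow(x.lo, y.hi - 1)
                y.narrow(x.lo + 1, y.hi)
            x.watchers.append(propagate)
            y.watchers.append(propagate)
            propagate()

        x, y = FDVar(0, 10), FDVar(0, 5)
        less_than(x, y)
        print((x.lo, x.hi), (y.lo, y.hi))          # -> (0, 4) (1, 5)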

    Synthesis of new antenna arrays with arbitrary geometries based on the superformula

    The synthesis of antenna arrays with low sidelobe levels is needed to enhance the efficiency of communication systems. In this paper, new arbitrary geometries that improve the ability of antenna arrays to minimize the sidelobe level are proposed. We employ the well-known superformula equation in the antenna array field by incorporating it into the general array factor equation. Three metaheuristic optimization algorithms are used to synthesize the antenna arrays and their geometries: the antlion optimization (ALO) algorithm, the grasshopper optimization algorithm (GOA), and a new hybrid algorithm based on ALO and GOA. All the proposed algorithms are high-performance computational methods that have proved their efficiency in solving different real-world optimization problems. Fifteen design examples are presented and compared against the most general standard geometry, the elliptical antenna array (EAA), to prove the validity of the approach. It is observed that the proposed geometries outperform EAA geometries by 4.5 dB and 10.9 dB in the worst and best scenarios, respectively, which demonstrates the advantage and superiority of our approach
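
    The superformula referred to here is the Gielis superformula, which gives the radial distance of a contour as a function of angle; placing array elements on such a contour and evaluating the resulting array factor is what the optimizers tune. The Python sketch below shows that pipeline with uniform excitation and illustrative parameter values, which are assumptions rather than the paper's settings.

        import numpy as np

        def superformula(phi, m, n1, n2, n3, a=1.0, b=1.0):
            """Gielis superformula: radial distance r(phi) of the array contour."""
            term = (np.abs(np.cos(m * phi / 4) / a) ** n2
                    + np.abs(np.sin(m * phi / 4) / b) ** n3)
            return term ** (-1.0 / n1)

        def array_factor(phi_obs, x, y, k=2 * np.pi, excitations=None):
            """Azimuth-plane array factor for elements at (x, y), in wavelengths;
            uniform unit excitation unless amplitudes are supplied."""
            if excitations is None:
                excitations = np.ones_like(x)
            phase = k * (np.outer(np.cos(phi_obs), x) + np.outer(np.sin(phi_obs), y))
            return np.abs((excitations * np.exp(1j * phase)).sum(axis=1))

        # Example: 12 elements placed on a 6-lobed superformula contour.
        N = 12
        phi_n = np.linspace(0, 2 * np.pi, N, endpoint=False)
        r_n = 0.5 * superformula(phi_n, m=6, n1=1, n2=7, n3=7)
        pattern = array_factor(np.linspace(0, 2 * np.pi, 361),
                               r_n * np.cos(phi_n), r_n * np.sin(phi_n))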

    Towards better algorithms for parallel backtracking

    Many algorithms in operations research and artificial intelligence are based on depth-first search in implicitly defined trees. For parallelizing these algorithms, a load balancing scheme is needed which is able to evenly distribute parts of an irregularly shaped tree over the processors. It should work with minimal interprocessor communication and without prior knowledge of the tree's shape. Previously known load balancing algorithms either require sending a message for each tree node or only work efficiently for large search trees. This paper introduces new randomized dynamic load balancing algorithms for tree-structured computations, a generalization of backtrack search. These algorithms only communicate when necessary and have asymptotically optimal scalability for many important cases. They work on hypercubes, butterflies, meshes, and many other architectures
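
    The scheme sketched here (communicate only when a processor runs out of work, and split an irregular tree without knowing its shape) is commonly realized as random polling with stack splitting. The toy Python simulation below illustrates why that keeps communication proportional to need; the synthetic work model and step counting are assumptions for illustration only, not the paper's analysis.

        import random

        def simulate_random_polling(num_procs, root_work, seed=0):
            """Each processor expands one node per step from its local stack; an
            idle processor asks one random peer, which donates half its stack.
            Returns the number of parallel steps used."""
            rng = random.Random(seed)
            stacks = [[] for _ in range(num_procs)]
            stacks[0] = [root_work]                    # all work starts in one place
            steps = 0
            while any(stacks):
                steps += 1
                for p in range(num_procs):
                    if stacks[p]:
                        node = stacks[p].pop()         # depth-first expansion
                        if node > 1:                   # irregular branching
                            left = rng.randint(1, node - 1)
                            stacks[p] += [left, node - left]
                    else:
                        victim = rng.randrange(num_procs)   # receiver-initiated poll
                        half = len(stacks[victim]) // 2
                        stacks[p], stacks[victim] = (stacks[victim][:half],
                                                     stacks[victim][half:])
            return steps

        print(simulate_random_polling(1, 1000), simulate_random_polling(16, 1000))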

    Parallelized reliability estimation of reconfigurable computer networks

    A parallelized system, ASSURE, is described for computing the reliability of embedded avionics flight control systems that are able to reconfigure themselves in the event of failure. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance
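
    The upper and lower bounds mentioned here can be illustrated by state-space truncation: explore the most probable paths of the (discretized) state space, count the mass of explored failure paths as a lower bound, and charge all truncated mass to the upper bound. The Python sketch below is a much-simplified serial reading of that idea; the real tool handles semi-Markov timing and generates the state space in parallel.

        import heapq

        def bound_failure_probability(initial, transitions, is_failure, prune=1e-12):
            """Explore high-probability paths first. transitions(state) returns
            (next_state, probability) pairs; paths below `prune` are truncated
            and their mass is charged to the upper bound. Assumes pruning
            eventually cuts off every path so the search terminates."""
            lower, truncated = 0.0, 0.0
            frontier = [(-1.0, 0, initial)]            # max-heap on path probability
            counter = 1                                # tie-breaker for the heap
            while frontier:
                neg_p, _, state = heapq.heappop(frontier)
                p = -neg_p
                if is_failure(state):
                    lower += p                         # explored failure path
                    continue
                for nxt, q in transitions(state):
                    if p * q < prune:
                        truncated += p * q             # unknown: could still fail
                    else:
                        heapq.heappush(frontier, (-(p * q), counter, nxt))
                        counter += 1
            return lower, lower + truncated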