7 research outputs found

    Influence of Overhead on Processor Allocation for Multiple Loops

    No full text
    We consider two consecutive and independent forall loops and the strategy used to allocate processors for their execution. One strategy is to execute the two loops consecutively, each time with all the available processors. Another is to execute both loops simultaneously, each with a fraction of the available processors. We show that the presence of overhead can influence this choice, since the second strategy uses a smaller number of processors for each individual loop, thus reducing the effect of the overhead. We establish conditions under which the second strategy is better. Finally, we consider the special case of a single forall loop and show conditions under which it is more advantageous to split it into two smaller loops and execute them simultaneously, each with a fraction of the available processors.
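    The trade-off the abstract describes can be sketched with a toy cost model. This is only an illustration, not the paper's actual model: the loop sizes, processor count, and the assumption that overhead grows linearly with the number of processors are all hypothetical.

    ```python
    # Toy cost model (illustrative only): a forall loop of n iterations
    # on p processors is assumed to cost n / p plus a per-processor
    # overhead o * p. The linear overhead term is an assumption.
    def loop_time(n, p, o):
        return n / p + o * p

    def sequential_strategy(n1, n2, P, o):
        # Strategy 1: run the two loops one after the other,
        # each time with all P available processors.
        return loop_time(n1, P, o) + loop_time(n2, P, o)

    def simultaneous_strategy(n1, n2, p1, p2, o):
        # Strategy 2: run both loops at once, on disjoint
        # processor subsets of sizes p1 and p2.
        return max(loop_time(n1, p1, o), loop_time(n2, p2, o))

    if __name__ == "__main__":
        n1 = n2 = 1000   # iterations per loop (hypothetical)
        P, o = 100, 1.0  # processors and overhead constant (hypothetical)
        t_seq = sequential_strategy(n1, n2, P, o)          # 220.0
        t_sim = simultaneous_strategy(n1, n2, 50, 50, o)   # 70.0
        print(t_seq, t_sim)
    ```

    With these (assumed) numbers the second strategy wins precisely because each loop runs on fewer processors and so pays less overhead, matching the abstract's argument.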


    Generating Parallel Code from High-Level Neural Network Descriptions

    Much work has been done in the area of parallel simulation of connectionist systems. However, parallel implementation issues for artificial neural networks have usually been discussed only in general terms, while the actual parallel programs implement specific network models and are written in programming languages like C or C++. This paper deals with the transparent parallelization of neural networks. The goal is to automatically derive parallel code for MIMD and SPMD architectures from abstract descriptions of networks. In particular, unit parallelism and training set parallelism are discussed. First, an outline of the abstract neural network description language CONNECT is given. The language combines procedural, functional, and object-oriented paradigms and allows for readable and, at the same time, complete definitions of connectionist systems. Currently, C++ code can be generated from CONNECT specifications. The code generation process is explained, and it is shown how unit parallelism can be…
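    Unit parallelism, one of the two schemes the abstract mentions, can be sketched sequentially: each "worker" owns a slice of the units (rows of the weight matrix) and computes only its units' outputs. This is a minimal illustration in Python, not CONNECT's generated code (which the abstract says is C++); the layer shape and sigmoid activation are assumptions.

    ```python
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def layer_forward(weights, inputs):
        # Dense layer: one sigmoid unit per row of the weight matrix.
        return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
                for row in weights]

    def unit_parallel_forward(weights, inputs, n_workers):
        # Partition the units across workers; each partial result
        # covers one worker's slice of the layer.
        chunk = math.ceil(len(weights) / n_workers)
        parts = [layer_forward(weights[i:i + chunk], inputs)
                 for i in range(0, len(weights), chunk)]
        # Concatenating the slices reproduces the full layer output.
        return [y for part in parts for y in part]

    if __name__ == "__main__":
        W = [[0.1, 0.2], [0.3, -0.4], [-0.5, 0.6], [0.7, 0.8]]
        x = [1.0, -1.0]
        assert unit_parallel_forward(W, x, 2) == layer_forward(W, x)
    ```

    Training set parallelism, by contrast, would replicate the whole network per worker and split the training examples instead.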

    An environment for knowledge discovery in biology

    This paper describes a data mining environment for knowledge discovery in bioinformatics applications. The system has a generic kernel that implements the mining functions to be applied to input primary databases of biomedical information, organized in a warehouse architecture. Both supervised and unsupervised classification can be implemented within the kernel and applied to data extracted from the primary database, with the results being suitably stored in a complex object database for knowledge discovery. The kernel also includes a specific high-performance library that allows designing and applying the mining functions on parallel machines. The experimental results obtained by the application of the kernel functions are reported. © 2003 Elsevier Ltd. All rights reserved.
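    The two mining-function families the abstract names, supervised and unsupervised classification, can be sketched minimally. The classifiers below (nearest centroid and a deterministic 1-D 2-means) and the toy data are assumptions for illustration only; they are not the kernel's actual functions.

    ```python
    def centroid(values):
        return sum(values) / len(values)

    def nearest_centroid_fit(samples, labels):
        # Supervised: one centroid per known class label.
        return {lab: centroid([s for s, l in zip(samples, labels) if l == lab])
                for lab in set(labels)}

    def nearest_centroid_predict(model, sample):
        # Assign the sample to the class with the closest centroid.
        return min(model, key=lambda lab: abs(sample - model[lab]))

    def two_means(samples, iters=10):
        # Unsupervised: discover two groups with no labels given.
        c0, c1 = min(samples), max(samples)  # deterministic initialisation
        for _ in range(iters):
            g0 = [s for s in samples if abs(s - c0) <= abs(s - c1)]
            g1 = [s for s in samples if abs(s - c0) > abs(s - c1)]
            c0, c1 = centroid(g0), centroid(g1)
        return c0, c1

    if __name__ == "__main__":
        # Hypothetical 1-D expression values with two obvious groups.
        model = nearest_centroid_fit([1.0, 1.2, 5.0, 5.4],
                                     ["low", "low", "high", "high"])
        print(nearest_centroid_predict(model, 1.1))  # low
        print(two_means([1.0, 1.2, 5.0, 5.4]))       # (1.1, 5.2)
    ```

    In the architecture the abstract describes, such functions would read from the warehouse-style primary database and write their results to the complex object database.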

    Junior Barrera, Roberto M. Cesar-Jr, João E. Ferreira, Marco D. Gubitoso


    Density and diversity of filamentous fungi in the water and sediment of Araçá bay in São Sebastião, São Paulo, Brazil
