273 research outputs found

    Performance of scientific processing in networks of workstations: matrix multiplication example

    Parallel computing on networks of workstations is intensively used in some application areas, such as linear algebra operations. Issues such as heterogeneity in both processing and communication hardware are considered solved by the use of parallel processing libraries, but experimentation on performance under these circumstances still seems necessary. Also, installed networks of workstations are especially attractive for parallel processing due to their extremely low cost and their wide availability, given the number of installed local area networks. The performance of such networks of workstations is analyzed in depth by means of a simple application: matrix multiplication. A parallel algorithm for matrix multiplication is proposed, derived from two main sources: a) previously proposed algorithms for this task on traditional parallel computers, and b) the bus-based interconnection network of workstations. This parallel algorithm is analyzed experimentally in terms of workstation workload and data communication, the two main factors in overall parallel computing performance.
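
    As an illustration of the kind of algorithm described above, the following is a minimal sketch in C with MPI, not the paper's exact algorithm: row blocks of A are scattered among the workstations, B is broadcast once (a pattern that maps naturally onto a bus-based LAN, where a broadcast costs roughly one message), and the row blocks of C are gathered back. The matrix order N, its divisibility by the number of processes, and the constant test values are assumptions made for the example.

    /*
     * Minimal sketch (illustrative, not the paper's algorithm): square
     * matrices of order N, row blocks of A scattered among processes,
     * B broadcast to all, row blocks of C gathered back on the root.
     * Assumes N is divisible by the number of processes; error handling
     * is omitted for brevity.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    #define N 512                     /* matrix order (assumption) */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int rows = N / nprocs;        /* rows of A (and C) per process */
        double *A = NULL, *C = NULL;
        double *B      = malloc((size_t)N * N * sizeof(double));
        double *Ablock = malloc((size_t)rows * N * sizeof(double));
        double *Cblock = malloc((size_t)rows * N * sizeof(double));

        if (rank == 0) {              /* root holds the full matrices */
            A = malloc((size_t)N * N * sizeof(double));
            C = malloc((size_t)N * N * sizeof(double));
            for (size_t i = 0; i < (size_t)N * N; i++) {
                A[i] = 1.0;           /* constant test data (assumption) */
                B[i] = 2.0;
            }
        }

        /* Distribute data: row blocks of A, one broadcast of B
           (cheap on a bus-based LAN). */
        MPI_Scatter(A, rows * N, MPI_DOUBLE, Ablock, rows * N, MPI_DOUBLE,
                    0, MPI_COMM_WORLD);
        MPI_Bcast(B, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* Local computation: Cblock = Ablock * B */
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < N; j++) {
                double s = 0.0;
                for (int k = 0; k < N; k++)
                    s += Ablock[i * N + k] * B[k * N + j];
                Cblock[i * N + j] = s;
            }

        /* Collect the row blocks of C on the root. */
        MPI_Gather(Cblock, rows * N, MPI_DOUBLE, C, rows * N, MPI_DOUBLE,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("C[0][0] = %f\n", C[0]);

        free(Ablock); free(Cblock); free(B);
        if (rank == 0) { free(A); free(C); }
        MPI_Finalize();
        return 0;
    }

    A run of this sketch would typically be compiled with mpicc and launched with mpirun across the workstations of the LAN.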

    Performance of scientific processing in networks of workstations

    The growing processing power of standard workstations, along with the relative ease with which they can be made available for parallel processing, have both contributed to their increasing use in computation-intensive application areas. Usually, computation-intensive areas have been referred to as scientific processing; one of them is linear algebra, where a great effort has been made to optimize solution methods for serial as well as for parallel computing. Since the appearance of software libraries for parallel environments such as PVM (Parallel Virtual Machine) [4] and implementations of MPI (Message Passing Interface) [5], the distributed processing power of networks of workstations has been available for parallel processing as well. Also, a strong emphasis has been placed on the heterogeneous computing facility provided by these libraries over networks of workstations. However, there is a lack of published results on the performance obtained on this kind of parallel (more specifically, distributed) processing architecture. From the whole area of linear algebra applications, the most challenging operations to solve (in terms of performance) are the so-called Level 3 BLAS (Basic Linear Algebra Subprograms). In Level 3 BLAS, all of the processing can be expressed (and solved) in terms of matrix-matrix operations. Even more specifically, the most studied operation has been matrix multiplication, which is in fact a benchmark in this application area. Track: Concurrent, Parallel and Distributed Processing. Networks. Red de Universidades con Carreras en Informática (RedUNCI).
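
    Since matrix multiplication is singled out above as the Level 3 BLAS benchmark, the following minimal sketch in C shows how the operation is expressed as a single DGEMM call (C = alpha*A*B + beta*C) through the CBLAS interface, and how a rough MFLOPS figure can be derived from its approximately 2*N^3 floating-point operations. The matrix order N, the constant test data, the CPU-time-based timing, and the availability of a CBLAS implementation to link against are assumptions for illustration, not details taken from the paper.

    /*
     * Minimal sketch: matrix multiplication expressed as the Level 3 BLAS
     * operation DGEMM (C = alpha*A*B + beta*C) through the CBLAS interface,
     * timed roughly via CPU time. Linking against a CBLAS implementation
     * (e.g. -lcblas or -lopenblas) is assumed.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <cblas.h>

    #define N 1000                          /* matrix order (assumption) */

    int main(void)
    {
        double *A = malloc((size_t)N * N * sizeof(double));
        double *B = malloc((size_t)N * N * sizeof(double));
        double *C = malloc((size_t)N * N * sizeof(double));
        for (size_t i = 0; i < (size_t)N * N; i++) {
            A[i] = 1.0;                     /* constant test data (assumption) */
            B[i] = 2.0;
            C[i] = 0.0;
        }

        clock_t t0 = clock();
        /* C = 1.0 * A * B + 0.0 * C, row-major storage, no transposition */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    N, N, N, 1.0, A, N, B, N, 0.0, C, N);
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* Matrix multiplication performs about 2*N^3 floating-point operations. */
        printf("DGEMM: %.2f s, %.1f MFLOPS\n",
               secs, 2.0 * N * N * N / secs / 1e6);

        free(A); free(B); free(C);
        return 0;
    }

    With beta set to 0.0, DGEMM overwrites C, which is all a plain matrix multiplication needs; the initialization of C above only keeps the sketch self-contained.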

    Parallel Computing in Local Area Networks

    This thesis focuses on parallel computing on installed local area networks (LANs), analyzing problems and possible solutions while taking into account the main factors of computing and communication. More specifically, LANs are characterized as parallel computers in the context of linear algebra applications, and parallelization guidelines are proposed which are: specific to parallel computing on LANs, and simple enough to be applied to a wide range of problems. Facultad de Informática.

    Computer Architecture: A Quantitative Approach, J. L. Hennessy, D. A. Patterson; Morgan Kaufmann, 4th Edition, 2007

    An updated edition of the classic book on computer architecture by J. L. Hennessy and D. A. Patterson. The authors show their high standards for technological ideas, writing style, and teaching methodology, all in one book; in fact, they have maintained this quality since the first edition. As the authors explain, one of the main reasons for the fourth edition is its focus on parallel architectures for high performance, more specifically: designs with multiple processors or processor cores per chip. Facultad de Informática.

    Principles of distributed database systems, third edition: Tamer Özsu, Patrick Valduriez; Springer – 2011; ISBN 978-1-4419-8833-1

    This book can be considered a classic from many points of view. To start with, it is the third in a series of editions since the first one more than 20 years ago, and thus contains the evolution of the field of distributed databases as well as the basic concepts that are not strongly affected by technology evolution. The thorough review of relational databases, as well as a discussion of several aspects of distributed systems (such as computer networks) in Chapter 2, makes the book highly self-contained, enhancing the reading and understanding of the underlying principles. The book covers all the problems expected to be encountered in distributed databases. (Paragraph extracted from the text as an abstract.) Facultad de Informática.
