
    A review of High Performance Computing foundations for scientists

    The growth of computational capabilities has made simulation emerge as a third discipline of science, lying midway between the experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities that would otherwise be inaccessible, helps to improve experiments, and provides new insights into the systems being analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, as it can help them improve the performance of their theoretical models and simulations. This review includes some technical essentials to that end, and it is devised as a complement for researchers whose education is focused on scientific issues rather than on technological aspects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way that is easy to understand without much prior background. We sketch how standard computers and supercomputers work, cover distributed computing, and discuss essential aspects to take into account when running scientific calculations on computers.
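    The review itself covers these fundamentals in prose; as an illustrative aside (not taken from the paper), a classic first formula when reasoning about parallel performance is Amdahl's law, which bounds the speedup achievable when part of a calculation remains serial. The minimal Python sketch below assumes nothing beyond the standard library.

        # Illustrative sketch (not from the review): Amdahl's law, the
        # textbook HPC bound relating a code's serial fraction to the
        # speedup achievable on n processors.

        def amdahl_speedup(serial_fraction, n_processors):
            """Upper bound on speedup when a fraction of the work is serial."""
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

        # Even a 5% serial fraction caps 1024 processors at under 20x speedup.
        for p in (4, 64, 1024):
            print(p, round(amdahl_speedup(0.05, p), 2))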

    Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies

    A Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries, which makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems, most of them the results of academic research projects, have been developed all over the world. This chapter focuses on four of these middleware systems: UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE, which lacked this functionality. A comparison of these systems on the basis of their architecture, implementation model and several other features is included.
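    The abstract does not detail how the UNICORE broker works internally, but the matchmaking step at the heart of any Grid resource broker is easy to sketch. The Python below is a hypothetical illustration; the class and field names are invented for this example and are not UNICORE's API.

        # Hypothetical matchmaking core of a Grid resource broker:
        # filter resources that meet a job's requirements, then pick
        # the least-loaded one. Names and fields are illustrative only.

        from dataclasses import dataclass

        @dataclass
        class Resource:
            name: str
            cpus: int
            mem_gb: int
            load: float  # current utilisation in [0, 1]

        @dataclass
        class Job:
            cpus: int
            mem_gb: int

        def select_resource(job, resources):
            """Return the least-loaded resource satisfying the job, or None."""
            candidates = [r for r in resources
                          if r.cpus >= job.cpus and r.mem_gb >= job.mem_gb]
            return min(candidates, key=lambda r: r.load) if candidates else None

        sites = [Resource("siteA", 64, 128, 0.7), Resource("siteB", 32, 64, 0.2)]
        print(select_resource(Job(cpus=16, mem_gb=32), sites).name)  # siteB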

    World Wide Web Robot for Extreme Datamining with Swiss-Tx Supercomputers

    This paper discusses the software and hardware issues of designing a highly parallel robot for extreme datamining on the Internet. As a sample application, a World Wide Web server-count experiment for Switzerland and Thailand is presented. Our platform of choice is the Swiss-Tx, a supercomputer built from commodity components that runs Windows NT and Compaq Tru64 UNIX. The hardware and software of this machine are discussed and benchmark results presented; they show that NT is a feasible choice even under these extreme conditions. By using statistical modelling to optimize the search process, the inevitable bandwidth problem is reduced, to some extent, to a computation problem. We suggest that our approach to Web robots is a robust choice for a multitude of future Internet applications, which may lead to large-scale and cost-efficient usage of Web robots.
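    As a hedged illustration of the kind of server-count probe the abstract describes (not the paper's actual robot, which ran on the Swiss-Tx), the Python sketch below checks hosts in parallel with a thread pool and tallies those answering on the HTTP port; the host names are placeholders.

        # Toy parallel server-count probe, loosely in the spirit of the
        # experiment described above. Host names are placeholders.

        from concurrent.futures import ThreadPoolExecutor
        import socket

        def is_web_server(host, port=80, timeout=2.0):
            """True if a TCP connection to the host's HTTP port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        hosts = ["www.example.ch", "www.example.th"]  # placeholder list
        with ThreadPoolExecutor(max_workers=32) as pool:
            alive = sum(pool.map(is_web_server, hosts))
        print(f"{alive} of {len(hosts)} hosts answered on port 80")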