
    Towards a lightweight generic computational grid framework for biological research

    Background: An increasing number of scientific research projects require access to large-scale computational resources. This is particularly true in the biological field, whether to facilitate the analysis of large high-throughput data sets, or to perform large numbers of complex simulations – a characteristic of the emerging field of systems biology. Results: In this paper we present a lightweight generic framework for combining disparate computational resources at multiple sites (ranging from local computers and clusters to established national Grid services). A detailed guide describing how to set up the framework is available from the following URL: http://igrid-ext.cryst.bbk.ac.uk/portal_guide/. Conclusion: This approach is particularly (but not exclusively) appropriate for large-scale biology projects with multiple collaborators working at different national or international sites. The framework is relatively easy to set up, hides the complexity of Grid middleware from the user, and provides access to resources through a single, uniform interface. It has been developed as part of the European ImmunoGrid project.

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Enhancing e-Infrastructures with Advanced Technical Computing: Parallel MATLAB® on the Grid

    MATLAB® is widely used within the engineering and scientific fields as the language and environment for technical computing, while collaborative Grid computing on e-Infrastructures is used by scientific communities to deliver a faster time to solution. MATLAB allows users to express parallelism in their applications, and then execute code on multiprocessor environments such as large-scale e-Infrastructures. This paper demonstrates the integration of MATLAB and Grid technology with a representative implementation that uses gLite middleware to run parallel programs. Experimental results highlight the increases in productivity and performance that users obtain with MATLAB parallel computing on Grids.
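    The parallel pattern this abstract relies on — independent loop iterations farmed out to workers, as in MATLAB's parfor — can be illustrated with the Python standard library. This is a generic analogue, not the gLite-based implementation the paper describes, and simulate() is a hypothetical placeholder workload.

    ```python
    from multiprocessing import Pool

    def simulate(x):
        # hypothetical per-iteration workload, standing in for a model evaluation
        return x * x

    if __name__ == "__main__":
        # farm independent iterations out to a pool of workers,
        # analogous to a MATLAB parfor loop over parameter values
        with Pool(processes=4) as pool:
            results = pool.map(simulate, range(10))
        print(results)
    ```

    The constraint is the same one parfor imposes: iterations must be independent of one another. On a Grid, the local worker pool is replaced by remotely scheduled jobs, but the decomposition of the problem is unchanged.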

    Putting the User at the Centre of the Grid: Simplifying Usability and Resource Selection for High Performance Computing

    Computer simulation is finding a role in an increasing number of scientific disciplines, concomitant with the rise in available computing power. Realizing this inevitably requires access to computational power beyond the desktop, making use of clusters, supercomputers, data repositories, networks and distributed aggregations of these resources. Accessing one such resource entails a number of usability and security problems; when multiple geographically distributed resources are involved, the difficulty is compounded. However, usability is an all too often neglected aspect of computing on e-infrastructures, although it is one of the principal factors militating against the widespread uptake of distributed computing. The usability problems are twofold: the user needs to know how to execute the applications they need to use on a particular resource, and also to gain access to suitable resources to run their workloads as they need them. In this thesis we present our solutions to these two problems. Firstly we propose a new model of e-infrastructure resource interaction, which we call the user–application interaction model, designed to simplify executing applications on high performance computing resources. We describe the implementation of this model in the Application Hosting Environment, which provides a Software as a Service layer on top of distributed e-infrastructure resources. We compare the usability of our system with commonly deployed middleware tools using five usability metrics. Our middleware and security solutions are judged to be more usable than other commonly deployed middleware tools. We go on to describe the requirements for a resource trading platform that allows users to purchase access to resources within a distributed e-infrastructure. We present the implementation of this Resource Allocation Market Place as a distributed multi-agent system, and show how it provides a highly flexible, efficient tool to schedule workflows across high performance computing resources.
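    The resource-trading idea can be sketched with a toy matcher. The class and function names below are hypothetical illustrations, not the actual Resource Allocation Market Place API: jobs carry a budget, resources carry a price, and a greedy pass assigns each job to the cheapest resource it can afford.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        price: float   # cost per core-hour asked by the provider
        cores: int     # cores currently available

    @dataclass
    class Job:
        name: str
        budget: float  # maximum price per core-hour the user will pay
        cores: int     # cores required

    def match(jobs, resources):
        """Greedily assign each job to the cheapest affordable resource."""
        assignments = {}
        by_price = sorted(resources, key=lambda r: r.price)
        for job in sorted(jobs, key=lambda j: j.budget, reverse=True):
            for res in by_price:
                if res.price <= job.budget and res.cores >= job.cores:
                    assignments[job.name] = res.name
                    res.cores -= job.cores  # capacity is consumed by the match
                    break
        return assignments
    ```

    A real marketplace of the kind described in the thesis would negotiate between distributed agents rather than match centrally; the sketch only shows the price/budget bookkeeping that any such scheme must perform.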

    Planck early results. V. The low frequency instrument data processing

    We describe the processing of data from the Low Frequency Instrument (LFI) used in production of the Planck Early Release Compact Source Catalogue (ERCSC). In particular, we discuss the steps involved in reducing the data from telemetry packets to cleaned, calibrated, time-ordered data (TOD) and frequency maps. Data are continuously calibrated using the modulation of the temperature of the cosmic microwave background radiation induced by the motion of the spacecraft. Noise properties are estimated from TOD from which the sky signal has been removed using a generalized least squares map-making algorithm. Measured 1/f noise knee frequencies range from ~100 mHz at 30 GHz to a few tens of mHz at 70 GHz. A destriping code (Madam) is employed to combine radiometric data and pointing information into sky maps, minimizing the variance of correlated noise. Noise covariance matrices required to compute statistical uncertainties on LFI and Planck products are also produced. Main beams are estimated down to the −10 dB level using Jupiter transits, which are also used for geometrical calibration of the focal plane.

    Planck is too large a project to allow full acknowledgement of all contributions by individuals, institutions, industries, and funding agencies. The main entities involved in the mission operations are as follows. The European Space Agency operates the satellite via its Mission Operations Centre located at ESOC (Darmstadt, Germany) and coordinates scientific operations via the Planck Science Office located at ESAC (Madrid, Spain). Two Consortia, comprising around 50 scientific institutes within Europe, the USA, and Canada, and funded by agencies from the participating countries, developed the scientific instruments LFI and HFI, and continue to operate them via Instrument Operations Teams located in Trieste (Italy) and Orsay (France). The Consortia are also responsible for scientific processing of the acquired data.
    The Consortia are led by the Principal Investigators: J.L. Puget in France for HFI (funded principally by CNES and CNRS/INSU-IN2P3) and N. Mandolesi in Italy for LFI (funded principally via ASI). The NASA US Planck Project, based at JPL and involving scientists at many US institutions, contributes significantly to the efforts of these two Consortia. The author list for this paper has been selected by the Planck Science Team, and is composed of individuals from all of the above entities who have made multi-year contributions to the development of the mission. It does not pretend to be inclusive of all contributions. The Planck-LFI project is developed by an International Consortium led by Italy and involving Canada, Finland, Germany, Norway, Spain, Switzerland, the UK, and the USA. The Italian contribution to Planck is supported by the Italian Space Agency (ASI) and INAF. This work was supported by the Academy of Finland grants 121703 and 121962. We thank the DEISA Consortium (http://www.deisa.eu), co-funded through the EU FP6 project RI-031513 and the FP7 project RI-222919, for support within the DEISA Virtual Community Support Initiative. We thank CSC – IT Center for Science Ltd (Finland) for computational resources. We acknowledge financial support provided by the Spanish Ministerio de Ciencia e Innovación through the Plan Nacional del Espacio y Plan Nacional de Astronomía y Astrofísica. The Max Planck Institute for Astrophysics Planck Analysis Centre (MPAC) is funded by the Space Agency of the German Aerospace Center (DLR) under grant 50OP0901 with resources of the German Federal Ministry of Economics and Technology, and by the Max Planck Society. This work has made use of the Planck satellite simulation package (Level-S), which is assembled by the MPAC (Reinecke et al. 2006).
    We acknowledge financial support provided by the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Some of the results in this paper have been derived using the HEALPix package (Górski et al. 2005). A description of the Planck Collaboration and a list of its members, indicating which technical or scientific activities they have been involved in, can be found at http://www.rssd.esa.int/index.php?project=PLANCK&page=Planck_Collaboration
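    The destriping step this abstract describes admits a compact illustration. The sketch below is a toy, not the Madam code: it models the TOD as a sky signal plus piecewise-constant baseline offsets, then alternates between binning a map from the offset-cleaned TOD and re-estimating the offsets from the signal-removed residual. All function names are illustrative, and the toy assumes white noise and a baseline length that divides the TOD evenly.

    ```python
    import numpy as np

    def bin_map(tod, pix, npix):
        """Bin TOD samples into a sky map by averaging per pixel."""
        hits = np.bincount(pix, minlength=npix)
        summ = np.bincount(pix, weights=tod, minlength=npix)
        return np.where(hits > 0, summ / np.maximum(hits, 1), 0.0)

    def destripe(tod, pix, npix, baseline_len, n_iter=20):
        """Iteratively estimate per-baseline offsets and a destriped map."""
        offsets = np.zeros(len(tod) // baseline_len)
        for _ in range(n_iter):
            # subtract the current baseline estimate, bin residual into a map
            clean = tod - np.repeat(offsets, baseline_len)
            m = bin_map(clean, pix, npix)
            # re-estimate each offset from the signal-removed residual
            resid = tod - m[pix]
            offsets = resid.reshape(-1, baseline_len).mean(axis=1)
            offsets -= offsets.mean()  # offsets are degenerate with map mean
        return offsets, bin_map(tod - np.repeat(offsets, baseline_len), pix, npix)
    ```

    A production destriper like Madam additionally weights samples by a noise prior and solves for the offsets as one linear system; the toy only shows the underlying alternation between map binning and baseline estimation.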