
    AstroGrid-D: Grid Technology for Astronomical Science

    We present the status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines through a set of commands as well as software interfaces. It allows simple use of compute and storage facilities and makes it easy to schedule and monitor compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context that led to the demand for advanced software solutions in Astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites, and advanced applications for specific scientific purposes, such as a connection to robotic telescopes. These examples show how grid execution improves, for example, the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 focuses on the administrative aspects of the infrastructure, namely managing users and monitoring activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy. Comment: 14 pages, 12 figures. Subjects: data analysis, image processing, robotic telescopes, simulations, grid. Accepted for publication in New Astronomy.
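
    The abstract mentions a job manager for automatic submission of compute tasks on a GT4-based grid. As a minimal sketch only, the snippet below shells out to the GT4 globusrun-ws client to submit a single command to a WS-GRAM endpoint; the factory contact, flags and helper name are illustrative assumptions, not AstroGrid-D's actual job manager or configuration.

```python
# Hypothetical sketch: submit one compute task to a GT4 WS-GRAM endpoint by
# shelling out to the globusrun-ws client (assumed to be installed and on the
# PATH). Endpoint and flags are illustrative, not AstroGrid-D's setup.
import subprocess

def submit_job(factory_contact: str, executable: str, *args: str) -> str:
    """Submit a job and return the client's streamed output."""
    cmd = [
        "globusrun-ws", "-submit",
        "-F", factory_contact,   # WS-GRAM factory endpoint (assumed value)
        "-s",                    # stream stdout/stderr back to the client
        "-c", executable, *args,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # Hypothetical endpoint; a real deployment would publish its own resources.
    out = submit_job(
        "https://grid.example.org:8443/wsrf/services/ManagedJobFactoryService",
        "/bin/hostname",
    )
    print(out)
```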

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Cloudbus Toolkit for Market-Oriented Cloud Computing

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing an SDK (Software Development Kit) for construction of Cloud applications and their deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim, supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research. Comment: 21 pages, 6 figures, 2 tables. Conference paper.
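
    CloudSim, mentioned in point (iv), is a Java toolkit for simulating Clouds. As a deliberately simplified sketch of the kind of performance study such a simulator enables (not the CloudSim API), the Python snippet below binds cloudlets (tasks) to VMs with a round-robin broker policy and estimates their completion times; all names and numbers are hypothetical.

```python
# Simplified, hypothetical sketch of a cloud performance study in the spirit
# of CloudSim: cloudlets are bound to VMs and their finish times estimated.
from dataclasses import dataclass, field

@dataclass
class Vm:
    vm_id: int
    mips: float                      # processing capacity
    queue: list = field(default_factory=list)

@dataclass
class Cloudlet:
    cloudlet_id: int
    length: float                    # work, in million instructions

def bind_round_robin(cloudlets, vms):
    """Assign cloudlets to VMs in round-robin order (one simple broker policy)."""
    for i, c in enumerate(cloudlets):
        vms[i % len(vms)].queue.append(c)

def simulate(vms):
    """Per-cloudlet finish times, assuming each VM runs its queue serially."""
    finish = {}
    for vm in vms:
        clock = 0.0
        for c in vm.queue:
            clock += c.length / vm.mips
            finish[c.cloudlet_id] = clock
    return finish

if __name__ == "__main__":
    vms = [Vm(0, mips=1000), Vm(1, mips=500)]
    cloudlets = [Cloudlet(i, length=40000) for i in range(4)]
    bind_round_robin(cloudlets, vms)
    print(simulate(vms))   # {0: 40.0, 2: 80.0, 1: 80.0, 3: 160.0}
```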

    D3.1. Architecture and design of the platform

    This document aims to establish the requirements and the technological basis and design of the PANACEA platform. These are the main goals of the document:
    - Survey the different technological approaches that can be used in PANACEA.
    - Specify some guidelines for the metadata.
    - Establish the requirements for the platform.
    - Make a Common Interface proposal for the tools.
    - Propose a format for the data to be exchanged by the tools (Travelling Object).
    - Choose the technologies that will be used to develop the platform.
    - Propose a workplan.
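
    The deliverable itself defines the Travelling Object format; purely as an illustrative sketch of the idea, the snippet below builds a single XML container that accumulates each tool's annotations as it moves along a processing chain. The element names are assumptions for illustration only, not the PANACEA schema.

```python
# Illustrative "Travelling Object" sketch: one container carrying a text plus
# the annotations each tool adds. Element names are hypothetical.
import xml.etree.ElementTree as ET

def new_travelling_object(doc_id: str, text: str) -> ET.Element:
    root = ET.Element("travellingObject", attrib={"id": doc_id})
    ET.SubElement(root, "text").text = text
    ET.SubElement(root, "annotations")
    return root

def add_annotation(obj: ET.Element, tool: str, layer: str, payload: str) -> None:
    """Each tool in the chain appends its output instead of producing a new file."""
    ann = ET.SubElement(obj.find("annotations"), "annotation",
                        attrib={"tool": tool, "layer": layer})
    ann.text = payload

if __name__ == "__main__":
    obj = new_travelling_object("doc-001", "El gato duerme.")
    add_annotation(obj, tool="tokenizer", layer="tokens", payload="El|gato|duerme|.")
    print(ET.tostring(obj, encoding="unicode"))
```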

    Scientific workflow orchestration interoperating HTC and HPC resources

    8 pages, 7 figures. The PDF of the article is the pre-print version. In this work we describe our developments towards the provision of a unified access method to different types of computing infrastructures at the interoperation level. For that, we have developed a middleware suite which bridges the otherwise non-interoperable middleware stacks used for building distributed computing infrastructures, UNICORE and gLite. Our solution allows transparent access to, and operation on, HPC and HTC resources from a single interface. Using Kepler as the workflow manager, we provide users with the needed integration of codes to create scientific workflows accessing both types of infrastructure. Peer reviewed.
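
    As a minimal sketch of the idea behind such a unified access layer (not the API of the middleware suite described above), the snippet below exposes one submission call that routes work to either an HTC back-end such as gLite or an HPC back-end such as UNICORE; the classes and the routing rule are illustrative assumptions.

```python
# Hypothetical unified submission interface: one call site, two back-ends.
from abc import ABC, abstractmethod

class Backend(ABC):
    @abstractmethod
    def submit(self, executable: str, cores: int) -> str: ...

class HtcBackend(Backend):
    def submit(self, executable, cores):
        # A real implementation would build a job description and contact the
        # gLite workload management system; here we only return a fake id.
        return f"htc-job:{executable}"

class HpcBackend(Backend):
    def submit(self, executable, cores):
        # A real implementation would talk to a UNICORE site; fake id here.
        return f"hpc-job:{executable}"

def submit_unified(executable: str, cores: int) -> str:
    """Single entry point: send multi-core, tightly coupled work to HPC and
    single-core, embarrassingly parallel work to HTC (illustrative rule)."""
    backend = HpcBackend() if cores > 1 else HtcBackend()
    return backend.submit(executable, cores)

if __name__ == "__main__":
    print(submit_unified("/opt/codes/fusion_kernel", cores=64))
    print(submit_unified("/opt/codes/parameter_scan", cores=1))
```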

    Data intensive scientific analysis with grid computing

    At the end of September 2009, a new Italian GPS receiver for radio occultation was launched from the Satish Dhawan Space Center (Sriharikota, India) on the Indian Remote Sensing OCEANSAT-2 satellite. The Italian Space Agency has brought together a set of Italian universities and research centers to implement the overall radio occultation processing chain. After a brief description of the adopted algorithms, which can be used to characterize the temperature, pressure and humidity, the contribution will focus on a method for automatically processing these data, based on the use of a distributed architecture. This paper aims to present a possible application of grid computing for scientific research.
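
    As a purely illustrative sketch of automatic distributed processing of this kind, the snippet below fans occultation profiles out as independent jobs; the directory layout, job script and submit() placeholder are assumptions, not the project's actual chain or middleware.

```python
# Illustrative sketch: one independent job per radio occultation profile.
from pathlib import Path

def make_job_script(profile: Path, out_dir: Path) -> str:
    """One self-contained job per profile (assumed granularity)."""
    return (
        "#!/bin/sh\n"
        f"ro_invert --input {profile} --output {out_dir / (profile.stem + '.atm')}\n"
    )  # ro_invert is a placeholder name for the retrieval executable

def submit(job_script: str) -> None:
    # Placeholder: a real deployment would hand this script to its grid
    # middleware or batch system instead of printing it.
    print(job_script)

if __name__ == "__main__":
    in_dir, out_dir = Path("profiles"), Path("results")
    for profile in sorted(in_dir.glob("*.dat")):
        submit(make_job_script(profile, out_dir))
```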

    Integrating Existing Software Toolkits into VO System

    The Virtual Observatory (VO) is a collection of interoperating data archives and software tools. Taking advantage of the latest information technologies, it aims to provide a data-intensive online research environment for astronomers all around the world. A large number of high-quality astronomical software packages and libraries are powerful and easy to use, and have been widely used by astronomers for many years. Integrating those toolkits into the VO system is a necessary and important task for the VO developers. The VO architecture depends heavily on Grid and Web services; consequently, the general VO integration route is "Java Ready - Grid Ready - VO Ready". In the paper, we discuss the importance of VO integration for existing toolkits and the possible solutions. We introduce two efforts in the field from the China-VO project, "gImageMagick" and "Galactic abundance gradients statistical research under grid environment". We also discuss what additional work should be done to convert a Grid service to a VO service. Comment: 9 pages, 3 figures, will be published in the SPIE 2004 conference proceedings.
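
    As a sketch of the first step on the "Java Ready - Grid Ready - VO Ready" route, the snippet below wraps an existing command-line toolkit, ImageMagick's convert (in the spirit of gImageMagick), behind a programmatic interface that a Grid or VO service layer could later expose. The wrapper is an assumption for illustration, not the China-VO implementation.

```python
# Illustrative wrapper around ImageMagick's convert command (assumed installed),
# giving an existing toolkit a callable interface that a service could expose.
import subprocess
from pathlib import Path

def convert_image(src: Path, dst: Path, *options: str) -> Path:
    """Run convert on one file: convert <src> [options] <dst>."""
    subprocess.run(["convert", str(src), *options, str(dst)], check=True)
    return dst

if __name__ == "__main__":
    # e.g. produce a small PNG preview of an image
    convert_image(Path("m31_preview.jpg"), Path("m31_thumb.png"),
                  "-resize", "256x256")
```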

    Service Based Marketplace for Applications

    The Grid has revolutionized the way computations are done on the Internet. Access to remote computational resources and ad hoc creation of virtual organizations across administrative domains open new opportunities on the Grid. The newly developed web services based Open Grid Services Architecture makes the Grid more accessible by allowing the Grid to be constructed from distinct platform-independent components. Together they provide an environment for application sharing (or trading), collaborations and access to remote data repositories. The application marketplace is a natural extension to this application sharing environment. The marketplace addresses the fact that the existing infrastructure is still incomplete without provisions for publishing and discovering applications and resources, including the application descriptors that must be moved between the market participants. This work demonstrates a web service instance-based infrastructure, the application market, that allows the sellers, that is, the application and CPU providers, to publish their applications for the users to find and use. The application market uses a portal architecture built on top of Globus Toolkit 3.0 that interacts with the providers and the users. The market services provide distinct interfaces that allow providers to advertise applications and users to select, configure, and run these applications. The applications themselves are modeled as stateful objects, represented using XML, which can be exchanged between the providers and users when required. The marketplace, through its interfaces, effectively hides the compute resource and application complexity, thus allowing end users to explore and use applications unfamiliar to them with ease.
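
    To illustrate the kind of XML application descriptor such a marketplace might exchange between providers and users, the snippet below defines a hypothetical descriptor and shows what a portal might display when a user browses it. Every element name is an illustrative assumption, not the descriptor schema of the work described above.

```python
# Hypothetical marketplace application descriptor modelled as XML, plus a
# small helper showing how a portal might summarise it for browsing users.
import xml.etree.ElementTree as ET

DESCRIPTOR = """\
<application name="blast-search" provider="bio-apps.example.org">
  <description>Sequence alignment against a hosted database</description>
  <resource cpus="8" memoryMB="4096"/>
  <parameter name="query" type="file" required="true"/>
  <parameter name="evalue" type="float" required="false"/>
</application>
"""

def summarize(descriptor_xml: str) -> str:
    """One line a portal could show a user browsing the marketplace."""
    app = ET.fromstring(descriptor_xml)
    params = ", ".join(p.get("name") for p in app.findall("parameter"))
    return f"{app.get('name')} from {app.get('provider')} (parameters: {params})"

if __name__ == "__main__":
    print(summarize(DESCRIPTOR))
```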