
    GRID RESOURCE REGISTRY – ABSTRACT LAYER FOR COMPUTATIONAL RESOURCES

    The growing number of resources available to researchers in the e-Science domain has opened new possibilities for constructing complex scientific applications while at the same time introducing new requirements for tools which assist developers in creating such applications. This paper discusses the problems of rapid application development, the use of distributed resources and a uniform approach to resource registration, discovery and access. It presents the Grid Resource Registry, which delivers an abstract layer for computational resources. The Registry is a central place where developers may search for available services and from which the execution engine receives technical specifications of services. The Registry is used throughout the lifetime of the e-science application, starting with application design, through implementation to execution.
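
    The Registry described above plays two roles: developers query it for available services, and the execution engine pulls the technical specification needed to invoke them. The sketch below illustrates that split with a toy in-memory registry; the names (ResourceRegistry, find_services, get_specification) and the record layout are assumptions for illustration, not the actual Grid Resource Registry interface.

        # Illustrative sketch only: a toy in-memory registry mimicking the two roles
        # described in the abstract (developer-side discovery, engine-side specification
        # lookup). Names and data layout are assumptions, not the real GRR interface.
        from dataclasses import dataclass, field

        @dataclass
        class ServiceSpec:
            name: str
            category: str
            endpoint: str                                    # where the execution engine calls the service
            operations: dict = field(default_factory=dict)   # operation name -> signature

        class ResourceRegistry:
            def __init__(self):
                self._services = {}

            def register(self, spec: ServiceSpec) -> None:
                self._services[spec.name] = spec

            def find_services(self, category: str) -> list:
                """Developer-side discovery: list services matching a category."""
                return [s.name for s in self._services.values() if s.category == category]

            def get_specification(self, name: str) -> ServiceSpec:
                """Engine-side lookup: the technical specification needed for invocation."""
                return self._services[name]

        registry = ResourceRegistry()
        registry.register(ServiceSpec(
            name="drug-resistance-interpreter",
            category="bioinformatics",
            endpoint="https://example.org/services/dri",     # placeholder endpoint
            operations={"interpret": "(sequence: str) -> resistance_report"},
        ))

        print(registry.find_services("bioinformatics"))                              # design/implementation time
        print(registry.get_specification("drug-resistance-interpreter").endpoint)    # execution time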

    A Platform for Collaborative e-Science Applications

    A novel, holistic approach to scientific investigations should, besides analysis of individual phenomena, integrate different, interdisciplinary sources of knowledge about a complex system to obtain a deep understanding of the system as a whole. This innovative way of research, recently called system-level science [1], requires advanced software environments to support collaborating research groups. In the ViroLab project such an environment is provided by the Virtual Laboratory. The Experiment Planning Environment supports rapid experiment plan development, while the Experiment Management Interface enables loading and execution of experiments; the Experiment Repository stores experiment plans prepared by developers and published for future use. The virtual laboratory engine relies on the Operation Invoker, which instantiates grid object representatives and performs operation invocations, on the GridSpace Application Optimizer, which handles load balancing on computational servers, and on the Data Access Service, which reaches remote databases located in research institutions. The provenance approach in the ViroLab virtual laboratory combines ontology-based semantic modeling, monitoring of the infrastructure, and database technologies in order to collect information about the execution of experiments, represent it in a meaningful way and store it in a repository. In the ViroLab project this virtual laboratory is used to plan and run virological experiments with various types of analysis, such as the calculation of drug resistance and querying of historical information about experiments; a drug resistance system based on the Retrogram rule set has been built with it. It has also been applied to other application domains, such as comparison, data mining using the Weka library, running series of the Gaussian application on the EGEE infrastructure, and computer science classes. We have developed an environment for collaborative planning, development and execution of e-Science applications. It facilitates fast, close cooperation between developers and users, so it may be used by groups of experts running in-silico experiments. Since such experiments undergo frequent changes, this platform encourages quick, agile simulation software releasing.
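
    The architecture sketched above centres on an Operation Invoker that instantiates grid object representatives and mediates operation invocations, with an optimizer choosing where the computation runs. A minimal illustration of that interaction pattern follows; all class and method names are assumed for the example and do not reproduce the GridSpace API.

        # Hypothetical sketch of the invocation pattern described in the abstract:
        # an experiment plan asks an invoker for a representative of a remote grid
        # object, then calls one of its operations. Names are illustrative only.
        class GridObjectRepresentative:
            def __init__(self, object_type: str, chosen_host: str):
                self.object_type = object_type
                self.chosen_host = chosen_host

            def invoke(self, operation: str, **params):
                # In a real system this would be a remote call to self.chosen_host.
                return f"{operation} on {self.object_type}@{self.chosen_host} with {params}"

        class OperationInvoker:
            def __init__(self, optimizer):
                self.optimizer = optimizer   # e.g. picks the least-loaded server

            def create(self, object_type: str) -> GridObjectRepresentative:
                host = self.optimizer(object_type)
                return GridObjectRepresentative(object_type, host)

        # A trivial "optimizer" standing in for load balancing across servers.
        invoker = OperationInvoker(optimizer=lambda obj_type: "compute-node-01")

        # Experiment plan: create a representative, then invoke an operation.
        drs = invoker.create("DrugResistanceService")
        print(drs.invoke("calculate", sequence="ACTG..."))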

    Grid resource registry - abstract layer for computational resources / Grid resource registry - unified access to computational resources

    The growing number of resources available to researchers in the e-Science domain has opened new possibilities for constructing complex scientific applications while at the same time introducing new requirements for tools which assist developers in creating such applications. This paper discusses the problems of rapid application development, the use of distributed resources and a uniform approach to resource registration, discovery and access. It presents the Grid Resource Registry, which delivers an abstract layer for computational resources. The Registry is a central place where developers may search for available services and from which the execution engine receives technical specifications of services. The Registry is used throughout the lifetime of the e-science application, starting with application design, through implementation to execution. KEYWORDS: e-science, collaborative applications, distributed applications, resource discovery, registry, common information space

    Grid Resource Registry – Abstract Layer For Computational Resources

    The growing number of resources available to researchers in the e-Science domain has opened new possibilities for constructing complex scientific applications while at the same time introducing new requirements for tools which assist developers in creating such applications. This paper discusses the problems of rapid application development, the use of distributed resources and a uniform approach to resource registration, discovery and access. It presents the Grid Resource Registry, which delivers an abstract layer for computational resources. The Registry is a central place where developers may search for available services and from which the execution engine receives technical specifications of services. The Registry is used throughout the lifetime of the e-science application, starting with application design, through implementation to execution.

    Effect of particularisation size on the accuracy and efficiency of a multiscale tumours' growth model

    In silico medicine models are frequently used to represent a phenomenon across multiple space-time scales. Most of these multiscale models require impracticable execution times to be solved, even using high performance computing systems, because typically each representative volume element in the upper-scale model is coupled to an instance of the lower-scale model; this causes a combinatorial explosion of the computational cost, which increases exponentially as the number of scales to be modelled increases. To attenuate this problem, it is common practice to interpose between the two models a particularisation operator, which maps the upper-scale model results onto a smaller number of lower-scale models, and a homogenisation operator, which maps the fewer results of the lower-scale models onto the whole space-time homogenisation domain of the upper-scale model. The aim of this study is to explore the simplest particularisation / homogenisation scheme that can couple a model aimed at predicting the growth of a whole solid tumour (neuroblastoma) to a tissue-scale model of the cell-tissue biology with an acceptable approximation error and a viable computational cost. Using an idealised initial dataset with spatial gradients representative of those of real neuroblastomas, but small enough to be solved without any particularisation, we determined the approximation error and the computational cost of a very simple particularisation strategy based on binning. We found that even such a simple algorithm can significantly reduce the computational cost with negligible approximation errors.
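
    The binning strategy evaluated in the study can be illustrated with a short sketch: upper-scale values are grouped into a small number of bins, the lower-scale model is run once per bin instead of once per element, and every element inherits the result of its bin. The bin count, the toy lower-scale model and all names below are assumptions for illustration, not the study's actual implementation.

        import numpy as np

        def lower_scale_model(x):
            # Stand-in for an expensive tissue-scale simulation; here just a nonlinear map.
            return np.tanh(3.0 * x)

        def particularise_homogenise(upper_values: np.ndarray, n_bins: int) -> np.ndarray:
            """Evaluate the lower-scale model once per bin instead of once per element."""
            edges = np.linspace(upper_values.min(), upper_values.max(), n_bins + 1)
            bin_idx = np.clip(np.digitize(upper_values, edges) - 1, 0, n_bins - 1)
            results = np.empty(n_bins)
            for b in range(n_bins):
                members = upper_values[bin_idx == b]
                # Particularisation: one representative input per bin (the bin mean).
                rep = members.mean() if members.size else 0.5 * (edges[b] + edges[b + 1])
                results[b] = lower_scale_model(rep)
            # Homogenisation: map the few results back onto every element.
            return results[bin_idx]

        values = np.random.default_rng(0).random(100_000)       # upper-scale field values
        approx = particularise_homogenise(values, n_bins=32)     # 32 lower-scale runs
        exact = lower_scale_model(values)                        # 100 000 lower-scale runs
        print(f"max abs error: {np.abs(approx - exact).max():.4f}")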

    Digital Twin Simulation Development and Execution on HPC Infrastructures

    The Digital Twin paradigm in medical care has recently gained popularity among proponents of translational medicine, to enable clinicians to make informed choices regarding treatment on the basis of digital simulations. In this paper we present an overview of functional and non-functional requirements related to specific IT solutions which enable such simulations - including the need to ensure repeatability and traceability of results - and propose an architecture that satisfies these requirements. We then describe a computational platform that facilitates digital twin simulations, and validate our approach in the context of a real-life medical use case: the BoneStrength application.
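
    One non-functional requirement highlighted above is repeatability and traceability of results. A minimal way to picture it is to persist, for every run, a record that ties the model version and parameters to a content hash of the inputs, so the run can later be re-identified and re-executed; the record layout below is an assumed example, not the platform's actual metadata schema.

        # Illustrative sketch: record enough metadata about a simulation run to make it
        # traceable (what exactly ran, on which inputs) and repeatable (re-run later).
        # The record fields and example values are assumptions, not the platform's schema.
        import hashlib, json, time

        def run_record(model_version: str, parameters: dict, input_files: dict) -> dict:
            digest = hashlib.sha256(
                json.dumps({"params": parameters, "inputs": input_files}, sort_keys=True).encode()
            ).hexdigest()
            return {
                "model_version": model_version,
                "parameters": parameters,
                "input_hash": digest,          # identifies the exact inputs used
                "timestamp": time.time(),
            }

        # Placeholder version string and file digest, for illustration only.
        record = run_record("bonestrength-example", {"load_case": "fall"}, {"femur_mesh": "sha256:..."})
        print(record["input_hash"][:16])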

    Collaborative Virtual Laboratory for e-Health

    This paper describes the Virtual Laboratory for e-Health system which i…

    Dedicated IT infrastructure for Smart Levee Monitoring and Flood Decision Support

    Smart levees are being increasingly investigated as a flood protection technology. However, in large-scale emergency situations, a flood decision support system may need to collect and process data from hundreds of kilometers of smart levees; such a scenario requires a resilient and scalable IT infrastructure, capable of providing urgent computing services in order to perform frequent data analyses required in decision making, and deliver their results in a timely fashion. We present the ISMOP IT infrastructure for smart levee monitoring, designed to support decision making in large-scale emergency situations. Most existing approaches to urgent computing services in decision support systems dealing with natural disasters focus on delivering quality of service for individual, isolated subsystems of the IT infrastructure (such as computing, storage, or data transmission). We propose a holistic approach to dynamic system management during both urgent (emergency) and normal (non-emergency) operation. In this approach, we introduce a Holistic Computing Controller which calculates and deploys a globally optimal configuration for the entire IT infrastructure, based on cost-of-operation and quality-of-service (QoS) requirements of individual IT subsystems, expressed in the form of Service Level Agreements (SLAs). Our approach leads to improved configuration settings and, consequently, better fulfilment of the system’s cost and QoS requirements than would have otherwise been possible had the configuration of all subsystems been managed in isolation.
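
    The contrast drawn above between holistic and isolated management can be illustrated with a toy configuration problem: each subsystem offers a few configurations with different costs and latency contributions, and a shared SLA bounds the end-to-end latency. Choosing the cheapest option per subsystem in isolation may violate the SLA, while a global search finds the cheapest joint configuration that satisfies it. The subsystem names, option values and SLA bound below are invented for illustration and do not come from the ISMOP system.

        # Toy sketch of the global-vs-isolated configuration idea described above.
        from itertools import product

        # Each subsystem offers (configuration name, hourly cost, latency contribution in s).
        OPTIONS = {
            "computing":    [("small", 2.0, 30.0), ("large", 6.0, 10.0)],
            "storage":      [("hdd",   1.0, 20.0), ("ssd",   3.0,  5.0)],
            "transmission": [("lte",   1.5, 15.0), ("fiber", 4.0,  2.0)],
        }
        SLA_LATENCY = 40.0   # end-to-end latency bound (sum over subsystems)

        def global_optimum():
            """Holistic choice: cheapest joint configuration that meets the shared SLA."""
            best = None
            for combo in product(*OPTIONS.values()):
                cost = sum(c for _, c, _ in combo)
                latency = sum(l for _, _, l in combo)
                if latency <= SLA_LATENCY and (best is None or cost < best[0]):
                    best = (cost, [name for name, _, _ in combo])
            return best

        def isolated_choice():
            """Each subsystem independently picks its cheapest option, ignoring the SLA."""
            combo = [min(opts, key=lambda o: o[1]) for opts in OPTIONS.values()]
            return sum(c for _, c, _ in combo), sum(l for _, _, l in combo)

        print("global  :", global_optimum())     # meets the SLA at minimal total cost
        print("isolated:", isolated_choice())    # cheapest locally, but the SLA is violated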
