
    FiVO/QStorMan Semantic Toolkit for Supporting Data-Intensive Applications in Distributed Environments

    In this paper we present a semantic-based approach for supporting data-intensive applications in distributed environments. The approach combines explicit definitions of non-functional quality parameters for storage systems, semantic descriptions of the available storage infrastructure, and monitoring data concerning the infrastructure workload and user operations, together with an implementation in the form of a toolkit called FiVO/QStorMan. In particular, we describe the semantic descriptions exploited in the storage resource provisioning process. In addition, the paper reports the results of an experimental evaluation of the toolkit, which confirm the effectiveness of the proposed approach for storage resource provisioning.
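
    A minimal sketch may make the idea of explicit non-functional requirements concrete; the parameter names and the matching routine below are illustrative assumptions, not the actual FiVO/QStorMan interface.

        # Hypothetical sketch: a client states non-functional storage
        # requirements and a matcher picks a resource whose monitored
        # metrics satisfy them. All names are illustrative, not the
        # toolkit's real API.
        requirements = {
            "min_read_throughput_MBps": 100,  # lower bound on read bandwidth
            "max_write_latency_ms": 20,       # upper bound on write latency
        }

        def select_storage(requirements, candidates):
            """Return the first candidate whose monitored metrics meet
            every stated requirement (illustrative matcher)."""
            for node in candidates:
                if (node["read_throughput_MBps"] >= requirements["min_read_throughput_MBps"]
                        and node["write_latency_ms"] <= requirements["max_write_latency_ms"]):
                    return node
            return None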

    Towards Exascale Computing Architecture and Its Prototype: Services and Infrastructure

    This paper presents the design and implementation of a scalable compute platform for processing large data sets in the scope of the EU H2020 project PROCESS. We present the platform's requirements, related work, and the infrastructure with a focus on the compute components, and finally the results of our work.

    Policy-based SLA storage management model for distributed data storage services

    There is high demand for storage-related services that support scientists in their research activities. These services are expected to provide not only capacity but also features allowing more flexible and cost-efficient usage. Such features include easy multi-platform data access, long-term data retention, and support for differentiating the performance and cost of SLA-constrained data access. The paper presents a policy-based SLA storage management model for distributed data storage services. The model allows automated management of distributed data aimed at QoS provisioning with no strict resource reservation. The problem of providing users with the required QoS is complex, so the model adopts a heuristic approach to solving it. The corresponding system architecture, metrics, and methods for SLA-focused storage management were developed and tested in a real, nationwide environment.
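
    To give the heuristic flavour of such a model a concrete shape, here is a small hedged sketch: a greedy rule that places each dataset on the cheapest storage pool still meeting its SLA latency target. The pools, prices, and rule are invented for illustration and are not the paper's algorithm.

        # Illustrative greedy placement heuristic (invented example).
        pools = [
            {"name": "archive",  "latency_ms": 200, "cost_per_GB": 0.01},
            {"name": "standard", "latency_ms": 40,  "cost_per_GB": 0.05},
            {"name": "fast",     "latency_ms": 5,   "cost_per_GB": 0.20},
        ]

        def place(sla_latency_ms):
            """Cheapest pool meeting the SLA latency target, or None."""
            feasible = [p for p in pools if p["latency_ms"] <= sla_latency_ms]
            return min(feasible, key=lambda p: p["cost_per_GB"]) if feasible else None

        print(place(50)["name"])  # -> "standard"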

    Development of numerical models for Monte Carlo simulations of Th-Pb fuel assembly

    The thorium-uranium fuel cycle is a promising alternative to the uranium-plutonium fuel cycle, but it requires extensive advanced research before its industrial application in commercial nuclear reactors can begin. The paper presents the development of numerical models of a thorium-lead (Th-Pb) fuel assembly for integral irradiation experiments. The Th-Pb assembly consists of a hexagonal array of ThO2 fuel rods and metallic Pb rods. The design of the assembly allows different combinations of rods for various types of irradiations and experimental measurements. The numerical model of the Th-Pb assembly was designed for numerical simulations with the continuous-energy Monte Carlo Burnup code (MCB) implemented on the Prometheus supercomputer of the Academic Computer Centre Cyfronet AGH.

    Analysis of the Basic Implementation Aspects of Hardware-Accelerated Density Functional Theory Calculations

    This paper presents a Field Programmable Gate Array (FPGA) implementation of a calculation module for the exponential part of a Gaussian Type Orbital (GTO). The module is composed of several specially crafted floating-point modules which are fully pipelined and optimized for high performance. The hardware implementation revealed a significant speed-up for the finite sum of exponential products, ranging from 2.5x to 20x in comparison with a general-purpose Central Processing Unit (CPU) version. Calculating the values of GTOs is one of the computationally critical parts of the Kohn-Sham algorithm. The approach proposed in the paper aims to increase the performance of part of a quantum chemistry computational system by employing an FPGA-based accelerator. Several issues are addressed, such as identifying the code fragments that benefit most from hardware acceleration, porting part of the Kohn-Sham algorithm to the FPGA, adjusting data precision, and data transfer overhead. The authors' intention was also to make the hardware implementation of the orbital function calculation universal and easily attachable to different quantum chemistry software packages.
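
    For orientation, the quantity being accelerated is the contraction (finite sum) of Gaussian exponentials that forms the radial part of a contracted GTO. The sketch below shows the reference computation on a CPU; the coefficients and exponents are made-up illustrative values, not taken from the paper.

        import numpy as np

        # Reference (CPU) computation of the exponential part of a
        # contracted GTO: sum_p c_p * exp(-alpha_p * r^2).
        coeffs = np.array([0.44, 0.54, 0.15])  # contraction coefficients c_p
        alphas = np.array([3.43, 0.62, 0.17])  # Gaussian exponents alpha_p

        def gto_exponential_part(r2):
            """Finite sum of exponential products at squared distance r2."""
            return float(np.sum(coeffs * np.exp(-alphas * r2)))

        print(gto_exponential_part(1.0))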

    THE ATLAS EXPERIMENT ON-LINE MONITORING AND FILTERING AS AN EXAMPLE OF REAL-TIME APPLICATION

    The ATLAS detector, recording LHC particle interactions, produces events at a rate of 40 MHz, each with a size of 1.6 MB. Processes with new and interesting physics phenomena are very rare, so an efficient on-line filtering system (trigger) is necessary. The asynchronous part of that system relies on a few thousand computing nodes running the filtering software. Applying refined filtering criteria increases processing times, which may lead to a lack of processing resources installed at the CERN site. We propose an extension to this part of the system based on submitting the real-time filtering tasks to the Grid.
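
    A back-of-the-envelope check (derived from the abstract's numbers, not a figure quoted from the paper) shows why on-line filtering is indispensable at these rates:

        # Raw, unfiltered data rate implied by the abstract's numbers:
        # 40 MHz event rate, 1.6 MB per event.
        event_rate_hz = 40e6
        event_size_bytes = 1.6e6
        print(event_rate_hz * event_size_bytes / 1e12, "TB/s")  # -> 64.0 TB/s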

    PROCESS Data Infrastructure and Data Services

    Due to energy limitations and high operational costs, it is likely that exascale computing will not be achieved by one or two datacentres but will require many more. A simple calculation aggregating the computational power of the 2017 Top500 supercomputers reaches only 418 petaflops. Rescale, for example, claims 1.4 exaflops of peak computing power and describes its infrastructure as composed of 8 million servers spread across 30 datacentres. Any proposed solution to the challenges of exascale computing has to take these facts into consideration and should, by design, aim to support the use of geographically distributed and likely independent datacentres. It should also consider, whenever possible, co-allocating storage with computation, as it would take about 3 years to transfer 1 exabyte over a dedicated 100 Gb Ethernet connection. This means we have to be smart about managing data that is increasingly geographically dispersed and spread across different administrative domains. As the natural setting of the PROCESS project is to operate within the European Research Infrastructure and serve the European research communities facing exascale challenges, it is important that the PROCESS architecture and solutions are well positioned within the European computing and data management landscape, namely PRACE, EGI, and EUDAT. In this paper we propose a scalable and programmable data infrastructure that is easy to deploy and can be tuned to support various data-intensive scientific applications.
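
    The quoted transfer time is easy to verify, assuming a dedicated link running at full line rate:

        # Time to move 1 exabyte over a dedicated 100 Gb/s link.
        exabyte_bits = 1e18 * 8      # 1 EB in bits
        link_bps = 100e9             # 100 Gb/s
        seconds = exabyte_bits / link_bps
        print(seconds / (3600 * 24 * 365), "years")  # -> ~2.5, i.e. roughly 3 years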

    The Interactive European Grid: Project Objectives and Achievements

    The Interactive European Grid (i2g) project has set up an advanced e-Infrastructure in the European Research Area specifically oriented to supporting the friendly execution of demanding interactive applications. While interoperable with existing large e-Infrastructures like EGEE, i2g software supports the execution of parallel applications in interactive mode, including powerful visualization and application steering. This article describes the strategy followed, the key technical achievements, examples of applications that benefit from this infrastructure, and the sustainable model proposed for the future.

    Towards Transparent Data Access with Context Awareness

    Applying the principles of open research data is an important factor in accelerating the production and analysis of scientific results and worldwide collaboration. However, still very little data is actually shared. The aim of this article is to analyse existing data access solutions in order to identify the reasons for this situation. After analysing existing solutions and the needs of data access stakeholders, the authors propose their own vision of the evolution of the data access model.

    Using Unused: Non-Invasive Dynamic FaaS Infrastructure with HPC-Whisk

    Modern HPC workload managers and their careful tuning contribute to the high utilization of HPC clusters. However, due to inevitable uncertainty, it is impossible to completely avoid node idleness. Although such idle slots are usually too short for any HPC job, they are too long to ignore. The Function-as-a-Service (FaaS) paradigm promisingly fills this gap and can be a good match, as typical FaaS functions last seconds, not hours. Here we show how to build a FaaS infrastructure on idle nodes in an HPC cluster in such a way that it does not significantly affect the performance of the HPC jobs. We dynamically adapt to a changing set of idle physical machines by integrating the open-source software Slurm and OpenWhisk. We designed and implemented a prototype solution that allowed us to cover up to 90% of the idle time slots on a 50k-core cluster that runs production workloads.
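
    As a hedged sketch of the first step such an integration needs, the snippet below polls Slurm for currently idle nodes, the set on which transient FaaS workers could run. It is an illustration of the idea, not the HPC-Whisk implementation.

        import subprocess

        # Ask Slurm which nodes are currently idle (hostnames only).
        def idle_nodes():
            out = subprocess.run(
                ["sinfo", "--states=idle", "--noheader", "--format=%n"],
                capture_output=True, text=True, check=True,
            ).stdout
            return [line for line in out.splitlines() if line]

        print(idle_nodes())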