
    Big data challenges arising from future experiments

    Future physics experiments and observatories rely on the capability to process significantly larger data streams. Examples are future light-source experiments, elementary particle experiments and radio-astronomy observatories. All have in common that they plan to exploit high-performance compute capabilities instead of relying on hardware-controlled data processing. This approach increases flexibility during the lifetime of such experiments and may increase the use of commodity hardware, which is typically cheaper than custom solutions. While these experiments can thus benefit from HPC architectures and technologies, both those available today and those planned on future roadmaps, their requirements and use models differ significantly from today's high-performance computing. In this talk we will analyse the requirements of a few examples and discuss how they will benefit from current as well as future HPC technologies.

    Realising active storage concepts for today's and future HPC systems

    With computing performance improving at a faster pace than the performance of the I/O subsystem, the adoption of new storage concepts will be necessary on the path towards exascale systems. Both architectural and technical opportunities have to be explored. In this talk we will consider, on the one hand, the architectural concept of active storage and, on the other hand, advances in non-volatile memory technologies. Specifically, we will report on how both are used in the Blue Gene Active Storage (BGAS) architecture. We will give an overview of the architecture and explore use concepts. Finally, we discuss how this design can be extended to future architectures.
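
    The active-storage concept referred to above can be illustrated with a minimal, hypothetical sketch: instead of shipping raw data from a storage node to a compute node, a small reduction kernel is executed where the data resides and only the (much smaller) result is returned. The class and function names below are illustrative assumptions, not the BGAS interface.

# Minimal illustration of the active-storage idea: run the reduction on the
# storage node and transfer only the aggregate, instead of moving the raw data.
# All names here are hypothetical and do not correspond to the BGAS interface.

import numpy as np


class StorageNode:
    """A storage node holding a large data block locally."""

    def __init__(self, block: np.ndarray):
        self.block = block

    def read_block(self) -> np.ndarray:
        # Conventional path: ship the full block to the compute node.
        return self.block

    def run_kernel(self, kernel) -> float:
        # Active-storage path: execute a user-supplied kernel next to the data
        # and return only the scalar result.
        return kernel(self.block)


def block_mean(block: np.ndarray) -> float:
    """Example reduction kernel executed in place on the storage node."""
    return float(block.mean())


if __name__ == "__main__":
    node = StorageNode(np.random.rand(1_000_000))

    # Conventional access: ~8 MB cross the network before the reduction happens.
    mean_remote = node.read_block().mean()

    # Active storage: only one float crosses the network.
    mean_active = node.run_kernel(block_mean)

    print(mean_remote, mean_active)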

    <x>_{u-d} from lattice QCD at nearly physical quark masses

    We determine the second Mellin moment of the isovector quark parton distribution function <x>_{u-d} from lattice QCD with N_f=2 sea quark flavours, employing the non-perturbatively improved Wilson-Sheikholeslami-Wohlert action at a pseudoscalar mass of 157(6) MeV. The result is converted non-perturbatively to the RI'-MOM scheme and then perturbatively to the MSbar scheme at a scale mu = 2 GeV. As the quark mass is reduced we find the lattice prediction to approach the value extracted from experiments. Comment: 4 pages, 3 figures, v2: minor updates including journal ref
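
    For context, the quantity in the title and abstract is the x-weighted integral of the isovector parton distribution. The expression below is the standard definition of this moment, quoted here for reference; it is not reproduced from the paper itself.

% Standard definition of the second Mellin moment of the isovector
% parton distribution (quoted for context, not from the abstract):
\begin{equation}
  \langle x \rangle_{u-d}
    = \int_0^1 \mathrm{d}x \; x
      \left[ u(x) - d(x) + \bar{u}(x) - \bar{d}(x) \right],
\end{equation}
% where u, d, \bar{u}, \bar{d} are the quark and antiquark distributions of
% the nucleon; in the paper this moment is quoted in the MSbar scheme at
% a scale of mu = 2 GeV.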

    Collecting Use Cases Information: Method and Template for Physical Science for the OpenPOWER Foundation

    The main purpose of this Note is to define a method and a template to collect Physical Science use cases from scientists and research engineers working on Physical Science projects in the context and within the scope of the OpenPOWER Foundation for Physical Science Workgroup. An effective method (shared between all stakeholders) could contribute to:
    • understand the workflow, starting with user expectations;
    • help Physical Science projects maintain costs within a chosen envelope;
    • map the functionalities to the scientific requirements;
    • remove possible misunderstanding between the scientific community and ICT stakeholders.
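
    As an illustration of what such a template might look like, the sketch below encodes one possible set of fields as a small Python data structure. The field names are assumptions chosen for this example; they are not the template defined in the Note itself.

# Hypothetical sketch of a use-case collection template; the field names are
# illustrative assumptions, not the template defined in the Note.

from dataclasses import dataclass, field


@dataclass
class UseCase:
    project: str                    # name of the Physical Science project
    contact: str                    # scientist or research engineer to follow up with
    scientific_goal: str            # what the experiment or analysis should achieve
    workflow: list = field(default_factory=list)  # ordered processing steps, starting from user expectations
    data_volume_tb: float = 0.0     # estimated data volume in terabytes
    compute_requirements: str = ""  # e.g. node counts, accelerators, memory footprint
    cost_envelope_keur: float = 0.0  # budget envelope the project must stay within
    open_questions: list = field(default_factory=list)  # items to clarify with ICT stakeholders


# Example entry (all values invented for illustration only).
example = UseCase(
    project="Example light-source beamline",
    contact="J. Doe",
    scientific_goal="Near-real-time reconstruction of diffraction images",
    workflow=["acquire", "calibrate", "reconstruct", "archive"],
    data_volume_tb=250.0,
    compute_requirements="GPU nodes, ~10 TB/day ingest",
    cost_envelope_keur=500.0,
)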

    Security in an evolving European HPC Ecosystem

    The goal of this technical report is to analyse challenges and requirements related to security in the context of an evolving European HPC ecosystem, to provide selected strategies on how to address them, and to come up with a set of forward-looking recommendations. A key assumption made in this technical report is that we are in a transition period from a setup where HPC resources are operated in a rather independent manner to centres providing a variety of e-infrastructure services, which are not exclusively based on HPC resources and are increasingly part of federated infrastructures.

    Nucleon distribution amplitudes from lattice QCD

    We calculate low moments of the leading-twist and next-to-leading twist nucleon distribution amplitudes on the lattice using two flavors of clover fermions. The results are presented in the MSbar scheme at a scale of 2 GeV and can be immediately applied in phenomenological studies. We find that the deviation of the leading-twist nucleon distribution amplitude from its asymptotic form is less pronounced than sometimes claimed in the literature. Comment: 5 pages, 3 figures, 2 tables. RevTeX style. Normalization for \lambda_i corrected. Discussion of the results extended. To be published in PR
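
    The asymptotic form mentioned in the abstract is the standard large-momentum-scale limit of the leading-twist nucleon distribution amplitude; the textbook expression is quoted below for context and is not taken from the paper itself.

% Asymptotic leading-twist nucleon distribution amplitude (textbook result,
% quoted for context):
\begin{equation}
  \varphi_N^{\mathrm{as}}(x_1, x_2, x_3) = 120\, x_1 x_2 x_3,
  \qquad
  \int_0^1 \mathrm{d}x_1\, \mathrm{d}x_2\, \mathrm{d}x_3\;
  \delta(1 - x_1 - x_2 - x_3)\, \varphi_N^{\mathrm{as}}(x_1, x_2, x_3) = 1,
\end{equation}
% where the x_i are the longitudinal momentum fractions carried by the three
% valence quarks.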

    HPC for Urgent Decision-Making

    Emerging use cases from incident response planning and broad-scope European initiatives (e.g. Destination Earth [1,2], European Green Deal and Digital Package [21]) are expected to require federated, distributed infrastructures combining computing and data platforms. These will provide elasticity, enabling users to build applications and integrate data for thematic specialisation and decision support within ever shortening response time windows. For prompt and, in particular, for urgent decision support, the conventional usage modes of HPC centres are not adequate: these rely on relatively long-term arrangements for time-scheduled exclusive use of HPC resources, and enforce well-established yet time-consuming policies for granting access. In urgent decision support scenarios, managers or members of incident response teams must initiate processing and control the resources required based on their real-time judgement of how a complex situation evolves over time. This circle of clients is distinct from the regular users of HPC centres, and they must interact with HPC workflows on-demand and in real-time, while engaging significant HPC and data processing resources in or across HPC centres. This white paper considers the technical implications of supporting urgent decisions through establishing flexible usage modes for computing, analytics and AI/ML-based applications using HPC and large, dynamic assets. The target decision support use cases will involve ensembles of jobs, data staging to support workflows, and interactions with services/facilities external to HPC systems/centres. Our analysis identifies the need for flexible and interactive access to HPC resources, particularly in the context of dynamic workflows processing large datasets. This poses several technical and organisational challenges: short-notice secure access to HPC and data resources, dynamic resource allocation and scheduling, coordination of resource managers, support for data-intensive workflows (including data staging on node-local storage), preemption of already running workloads and interactive steering of simulations. Federation of services and resources across multiple sites will help to increase availability, provide elasticity for time-varying resource needs and enable data locality to be exploited.
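
    To make the envisaged usage mode concrete, the sketch below mimics such an urgent-computing workflow at a purely conceptual level: requesting short-notice resources, staging input data onto node-local storage, and launching an ensemble whose results can be monitored as they arrive. Every class and method name is a hypothetical placeholder and does not correspond to any particular resource manager or HPC-centre API.

# Conceptual sketch of an urgent decision-support workflow. Every name below is
# a hypothetical placeholder, not the API of any real scheduler or HPC centre.

import time


class UrgentSession:
    """Short-notice, preemptive access to compute and data resources."""

    def __init__(self, centre: str, priority: str = "urgent"):
        self.centre = centre
        self.priority = priority

    def request_nodes(self, count: int) -> list:
        # In a federated setup this could be brokered across several centres;
        # lower-priority workloads may need to be preempted to free the nodes.
        print(f"[{self.centre}] allocating {count} nodes at priority '{self.priority}'")
        return [f"node-{i}" for i in range(count)]

    def stage_data(self, source: str, nodes: list) -> None:
        # Copy input data onto node-local storage before the ensemble starts.
        print(f"staging {source} onto {len(nodes)} nodes")

    def run_ensemble(self, members: int, nodes: list) -> list:
        # Launch an ensemble of simulations and collect results as they finish,
        # so decision makers can steer the workflow interactively.
        results = []
        for m in range(members):
            print(f"member {m} running on {nodes[m % len(nodes)]}")
            results.append({"member": m, "finished_at": time.time()})
        return results


if __name__ == "__main__":
    session = UrgentSession(centre="example-hpc-centre")
    nodes = session.request_nodes(4)
    session.stage_data("s3://incident-data/flood-example", nodes)
    outcomes = session.run_ensemble(members=8, nodes=nodes)
    print(f"{len(outcomes)} ensemble members completed")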