17 research outputs found

    Proposal for the definition of INFN CSIRT

    Get PDF
    The following document defines the general implementation of INFN CSIRT and the main functions provided to its constituency.

    DataTAG Contributing to LCG-0 Pilot Startup

    Get PDF
    The DataTAG project has contributed to the creation of the middleware distribution constituting the base of the LCG-0 pilot. This distribution has demonstrated the possibility of building an EDG release based on iVDGL/VDT, integrating the GLUE schema and early components of the EDG middleware.

    General purpose data streaming platform for log analysis, anomaly detection and security protection

    Get PDF
    INFN-CNAF is one of the Worldwide LHC Computing Grid (WLCG) Tier-1 data centres, providing computing, networking and storage resources to a wide variety of scientific collaborations, not limited to the four LHC (Large Hadron Collider) experiments. The INFN-CNAF data centre will move to a new location next year. At the same time, the requirements from our experiments and users are becoming increasingly challenging and new scientific communities have started or will soon start exploiting our resources. We are currently reengineering several services, in particular our monitoring infrastructure, in order to improve day-by-day operations and to cope with the increasing complexity of the use cases and with the future expansion of the centre. This scenario led us to implement a data streaming infrastructure designed to enable log analysis, anomaly detection, threat hunting, integrity monitoring and incident response. This data streaming platform has been organised to manage different kinds of data coming from heterogeneous sources, to support multi-tenancy and to be scalable. Moreover, we will be able to provide an on-demand end-to-end data streaming application to users and communities requesting such a facility. The infrastructure is based on the Apache Kafka platform, which provides streaming of events at large scale, with authorization and authentication configured at the topic level to ensure data isolation and protection. Data can be consumed by different applications, such as those devoted to log analysis, which provide the capability to index large amounts of data and implement appropriate access policies to inspect and visualise information. In this contribution we present and motivate our technological choices for the definition of the infrastructure, we describe its components and we outline use cases that can be addressed with this platform.
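    The abstract does not spell out the platform's configuration, but the general pattern it describes (per-tenant credentials and topic-level isolation on Kafka) can be sketched as follows. This is a minimal illustration, not the INFN-CNAF setup: the broker address, SASL mechanism, credentials and topic naming scheme are all hypothetical placeholders.

    ```python
    # Sketch: shipping log events to a per-tenant Kafka topic with SASL authentication.
    # Broker, credentials, mechanism and topic naming are hypothetical placeholders.
    import json
    import socket
    from datetime import datetime, timezone

    from confluent_kafka import Producer  # pip install confluent-kafka

    conf = {
        "bootstrap.servers": "kafka.example.infn.it:9093",  # hypothetical broker
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "SCRAM-SHA-512",                  # assumed mechanism
        "sasl.username": "tenant-a-logs",                   # per-tenant credentials
        "sasl.password": "********",
        "client.id": socket.gethostname(),
    }

    producer = Producer(conf)

    def ship_log(tenant: str, service: str, message: str) -> None:
        """Publish one log record to the tenant's dedicated topic.

        Topic-level ACLs on the broker are expected to restrict each tenant
        to its own topics, providing the data isolation described above.
        """
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "host": socket.gethostname(),
            "service": service,
            "message": message,
        }
        producer.produce(
            f"{tenant}.logs",              # topic naming is an assumption
            key=service.encode(),
            value=json.dumps(record).encode(),
        )

    ship_log("tenant-a", "sshd", "Accepted publickey for user from 10.0.0.1")
    producer.flush()
    ```

    Downstream consumers (for example the log-indexing applications mentioned above) would subscribe to the same per-tenant topics with their own credentials, so access policies can be enforced entirely at the broker.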

    Migrating the INFN-CNAF datacenter to the Bologna Tecnopolo: A status update

    Get PDF
    The INFN Tier1 data center is currently located in the premises of the Physics Department of the University of Bologna, where CNAF is also located. During 2023 it will be moved to the “Tecnopolo”, the new facility for research, innovation and technological development in the same city area; the same location also hosts Leonardo, the pre-exascale supercomputer managed by CINECA, co-financed as part of the EuroHPC Joint Undertaking and ranked 4th in the November 2022 TOP500 list. The construction of the new CNAF data center consists of two phases, corresponding to the computing requirements of the LHC: Phase 1 involves an IT power of 3 MW, and Phase 2, starting from 2025, involves an IT power of up to 10 MW. The new data center is designed to cope with the computing requirements of the data taking of the HL-LHC experiments in the period spanning 2026 to 2040, while at the same time providing computing services for several other INFN experiments and projects, not only in the HEP domain. The co-location with Leonardo opens wider possibilities to integrate HTC and HPC resources, and the new CNAF data center will be tightly coupled with it, allowing access from a single entry point to resources located at CNAF and provided by the supercomputer. Data access from both infrastructures will be transparent to users. In this presentation we describe the new data center design, provide a status update on the migration, and focus on the Leonardo integration, showing the results of preliminary tests accessing it from the CNAF access points.

    Dataclient: a simple interface for scientific data transfers hiding x.509 complexities

    Get PDF
    Since the current data infrastructure of the HEP experiments is based on GridFTP, most computing centres have adapted and based their own data access on X.509. This is an issue for smaller experiments that do not have the resources to train their researchers in the complexities of X.509 certificates and that would prefer an approach based on username/password. On the other hand, asking computing centres to support different access strategies is not straightforward, as this would require a significant investment of effort and manpower. At the CNAF-INFN Tier1 we tackled this problem by creating a layer on top of the GridFTP client/server that completely hides the X.509 infrastructure behind an authentication/authorization process based on the Kerberos realm of our centre, and therefore on username/password. We called this Dataclient. In this article we describe the principles that drove its design and its general architecture, together with the measures undertaken to simplify the user experience and the maintenance burden.
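    The abstract does not describe Dataclient's actual interface, but the pattern it outlines (username/password via the centre's Kerberos realm in front, X.509 handled by the service and never exposed to the user) can be sketched from the client's point of view as below. The service endpoint, request fields and paths are hypothetical, and requests-kerberos is used only to illustrate Kerberos/GSSAPI authentication; it is not Dataclient's real API.

    ```python
    # Sketch of the pattern described above: the user authenticates with the
    # centre's Kerberos realm, and a server-side layer performs the actual
    # GridFTP transfer with X.509 credentials the user never sees.
    # The service URL and JSON fields are hypothetical, not Dataclient's real API.
    import subprocess

    import requests
    from requests_kerberos import HTTPKerberosAuth, OPTIONAL  # pip install requests-kerberos

    def have_kerberos_ticket() -> bool:
        """Return True if the user already holds a valid Kerberos ticket (klist -s)."""
        return subprocess.run(["klist", "-s"]).returncode == 0

    def request_transfer(source: str, destination: str) -> dict:
        """Ask the (hypothetical) transfer service to copy a file on the user's behalf."""
        if not have_kerberos_ticket():
            raise RuntimeError("No Kerberos ticket: run 'kinit user@EXAMPLE.REALM' first")
        resp = requests.post(
            "https://dataclient.example.infn.it/api/transfers",    # hypothetical endpoint
            json={"source": source, "destination": destination},
            auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL),  # Kerberos/SPNEGO auth
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # Example: copy a local file to storage; the paths are illustrative only.
    print(request_transfer("/home/user/run123.dat", "/storage/experiment/run123.dat"))
    ```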

    The WorldGrid transatlantic testbed: a successful example of Grid interoperability across EU and U.S. domains

    No full text
    The European DataTAG project has taken a major step towards making the concept of a worldwide computing Grid a reality. In collaboration with the companion U.S. project iVDGL, DataTAG has realized an intercontinental testbed spanning Europe and the U.S., integrating architecturally different Grid implementations based on the Globus toolkit. The WorldGrid testbed was successfully demonstrated at SuperComputing 2002 and IST2002, where real HEP application jobs were transparently submitted from the U.S. and Europe using native mechanisms and run where resources were available, independently of their location. In this paper we describe the architecture of the WorldGrid testbed, the problems encountered and the solutions adopted in realizing such a testbed. With our work we present an important step towards interoperability of Grid middleware developed and deployed in Europe and the U.S. Some of the solutions developed in WorldGrid will be adopted by the first LHC Computing Grid service. To the best of our knowledge, this is the first large-scale testbed that combines these middleware components and makes them work together.
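    For context, submission on testbeds of this era relied on the standard Globus Toolkit clients. The sketch below illustrates the basic idea of sending the same job to gatekeepers on both sides of the Atlantic with the historical globus-job-run command; the gatekeeper contact strings are hypothetical, and the WorldGrid resource brokering that actually decided where jobs ran is not reproduced here.

    ```python
    # Illustrative only: running the same job on EU and US gatekeepers with the
    # historical Globus Toolkit 'globus-job-run' client. Contact strings are
    # hypothetical placeholders; real WorldGrid brokering is not reproduced.
    import subprocess

    GATEKEEPERS = [
        "ce.example-eu.infn.it/jobmanager-pbs",          # hypothetical EU resource
        "gatekeeper.example-us.edu/jobmanager-condor",   # hypothetical US resource
    ]

    def submit_everywhere(executable: str, *args: str) -> None:
        """Run the same executable on each gatekeeper, regardless of its location."""
        for contact in GATEKEEPERS:
            result = subprocess.run(
                ["globus-job-run", contact, executable, *args],
                capture_output=True, text=True,
            )
            print(f"{contact}: {result.stdout.strip() or result.stderr.strip()}")

    submit_everywhere("/bin/hostname")
    ```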