56 research outputs found

    e-Infrastructures for e-Science: A Global View

    In the last 10 years, a new way of doing science has been spreading across the world thanks to the development of virtual research communities that span many geographic and administrative boundaries. A virtual research community is a widely dispersed group of researchers and associated scientific instruments working together in a common virtual environment. This new kind of scientific environment, usually referred to as a "collaboratory", is based on the availability of high-speed networks and broadband access, advanced virtual tools, and Grid-middleware technologies which, together, are the elements of e-Infrastructures. The European Commission has invested heavily in promoting this new way of collaboration among scientists, funding several international projects with the aim of creating e-Infrastructures to enable the European Research Area and to connect European researchers with their colleagues based in Africa, Asia and Latin America. In this paper we describe the current status of these e-Infrastructures and present a complete picture of the virtual research communities currently using them. Information on the supported scientific domains and applications is provided, together with their geographic distribution.

    Report on raising public awareness and participation (Deliverable D20)

    The purpose of this document is to present the actions taken during the CYCLOPS project lifetime to raise public awareness and participation, as well as the outcomes of these actions. Dissemination and outreach have been considered key to accomplishing this ever since the project planning phases. The actions generally fall within the Work Package devoted to dissemination (WP5), although some of them may well be regarded as a horizontal action of the project.

    Grid: From EGEE to EGI and from INFN-GRID to IGI

    In the last fifteen years the "computational Grid" approach has changed the way computing resources are used. Grid computing has raised interest worldwide in academia, industry, and government, with fast development cycles. Great efforts, huge funding and resources have been made available through national, regional and international initiatives aimed at providing Grid infrastructures, Grid core technologies, Grid middleware and Grid applications. The Grid software layers reflect the architecture of the services developed so far by the most important European and international projects. In this paper the story of the Grid e-Infrastructure is told, detailing European, Italian and international projects such as EGEE, INFN-Grid and NAREGI. In addition, the issue of long-term sustainability is discussed, presenting the plans of the European and Italian communities for EGI and IGI.

    D19 final plan for using and disseminating knowledge

    This document presents the Final Plan for Using and Disseminating Knowledge acquired throughout the development of the CYCLOPS project, as deliverable D19. It includes a description of the main achievements in disseminating knowledge, as well as the plans for exploiting the results, both for the consortium as a whole and for individual participants or groups of participants. It updates the Plan for Using and Disseminating Knowledge that was presented as deliverable D4 and describes the final dissemination plan of the CYCLOPS project. This deliverable provides a strategy aimed at addressing the various target communities in order to achieve the project's dissemination and exploitation goals. After an update on the dissemination instruments employed, the deliverable focuses on describing the dissemination activities carried out. In addition to the normal dissemination and exploitation of the work through scientific journals and professional bodies, the Civil Protection community will be specifically targeted for dissemination of the CYCLOPS deliverables and their future exploitation of the results. Other written deliverables focus on presenting dissemination activities in specific subject areas. In particular, deliverable D17 reports "the results of the dissemination of EGEE towards the Civil Protection community, and about the coordination between the EGEE and CYCLOPS activities"; deliverable D18 focuses on "collecting the CYCLOPS project results for dissemination towards different interested audiences such as Grid communities, other Civil Protection agencies, but also national and international initiatives and projects, SMEs, etc."; and deliverable D20 reports "the extent to which actors beyond the research community have been involved to help spread awareness and to explore the wider societal implications of the proposed work".

    An OGC/SOS conformant client to manage geospatial data on the GRID

    This paper describes a Sensor Observation Service (SOS) client developed to integrate dynamic geospatial data from meteorological sensors into a grid-based risk management decision support system. The present work is part of the CROSS-Fire project, which uses forest fires as the main case study and the FireStation application to simulate fire spread. The meteorological data is accessed through the SOS standard from the Open Geospatial Consortium (OGC), using the Observations and Measurements (O&M) standard encoding format. Since the SOS standard was not designed to access sensors directly, we developed an interface application to load the SOS database with observations from a Vantis Weather Station (WS). To integrate the SOS meteorological data into FireStation, the developed SOS client was embedded in a Web Processing Service (WPS) algorithm. This algorithm was designed to be functional and fully compliant with the SOS, SensorML, and O&M standards from OGC. With minor modifications to the developed SOS database interface, the SOS client works with any WS. This client supports spatial and temporal filters, including the integration of dynamic data from satellites into FireStation, as described. Fundação para a Ciência e a Tecnologia (FCT)
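A minimal sketch of the kind of request such an SOS client issues: building an OGC SOS GetObservation query with a temporal filter over a key-value-pair (KVP) binding. The endpoint URL, offering name, and observed-property URN below are illustrative placeholders, not identifiers from the CROSS-Fire project.

```python
from urllib.parse import urlencode

# Hypothetical SOS endpoint; the actual CROSS-Fire service URL is not given here.
SOS_URL = "http://example.org/sos"

def build_getobservation_url(offering, observed_property, begin, end):
    """Assemble a KVP GetObservation request asking for O&M-encoded
    observations of one property from one offering, restricted to the
    time window [begin, end]."""
    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        # Ask for the Observations & Measurements XML encoding
        "responseFormat": 'text/xml;subtype="om/1.0.0"',
        # Temporal filter: ISO 8601 interval begin/end
        "eventTime": f"{begin}/{end}",
    }
    return SOS_URL + "?" + urlencode(params)

url = build_getobservation_url(
    "WS_OFFERING",                               # placeholder offering id
    "urn:ogc:def:property:OGC::WindSpeed",       # placeholder property URN
    "2009-06-01T00:00:00Z",
    "2009-06-02T00:00:00Z",
)
```

The WPS algorithm described in the abstract would then fetch this URL, parse the O&M response, and hand the observations to FireStation.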

    Distributed computing and farm management with application to the search for heavy gauge bosons using the ATLAS experiment at the LHC (CERN)

    The Standard Model of particle physics describes the strong, weak, and electromagnetic forces between the fundamental particles of ordinary matter. However, it presents several problems and some questions remain unanswered, so it cannot be considered a complete theory of fundamental interactions. Many extensions have been proposed in order to address these problems. Some important recent extensions are the Extra Dimensions theories. In the context of some models with Extra Dimensions of size about 1 TeV^{-1}, in particular in the ADD model with only fermions confined to a D-brane, heavy Kaluza-Klein excitations are expected, with the same properties as SM gauge bosons but more massive. In this work, three hadronic decay modes of some such massive gauge bosons, Z* and W*, are investigated using the ATLAS experiment at the Large Hadron Collider (LHC), presently under construction at CERN. These hadronic modes are more difficult to detect than the leptonic ones, but they should allow a measurement of the couplings between heavy gauge bosons and quarks. The events were generated using the ATLAS fast simulation and reconstruction MC program Atlfast coupled to the Monte Carlo generator PYTHIA. We found that for an integrated luminosity of 3 × 10^{5} pb^{-1} and a heavy gauge boson mass of 2 TeV, the channels Z*->bb and Z*->tt would be difficult to detect because the signal would be very small compared with the expected background, although the significance in the case of Z*->tt is larger. In the channel W*->tb, the decay might yield a signal separable from the background and a significance larger than 5, so we conclude that it would be possible to detect this particular mode at the LHC. The analysis was also performed for masses of 1 TeV and we conclude that the observability decreases with the mass. In particular, a significance higher than 5 may be achieved below approximately 1.4, 1.9 and 2.2 TeV for Z*->bb, Z*->tt and W*->tb respectively. 
The LHC will start to operate in 2008 and collect data in 2009. It will produce roughly 15 Petabytes of data per year. Access to this experimental data has to be provided for some 5,000 scientists working in 500 research institutes and universities. In addition, all data need to be available over the estimated 15-year lifetime of the LHC. The analysis of the data, including comparison with theoretical simulations, requires enormous computing power. The computing challenges that scientists have to face are the huge amount of data, the calculations to perform, and the large number of collaborators. The Grid has been proposed as a solution to those challenges. The LHC Computing Grid project (LCG) is the Grid used by ATLAS and the other LHC experiments, and it is analysed in depth with the aim of studying its possible complementary use with another Grid project: the Berkeley Open Infrastructure for Network Computing middleware (BOINC), developed for the SETI@home project, a Grid specialised in high CPU requirements and in using volunteer computing resources. Several important packages of physics software used by ATLAS and other LHC experiments have been successfully adapted/ported to this platform with the aim of integrating them into the LHC@home project at CERN: Atlfast, PYTHIA, Geant4 and Garfield. The events used in our physics analysis with Atlfast were reproduced using BOINC, obtaining exactly the same results. The LCG software, in particular SEAL, ROOT and the external software, was also ported to the Solaris/sparc platform to study its portability in general. A testbed was set up, involving a large amount of heterogeneous hardware and software: a farm of 100 computers at CERN's computing centre (lxboinc), together with 30 PCs from CIEMAT and 45 from schools in Extremadura (Spain). 
That required a preliminary study, plus the development and creation of components for the Quattor software and configuration management tool to install and manage the lxboinc farm; it also involved setting up a collaboration between the Spanish research centres and government and CERN. The testbed was successful, and 26,597 Grid jobs were delivered, executed and received successfully. We conclude that BOINC and LCG are complementary and useful kinds of Grid that can be used by ATLAS and the other LHC experiments. LCG has very good data distribution, management and storage capabilities that BOINC does not have. On the other hand, BOINC does not need high bandwidth or Internet speed, and it can provide a huge and inexpensive amount of computing power from volunteers. In addition, it is possible to send jobs from LCG to BOINC and vice versa. Possible complementary scenarios are to use volunteer BOINC nodes when the LCG nodes have too many jobs to do, or to use BOINC for high-CPU tasks such as event generation or reconstruction while reserving LCG for data analysis.
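The complementarity argument above can be caricatured as a dispatch rule. The thresholds and function below are purely illustrative, not the actual LHC@home or LCG scheduling policy: CPU-bound, low-I/O work (event generation, reconstruction) goes to BOINC volunteers, while data-heavy analysis stays on LCG, which provides the data distribution and storage that BOINC lacks.

```python
def choose_grid(cpu_hours, input_gb, needs_managed_storage):
    """Toy dispatch rule reflecting the thesis conclusion (thresholds
    are invented for illustration): prefer BOINC for long CPU-bound
    jobs with small inputs; fall back to LCG whenever the job depends
    on LCG's data management and storage services."""
    if needs_managed_storage or input_gb > 1.0:
        return "LCG"     # BOINC volunteers lack managed storage and bandwidth
    if cpu_hours >= 10:
        return "BOINC"   # high-CPU, low-I/O work suits volunteer nodes
    return "LCG"         # short jobs are not worth the BOINC turnaround

# Example: a PYTHIA event-generation job vs. a data-analysis job
choose_grid(cpu_hours=50, input_gb=0.1, needs_managed_storage=False)   # BOINC
choose_grid(cpu_hours=5, input_gb=200.0, needs_managed_storage=True)   # LCG
```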