EMI Security Architecture
This document describes the various architectures of the three middlewares that comprise the EMI software stack. It also outlines the common efforts in the security area that allow interoperability between these middlewares. The assessment of EMI Security presented in this document was performed internally by members of the Security Area of the EMI project.
Research and development of accounting system in grid environment
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The Grid has been recognised as the next-generation distributed computing paradigm, seamlessly integrating heterogeneous resources across administrative domains into a single virtual system. An increasing number of scientific and business projects employ Grid computing technologies for large-scale resource sharing and collaboration. Early adopters of Grid computing technologies implemented custom middleware to bridge gaps between heterogeneous computing backbones. These custom solutions form the basis of the emerging Open Grid Service Architecture (OGSA), which aims to address common concerns of Grid systems by defining a set of interoperable and reusable Grid services. One of the common concerns defined in OGSA is the Grid accounting service, whose main objective is to ensure that resources are shared within a Grid environment in an accountable manner by metering and logging accurate resource-usage information. This thesis discusses the origins and fundamentals of Grid computing and the accounting service in the context of the OGSA profile. A prototype based on OGSA accounting-related standards was developed and evaluated, enabling accounting data to be shared in a multi-Grid environment, the Worldwide LHC Computing Grid (WLCG). Based on this prototype and the lessons learned, a generic middleware solution was also implemented as a toolkit that eases the migration of existing accounting systems to standards compliance. Engineering and Physical Sciences Research Council (EPSRC), Stanford University
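The accounting standards the abstract refers to centre on exchanging per-job resource-usage records between Grids. A minimal sketch in Python of building such a record (element names loosely follow the OGF Usage Record format; the function name and field choices are illustrative, not taken from the thesis):

```python
import xml.etree.ElementTree as ET

def make_usage_record(job_id, user, cpu_seconds, wall_seconds):
    """Build a minimal resource-usage record for a finished Grid job.

    Durations are rendered as ISO 8601 periods (e.g. PT120S), as used
    by the OGF Usage Record format. This is an illustrative sketch,
    not a conforming implementation of the standard.
    """
    rec = ET.Element("UsageRecord")
    ET.SubElement(rec, "JobIdentity").text = job_id
    ET.SubElement(rec, "UserIdentity").text = user
    ET.SubElement(rec, "CpuDuration").text = f"PT{cpu_seconds}S"
    ET.SubElement(rec, "WallDuration").text = f"PT{wall_seconds}S"
    return ET.tostring(rec, encoding="unicode")

# A site-local accounting sensor would emit one such record per job and
# publish it to a central repository for aggregation across the Grid.
record = make_usage_record("job-42", "alice", 120, 150)
```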
A distributed analysis and monitoring framework for the compact Muon solenoid experiment and a pedestrian simulation
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The design of a parallel and distributed computing system is a very complicated task. It requires a detailed understanding of the design issues and of the theoretical and practical aspects of their solutions. Firstly, this thesis discusses in detail the major concepts and components required to make parallel and distributed computing a reality. A multithreaded and distributed framework capable of analysing the simulation data produced by pedestrian simulation software was developed. Secondly, this thesis discusses the origins and fundamentals of Grid computing and the motivations for its use in High Energy Physics. Access to the data produced by the Large Hadron Collider (LHC) has to be provided for more than five thousand scientists all over the world. Users who run analysis jobs on the Grid do not necessarily have expertise in Grid computing. Simple, user-friendly and reliable monitoring of analysis jobs is one of the key components of the operation of distributed analysis; reliable monitoring is one of the crucial components of the Worldwide LHC Computing Grid for providing the functionality and performance required by the LHC experiments. The CMS Dashboard Task Monitoring and CMS Dashboard Job Summary monitoring applications were developed to serve the needs of the CMS community.
Sustainable Paths for Data-Intensive Research Communities at the University of Melbourne: A Report for the Australian Partnership for Sustainable Repositories
This report presents the local project findings with a view to identifying how they may add to the knowledge base informing an e-research strategy for the University of Melbourne. It also provides important considerations for how major Government initiatives in research policy and funding might impact research data and records management requirements. Eleven research communities were consulted for an audit of their data management practices. Researchers from these communities represent a number of diverse disciplines: Applied Economics; Astrophysics; Computer Science and Software Engineering; Education; Ethnography; Experimental Particle Physics; Humanities Informatics; Hydrology and Environmental Engineering; Linguistics; Medical Informatics; Neuroscience and the Performing Arts. In addition to the specific findings for each group audited, the project findings also provide information about sustainability issues around research data management practices at the university.
Distributed computing and farm management with application to the search for heavy gauge bosons using the ATLAS experiment at the LHC (CERN)
The Standard Model of particle physics describes the strong, weak, and electromagnetic forces between the fundamental particles of ordinary matter. However, it presents several problems and some questions remain unanswered, so it cannot be considered a complete theory of fundamental interactions. Many extensions have been proposed in order to address these problems. Some important recent extensions are the Extra Dimensions theories. In the context of some models with Extra Dimensions of size about , in particular in the ADD model with only fermions confined to a D-brane, heavy Kaluza-Klein excitations are expected, with the same properties as SM gauge bosons but more massive. In this work, three hadronic decay modes of such massive gauge bosons, Z* and W*, are investigated using the ATLAS experiment at the Large Hadron Collider (LHC), presently under construction at CERN. These hadronic modes are more difficult to detect than the leptonic ones, but they should allow a measurement of the couplings between heavy gauge bosons and quarks. The events were generated using the ATLAS fast simulation and reconstruction MC program Atlfast coupled to the Monte Carlo generator PYTHIA. We found that for an integrated luminosity of and a heavy gauge boson mass of 2 TeV, the channels Z*->bb and Z*->tt would be difficult to detect because the signal would be very small compared with the expected background, although the significance in the case of Z*->tt is larger. In the channel W*->tb, the decay might yield a signal separable from the background and a significance larger than 5, so we conclude that it would be possible to detect this particular mode at the LHC. The analysis was also performed for masses of 1 TeV and we conclude that the observability decreases with the mass. In particular, a significance higher than 5 may be achieved below approximately 1.4, 1.9 and 2.2 TeV for Z*->bb, Z*->tt and W*->tb respectively.
The LHC will start to operate in 2008 and collect data in 2009. It will produce roughly 15 Petabytes of data per year. Access to this experimental data has to be provided for some 5,000 scientists working in 500 research institutes and universities. In addition, all data need to be available over the estimated 15-year lifetime of the LHC. The analysis of the data, including comparison with theoretical simulations, requires enormous computing power. The computing challenges that scientists have to face are the huge amount of data, calculations to perform and collaborators. The Grid has been proposed as a solution to those challenges. The LHC Computing Grid project (LCG) is the Grid used by ATLAS and the other LHC experiments, and it is analysed in depth with the aim of studying its possible complementary use with another Grid project: the Berkeley Open Infrastructure for Network Computing middleware (BOINC), developed for the SETI@home project, a Grid specialised in high CPU requirements and in using volunteer computing resources. Several important packages of physics software used by ATLAS and other LHC experiments have been successfully adapted/ported to this platform with the aim of integrating them into the LHC@home project at CERN: Atlfast, PYTHIA, Geant4 and Garfield. The events used in our physics analysis with Atlfast were reproduced using BOINC, obtaining exactly the same results. The LCG software, in particular SEAL, ROOT and the external software, was also ported to the Solaris/sparc platform to study its portability in general. A testbed was set up including a large amount of heterogeneous hardware and software, involving a farm of 100 computers at CERN's computing centre (lxboinc) together with 30 PCs from CIEMAT and 45 from schools in Extremadura (Spain).
That required a preliminary study, development and creation of components of the Quattor software and configuration management tool to install and manage the lxboinc farm, and it also involved setting up a collaboration between the Spanish research centres and government and CERN. The testbed was successful: 26,597 Grid jobs were delivered, executed and received successfully. We conclude that BOINC and LCG are complementary and useful kinds of Grid that can be used by ATLAS and the other LHC experiments. LCG has very good data distribution, management and storage capabilities that BOINC does not have. On the other hand, BOINC does not need high bandwidth or Internet speed, and it can also provide a huge and inexpensive amount of computing power coming from volunteers. In addition, it is possible to send jobs from LCG to BOINC and vice versa. Possible complementary uses are therefore to employ volunteer BOINC nodes when the LCG nodes have too many jobs to do, or to use BOINC for high-CPU tasks such as event generation or reconstruction while concentrating LCG on data analysis.
Security and Performance Verification of Distributed Authentication and Authorization Tools
Parallel distributed systems are widely used for dealing with massive data sets and high performance computing. Securing parallel distributed systems is problematic: centralized security tools are likely to cause bottlenecks and introduce a single point of failure. In this paper, we introduce existing distributed authentication and authorization tools. We evaluate the quality of these security tools by verifying their security and performance using process calculi and mathematical modelling languages: Casper, Communicating Sequential Processes (CSP) and Failures-Divergences Refinement (FDR) are used to test for security vulnerabilities, while Petri nets and Karp-Miller trees are used to find performance issues of distributed authentication and authorization methods. Kerberos, PERMIS, and Shibboleth are evaluated: Kerberos is a ticket-based distributed authentication service, PERMIS is a role- and attribute-based distributed authorization service, and Shibboleth is an integration solution for federated single sign-on authentication. We find no critical security or performance issues.
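The Petri-net analysis mentioned in the abstract rests on the Karp-Miller coverability construction, which detects places whose token counts can grow without bound (e.g. an unbounded request queue in an authentication service). A minimal sketch in Python, assuming a vector encoding of the net (the encoding and function names are illustrative, not taken from the paper):

```python
OMEGA = float("inf")  # stands for the omega symbol: an unbounded place

def karp_miller(initial, transitions):
    """Compute a Karp-Miller coverability set of a Petri net.

    initial: tuple of token counts, one per place.
    transitions: list of (pre, post) per-place token vectors.
    Returns the set of generalised markings reached, with OMEGA
    marking places that can grow without bound.
    """
    def fire(marking, pre, post):
        # A transition is enabled if every place holds enough tokens.
        if all(m >= p for m, p in zip(marking, pre)):
            return tuple(m - p + q for m, p, q in zip(marking, pre, post))
        return None

    seen = set()
    stack = [(tuple(initial), [])]  # (marking, ancestors on this path)
    while stack:
        marking, ancestors = stack.pop()
        # Acceleration: if an ancestor is strictly covered, the places
        # that grew can grow forever, so set them to OMEGA.
        for anc in ancestors:
            if anc != marking and all(a <= m for a, m in zip(anc, marking)):
                marking = tuple(OMEGA if m > a else m
                                for a, m in zip(anc, marking))
        if marking in seen:
            continue
        seen.add(marking)
        for pre, post in transitions:
            nxt = fire(marking, pre, post)
            if nxt is not None:
                stack.append((nxt, ancestors + [marking]))
    return seen

def is_bounded(initial, transitions):
    """A net is bounded iff no reachable generalised marking has OMEGA."""
    return not any(OMEGA in m for m in karp_miller(initial, transitions))
```

For a performance check, an unbounded place in the model of a service typically signals that requests can accumulate faster than they are served.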
3rd EGEE User Forum
We have organized this book as a sequence of chapters, each associated with an application or technical theme and introduced by an overview of the contents and a summary of the main conclusions coming from the Forum for the chapter topic. The first chapter gathers all the plenary session keynote addresses, and following this there is a sequence of chapters covering the application-flavoured sessions. These are followed by chapters with the flavour of Computer Science and Grid Technology. The final chapter covers the large number of practical demonstrations and posters exhibited at the Forum. Much of the work presented has a direct link to specific areas of Science, and so we have created a Science Index, presented below. In addition, at the end of this book, we provide a complete list of the institutes and countries involved in the User Forum.