
    The Thin Gap Chambers database experience in test beam and preparations for ATLAS

    Thin gap chambers (TGCs) are used for the muon trigger system in the forward region of the ATLAS experiment at the LHC. The TGCs are expected to provide a trigger signal within the 25 ns bunch spacing. An extensive system test of the ATLAS muon spectrometer has been performed in the H8 beam line at the CERN SPS during the last few years. A relational database was used for storing the conditions of the tests as well as the configuration of the system. This database has provided the detector control system with the information needed to configure the front-end electronics. The database is used to assist online operation and maintenance. The same database is used to store the non-event conditions and configuration parameters needed later by the offline reconstruction software. A larger-scale version of the database has been produced to support the whole TGC system. It integrates all the production, QA test and assembly information. A 1/12th model of the whole TGC system is currently in use for testing the performance of this database in configuring and tracking the condition of the system. A prototype of the database was first implemented during the H8 test beams. This paper describes the database structure, its interface to other systems and its operational performance. Comment: Proceedings IEEE Nuclear Science Symposium 2005, Stockholm, Sweden, May 2005
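
    The abstract does not spell out the schema, so the following is only a minimal sketch of the pattern it describes: a relational store holding front-end configuration together with conditions tracked over intervals of validity. All table, column and function names are illustrative assumptions, not the actual TGC database layout.

```python
import sqlite3

# Minimal sketch of a conditions/configuration store with interval-of-
# validity tracking (illustrative schema, not the real TGC database).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE chamber (
    chamber_id  INTEGER PRIMARY KEY,
    station     TEXT    -- position within the 1/12th sector model
);
CREATE TABLE front_end_config (
    chamber_id  INTEGER REFERENCES chamber(chamber_id),
    parameter   TEXT,   -- e.g. discriminator threshold, timing delay
    value       REAL,
    valid_from  TEXT,   -- ISO timestamps bounding the interval of validity
    valid_until TEXT
);
""")

def config_at(chamber_id, timestamp):
    """Parameters valid at a given time, as the detector control
    system or the offline reconstruction would query them."""
    cur = conn.execute(
        "SELECT parameter, value FROM front_end_config "
        "WHERE chamber_id = ? AND valid_from <= ? AND ? < valid_until",
        (chamber_id, timestamp, timestamp))
    return dict(cur.fetchall())
```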

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...).

    Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project.

    Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable has been produced in parallel with the user contact process carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs. Postprint (published version)
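
    As a concrete reading of the slice taxonomy above, the sketch below models a slice as a logical partition of physical resources that enforces isolation when virtual machine instances are provisioned on it. All class and attribute names are assumptions made for illustration; they are not FEDERICA's actual definitions.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a slice is a logical partition (virtual instance)
# of the physical substrate, seen by its owner as a private network.

@dataclass
class PhysicalNode:
    name: str
    cpu_cores: int
    free_cores: int

@dataclass
class VirtualMachine:
    host: PhysicalNode
    cores: int

@dataclass
class Slice:
    """What the user sees as a real physical network under his/her domain."""
    owner: str
    vms: list = field(default_factory=list)

    def add_vm(self, node: PhysicalNode, cores: int) -> VirtualMachine:
        # Isolation: a slice may only consume resources still available
        # on the substrate, never those already granted to another slice.
        if cores > node.free_cores:
            raise ValueError(f"{node.name} cannot host {cores} more cores")
        node.free_cores -= cores
        vm = VirtualMachine(node, cores)
        self.vms.append(vm)
        return vm

# Usage: provision a software-router VM inside a researcher's slice.
substrate = PhysicalNode("fed-node-1", cpu_cores=16, free_cores=16)
s = Slice(owner="researcher-A")
s.add_vm(substrate, cores=4)
```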

    Providing Transaction Class-Based QoS in In-Memory Data Grids via Machine Learning

    Elastic architectures and the "pay-as-you-go" resource pricing model offered by many cloud infrastructure providers may seem the right choice for companies dealing with data-centric applications characterized by highly variable workloads. In such a context, in-memory transactional data grids have proven particularly suited to exploiting the advantages of elastic computing platforms, mainly thanks to their ability to be dynamically (re-)sized and tuned. However, when specific QoS requirements have to be met, this kind of architecture has turned out to be complex for humans to manage. In particular, management is a very complex task without mechanisms supporting run-time automatic sizing/tuning of the data platform and the underlying (virtual) hardware resources provided by the cloud. In this paper, we present a neural network-based architecture where the system is constantly and automatically re-configured, particularly in terms of computing resources.
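
    The paper's actual model is not given in the abstract, so the fragment below is only a toy sketch of the control loop it implies: a learned performance model predicts response time from the current workload and cluster size, and the controller picks the cheapest configuration whose prediction still meets the QoS target. The single-neuron model and all weights are placeholder assumptions, not the authors' architecture.

```python
import math

def predict_latency_ms(w, load_tps, nodes):
    # Stand-in for a trained neural network: one tanh unit over the
    # workload and the per-node share of it.
    h = math.tanh(w[0] * load_tps + w[1] / nodes + w[2])
    return max(0.0, w[3] * h + w[4])

def reconfigure(w, load_tps, sla_ms, max_nodes=16):
    """Smallest cluster size whose predicted latency meets the SLA."""
    for nodes in range(1, max_nodes + 1):
        if predict_latency_ms(w, load_tps, nodes) <= sla_ms:
            return nodes
    return max_nodes  # SLA unreachable: fall back to the largest size

w = [0.001, 3.0, -1.5, 60.0, 10.0]               # placeholder, normally trained
print(reconfigure(w, load_tps=900, sla_ms=20.0))  # -> 4 with these weights
```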

    Technical Report on Deploying a highly secured OpenStack Cloud Infrastructure using BradStack as a Case Study

    Cloud computing has emerged as a popular paradigm and an attractive model for providing a reliable distributed computing service, and it is attracting huge attention both in academic research and in industrial initiatives. Cloud deployments are paramount for institutions and organizations of all scales. The availability of a flexible, free, open-source cloud platform designed with no proprietary software, and the ability to integrate it with legacy systems and third-party applications, are fundamental. OpenStack is free and open-source software released under the terms of the Apache license, with a fragmented and distributed architecture that makes it highly flexible. This project was initiated with the aim of designing a secured cloud infrastructure called BradStack, which is built on OpenStack in the Computing Laboratory at the University of Bradford. In this report, we present and discuss the steps required to deploy a secured BradStack multi-node cloud infrastructure and to conduct penetration testing on OpenStack services to validate the effectiveness of the security controls on the BradStack platform. This report serves as a practical guideline, focusing on security and practical infrastructure-related issues. It also serves as a reference for institutions looking into the possibilities of implementing a secured cloud solution. Comment: 38 pages, 19 figures
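
    One small, concrete step of the kind of validation the report describes might look like the sketch below: checking that only the expected OpenStack service endpoints answer on a controller node. The port numbers are common upstream defaults and the address is a placeholder; neither is taken from the BradStack deployment itself.

```python
import socket

# Common default API ports (an assumption, not BradStack's actual layout).
EXPECTED = {5000: "keystone", 8774: "nova-api", 9292: "glance-api",
            9696: "neutron", 443: "horizon (TLS)"}

def scan(host, ports, timeout=0.5):
    """Return the subset of ports accepting TCP connections."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

host = "192.0.2.10"                     # placeholder controller address
found = scan(host, sorted(set(range(1, 1025)) | set(EXPECTED)))
print("open:", found)
print("unexpected, investigate:", [p for p in found if p not in EXPECTED])
```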

    XUUDB MANUAL

    The XUUDB server is an Attribute Source implementation which can be used by UNICORE servers. It is used to map user credentials (an X.509 certificate or X.500 distinguished name) to authorization and incarnation attributes.
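
    The manual fragment above describes a lookup from a credential to attributes; the sketch below illustrates just that mapping with an in-memory table. XUUDB itself is a standalone server queried over a web-service interface, and the attribute names used here (role, xlogin, group) follow common UNICORE usage but should be treated as assumptions.

```python
# Sketch of the mapping XUUDB performs: from an X.500 distinguished name
# to authorization and incarnation attributes (illustrative data only).
ATTRIBUTE_STORE = {
    "CN=Jane Doe,OU=Physics,O=Example,C=DE": {
        "role": "user",     # authorization attribute
        "xlogin": "jdoe",   # incarnation: local Unix login
        "group": "atlas",   # incarnation: local Unix group
    },
}

def get_attributes(dn: str) -> dict:
    """Return the attributes mapped to a distinguished name, if any."""
    try:
        return ATTRIBUTE_STORE[dn]
    except KeyError:
        # Unknown credential: no attributes, hence no authorization.
        return {}

print(get_attributes("CN=Jane Doe,OU=Physics,O=Example,C=DE"))
```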

    Self-Configuring Socio-Technical Systems: Redesign at Runtime

    Modern information systems are becoming more and more socio-technical systems, namely systems composed of human (social) agents and software (technical) systems operating together in a common environment. The structure of such systems has to evolve dynamically in response to changes in the environment. When new requirements are introduced, when an actor leaves the system or when a new actor joins it, the socio-technical structure needs to be redesigned and revised. In this paper, an approach to dynamic reconfiguration of a socio-technical system's structure in response to internal or external changes is proposed. The approach is based on planning techniques for generating possible alternative configurations, and on local strategies for their evaluation. The reconfiguration mechanism, which makes the socio-technical system self-configuring, is presented, and the approach is discussed and analyzed on a simple case study.
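
    As a toy sketch of that idea, the fragment below enumerates alternative configurations (assignments of goals to the actors currently present) and ranks them with a local evaluation strategy, re-planning whenever the actor set changes. The brute-force generator and the cost function are stand-ins for the paper's planning techniques, not a reproduction of them.

```python
from itertools import product

def alternative_configs(goals, actors):
    """Every assignment of each goal to some available actor."""
    for choice in product(actors, repeat=len(goals)):
        yield dict(zip(goals, choice))

def local_cost(config, workload):
    # Local strategy: penalize overloading any single actor.
    load = {}
    for goal, actor in config.items():
        load[actor] = load.get(actor, 0) + workload[goal]
    return max(load.values())

def reconfigure(goals, actors, workload):
    """Called when an actor joins/leaves or requirements change."""
    return min(alternative_configs(goals, actors),
               key=lambda c: local_cost(c, workload))

goals = ["monitor", "report", "escalate"]
workload = {"monitor": 3, "report": 1, "escalate": 2}
print(reconfigure(goals, ["nurse", "software-agent"], workload))
```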

    Introduction to Security Onion

    Security Onion is a Network Security Monitoring (NSM) platform that provides multiple Intrusion Detection Systems (IDS), including Host IDS (HIDS) and Network IDS (NIDS). Many types of data can be acquired using Security Onion for analysis, including data related to hosts, networks, sessions, assets, alerts and protocols. Security Onion can be implemented as a standalone deployment, with server and sensor included, or with a master server and multiple sensors, allowing the system to be scaled as required. Many interfaces and tools are available for management of the system and analysis of data, such as Sguil, Snorby, Squert and Enterprise Log Search and Archive (ELSA). These interfaces can be used for analysis of alerts and captured events, which can then be exported for further analysis in Network Forensic Analysis Tools (NFAT) such as NetworkMiner, CapME or Xplico. The Security Onion platform also provides various methods of management, such as Secure SHell (SSH) for management of server and sensors, and web-client remote access. All of this, together with the ability to replay and analyse example malicious traffic, makes Security Onion a suitable low-cost option for Network Security Monitoring. In this paper, we review the features and functionality of Security Onion in terms of types of data, configuration, interfaces, tools and system management.
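
    As one concrete example of the alert data such a deployment produces, the sketch below parses a single NIDS alert line in the Snort/Suricata "fast" format. The regular expression covers only the common single-line shape and is a simplification; in Security Onion itself this analysis would normally happen in tools like Sguil, Squert or ELSA rather than in ad-hoc scripts.

```python
import re

# Fields of a Snort/Suricata fast-format alert line (simplified pattern).
FAST_ALERT = re.compile(
    r"(?P<ts>\S+)\s+\[\*\*\]\s+\[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\]\s+"
    r"(?P<msg>.*?)\s+\[\*\*\].*?\[Priority:\s*(?P<priority>\d+)\]\s+"
    r"\{(?P<proto>\w+)\}\s+(?P<src>\S+)\s+->\s+(?P<dst>\S+)")

line = ("08/28-12:01:02.345678  [**] [1:2100498:7] "
        "GPL ATTACK_RESPONSE id check returned root [**] "
        "[Classification: Potentially Bad Traffic] [Priority: 2] "
        "{TCP} 10.0.0.5:4444 -> 192.168.1.7:51321")

m = FAST_ALERT.search(line)
if m:
    alert = m.groupdict()
    print(alert["sid"], alert["priority"], alert["src"], "->", alert["dst"])
```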

    A configuration system for the ATLAS trigger

    The ATLAS detector at CERN's Large Hadron Collider will be exposed to proton-proton collisions from beams crossing at 40 MHz, which have to be reduced to the few hundred Hz allowed by the storage systems. A three-level trigger system has been designed to achieve this goal. We describe the configuration system under construction for the ATLAS trigger chain. It provides the trigger system with all the parameters required for decision taking and for recording its history. The same system configures the event reconstruction, Monte Carlo simulation and data analysis, and provides tools for accessing and manipulating the configuration data in all contexts. Comment: 4 pages, 2 figures, contribution to the Conference on Computing in High Energy and Nuclear Physics (CHEP06), 13-17 Feb 2006, Mumbai, India
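
    To make the rate-reduction chain concrete, the toy sketch below drives a three-level chain from a single configuration record, in the spirit of the system described. The level names and per-level target rates (roughly 75 kHz, 2 kHz and 200 Hz) follow the ATLAS trigger design figures but are included here as illustrative assumptions, not values from the paper.

```python
# One configuration record drives the whole chain, the same record that
# would also be kept for reconstruction and simulation to replay later.
TRIGGER_CONFIG = {
    "LVL1": {"accept_fraction": 75e3 / 40e6},   # hardware level, to ~75 kHz
    "LVL2": {"accept_fraction": 2e3 / 75e3},    # software level, to ~2 kHz
    "EF":   {"accept_fraction": 200.0 / 2e3},   # event filter, to ~200 Hz
}

def output_rate(input_rate_hz: float, config: dict) -> float:
    """Chain the per-level acceptances to get the final storage rate."""
    rate = input_rate_hz
    for level, params in config.items():
        rate *= params["accept_fraction"]
        print(f"{level}: {rate:,.0f} Hz")
    return rate

output_rate(40e6, TRIGGER_CONFIG)   # ends at the few-hundred-Hz scale
```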