
    Very large scale high performance computing and instrument management for high availability systems through the use of virtualization at the Square Kilometre Array (SKA) telescope

    The Square Kilometre Array (SKA) Telescope is an ongoing project scheduled to start its construction phase in 2018 and to be ready for first light in 2020. The first part of the project, SKA1, will comprise 130,000 low-frequency antennas (50 MHz to 350 MHz) and 200 mid-frequency antennas (350 MHz to 15.5 GHz). SKA1 will produce a raw data rate of ~10 Tb/s, require a computing power of 100 Pflop/s, and need an archiving capacity of hundreds of PB/year. The next phase of the project, SKA2, will increase the number of both low- and mid-frequency antennas by a factor of 10 and increase the computing requirements accordingly. The key requirements for the project are a very demanding availability of 99.9%, computing scalability and result reproducibility. We propose an approach to enforce these requirements, with an optimal use of resources, by using highly distributed computing and virtualization technologies.
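    A quick back-of-the-envelope check of the figures quoted above (a minimal sketch in Python; the 99.9% availability target and ~10 Tb/s raw data rate are taken from the abstract, everything derived is illustrative arithmetic only):

    ```python
    # Illustrative arithmetic based on the SKA1 figures quoted in the abstract.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    availability = 0.999                                  # 99.9% availability target
    downtime_s = (1 - availability) * SECONDS_PER_YEAR
    print(f"Allowed downtime: {downtime_s / 3600:.1f} hours/year")   # ~8.8 h/year

    raw_rate_tbps = 10                                    # ~10 Tb/s raw data rate
    raw_bytes_per_year = raw_rate_tbps * 1e12 / 8 * SECONDS_PER_YEAR
    print(f"Raw data per year: {raw_bytes_per_year / 1e18:.1f} EB")  # ~40 EB/year
    # i.e. orders of magnitude more than the hundreds of PB/year that can be archived,
    # which is why the processing and reduction pipeline must run continuously.
    ```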

    TM Services: an architecture for monitoring and controlling the Square Kilometre Array (SKA) Telescope Manager (TM)

    The SKA project is an international effort (10 member and 10 associated countries, with the involvement of 100 companies and research institutions) to build the world's largest radio telescope. The SKA Telescope Manager (TM) is the core package of the SKA Telescope, responsible for scheduling observations, controlling their execution, monitoring the telescope, and so on. To do this, TM directly interfaces with the Local Monitoring and Control systems (LMCs) of the other SKA Elements (for example, Dishes, the Correlator, and so on), exchanging commands and data with them through the TANGO controls framework (see [1]). TM in turn needs to be monitored and controlled so that its continuous and proper operation is ensured; this higher-level responsibility has been assigned to the TM Services (SER) package.
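    As an illustration of the command and monitoring exchange described above, the sketch below uses PyTango, the Python binding of the TANGO controls framework named in the abstract. The device name, attribute, and command are hypothetical placeholders, not actual SKA TM/LMC identifiers:

    ```python
    # Minimal sketch of monitoring and commanding an Element LMC over TANGO.
    # Device name, attribute and command below are hypothetical examples.
    import tango

    lmc = tango.DeviceProxy("ska/dish_lmc/0001")   # hypothetical LMC device name

    print(lmc.state())                             # poll the device state
    health = lmc.read_attribute("healthState")     # hypothetical monitoring attribute
    print(health.value)

    lmc.command_inout("Standby")                   # hypothetical control command
    ```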

    A Cyber Infrastructure for the SKA Telescope Manager

    The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting SKA Operations and Observation Management, carrying out system diagnosis, and collecting Monitoring & Control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operational continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure, LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating systems, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software-defined networking, power and storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures conditioned by its location. It will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware where specifically required for performance, security, availability, or other requirements.
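    The sketch below illustrates the kind of IaaS interaction such an infrastructure exposes, using the openstacksdk client purely as a stand-in: OpenStack is not named in the abstract, and the cloud, image, flavor, network and component names are hypothetical:

    ```python
    # Illustrative sketch: requesting an isolated compute instance from an IaaS
    # management interface. OpenStack is used only as a generic example of such
    # an interface; all names below are hypothetical.
    import openstack

    conn = openstack.connect(cloud="tm-linfra")            # hypothetical cloud entry

    image = conn.compute.find_image("tm-base-image")
    flavor = conn.compute.find_flavor("m1.medium")
    network = conn.network.find_network("tm-internal")

    server = conn.compute.create_server(
        name="observation-scheduler-01",                   # hypothetical TM component
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)                                   # ACTIVE once provisioned
    ```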

    Experimental evaluation of the usage of ad hoc networks as stubs for multiservice networks

    This paper describes an experimental evaluation of a multiservice ad hoc network designed to be interconnected with an infrastructure, operator-managed network. This network supports the efficient delivery of services, unicast and multicast, legacy and multimedia, to users connected in the ad hoc network. It provides the following functionalities: routing and delivery of unicast and multicast services; distributed QoS mechanisms to support service differentiation and resource control responsive to node mobility; and security, charging, and rewarding mechanisms to ensure the correct behaviour of the users in the ad hoc network. This paper experimentally evaluates the performance of multiple mechanisms, and the influence and performance penalty introduced in the network with the incremental inclusion of new functionalities. The performance results obtained in the different real scenarios may call into question the practical usage of ad hoc networks beyond a minimal number of hops with such a large number of functionalities deployed.
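    A crude illustrative model of the per-hop penalty behind that conclusion (a sketch only; the link rate and overhead factor are assumptions, not results from the paper):

    ```python
    # Toy model: in a single-channel ad hoc path, the nodes share the medium,
    # so end-to-end throughput falls roughly as 1/hops, before the added QoS,
    # security and charging mechanisms take their own share.
    link_rate_mbps = 11.0          # assumed nominal single-hop rate
    mechanism_overhead = 0.85      # assumed fraction left after added mechanisms

    for hops in range(1, 6):
        throughput = link_rate_mbps / hops * mechanism_overhead
        print(f"{hops} hop(s): ~{throughput:.1f} Mb/s end-to-end")
    ```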

    The CPLEAR detector at CERN

    The CPLEAR collaboration has constructed a detector at CERN for an extensive programme of CP-, T- and CPT-symmetry studies using ${\rm K}^0$ and $\bar{\rm K}^0$ produced by the annihilation of $\bar{\rm p}$'s in a hydrogen gas target. The ${\rm K}^0$ and $\bar{\rm K}^0$ are identified by their companion products of the annihilation, ${\rm K}^{\pm}\pi^{\mp}$, which are tracked with multiwire proportional chambers, drift chambers and streamer tubes. Particle identification is carried out with a liquid Cherenkov detector for fast separation of pions and kaons and with scintillators which allow the measurement of time of flight and energy loss. Photons are measured with a lead/gas sampling electromagnetic calorimeter. The required antiproton annihilation modes are selected by fast online processors using the tracking chamber and particle identification information. All the detectors are mounted in a 0.44 T uniform field of an axial solenoid of diameter 2 m and length 3.6 m to form a magnetic spectrometer capable of full on-line reconstruction and selection of events. The design, operating parameters and performance of the sub-detectors are described.
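    To illustrate the time-of-flight separation that the scintillators exploit, the sketch below computes the K/π flight-time difference for an assumed flight path and momentum (round numbers chosen for illustration, not CPLEAR design values):

    ```python
    # Time-of-flight difference between a kaon and a pion of equal momentum.
    import math

    C = 0.299792458              # speed of light in m/ns
    M_PI, M_K = 0.1396, 0.4937   # particle masses in GeV/c^2
    L = 1.0                      # assumed flight path in metres
    p = 0.5                      # assumed momentum in GeV/c

    def tof_ns(mass, momentum, path):
        """Flight time over `path` for a particle of given mass and momentum."""
        beta = momentum / math.hypot(momentum, mass)   # v/c from relativistic kinematics
        return path / (beta * C)

    dt = tof_ns(M_K, p, L) - tof_ns(M_PI, p, L)
    print(f"K-pi TOF difference over {L} m at {p} GeV/c: {dt:.2f} ns")  # ~1.2 ns
    ```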