
    ScaRR: Scalable Runtime Remote Attestation for Complex Systems

    The introduction of remote attestation (RA) schemes has allowed academia and industry to enhance the security of their systems. The commercial products currently available enable only the validation of static properties, such as application fingerprints, and do not handle runtime properties, such as control-flow correctness. This limitation has pushed researchers towards new approaches, known as runtime RA. However, these mainly target embedded devices, which share few features with complex systems, such as virtual machines in a cloud. A naive deployment of embedded-device runtime RA schemes on complex systems faces scalability problems, such as the representation of complex control flows or a slow verification phase. In this work, we present ScaRR: the first Scalable Runtime Remote attestation scheme for complex systems. Thanks to its novel control-flow model, ScaRR enables the deployment of runtime RA on any application regardless of its complexity, while also achieving good performance. We implemented ScaRR and tested it on the SPEC CPU 2017 benchmark suite. We show that ScaRR can validate on average 2M control-flow events per second, clearly outperforming existing solutions.
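
    Since the abstract cannot show the mechanism, the sketch below illustrates the general runtime-RA idea it builds on: the prover folds the control-flow edges it executes into a cryptographic measurement, and the verifier checks that measurement against ones derived offline from the program's control-flow graph. This is a minimal sketch of the generic technique, not ScaRR's actual control-flow model; all addresses and names are invented for illustration.

```python
import hashlib

# Generic runtime remote attestation sketch (NOT ScaRR's model):
# the prover hashes each executed control-flow edge (src, dst) into a
# running measurement that the verifier can recompute for known-good paths.

def measure(events):
    """Fold a sequence of control-flow edges into one measurement."""
    h = hashlib.sha256()
    for src, dst in events:
        h.update(src.to_bytes(8, "little"))
        h.update(dst.to_bytes(8, "little"))
    return h.hexdigest()

def verify(reported, valid_measurements):
    """Verifier side: accept only measurements of legitimate paths,
    precomputed offline from the control-flow graph."""
    return reported in valid_measurements

# Example: a benign path versus the same path with one hijacked edge.
benign = [(0x400100, 0x400200), (0x400200, 0x400300)]
hijacked = [(0x400100, 0x400200), (0x400200, 0x401337)]
valid = {measure(benign)}
assert verify(measure(benign), valid)
assert not verify(measure(hijacked), valid)
```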

    Service broker based on cloud service description language


    Virtual integration platform for computational fluid dynamics

    Computational Fluid Dynamics (CFD) tools used in the shipbuilding industry involve multiple disciplines, such as resistance, manoeuvring, and cavitation. Traditionally, the analysis was performed separately and sequentially in each discipline, which often resulted in conflicting and inconsistent hydrodynamic predictions. In an effort to solve such problems for future CFD computations, a Virtual Integration Platform (VIP) has been developed at the University of Strathclyde within two EU FP6 projects, VIRTUE and SAFEDOR. The VIP provides a holistic collaborative environment for designers, with features such as Project/Process Management, Distributed Tools Integration, Global Optimisation, Version Management, and Knowledge Management. These features enhance collaboration among customers, ship design companies, shipyards, and consultancies, not least because they bring together the best expertise and resources around the world. The platform has been tested in seven European ship design companies, including consultancies. Its main functionalities and advances are presented in this paper, together with two industrial applications; a rough sketch of the integration idea follows.
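
    As a rough illustration of the integration idea above, the sketch below has discipline tools read one shared, versioned design instead of private copies, which is what prevents the conflicting predictions the abstract mentions. The class, tool names, and formulas are invented for illustration and are not part of the VIP.

```python
# Hypothetical sketch: discipline tools operate on one shared, versioned
# hull design, so their inputs cannot silently diverge. The "physics" in
# each tool is a placeholder, not a real hydrodynamic model.

class SharedDesign:
    def __init__(self, hull_params):
        self.versions = [dict(hull_params)]   # version history

    def current(self):
        return self.versions[-1]

    def update(self, **changes):
        # Every change creates a new version visible to all tools at once.
        self.versions.append({**self.current(), **changes})

def resistance_tool(design):
    return 0.5 * design["wetted_area"] * design["speed"] ** 2

def manoeuvring_tool(design):
    return design["length"] / design["speed"]

design = SharedDesign({"length": 120.0, "wetted_area": 3500.0, "speed": 10.0})
# Both disciplines read the same version, so their inputs stay consistent.
print(resistance_tool(design.current()), manoeuvring_tool(design.current()))
design.update(speed=12.0)   # the change propagates to every tool
print(resistance_tool(design.current()), manoeuvring_tool(design.current()))
```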

    Designing Traceability into Big Data Systems

    Providing an appropriate level of accessibility and traceability to data or process elements (so-called Items) in large volumes of data, often Cloud-resident, is an essential requirement in the Big Data era. Enterprise-wide data systems need to be designed from the outset to support the usage of such Items across the spectrum of business use, rather than from any specific application view. The design philosophy advocated in this paper is to drive the design process using a so-called description-driven approach, which enriches models with meta-data and descriptions and focuses the design process on Item re-use, thereby promoting traceability. Details are given of the description-driven design of big data systems at CERN, in health informatics, and in business process management. Evidence is presented that the approach leads to design simplicity and consequent ease of management, thanks to loose typing and the adoption of a unified approach to Item management and usage. (In Proceedings of the 5th Annual International Conference on ICT: Big Data, Cloud and Security (ICT-BDCS 2015), Singapore, July 2015.)
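
    To make the description-driven idea concrete, the sketch below pairs each Item with a description (meta-data plus provenance) so that every derivation step is recorded uniformly, which is what yields the traceability the abstract claims. The types and field names are assumptions made for illustration, not the paper's actual model.

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical description-driven sketch: each Item carries meta-data and
# provenance alongside its payload, so traceability is handled uniformly
# rather than per application. Names are illustrative, not the paper's schema.

@dataclass
class ItemDescription:
    name: str
    version: int
    provenance: list[str]          # derivation history (traceability)
    attributes: dict[str, str]     # loosely typed attribute descriptions

@dataclass
class Item:
    description: ItemDescription
    payload: Any                   # the data or process element itself

def derive(item: Item, new_payload: Any, step: str) -> Item:
    """Create a new Item whose description records the derivation step."""
    desc = ItemDescription(
        name=item.description.name,
        version=item.description.version + 1,
        provenance=item.description.provenance + [step],
        attributes=dict(item.description.attributes),
    )
    return Item(desc, new_payload)

raw = Item(ItemDescription("sample", 1, ["ingested"], {"unit": "GeV"}), [1, 2, 3])
cleaned = derive(raw, [1, 2], "outlier-removal")
print(cleaned.description.provenance)   # ['ingested', 'outlier-removal']
```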

    Commissioning of the CMS High Level Trigger

    The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy of up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates of up to 40 MHz. The unique CMS trigger architecture employs only two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic-ray data-taking campaigns are discussed in detail. We conclude with a description of HLT operations with the first circulating LHC beams, before the incident that occurred on 19 September 2008.
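
    As a minimal sketch of the two-level architecture described above, the code below applies a cheap Level-1 cut before a costlier software HLT selection, mirroring the rate-reduction chain. The event fields and thresholds are invented and bear no relation to CMS's real trigger menus.

```python
import random

# Hypothetical two-level trigger chain: a fast, coarse Level-1 decision
# reduces the input rate, and only L1-accepted events reach the software
# High Level Trigger, which runs a more expensive selection.

def level1(event):
    # Fast, coarse decision (custom electronics in the real system).
    return event["et"] > 20.0

def hlt(event):
    # Slower, full-granularity software selection on L1-accepted events.
    return event["et"] > 30.0 and event["iso"] < 0.1

events = [{"et": random.uniform(0, 60), "iso": random.random()}
          for _ in range(100_000)]

l1_accepted = [e for e in events if level1(e)]
stored = [e for e in l1_accepted if hlt(e)]
print(f"L1 kept {len(l1_accepted)}/{len(events)}, HLT kept {len(stored)}")
```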