
    The Performability Manager

    The authors describe the performability manager, a distributed system component that contributes to more effective and efficient use of system components and prevents quality of service (QoS) degradation. The performability manager dynamically reconfigures distributed systems whenever needed, to recover from failures and to let the system evolve over time and incorporate new functionality. Large systems require dynamic reconfiguration to support change without shutting down the complete system. A distributed system monitor is needed to verify QoS; monitoring a distributed system is difficult because of synchronization problems and minor differences in clock speeds. The authors describe the functionality and the operation of the performability manager, both informally and formally. Throughout the paper they illustrate the approach with an example distributed application: an ANSAware-based number translation service (NTS) from the intelligent networks (IN) area.
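    The abstract describes, in essence, a control loop: monitor QoS, detect degradation or failure, and reconfigure at run time. The Python sketch below illustrates that loop under invented names and thresholds; it is not the paper's ANSAware-based design, and measure_qos and reconfigure are hypothetical placeholders.

        # Minimal sketch of a performability-manager control loop (hypothetical
        # API; the paper's ANSAware-based design is not reproduced here).
        import time

        QOS_THRESHOLD = 0.95  # assumed target QoS level

        def measure_qos(component):
            """Placeholder: return an observed QoS level in [0, 1]."""
            return component.get("observed_qos", 1.0)

        def reconfigure(system, degraded):
            """Stand-in for dynamic reconfiguration, e.g. restart or migrate."""
            for name in degraded:
                system[name]["observed_qos"] = 1.0
                print(f"reconfigured {name}")

        def manager_loop(system, interval=5.0, rounds=3):
            for _ in range(rounds):
                degraded = [n for n, c in system.items()
                            if measure_qos(c) < QOS_THRESHOLD]
                if degraded:
                    reconfigure(system, degraded)
                time.sleep(interval)

        manager_loop({"nts_frontend": {"observed_qos": 0.90},
                      "translator": {"observed_qos": 0.99}}, interval=0.1)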

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
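    As a toy illustration of the kind of quantitative property quoted above, the sketch below checks whether the probability of both sensors failing stays below 0.001, assuming two independent sensors and made-up per-step failure probabilities; real quantitative verification would model-check a formal model of the system rather than evaluate a closed-form formula.

        # Toy check of a quantitative property like the one quoted above:
        # "the probability of both sensors failing simultaneously is < 0.001".
        # Assumes two independent sensors; the per-step failure probability and
        # time horizon below are invented for the sketch.
        def prob_both_failed_by(p_step: float, steps: int) -> float:
            """P(one sensor fails within `steps`), squared, by independence."""
            p_single = 1.0 - (1.0 - p_step) ** steps
            return p_single ** 2

        bound = 0.001
        prob = prob_both_failed_by(p_step=0.005, steps=3)
        print(f"P(both failed) = {prob:.6f}; bound satisfied: {prob < bound}")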

    A Factor Framework for Experimental Design for Performance Evaluation of Commercial Cloud Services

    Given the diversity of commercial Cloud services, performance evaluation of candidate services is crucial and beneficial both for service customers (e.g. cost-benefit analysis) and for providers (e.g. directions for service improvement). Before implementing an evaluation, the selection of suitable factors (also called parameters or variables) is a prerequisite for designing the evaluation experiments. However, there seems to be a lack of systematic approaches to factor selection for Cloud services performance evaluation; in most existing evaluation studies, evaluators have chosen experimental factors ad hoc and intuitively. Based on our previous taxonomy and modeling work, this paper proposes a factor framework for experimental design for performance evaluation of commercial Cloud services. This framework encapsulates the state of the practice of the performance evaluation factors that people currently take into account in the Cloud Computing domain, and in turn can help facilitate the design of new experiments for evaluating Cloud services. (Comment: 8 pages. Proceedings of the 4th International Conference on Cloud Computing Technology and Science (CloudCom 2012), pp. 169-176, Taipei, Taiwan, December 03-06, 2012.)
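    To illustrate how a selected factor set turns into a concrete experimental design, the sketch below enumerates a full-factorial design over a few factors; the factor names and levels are invented for illustration and are not taken from the paper's framework.

        # Sketch: expand a selected factor set into a full-factorial design for
        # a Cloud performance evaluation. Factors and levels are illustrative.
        from itertools import product

        factors = {
            "instance_type": ["small", "large"],
            "region": ["us-east", "eu-west"],
            "workload": ["cpu-bound", "io-bound"],
        }

        def full_factorial(factors):
            """Yield one trial configuration per combination of factor levels."""
            names = list(factors)
            for combo in product(*factors.values()):
                yield dict(zip(names, combo))

        for i, trial in enumerate(full_factorial(factors), 1):
            print(i, trial)  # 2 x 2 x 2 = 8 experimental runs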

    A requirements engineering framework for integrated systems development for the construction industry

    Computer Integrated Construction (CIC) systems are computer environments through which collaborative working can be undertaken. Although many CIC systems have been developed to demonstrate communication and collaboration within construction projects, uptake of CIC by the industry is still inadequate. This is mainly because the research methodologies of CIC development projects are too incomplete to bridge the technology transfer gap. Defining comprehensive methodologies for the development of these systems, and for their effective implementation on real construction projects, is therefore vital. Requirements Engineering (RE) can contribute to the effective uptake of these systems because it drives systems development towards the targeted audience. This paper proposes a requirements engineering approach for industry-driven CIC systems development. While several CIC systems are investigated to build broad and deep contextual knowledge in the area, the EU-funded research project DIVERCITY (Distributed Virtual Workspace for Enhancing Communication within the Construction Industry) is analysed as the main case study, because its requirements engineering approach has the potential to provide a framework for adapting requirements engineering so as to improve the uptake of CIC systems.

    Predicting Network Attacks Using Ontology-Driven Inference

    Graph knowledge models and ontologies are very powerful modeling and reasoning tools. We propose an effective approach to modeling network attacks and predicting attacks, both of which play important roles in security management. The goals of this study are twofold. First, we model network attacks, their prerequisites and their consequences using knowledge representation methods, in order to provide description logic reasoning and inference over attack domain concepts. Second, we propose an ontology-based system that predicts potential attacks by applying inference to information provided by sensory inputs. We generate our ontology and evaluate the corresponding methods using the CAPEC, CWE, and CVE hierarchical datasets. Experimental results show significant capability improvements compared to traditional hierarchical and relational models. The proposed method also reduces false alarms and improves intrusion detection effectiveness. (Comment: 9 pages.)
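    The prerequisite/consequence modeling described above can be illustrated with a toy forward-chaining sketch: observed facts enable attacks whose prerequisites hold, and their consequences may in turn enable further attacks. The attack and fact names below are invented; the paper itself uses description logic reasoning over CAPEC/CWE/CVE-derived ontologies rather than this simple rule loop.

        # Toy forward-chaining sketch of attack prediction from prerequisites
        # and consequences. All attack names and facts are invented.
        attacks = {
            "sql_injection": {"pre": {"web_form_exposed", "input_unsanitized"},
                              "post": {"db_read_access"}},
            "data_exfiltration": {"pre": {"db_read_access"},
                                  "post": {"data_leaked"}},
        }

        def predict(observed: set) -> list:
            """Fire any attack whose prerequisites hold; return firing order."""
            facts, fired = set(observed), []
            changed = True
            while changed:
                changed = False
                for name, a in attacks.items():
                    if name not in fired and a["pre"] <= facts:
                        fired.append(name)
                        facts |= a["post"]
                        changed = True
            return fired

        print(predict({"web_form_exposed", "input_unsanitized"}))
        # -> ['sql_injection', 'data_exfiltration']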