
    A goal-based approach to policy refinement

    As interest in policy-based approaches to systems management grows, it is becoming increasingly important to develop methods for analysing and refining policy specifications. Although researchers have devoted some attention to this area, none of the proposed solutions address the derivation of implementable policies from high-level goals. A key part of the solution is the ability to identify the operations, available on the underlying system, that can achieve a given goal. This paper presents an approach by which a formal representation of a system, based on the Event Calculus, is used in conjunction with abductive reasoning techniques to derive the sequence of operations that will allow the system to achieve a desired goal. It also outlines how this technique can provide tool support and partial automation for policy refinement. Building on previous work on formal techniques for policy analysis, the approach transforms both policy and system behaviour specifications into a formal notation based on the Event Calculus. Finally, the paper shows how the overall process can be used in conjunction with UML modelling and illustrates this by means of an example.
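
    As a rough illustration of the idea, the sketch below (in Python, with hypothetical operation and fluent names; the paper itself works with a logic-programming formalisation of the Event Calculus) abductively searches for a sequence of operations whose initiates/terminates effects make every goal fluent hold:

        # Minimal sketch of abductive goal refinement over a simplified
        # Event Calculus model. All operation and fluent names are
        # hypothetical, not taken from the paper.
        from collections import deque

        # Each operation initiates some fluents and terminates others.
        OPERATIONS = {
            "enable_backup": {"initiates": {"backup_enabled"}, "terminates": set()},
            "allocate_disk": {"initiates": {"disk_allocated"}, "terminates": set()},
            "start_service": {"initiates": {"service_up"}, "terminates": {"service_down"}},
        }

        # Fluents that must hold before an operation can occur.
        PRECONDITIONS = {
            "enable_backup": {"disk_allocated"},
            "allocate_disk": set(),
            "start_service": set(),
        }

        def abduce_plan(initial, goal, max_depth=6):
            """Breadth-first abductive search: find a sequence of events
            (operations) whose effects make every goal fluent hold."""
            frontier = deque([(frozenset(initial), [])])
            seen = {frozenset(initial)}
            while frontier:
                state, plan = frontier.popleft()
                if goal <= state:
                    return plan                      # all goal fluents hold
                if len(plan) >= max_depth:
                    continue
                for op, eff in OPERATIONS.items():
                    if not PRECONDITIONS[op] <= state:
                        continue                     # operation not applicable
                    nxt = frozenset((state - eff["terminates"]) | eff["initiates"])
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, plan + [op]))
            return None

        print(abduce_plan({"service_down"}, {"backup_enabled", "service_up"}))
        # -> ['allocate_disk', 'enable_backup', 'start_service'] (one valid order)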

    Gulfs of Expectation: Eliciting and Verifying Differences in Trust Expectations using Personas

    Personas are a common tool in Human-Computer Interaction for representing the needs and expectations of a system's stakeholders, but they are also grounded in large amounts of qualitative data. Our aim is to use this data to anticipate the differences between a user persona's expectations of a system and the expectations held by its developers. This paper introduces the idea of gulfs of expectation: the gap between the expectations a user holds about a system and its developers, and the expectations a developer holds about the system and its users. By evaluating these differences in expectation against a formal representation of a system, we demonstrate how differences between the anticipated user and developer mental models of the system can be verified. We illustrate this with a case study in which persona characteristics were analysed to identify divergent behaviour and potential security breaches arising from differing trust expectations.
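
    One way to picture the gulf is as the symmetric difference between two expectation sets, each also checked against a formal system specification. The toy sketch below is purely illustrative; the expectation triples and the set-based encoding are assumptions, not the paper's actual formal representation:

        # Hypothetical encoding: expectations as (actor, action, object)
        # assertions, compared with what a system specification permits.
        user_expect = {("user", "upload", "plain_file"),
                       ("system", "encrypt", "file_at_rest")}
        dev_expect = {("user", "upload", "pre_encrypted_file"),
                      ("system", "store", "file_as_received")}
        system_spec = {("user", "upload", "plain_file"),
                       ("system", "store", "file_as_received")}

        gulf = user_expect ^ dev_expect          # held by one side but not the other
        unmet_user = user_expect - system_spec   # user assumptions the spec violates
        unmet_dev = dev_expect - system_spec     # developer assumptions the spec refutes
        print(sorted(gulf), sorted(unmet_user), sorted(unmet_dev), sep="\n")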

    An Ambient Agent Model for Monitoring and Analysing Dynamics of Complex Human Behaviour

    In ambient intelligence systems, monitoring a human can involve tasks more complex than checking whether a sensor value exceeds a threshold; such tasks may require monitoring complex dynamic interactions between the human and the environment. To enable these more complex types of monitoring, this paper presents a generic agent-based framework that provides support at three levels of system design: (1) the top level, covering the interaction between agents; (2) the agent level, supporting the design of individual agents; and (3) the level of monitoring complex dynamic behaviour, allowing the specification of the aforementioned complex monitoring properties within the agents. The approach is exemplified by a large case study on assessing driving behaviour, and is also applied to two smaller, briefly described cases: fall detection for the elderly and assistance for naval operations. These case studies illustrate that the framework enables developers in ambient intelligence to build systems with greater expressiveness in their monitoring focus, and show that the framework is easy to use and applicable in a wide variety of domains.
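
    To make the contrast with simple threshold checks concrete, here is a minimal sketch, assuming hypothetical signal names and threshold values, of a monitoring property over human-environment dynamics in the driving setting:

        # A minimal sketch (hypothetical signals and parameters): flags
        # driving as impaired when lane deviation stays high over a
        # sustained window while steering activity stays low, i.e. the
        # driver drifts without correcting -- a temporal pattern that a
        # single-sample threshold test cannot express.
        def impaired_driving(trace, window=10, dev_limit=0.5, steer_min=0.1):
            """trace: list of (lane_deviation_m, steering_rate) samples."""
            for start in range(len(trace) - window + 1):
                segment = trace[start:start + window]
                if (all(dev > dev_limit for dev, _ in segment)
                        and all(rate < steer_min for _, rate in segment)):
                    return True   # complex temporal pattern detected
            return False

        samples = [(0.7, 0.02)] * 12     # sustained drift, almost no correction
        print(impaired_driving(samples))  # -> True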

    Diagnosing runtime violations of security and dependability properties

    Monitoring the preservation of security and dependability (S&D) properties of complex software systems is widely accepted as a necessity. Basic monitoring can detect violations but does not always provide sufficient information for deciding on an appropriate response. Such decisions often require additional diagnostic information that explains why a violation occurred and can therefore indicate an appropriate response action. This thesis describes a diagnostic procedure for generating explanations of violations of S&D properties, developed as an extension of a runtime monitoring framework called EVEREST. The procedure is based on a combination of abductive and evidential reasoning about violations of S&D properties expressed in the Event Calculus.
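
    The two-stage idea can be sketched as follows; the names, candidate explanations, and scoring are illustrative assumptions, not EVEREST's actual interface:

        # Hypothetical sketch: abduce candidate explanations for a
        # violated property, then weigh each against recorded events
        # (the evidential step) by the fraction of its expected
        # evidence actually observed in the log.
        CANDIDATES = {
            # explanation -> evidence it would predict in the event log
            "stolen_credentials(alice)": ["login(alice, unusual_ip)", "failed_otp(alice)"],
            "misconfigured_acl(file1)": ["acl_change(file1)", "no_review(file1)"],
        }

        LOG = {"login(alice, unusual_ip)", "failed_otp(alice)", "acl_change(file1)"}

        def rank_explanations(candidates, log):
            """Rank abduced explanations by observed-evidence fraction."""
            scored = {h: sum(e in log for e in ev) / len(ev)
                      for h, ev in candidates.items()}
            return sorted(scored.items(), key=lambda kv: -kv[1])

        print(rank_explanations(CANDIDATES, LOG))
        # -> [('stolen_credentials(alice)', 1.0), ('misconfigured_acl(file1)', 0.5)]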