
    Towards self-protecting ubiquitous systems : monitoring trust-based interactions

    The requirement for spontaneous interaction in ubiquitous computing creates security issues over and above those present in other areas of computing, rendering traditional approaches ineffective. As a result, entities must implement self-protective measures to support secure collaborations. Trust management is well suited to this task, since reasoning about future interactions is based on the outcome of past ones. This requires monitoring interactions as they take place; such monitoring also allows corrective action to be taken when an interaction is proceeding unsatisfactorily. In this vein, we first present a trust-based model of interaction based on event structures. We then describe our ongoing work on a monitor architecture which enables self-protective actions to be carried out at critical points during principal interaction. Finally, we discuss some potential directions for future work.
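    The event-structure model of interaction mentioned in this abstract can be illustrated concretely. The sketch below is not the paper's formalism but a minimal rendering of the standard notion: events related by causality (an event requires its causes to have occurred) and binary conflict (conflicting events cannot both occur), with a reachable interaction state being any causally closed, conflict-free set of events. The event names (`pay`, `deliver`, `refund`) are hypothetical.

    ```python
    from itertools import combinations

    # A toy event structure for a buyer/seller interaction (event names hypothetical)
    causes = {"deliver": {"pay"}, "refund": {"pay"}}   # e requires all of causes[e] first
    conflict = {frozenset({"deliver", "refund"})}      # at most one of these can occur

    def is_configuration(state: set) -> bool:
        """A reachable interaction state: causally closed and conflict-free."""
        causally_closed = all(causes.get(e, set()) <= state for e in state)
        conflict_free = all(frozenset(pair) not in conflict
                            for pair in combinations(state, 2))
        return causally_closed and conflict_free

    print(is_configuration({"pay", "deliver"}))            # True: a valid run
    print(is_configuration({"deliver"}))                   # False: delivery before payment
    print(is_configuration({"pay", "deliver", "refund"}))  # False: conflicting outcomes
    ```

    A monitor in this style can check, after each observed event, whether the interaction is still inside an acceptable configuration and trigger a self-protective action if not.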

    A Bayesian model for event-based trust

    The application scenarios envisioned for ‘global ubiquitous computing’ have unique requirements that are often incompatible with traditional security paradigms. One alternative currently being investigated is to support security decision-making by explicit representation of principals’ trusting relationships, i.e., via systems for computational trust. We focus here on systems where trust in a computational entity is interpreted as the expectation of certain future behaviour based on behavioural patterns of the past, and concern ourselves with the foundations of such probabilistic systems. In particular, we aim at establishing formal probabilistic models for computational trust and their fundamental properties. We define a mathematical measure for quantitatively comparing the effectiveness of probabilistic computational trust systems in various environments. Using it, we compare some of the systems from the computational trust literature; the comparison is derived formally, rather than obtained via experimental simulation as is traditionally done. With this foundation in place, we formalise a general notion of information about past behaviour, based on event structures. This yields a flexible trust model in which the probability of complex protocol outcomes can be assessed.
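    The best-known concrete instance of the probabilistic view described here is the Beta reputation model from the computational-trust literature, in which trust is the posterior expectation of a good outcome given counts of past successes and failures. The sketch below is a minimal illustration of that idea, not the paper's own event-structure model, which generalises beyond binary outcomes.

    ```python
    from dataclasses import dataclass

    @dataclass
    class BetaTrust:
        """Trust as the expected probability of a good next outcome,
        modelled with a Beta(alpha, beta) posterior over past interactions."""
        alpha: float = 1.0  # pseudo-count of successes (uniform prior)
        beta: float = 1.0   # pseudo-count of failures

        def observe(self, success: bool) -> None:
            # Bayesian update: each outcome increments one pseudo-count
            if success:
                self.alpha += 1
            else:
                self.beta += 1

        def expectation(self) -> float:
            # Posterior mean: expected probability the next interaction succeeds
            return self.alpha / (self.alpha + self.beta)

    t = BetaTrust()
    for outcome in [True, True, False, True]:
        t.observe(outcome)
    print(t.expectation())  # (1 + 3) / (2 + 4) ≈ 0.667
    ```

    Event structures generalise this from a single success/failure coin-flip to structured protocol runs, so that probabilities can be assigned to complex outcomes rather than just binary ones.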

    Towards Explainability of UAV-Based Convolutional Neural Networks for Object Classification

    The trusted operation of autonomous systems using trust and trustworthiness is the focus of Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR), a new NASA Convergent Aeronautical Solutions (CAS) Project. One critical research element of ATTRACTOR is explainability of the decision-making across relevant subsystems of an autonomous system. The ability to explain why an autonomous system makes a decision is needed to establish a basis of trustworthiness to safely complete a mission. Convolutional Neural Networks (CNNs) are popular visual object classifiers that have achieved high classification performance, but without clear insight into the mechanisms of their internal layers and features. To explore the explainability of the internal components of CNNs, we reviewed three feature visualization methods in a layer-by-layer approach using aviation-related images as inputs. Our approach is to analyze the key components of a classification event in order to generate component labels for features of the classified image at different layer depths. For example, an airplane has wings, engines, and landing gear. These could possibly be identified somewhere in the hidden layers, and the resulting descriptive labels could be provided to a human or machine teammate while conducting a shared mission, in order to engender trust. Each descriptive feature may also be decomposed into a combination of primitives such as shapes and lines. We expect that knowing the combination of shapes and parts that create a classification will enable trust in the system and insight into creating better CNN structures.
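    The layer-by-layer idea above can be sketched in miniature. The toy example below is not one of the three reviewed visualization methods; it only illustrates what "inspecting a feature map at a given depth" means, using a hand-crafted edge filter as a stand-in for a learned convolutional kernel and a synthetic "wing" image in place of an aviation photograph.

    ```python
    import numpy as np

    def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
        """Valid 2-D cross-correlation of a single-channel image (no padding)."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    def relu(x: np.ndarray) -> np.ndarray:
        return np.maximum(x, 0)

    # A toy "airplane wing": a bright horizontal bar on a dark background
    img = np.zeros((8, 8))
    img[3, 1:7] = 1.0

    # Hand-crafted horizontal-edge detector (stand-in for a learned filter)
    horiz_edge = np.array([[ 1.,  1.,  1.],
                           [ 0.,  0.,  0.],
                           [-1., -1., -1.]])

    # The intermediate activation map is what a visualization method would render
    fmap1 = relu(conv2d(img, horiz_edge))
    print(fmap1.max() > 0)  # the wing's edge activates this feature
    ```

    In a real CNN the same inspection is repeated at many depths, and the goal described in the abstract is to attach human-readable part labels ("wing", "engine") to the features that fire.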

    A Logical Framework for Reputation Systems

    Reputation systems are meta-systems that record, aggregate and distribute information about the past behaviour of principals in an application. Typically, these applications are large-scale open distributed systems where principals are virtually anonymous and a priori have no knowledge of each other's trustworthiness. Reputation systems serve two primary purposes: helping principals decide whom to trust, and providing an incentive for principals to behave well. A logical policy-based framework for reputation systems is presented. In the framework, principals specify policies which state precise requirements on the past behaviour of other principals that must be fulfilled in order for interaction to take place. The framework consists of a formal model of behaviour, based on event structures; a declarative logical language for specifying properties of past behaviour; and efficient dynamic algorithms for checking whether a particular behaviour satisfies a property from the language. It is shown how the framework can be extended in several ways, most notably to encompass parameterized events and quantification over parameters. An extended application illustrates how the framework can be applied to dynamic history-based access control for the safe execution of unknown and untrusted programs.
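    The "efficient dynamic algorithms" mentioned here update a policy's verdict as each new event arrives, instead of re-scanning the whole history. The sketch below is a hand-rolled monitor for one illustrative policy, "the principal has never defaulted and has paid at least once", with hypothetical event names; it is not the paper's general algorithm, but it shows why purely-past policies admit constant state per event.

    ```python
    class PolicyMonitor:
        """Incrementally tracks 'historically not(default) and once(pay)'
        in O(1) work and O(1) state per new event."""

        def __init__(self) -> None:
            self.never_defaulted = True  # state for 'historically not(default)'
            self.has_paid = False        # state for 'once(pay)'

        def observe(self, events: set) -> None:
            # Fold the newest interaction step into the summary state
            if "default" in events:
                self.never_defaulted = False
            if "pay" in events:
                self.has_paid = True

        def allowed(self) -> bool:
            # Current verdict: may this principal interact with us?
            return self.never_defaulted and self.has_paid

    m = PolicyMonitor()
    for step in [{"pay"}, {"deliver"}]:
        m.observe(step)
    print(m.allowed())  # True: paid once, never defaulted
    ```

    Keeping only a summary of the history, rather than the history itself, is what makes policy checking cheap enough to run on every interaction.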

    A Formal Framework for Concrete Reputation Systems

    In a reputation-based trust-management system, agents maintain information about the past behaviour of other agents. This information is used to guide future trust-based decisions about interaction. However, while trust management is a component of security decision-making, many existing reputation-based trust-management systems provide no formal security guarantees. In this extended abstract, we describe a mathematical framework for a class of simple reputation-based systems. In these systems, decisions about interaction are taken based on policies that are exact requirements on agents’ past histories. We present a basic declarative language, based on pure-past linear temporal logic, intended for writing simple policies. While the basic language is reasonably expressive (encoding, e.g., Chinese Wall policies), we show how one can extend it with quantification and parameterized events. This allows us to encode other policies known from the literature, e.g., ‘one-out-of-k’. The problem of checking a history with respect to a policy is efficient for the basic language, and tractable for the quantified language when policies do not have too many variables.
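    Pure-past linear temporal logic, as used here, evaluates formulas at the current point of a finite history using only past-looking operators. The sketch below gives the standard semantics of a few such operators over histories represented as lists of event sets; it is a naive recursive evaluator for illustration (the paper's checking algorithms are more efficient), and the event names are hypothetical.

    ```python
    from typing import Callable, List, Set

    History = List[Set[str]]                  # one set of event labels per step
    Policy = Callable[[History, int], bool]   # does the formula hold at step i?

    def atom(p: str) -> Policy:
        return lambda h, i: p in h[i]

    def neg(f: Policy) -> Policy:
        return lambda h, i: not f(h, i)

    def conj(f: Policy, g: Policy) -> Policy:
        return lambda h, i: f(h, i) and g(h, i)

    def once(f: Policy) -> Policy:
        # 'once f': f held at some point up to and including now
        return lambda h, i: any(f(h, j) for j in range(i + 1))

    def historically(f: Policy) -> Policy:
        # 'historically f': f held at every point up to and including now
        return lambda h, i: all(f(h, j) for j in range(i + 1))

    def satisfies(h: History, f: Policy) -> bool:
        # A policy is checked at the last point of a non-empty history
        return bool(h) and f(h, len(h) - 1)

    # Example policy: never defaulted, and has paid at least once
    policy = conj(historically(neg(atom("default"))), once(atom("pay")))
    h = [{"pay"}, {"pay"}, {"deliver"}]
    print(satisfies(h, policy))   # True
    h2 = [{"pay"}, {"default"}]
    print(satisfies(h2, policy))  # False
    ```

    Because every operator looks only backwards, a verdict never depends on future events, which is what makes such policies checkable against a recorded history.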

    Planning and Leveraging Event Portfolios: Towards a Holistic Theory

    This conceptual paper seeks to advance the discourse on the leveraging and legacies of events by examining the planning, management, and leveraging of event portfolios. This examination shifts the common focus from analyzing single events towards multiple events and purposes that can enable cross-leveraging among different events in pursuit of the attainment and magnification of specific ends. Two frameworks are proposed: (1) event portfolio planning and leveraging, and (2) analyzing event networks and inter-organizational linkages. These frameworks are intended to provide, at this early stage of event-portfolio research, solid ground for building theory on the management of different types and scales of events within the context of a portfolio aimed at obtaining, optimizing and sustaining tourism benefits, as well as broader community benefits.

    Knowledge management in the voluntary sector: A focus on sharing project know-how and expertise

    Voluntary sector organisations are operated principally by volunteers, who are not obliged to share their knowledge as might be expected in a for-profit company, with a consequently greater loss of knowledge should individuals leave. This research examines how a volunteer-led organisation, the Campaign for Real Ale (CAMRA), acquires, stores and shares its project knowledge in the context of event management. Three annual CAMRA festivals of different sizes and maturity were selected to see how volunteers' knowledge is managed in the process of organising their festivals. Key festival officers were interviewed, and focus groups comprising festival volunteers were conducted. While the maturity of a festival and its size seemed to influence the ways in which knowledge was managed, there were some commonalities between festivals. A strong master-apprentice model of learning was evident, with little formal training or record keeping except where legislation and accountability in treasury and health-and-safety functions made them necessary. Trust between volunteers, and their need to know and to share information, appeared to depend in part on their perception of, and confidence in, the success of the overarching project organisation, and this helped shape volunteers' knowledge-sharing practices. While there was evidence of a laissez-faire approach to the codification and sharing of knowledge, this was less so when volunteers recognised a genuine lack of knowledge that would hinder the success of their festival. The analysis also highlighted factors related to the sharing of knowledge that, it is suggested, have not been identified in the for-profit sector.