
    Requirements for IT Security Metrics - an Argumentation Theory Based Approach

    The demand for measuring IT security performance is driven by regulatory, financial, and organizational factors. While several best-practice metrics have been suggested, we observe a lack of consistent requirements against which IT security metrics can be evaluated. We address this research gap by adopting a methodological approach based on argumentation theory and an accompanying literature review. As a result, we derive five key requirements: IT security metrics should be (a) bounded, (b) metrically scaled, (c) reliable, valid and objective, (d) context-specific and (e) computed automatically. We illustrate and discuss the context-specific instantiation of these requirements using the vulnerability scanning coverage and mean-time-to-incident-discovery metrics, both used in practice, as examples. Finally, we summarize further implications of each requirement.
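The two example metrics named in this abstract are straightforward to compute once the underlying events are recorded. A minimal Python sketch, assuming a hypothetical input format (host counts and occurrence/discovery timestamp pairs) that is not taken from the paper:

```python
from datetime import datetime

def vulnerability_scanning_coverage(scanned_hosts, total_hosts):
    """Fraction of in-scope hosts covered by vulnerability scans."""
    return scanned_hosts / total_hosts

def mean_time_to_incident_discovery(incidents):
    """Average delay between occurrence and discovery, in hours.

    `incidents` is a list of (occurred_at, discovered_at) datetime
    pairs; this input format is an assumption for illustration.
    """
    total_seconds = sum((found - occurred).total_seconds()
                        for occurred, found in incidents)
    return total_seconds / len(incidents) / 3600.0

incidents = [
    (datetime(2023, 5, 1, 8, 0), datetime(2023, 5, 1, 20, 0)),  # discovered 12 h later
    (datetime(2023, 5, 3, 9, 0), datetime(2023, 5, 4, 9, 0)),   # discovered 24 h later
]
print(vulnerability_scanning_coverage(45, 50))     # 0.9
print(mean_time_to_incident_discovery(incidents))  # 18.0
```

Both metrics satisfy the "computed automatically" requirement, since they derive directly from scanner inventories and incident logs.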

    PriCL: Creating a Precedent: A Framework for Reasoning about Privacy Case Law

    We introduce PriCL: the first framework for expressing and automatically reasoning about privacy case law by means of precedent. PriCL is parametric in an underlying logic for expressing world properties, and provides support for court decisions, their justification, the circumstances in which the justification applies, as well as court hierarchies. Moreover, the framework offers a tight connection between privacy case law and the notion of norms that underlies existing rule-based privacy research. In terms of automation, we identify the major reasoning tasks for privacy cases, such as deducing legal permissions or extracting norms. For solving these tasks, we provide generic algorithms that have particularly efficient realizations within an expressive underlying logic. Finally, we derive a definition of deducibility based on legal concepts and subsequently propose an equivalent characterization in terms of logic satisfiability. (Extended version.)
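The final step mentioned here, characterizing deducibility via satisfiability, follows the classical pattern that premises entail a conclusion iff the premises together with the negated conclusion are unsatisfiable. A brute-force propositional sketch of that reduction (PriCL's actual underlying logic is far more expressive; the atom names and formulas below are invented for illustration):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Propositional entailment by exhaustive model checking:
    premises |= conclusion  iff  premises AND (NOT conclusion)
    has no satisfying assignment.  Formulas are predicates over
    an assignment dict mapping atom names to booleans.
    """
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counter-model found: entailment fails
    return True

# Toy privacy-flavored example: consent plus a "consent implies
# permitted" norm entail that the processing is permitted.
premises = [
    lambda v: v["consent"],
    lambda v: (not v["consent"]) or v["permitted"],  # consent -> permitted
]
print(entails(premises, lambda v: v["permitted"],
              ["consent", "permitted"]))  # True
```

Real implementations would hand the unsatisfiability check to a SAT or SMT solver rather than enumerating assignments, but the reduction is the same.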

    Unleashing the Potential of Argument Mining for IS Research: A Systematic Review and Research Agenda

    Argument mining (AM) refers to the use of natural language processing (NLP) techniques to extract arguments from unstructured data automatically. Despite building on commonly used NLP techniques such as sentiment analysis, AM has hardly been applied in information systems (IS) research yet. Consequently, knowledge about the potential of AM for IS use cases is still limited. We first introduce AM and its current usage in fields beyond IS. To address this research gap, we then conducted a systematic literature review of IS literature to identify IS use cases that can potentially be extended with AM. We develop eleven text-based IS research topics that provide structure and context to the use cases and their AM potentials. Finally, we formulate a novel research agenda to guide both researchers and practitioners in designing, comparing and evaluating the use of AM for text-based applications and research streams in IS.

    Development of an Explainability Scale to Evaluate Explainable Artificial Intelligence (XAI) Methods

    Explainable Artificial Intelligence (XAI) is an area of research that develops methods and techniques to make the results of artificial intelligence understandable by humans. In recent years, demand for XAI methods has increased as model architectures have grown more complicated and government regulations have begun to require transparency in machine learning models. With this increased demand has come an increased need for instruments to evaluate XAI methods. However, there are few, if any, valid and reliable instruments that take human opinion into account and cover all aspects of explainability. Therefore, this study developed an objective, human-centred questionnaire to evaluate all types of XAI methods. The questionnaire consists of 15 items: 5 items asking about the user’s background information and 10 items, based on the notions of explainability, evaluating the explainability of the XAI method. An experiment was conducted (n = 38) in which participants evaluated one of two XAI methods using the questionnaire. The results were used for an exploratory factor analysis, which showed that the 10 items related to explainability constitute one factor (Cronbach’s α = 0.81), and to gather evidence of the questionnaire’s construct validity. It is concluded that this 15-item questionnaire has one factor, has acceptable validity and reliability, and can be used to evaluate and compare XAI methods.
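The reported internal consistency (Cronbach’s α = 0.81 over the 10 explainability items) follows the standard formula α = k/(k−1) · (1 − Σᵢ σᵢ² / σ_total²), where k is the number of items, σᵢ² the variance of item i, and σ_total² the variance of respondents’ total scores. A self-contained sketch using made-up scores, not the study’s data:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    `item_scores` is a list of k items, each a list of scores from
    the same respondents in the same order.  Illustrative only; the
    questionnaire's actual responses are not reproduced here.
    """
    k = len(item_scores)

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(item) for item in item_scores)
    n_respondents = len(item_scores[0])
    totals = [sum(item[r] for item in item_scores)
              for r in range(n_respondents)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Two perfectly correlated items yield the maximum alpha.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Values around 0.8, like the one reported, are conventionally read as acceptable-to-good internal consistency for a single-factor scale.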

    On the integration of trust with negotiation, argumentation and semantics

    Agreement Technologies are needed for autonomous agents to come to mutually acceptable agreements, typically on behalf of humans. These technologies include trust computing, negotiation, argumentation and semantic alignment. In this paper, we identify a number of open questions regarding the integration of computational models and tools for trust computing with negotiation, argumentation and semantic alignment. We consider these questions in general and in the context of applications in open, distributed settings such as grid and cloud computing. © 2013 Cambridge University Press. This work was partially supported by the Agreement Technology COST action (IC0801). The authors would like to thank all participants in the panel on “Trust, Argumentation and Semantics” on 16 December 2009, Agia Napa, Cyprus, for helpful discussions and comments. Peer reviewed.