8 research outputs found

    Framework for Assessing Information System Security Posture Risks

    In today’s data-driven world, Information Systems, particularly those operating in regulated industries, require comprehensive security frameworks to protect against loss of confidentiality, integrity, or availability of data, whether due to malice, accident, or otherwise. Once such a security framework is in place, an organization must constantly monitor and assess the overall compliance of its systems to detect and rectify any issues found. This thesis presents a technique and a supporting toolkit that, first, models dependencies between security policies (referred to as controls); second, devises models that associate risk with policy violations; third, devises algorithms that propagate risk when one or more policies are found to be non-compliant; and fourth, proposes a technique that evaluates the overall security posture risk of a system as a function of the non-compliant policies, the affected policies, and the time elapsed since these policy violations were discovered but not yet mitigated. More specifically, the approach models the dependencies between the controls of the NIST 800-53 framework by compiling a dependency multi-graph, devises a fuzzy-reasoning-based risk assessment technique that traverses the dependency multi-graph and assigns an overall security exposure risk score when one or more controls fail, and finally provides a technique for identifying the attack strategies that become available given the failed controls and against which an organization should defend itself. This approach allows organizations to obtain a bird’s-eye view of their Information Systems’ cyber security posture and helps triage security control checks by focusing on the most vulnerable parts of the Information System ecosystem.
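    The following minimal Python sketch illustrates the general idea of propagating risk over a control-dependency graph and aging unmitigated violations; the control IDs, edge directions, decay factor, and aging term are illustrative assumptions and not the thesis's actual multi-graph or fuzzy-reasoning model.

# Minimal sketch (not the thesis's exact algorithm): propagate risk from
# failed security controls across a control-dependency graph and combine
# it with the time elapsed since each violation was discovered.
# Control IDs, weights, and the decay factor below are illustrative assumptions.
from collections import defaultdict, deque

# depends_on[x] = controls that depend on x (edges follow the direction of impact)
depends_on = {
    "AC-2": ["AC-3", "AU-6"],   # account management feeds access enforcement, audit review
    "AC-3": ["SC-7"],           # access enforcement feeds boundary protection
    "AU-6": [],
    "SC-7": [],
}

def propagate_risk(failed, base_risk=1.0, decay=0.5):
    """Breadth-first propagation: each hop away from a failed control
    contributes base_risk * decay**distance to the affected control."""
    risk = defaultdict(float)
    for root in failed:
        seen = {root}
        queue = deque([(root, 0)])
        while queue:
            node, dist = queue.popleft()
            risk[node] += base_risk * (decay ** dist)
            for nxt in depends_on.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
    return dict(risk)

def posture_score(risk, days_open):
    """Overall exposure grows with per-control risk and with how long
    each violation has been left unmitigated (simple linear aging)."""
    return sum(r * (1 + 0.1 * days_open.get(c, 0)) for c, r in risk.items())

if __name__ == "__main__":
    risk = propagate_risk(failed=["AC-2"])
    print(risk)                               # {'AC-2': 1.0, 'AC-3': 0.5, 'AU-6': 0.5, 'SC-7': 0.25}
    print(posture_score(risk, {"AC-2": 30}))  # aging amplifies the root violation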

    EFFECT OF COGNITIVE BIASES ON HUMAN UNDERSTANDING OF RULE-BASED MACHINE LEARNING MODELS

    This thesis investigates to what extent cognitive biases affect human understanding of interpretable machine learning models, in particular of rules discovered from data. Twenty cognitive biases (illusions, effects) are analysed in detail, including the identification of possibly effective debiasing techniques that can be adopted by designers of machine learning algorithms and software. This qualitative research is complemented by multiple experiments aimed at verifying whether, and to what extent, selected cognitive biases influence human understanding of actual rule learning results. Two experiments were performed: one focused on eliciting plausibility judgments for pairs of inductively learned rules; the second replicated the Linda experiment with crowdsourcing, together with two of its modifications. Altogether nearly 3,000 human judgments were collected. We obtained empirical evidence for the insensitivity-to-sample-size effect. There is also limited evidence for the disjunction fallacy, misunderstanding of "and", the weak evidence effect, and the availability heuristic. While there seems to be no universal approach for eliminating all the identified cognitive biases, it follows from our analysis that the effect of many biases can be ameliorated by making rule-based models more concise. To this end, in the second part of the thesis we propose a novel machine learning framework which postprocesses rules output by the seminal association rule classification algorithm CBA (Liu et al., 1998). The framework uses the original undiscretized numerical attributes to optimize the discovered association rules, refining the boundaries of literals in the antecedents of the rules produced by CBA. Some rules, as well as literals within rules, can consequently be removed, which makes the resulting classifier smaller. A benchmark of our approach on 22 UCI datasets shows an average 53% decrease in the total size of the model as measured by the total number of conditions in all rules. Model accuracy remains at the same level as for CBA.
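    As a rough illustration of boundary refinement on undiscretized attributes, the Python sketch below tightens the numeric interval of one rule literal to the smallest range still covering the correctly classified training instances; the attribute name, toy data, and refinement criterion are assumptions and do not reproduce the thesis's actual postprocessing framework.

# Illustrative sketch only (not the thesis's exact postprocessing): tighten the
# numeric interval of a single rule literal to the smallest range that still
# covers the training instances the rule classifies correctly. The attribute
# name and the toy data are assumptions made for the example.
def refine_literal(rows, attribute, interval, target_class):
    """rows: list of dicts with attribute values and a 'class' key.
    interval: (low, high) produced by discretization.
    Returns a tightened (low, high) based on the undiscretized values."""
    low, high = interval
    covered = [r[attribute] for r in rows
               if low <= r[attribute] <= high and r["class"] == target_class]
    if not covered:                 # literal never helps -> caller may drop it
        return None
    return (min(covered), max(covered))

if __name__ == "__main__":
    data = [
        {"petal_length": 1.4, "class": "setosa"},
        {"petal_length": 1.7, "class": "setosa"},
        {"petal_length": 4.5, "class": "versicolor"},
    ]
    # Discretized rule: IF petal_length in [0.0, 3.0] THEN setosa
    print(refine_literal(data, "petal_length", (0.0, 3.0), "setosa"))  # (1.4, 1.7)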

    Causal inquiry in the social sciences: the promise of process tracing

    In this thesis I investigate causal inquiry in the social sciences, drawing on examples from various disciplines and in particular from conflict studies. In a backlash against the pervasiveness of statistical methods, in the last decade certain social scientists have focused on finding the causal mechanisms behind observed correlations. To provide evidence for such mechanisms, researchers increasingly rely on ‘process tracing’, a method which attempts to give evidence for causal relations by specifying the chain of events connecting a putative cause and effect of interest. I will ask whether the causal claims process tracers make are defensible, and where they are not defensible I will ask how we can improve the method. Throughout these investigations, I show that the conclusions of process tracing (and indeed of the social sciences more generally) are constrained both by the causal structure of the social world and by social scientists’ aims and values. My central argument is this: all instances of social phenomena have causally relevant differences, which implies that any research design that requires some comparison between cases (like process tracing) is limited by how we systematize these phenomena. Moreover, such research cannot rely on stable regularities. Nevertheless, to forego causal conclusions altogether is not the right response to these limitations; by carefully outlining our epistemic assumptions we can make progress in causal inquiry. While I use philosophical theories of causation to comment on the feasibility of a social scientific method, I also do the reverse: by investigating a popular contemporary method in the social sciences, I show to what extent our philosophical theories of causation are workable in practice. Thus, this thesis is both a methodological and a philosophical work. Every chapter discusses both a fundamental philosophical position on the social sciences and a relevant case study from the social sciences.

    Joint Discourse-aware Concept Disambiguation and Clustering

    This thesis addresses the tasks of concept disambiguation and clustering. Concept disambiguation is the task of linking common nouns and proper names in a text – henceforth called mentions – to their corresponding concepts in a predefined inventory. Concept clustering is the task of clustering mentions so that all mentions in one cluster denote the same concept. In this thesis, we investigate concept disambiguation and clustering from a discourse perspective and propose a discourse-aware approach for joint concept disambiguation and clustering in the framework of Markov logic. The contributions of this thesis are fourfold:
    Joint Concept Disambiguation and Clustering. In previous approaches, concept disambiguation and concept clustering have been considered as two separate tasks (Schütze, 1998; Ji & Grishman, 2011). We analyze the relationship between concept disambiguation and concept clustering and argue that these two tasks can mutually support each other. We propose the – to our knowledge – first joint approach for concept disambiguation and clustering.
    Discourse-Aware Concept Disambiguation. One of the determining factors for concept disambiguation and clustering is the context definition. Most previous approaches use the same context definition for all mentions (Milne & Witten, 2008b; Kulkarni et al., 2009; Ratinov et al., 2011, inter alia). We approach the question of which context is relevant to disambiguate a mention from a discourse perspective and argue that different mentions require different notions of context: the context that is relevant to disambiguate a mention depends on its embedding into discourse. However, how a mention is embedded into discourse depends on its denoted concept. Hence, the identification of the denoted concept and of the relevant context mutually depend on each other. We propose a binwise approach with three different context definitions and model the selection of the context definition and the disambiguation jointly.
    Modeling Interdependencies with Markov Logic. To model the interdependencies between concept disambiguation and concept clustering, as well as the interdependencies between the context definition and the disambiguation, we use Markov logic (Domingos & Lowd, 2009). Markov logic combines first-order logic with probabilities and allows us to concisely formalize these interdependencies. We investigate how to balance linguistic appropriateness and time efficiency and propose a hybrid approach that combines joint inference with aggregation techniques.
    Concept Disambiguation and Clustering beyond English: Multi- and Cross-linguality. Given the vast amount of text written in different languages, the capability to extend an approach to languages other than English is essential. We thus analyze how our approach copes with languages other than English and show that it largely scales across languages, even without retraining.
    Our approach is evaluated on multiple data sets originating from different sources (e.g. news, web) and across multiple languages. As an inventory, we use Wikipedia. We compare our approach to other approaches and show that it achieves state-of-the-art results. Furthermore, we show that joint concept disambiguation and clustering, as well as joint context selection and disambiguation, leads to significant improvements ceteris paribus.
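    The toy Python sketch below conveys the flavour of weighted soft constraints in a Markov-logic-style joint model: candidate concept assignments are scored by local evidence plus a soft rule rewarding identical surface strings that share a concept. The mentions, candidate concepts, weights, and brute-force search are invented for illustration and are far simpler than real Markov logic inference.

# Markov-logic-flavoured sketch (assumed, simplified): weighted soft constraints
# over a joint assignment of concepts to mentions; equal concepts induce the
# clustering. We enumerate assignments instead of doing proper MLN inference.
from itertools import product

mentions   = ["Washington", "Washington", "the capital"]
candidates = [["George_Washington", "Washington_D.C."],
              ["George_Washington", "Washington_D.C."],
              ["Washington_D.C."]]
# Local "evidence" weights: (mention_index, concept) -> weight (invented numbers)
local = {(0, "Washington_D.C."): 1.0, (0, "George_Washington"): 0.8,
         (1, "Washington_D.C."): 1.0, (1, "George_Washington"): 0.8,
         (2, "Washington_D.C."): 2.0}
W_SAME_STRING_SAME_CONCEPT = 1.5   # soft rule: identical surface strings corefer

def score(assign):
    s = sum(local.get((i, c), 0.0) for i, c in enumerate(assign))
    for i in range(len(assign)):
        for j in range(i + 1, len(assign)):
            if mentions[i] == mentions[j] and assign[i] == assign[j]:
                s += W_SAME_STRING_SAME_CONCEPT   # satisfied ground formula
    return s

best = max(product(*candidates), key=score)
print(best)   # the joint assignment also yields the clustering: equal concepts cluster together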

    Incremental Coreference Resolution for German

    The main contributions of this thesis are as follows:
    1. We introduce a general model for coreference and explore its application to German.
        • The model features an incremental discourse processing algorithm which allows it to coherently address issues caused by underspecification of mentions, an especially pressing problem for certain German pronouns.
        • We introduce novel features relevant for the resolution of German pronouns. A subset of these features is made accessible through the incremental architecture of the discourse processing model.
        • In evaluation, we show that the coreference model combined with our features provides new state-of-the-art results for coreference and pronoun resolution for German.
    2. We elaborate on the evaluation of coreference and pronoun resolution.
        • We discuss evaluation from the view of prospective downstream applications that benefit from coreference resolution as a preprocessing component. Addressing the shortcomings of the general evaluation framework in this regard, we introduce an alternative framework, the Application Related Coreference Scores (ARCS).
        • The ARCS framework enables a thorough comparison of different system outputs and the quantification of their similarities and differences beyond the common coreference evaluation. We demonstrate how the framework is applied to state-of-the-art coreference systems. This provides a method to track specific differences in system outputs, which assists researchers in comparing their approaches to related work in detail.
    3. We explore semantics for pronoun resolution.
        • Within the introduced coreference model, we explore distributional approaches to estimate the compatibility of an antecedent candidate and the occurrence context of a pronoun (a simplified sketch follows this list). To this end, we compare a state-of-the-art word embedding approach to syntactic co-occurrence profiles.
        • In comparison to related work, we extend the notion of context and thereby increase the applicability of our approach. We find that a combination of both compatibility models, coupled with the coreference model, provides a large potential for improving pronoun resolution performance.
    We make available all our resources, including a web demo of the system, at: http://pub.cl.uzh.ch/purl/coreference-resolutio
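    As a simplified illustration of the distributional compatibility idea, the Python sketch below scores antecedent candidates for a German pronoun by cosine similarity between a candidate's vector and the averaged vector of the pronoun's context; the toy embeddings and words are assumptions standing in for the real models compared in the thesis.

# Minimal sketch (not the thesis's system): antecedent-pronoun compatibility as
# cosine similarity between a candidate's word vector and the averaged vector
# of the pronoun's occurrence context. Toy 3-d "embeddings" replace real ones.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def context_vector(context_words, emb):
    vecs = [emb[w] for w in context_words if w in emb]
    return [sum(x) / len(vecs) for x in zip(*vecs)] if vecs else []

emb = {
    "Hund":  [0.9, 0.1, 0.0],
    "Buch":  [0.0, 0.8, 0.3],
    "bellt": [0.8, 0.2, 0.1],
}

context = ["bellt"]           # context of the pronoun "er" ("... er bellt")
for candidate in ["Hund", "Buch"]:
    print(candidate, round(cosine(emb[candidate], context_vector(context, emb)), 3))
# "Hund" scores higher, so it is the more compatible antecedent for "er"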

    Foundations of Security Analysis and Design III, FOSAD 2004/2005 Tutorial Lectures

    The increasing relevance of security to real-life applications, such as electronic commerce and Internet banking, is attested by the fast-growing number of research groups, events, conferences, and summer schools that address the study of foundations for the analysis and the design of security aspects. This book presents thoroughly revised versions of eight tutorial lectures given by leading researchers during two International Schools on Foundations of Security Analysis and Design, FOSAD 2004/2005, held in Bertinoro, Italy, in September 2004 and September 2005. The lectures are devoted to: Justifying a Dolev-Yao Model under Active Attacks, Model-based Security Engineering with UML, Physical Security and Side-Channel Attacks, Static Analysis of Authentication, Formal Methods for Smartcard Security, Privacy-Preserving Database Systems, Intrusion Detection, and Security and Trust Requirements Engineering.