Two Approaches to Ontology Aggregation Based on Axiom Weakening
Axiom weakening is a novel technique that allows
for fine-grained repair of inconsistent ontologies.
In a multi-agent setting, integrating ontologies corresponding
to multiple agents may lead to inconsistencies.
Such inconsistencies can be resolved after
the integrated ontology has been built, or they
can be prevented from arising during its construction.
We implement and compare these two approaches.
First, we study how to repair an inconsistent
ontology resulting from a voting-based aggregation
of views of heterogeneous agents. Second,
we prevent the generation of inconsistencies by letting
the agents engage in a turn-based rational protocol
about the axioms to be added to the integrated
ontology. We instantiate the two approaches using
real-world ontologies and compare them by measuring
the levels of satisfaction of the agents w.r.t.
the ontology obtained by the two procedures.
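The two strategies can be illustrated with a minimal sketch. All names here are illustrative: a real implementation would query a description-logic reasoner for consistency and apply genuine axiom weakening rather than simply dropping axioms.

```python
from collections import Counter

# Toy inconsistency oracle: these axiom sets cannot coexist.
# (A real system would call a description-logic reasoner instead.)
CONFLICTS = [frozenset({"A", "B"})]

def is_consistent(ontology):
    return not any(pair <= ontology for pair in CONFLICTS)

def aggregate_then_repair(agent_views):
    """Majority-vote aggregation followed by repair: drop the
    least-supported axiom of each conflicting pair (a crude
    stand-in for fine-grained axiom weakening)."""
    votes = Counter(ax for view in agent_views for ax in view)
    merged = {ax for ax, n in votes.items() if n > len(agent_views) / 2}
    while not is_consistent(merged):
        pair = next(p for p in CONFLICTS if p <= merged)
        merged.discard(min(pair, key=lambda ax: votes[ax]))
    return merged

def prevent_by_turns(agent_views):
    """Turn-based protocol: agents propose axioms in turn, and an
    axiom is admitted only if it keeps the ontology consistent."""
    merged = set()
    for ax in (ax for view in agent_views for ax in sorted(view)):
        if is_consistent(merged | {ax}):
            merged.add(ax)
    return merged

views = [{"A", "B"}, {"A", "B"}, {"A"}]
repaired = aggregate_then_repair(views)   # {"A"}: "B" is dropped in repair
prevented = prevent_by_turns(views)       # {"A"}: "B" is never admitted
```

Note the design difference the abstract compares: repair operates on the already-merged ontology, while prevention filters axioms as they are proposed, so the outcome can depend on proposal order.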
Inductive analysis of security protocols in Isabelle/HOL with applications to electronic voting
Security protocols are predefined sequences of message exchanges. Their uses over computer networks aim to provide certain guarantees to protocol participants. The sensitive nature of many applications resting on protocols encourages the use of formal methods to provide rigorous correctness proofs. This dissertation presents extensions to the Inductive Method for protocol verification in the Isabelle/HOL interactive theorem prover. The current state of the Inductive Method and of other protocol analysis techniques are reviewed. Protocol composition modelling in the Inductive Method is introduced and put in practice by holistically verifying the composition of a certification protocol with an authentication protocol. Unlike some existing approaches, we are not constrained by independence requirements or search space limitations. A special kind of identity-based signatures, auditable ones, are specified in the Inductive Method and integrated in an analysis of a recent ISO/IEC 9798-3 protocol. A side-by-side verification features both a version of the protocol with auditable identity-based signatures and a version with plain ones. The largest part of the thesis presents extensions for the verification of electronic voting protocols. Innovative specification and verification strategies are described. The crucial property of voter privacy, being the impossibility of knowing how a specific voter voted, is modelled as an unlinkability property between pieces of information. Unlinkability is then specified in the Inductive Method using novel message operators. An electronic voting protocol by Fujioka, Okamoto and Ohta is modelled in the Inductive Method. Its classic confidentiality properties are verified, followed by voter privacy. The approach is shown to be generic enough to be re-usable on other protocols while maintaining a coherent line of reasoning. We compare our work with the widespread process equivalence model and examine respective strengths
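The unlinkability notion at the heart of voter privacy can be sketched outside Isabelle/HOL in a toy model (illustrative only; the thesis formalizes this with novel message operators in the Inductive Method):

```python
from itertools import permutations

def unlinkable(ballots):
    """Toy voter privacy: the observer sees only the multiset of cast
    ballots, so privacy holds iff more than one distinct assignment of
    voters to ballots is compatible with that observation."""
    return len(set(permutations(ballots))) > 1

# A split vote admits several voter-to-ballot assignments, so
# unlinkable(["yes", "no"]) is True; a unanimous election links every
# voter to their ballot, so unlinkable(["yes", "yes"]) is False.
```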
“A lot of the time it’s dealing with victims who don’t want to know, it’s all made up, or they’ve got mental health”: Rape myths in a large English police force
Despite an increase in the reporting of rape, convictions in England and Wales have fallen significantly in recent years. Previous research has found high rape myth acceptance among police officers. Given that the police act as gatekeepers to the criminal justice system, subscribing to rape myths may have significant effects upon victim attrition and conviction rates. This study explores police officers’ use of rape myths and how these may impact investigations and prosecutions. A total of 17 semi-structured interviews were conducted with police officers from a large English police force. The interview data were analysed using the qualitative method of thematic analysis. Although there were instances where officers demonstrated some awareness of the need to dispel or counter rape myths, rape myths were employed by most officers, with the most common relating to (1) victim fabrication (“women lie”) and (2) victim precipitation (“women ask for it”). Recommendations are made around screening and training for police officers.
Statistical Epistemic Logic
We introduce a modal logic for describing statistical knowledge, which we
call statistical epistemic logic. We propose a Kripke model dealing with
probability distributions and stochastic assignments, and show a stochastic
semantics for the logic. To our knowledge, this is the first semantics for
modal logic that can express statistical knowledge dependent on
non-deterministic inputs and the statistical significance of observed results.
By using statistical epistemic logic, we express a notion of statistical
secrecy with a confidence level. We also show that this logic is useful to
formalize statistical hypothesis testing and differential privacy in a simple
and abstract manner.
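A toy rendering of the core idea, with invented world names and numbers (the paper's stochastic semantics is considerably richer):

```python
# Toy Kripke model whose worlds carry probability distributions.
worlds = {
    "w1": {"heads": 0.9, "tails": 0.1},
    "w2": {"heads": 0.8, "tails": 0.2},
}
access = {"w1": {"w1", "w2"}, "w2": {"w2"}}  # accessibility relation

def knows(world, event, confidence):
    """The agent statistically knows `event` at `world` with the given
    confidence level if every accessible world assigns the event at
    least that much probability mass."""
    return all(worlds[v].get(event, 0.0) >= confidence
               for v in access[world])

# knows("w1", "heads", 0.8) holds, since both accessible worlds give
# "heads" probability >= 0.8; knows("w1", "heads", 0.85) fails at "w2".
```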
Towards Logical Specification of Statistical Machine Learning
We introduce a logical approach to formalizing statistical properties of
machine learning. Specifically, we propose a formal model for statistical
classification based on a Kripke model, and formalize various notions of
classification performance, robustness, and fairness of classifiers by using
epistemic logic. We then show relationships among properties of
classifiers, and between classification performance and robustness, which
suggest robustness-related properties that, as far as we know, have not
been formalized in the literature. To formalize fairness properties, we define a
notion of counterfactual knowledge and show techniques to formalize conditional
indistinguishability by using counterfactual epistemic operators. As far as we
know, this is the first work that uses logical formulas to express statistical
properties of machine learning, and that provides epistemic (resp.
counterfactually epistemic) views on robustness (resp. fairness) of
classifiers.
Comment: SEFM'19 conference paper (full version with errors corrected)
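The epistemic view of robustness can be sketched with a toy one-dimensional classifier (illustrative names and thresholds; the paper works with an abstract Kripke-style accessibility relation over inputs):

```python
def classify(x):
    # Toy threshold classifier on the real line (illustrative only).
    return "pos" if x >= 0.0 else "neg"

def robust(x, eps, grid=100):
    """Epistemic-style robustness: `classify` is robust at x if every
    sampled input within eps of x (the inputs 'indistinguishable'
    from x) receives the same label as x itself."""
    label = classify(x)
    return all(classify(x - eps + i * (2 * eps / grid)) == label
               for i in range(grid + 1))

# robust(1.0, 0.5) holds: every input in [0.5, 1.5] is "pos";
# robust(0.1, 0.5) fails: the ball [-0.4, 0.6] crosses the boundary.
```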
- …