
    A study of normalisation through subatomic logic


    Subatomic Proof Systems: Splittable Systems

    This paper presents the first in a series of results that allow us to develop a theory providing finer control over the complexity of normalisation, and in particular of cut elimination. By considering atoms as self-dual non-commutative connectives, we are able to classify a vast class of inference rules in a uniform and very simple way. This allows us to define simple conditions that are easily verifiable and that ensure normalisation and cut elimination by way of a general theorem. In this paper we define and consider splittable systems, which essentially comprise a large class of linear logics, including MLL and BV, and we prove for them a splitting theorem, guaranteeing cut elimination and other admissibility results as corollaries. In papers to follow, we will extend this result to non-linear logics. The final outcome will be a comprehensive theory giving a uniform treatment for most existing logics and providing a blueprint for the design of future proof systems.
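
    To make the subatomic idea more concrete, here is a rough sketch with notation simplified from the paper: an atom a is read as a self-dual non-commutative connective over the units f and t, so that a is the expression f a t and its dual is t a f, and the inference rules share one medial-like shape. The exact scheme and its side conditions are the paper's; the LaTeX below is only an approximation of the idea.

    ```latex
    % Hedged sketch of the uniform subatomic rule shape (simplified; the
    % paper's scheme carries side conditions on \alpha and \beta omitted here).
    \[
      \frac{(A \mathbin{\alpha} B) \mathbin{\beta} (C \mathbin{\alpha} D)}
           {(A \mathbin{\beta} C) \mathbin{\alpha} (B \mathbin{\beta} D)}
    \]
    % Taking \alpha to be the atom a, \beta = \vee, A = C = \mathsf{f} and
    % B = D = \mathsf{t} recovers atomic contraction, since
    % \mathsf{f} \mathbin{a} \mathsf{t} is read as the atom a itself:
    \[
      \frac{(\mathsf{f} \mathbin{a} \mathsf{t}) \vee (\mathsf{f} \mathbin{a} \mathsf{t})}
           {(\mathsf{f} \vee \mathsf{f}) \mathbin{a} (\mathsf{t} \vee \mathsf{t})}
      \qquad\text{i.e.}\qquad
      \frac{a \vee a}{a}
    \]
    ```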

    Removing Cycles from Proofs


    The Glass Box Approach: Verifying Contextual Adherence to Values

    Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains, such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to be deployed safely, then people need to understand how the system interprets its inputs and whether it adheres to the relevant moral values. Even though transparency is often seen as the requirement in this case, realistically it might not always be possible or desirable, whereas the need to ensure that the system operates within set moral bounds remains. In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass Box’ around the system by mapping moral values into contextual, verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value(s) in a specific context. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems, whereas making the context explicit exposes the different perspectives and frameworks that are taken into account when subsuming moral values into specific norms and functionalities. We present a modal logic formalisation of the Glass Box approach which is domain-agnostic, implementable, and expandable.
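
    Purely as an illustration of the input/output monitoring idea: the paper's actual formalisation is in modal logic, whereas the sketch below recasts it as a hypothetical Python monitor in which a norm is a verifiable predicate over observable inputs and outputs, valid only in an explicit context. All names are illustrative assumptions, not the paper's API.

    ```python
    from dataclasses import dataclass
    from typing import Any, Callable, Dict, List

    # A norm is a verifiable predicate over an observable input/output pair,
    # scoped to an explicit context (hypothetical names, not the paper's API).
    @dataclass
    class Norm:
        description: str
        context: str
        holds: Callable[[Dict[str, Any], Dict[str, Any]], bool]

    @dataclass
    class GlassBox:
        value: str        # the moral value the norms operationalise
        norms: List[Norm]

        def check(self, context: str, inputs: Dict[str, Any],
                  outputs: Dict[str, Any]) -> List[str]:
            """Return the norms violated in this context; an empty list means
            the system stayed 'inside the box' for this observation."""
            return [n.description
                    for n in self.norms
                    if n.context == context and not n.holds(inputs, outputs)]

    # Example: 'fairness' in a consumer-finance context, subsumed into a norm
    # that the decision must not change when a protected attribute is masked.
    fairness_box = GlassBox(
        value="fairness",
        norms=[Norm(
            description="decision must be identical with protected attribute masked",
            context="consumer-finance",
            holds=lambda i, o: o["decision"] == o["decision_masked"],
        )],
    )

    violations = fairness_box.check(
        "consumer-finance",
        inputs={"applicant_id": 42},
        outputs={"decision": "approve", "decision_masked": "approve"},
    )
    assert violations == []  # the system adhered to the value in this context
    ```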

    Integrating comprehensive human oversight in drone deployment: A conceptual framework applied to the case of military surveillance drones

    Accountability is a value often mentioned in the debate on intelligent systems and their increased pervasiveness in our society. When focusing specifically on autonomous systems, a critical gap emerges: although there is much work on governance and attribution of accountability, there is a significant lack of methods for the operationalisation of accountability within the socio-technical layer of autonomous systems. In the case of autonomous unmanned aerial vehicles, or drones, the critical question of how to maintain accountability as they undertake fully autonomous flights becomes increasingly important as their uses multiply in both the commercial and military fields. In this paper, we aim to fill the operationalisation gap by proposing a socio-technical framework to guarantee human oversight and accountability in drone deployments, showing its enforceability in the real case of military surveillance drones. By keeping a focus on accountability and human oversight as values, we align with the emphasis placed on human responsibility, while requiring a concretisation of what these principles mean for each specific application, connecting them with concrete socio-technical requirements. In addition, by constraining the framework to observable elements of pre- and post-deployment, we rely on no assumptions about the internal workings of the drone or about the technical fluency of the operator.
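
    As a hypothetical sketch of what constraining oversight to observable pre- and post-deployment elements could look like in code (the record fields and checks below are invented for illustration, not taken from the paper):

    ```python
    from dataclasses import dataclass
    from typing import Any, Callable, Dict, List

    # Accountability operationalised as observable checks before and after a
    # deployment, with no assumptions about the drone's internal workings.
    @dataclass
    class Deployment:
        mission_id: str
        responsible_operator: str
        pre_record: Dict[str, Any]    # e.g. mission parameters, authorisation
        post_record: Dict[str, Any]   # e.g. flight log, recorded outcomes

    Check = Callable[[Deployment], bool]

    pre_checks: List[Check] = [
        lambda d: d.responsible_operator != "",     # a named human is accountable
        lambda d: "authorisation" in d.pre_record,  # the deployment was approved
    ]
    post_checks: List[Check] = [
        lambda d: "flight_log" in d.post_record,    # outcomes are reviewable
    ]

    def oversight_satisfied(d: Deployment) -> bool:
        """True iff every observable pre- and post-deployment check passed."""
        return all(check(d) for check in pre_checks + post_checks)
    ```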

    Let Me Take Over : Variable Autonomy for Meaningful Human Control

    As Artificial Intelligence (AI) continues to expand its reach, the demand for human control and for AI systems that adhere to our legal, ethical, and social values also grows. Many international and national institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making effective control over systems a challenge. “Human oversight” is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement and that such a misconception may limit the use of automation where it can otherwise provide substantial benefit across industries. We therefore propose the development of systems with variable autonomy, that is, dynamically adjustable levels of autonomy, as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
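
    Purely as an illustrative sketch of what dynamically adjustable levels of autonomy might look like in code (the levels, class, and methods below are hypothetical, not the paper's design): a controller routes each decision to the system or the human according to the current level, lets the human take over at run time, and logs every decision so the trail supports accountability and transparency.

    ```python
    from enum import IntEnum
    from typing import Any, List, Tuple

    class AutonomyLevel(IntEnum):
        MANUAL = 0       # the human makes every decision
        SUPERVISED = 1   # the system proposes, the human confirms or overrides
        FULL = 2         # the system decides; the human can take over at any time

    class VariableAutonomyController:
        """Hypothetical sketch: the autonomy level is adjustable at run time,
        and every decision is logged with its actor, so control stays
        meaningful and the decision trail stays auditable."""

        def __init__(self, level: AutonomyLevel = AutonomyLevel.FULL):
            self.level = level
            self.log: List[Tuple[str, Any]] = []

        def take_over(self) -> None:
            """The human lowers the autonomy level, claiming direct control."""
            self.level = AutonomyLevel.MANUAL
            self.log.append(("human", "took over control"))

        def decide(self, system_choice: Any, human_choice: Any = None) -> Any:
            """Route one decision according to the current autonomy level."""
            if self.level == AutonomyLevel.FULL:
                actor, choice = "system", system_choice
            elif self.level == AutonomyLevel.MANUAL or human_choice is not None:
                actor, choice = "human", human_choice
            else:  # SUPERVISED: the system proposes, pending confirmation
                actor, choice = "system-proposal", system_choice
            self.log.append((actor, choice))  # auditable decision trail
            return choice

    ctrl = VariableAutonomyController()
    ctrl.decide(system_choice="continue route")    # fully autonomous decision
    ctrl.take_over()                               # human claims control
    ctrl.decide(system_choice="continue route",
                human_choice="return to base")     # human decision is logged
    ```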