Legal knowledge engineering: Computing, logic and law
The general problem approached in this thesis is that of building computer-based legal advisory programs (otherwise known as expert systems or Intelligent Knowledge Based Systems). Such systems should be able to advise an individual either about the general legal area being investigated or about how to proceed in a given case.
In part, the thesis describes a program (the ELl program) which attempts to confront some of the problems inherent in building such systems. The ELl system is an experimental program (currently handling welfare rights legislation) and a development vehicle; it is not presented as a final, commercially implementable program. We present a detailed criticism of the type of legal knowledge contained within the system.
The second major subject of the thesis, though in part intertwined with the first, concerns the jurisprudential aspects of the attempt to model law by logic, a conjunction seen to be at the heart of the computer/law problem. We suggest that the conjunction offers very little to those interested in the real application of real law, and that this is most forcefully seen when a working computer system models that conjunction.
Our conclusion is that neither logic nor rule-based methods are sufficient for handling legal knowledge. The novelty and import of this thesis lie not simply in presenting a negative conclusion, but in offering a sound theoretical and pragmatic framework for understanding why these methods are insufficient; the limits of the field are, in fact, defined.
Large-Scale Legal Reasoning with Rules and Databases
Traditionally, computational knowledge representation and reasoning focused on rich domains such as the law. The main underlying assumption of traditional legal knowledge representation and reasoning is that knowledge and data are both available in main memory. However, in the era of big data, where large amounts of data are generated daily, an increasing range of scientific disciplines, as well as business and human activities, are becoming data-driven. This chapter summarises existing research on legal representation and reasoning in order to uncover the technical challenges associated both with the integration of rules and databases and with the main concepts of the big data landscape. We expect these challenges to lead naturally to future research directions towards achieving large-scale legal reasoning with rules and databases.
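The integration the chapter surveys can be pictured with a deliberately tiny sketch: instead of loading all facts into a logic engine, a legal rule is evaluated as a filter over a stored relation, the way a database would. The rule and the disclosure records below are invented for illustration and are not from the chapter.

```python
# Hypothetical sketch: a legal rule evaluated over tabular data rather
# than over facts held in main memory. The rule "a disclosure is
# reportable if it concerns health data and lacks consent" becomes a
# row-by-row filter over the relation, as a database engine would run it.

disclosures = [
    {"id": 1, "category": "health", "consent": False},
    {"id": 2, "category": "health", "consent": True},
    {"id": 3, "category": "billing", "consent": False},
]

def reportable(rows):
    # the rule body becomes a selection condition over the relation
    return [r["id"] for r in rows
            if r["category"] == "health" and not r["consent"]]

print(reportable(disclosures))  # [1]
```

In a genuinely large-scale setting the filter would be pushed down to the database as a query rather than executed in application code, which is precisely the rules-meet-databases challenge the chapter identifies.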
Formalizing Kant’s Rules
This paper formalizes part of the cognitive architecture that Kant develops in the Critique of Pure Reason. The central Kantian notion that we formalize is the rule. As we interpret Kant, a rule is not a declarative conditional stating what would be true if such-and-such conditions held. Rather, a Kantian rule is a general procedure, represented by a conditional imperative or permissive, indicating which acts must or may be performed, given certain acts that are already being performed. These acts are not propositions; they do not have truth-values. Our formalization is related to input/output logics, a family of logics designed to capture relations between elements that need not have truth-values. In this paper, we introduce KL3 as a formalization of Kant's conception of rules as conditional imperatives and permissives. We explain how it differs from standard input/output logics, geometric logic, and first-order logic, as well as how it translates natural-language sentences not well captured by first-order logic. Finally, we show how the various distinctions in Kant's much-maligned Table of Judgements emerge as the most natural way of dividing up the various types and sub-types of rule in KL3. Our analysis sheds new light on the way in which normative notions play a fundamental role in the conception of logic at the heart of Kant's theoretical philosophy.
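The rules-as-procedures reading can be illustrated with a toy detachment operator in the style of a (reusable) input/output logic: given a set of acts already being performed, the rules prescribe further acts, and those prescriptions can trigger further rules in turn. This is a minimal sketch, far simpler than the KL3 the paper defines, and the act names are invented.

```python
# Minimal detachment in the spirit of a reusable input/output logic.
# Rules pair a triggering act with a prescribed act; acts are just
# labels, not propositions with truth-values.

def detach(rules, performed):
    """Return the acts prescribed, directly or by chaining, by the rules
    whose triggers are among the performed (or already prescribed) acts."""
    out = set()
    changed = True
    while changed:  # iterate because prescribed acts may trigger further rules
        changed = False
        for trigger, act in rules:
            if trigger in performed | out and act not in out:
                out.add(act)
                changed = True
    return out

rules = [("promise", "keep_promise"), ("keep_promise", "tell_truth")]
print(detach(rules, {"promise"}))  # {'keep_promise', 'tell_truth'}
```

Note that the output set never feeds back into truth-functional reasoning; detachment relates acts to acts, which is the feature that separates input/output-style systems from ordinary conditionals.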
A Logical Method for Policy Enforcement over Evolving Audit Logs
We present an iterative algorithm for enforcing policies represented in a first-order logic which can, in particular, express all transmission-related clauses in the HIPAA Privacy Rule. The logic has three features that raise challenges for enforcement: uninterpreted predicates (used to model subjective concepts in privacy policies), real-time temporal properties, and quantification over infinite domains (such as the set of messages containing personal information). The algorithm operates over audit logs that are inherently incomplete and evolve over time. In each iteration, the algorithm provably checks as much of the policy as possible over the current log and outputs a residual policy that can only be checked when the log is extended with additional information. We prove correctness and termination properties of the algorithm. While these results are developed in a general form, accounting for many different sources of incompleteness in audit logs, we also prove that for the special case of logs that maintain a complete record of all relevant actions, the algorithm effectively enforces all safety and co-safety properties. The algorithm can significantly help automate enforcement of policies derived from the HIPAA Privacy Rule. (Carnegie Mellon University CyLab Technical Report, 51 pages.)
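The iterate-and-residuate idea can be sketched with a deliberately tiny model in which a policy is just a set of atomic obligations ("event e must appear in the log") and the residual is whatever the current, incomplete log cannot yet discharge. The HIPAA-flavoured event names are invented, and the actual algorithm works over first-order temporal formulas rather than atoms.

```python
# Hypothetical sketch of iterative auditing with residual policies.
# Each round checks what the current (incomplete) log can settle and
# returns the residual, to be re-checked once the log is extended.

def audit(obligations, log):
    """Split obligations into those the log satisfies and the residual."""
    satisfied = {o for o in obligations if o in log}
    return satisfied, obligations - satisfied

policy = {"notify_patient", "log_disclosure", "obtain_authorization"}

# Iteration 1: the log only records the disclosure itself.
done, residual = audit(policy, ["log_disclosure"])

# Iteration 2: the log has grown; only the residual needs re-checking.
done2, residual2 = audit(residual, ["log_disclosure", "notify_patient"])
print(residual2)  # {'obtain_authorization'}
```

The key property mirrored here is that each iteration only re-examines the residual, never the already-discharged part of the policy, so work is proportional to what the log extension could newly settle.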
Semantics of logic programs with explicit negation
After a historical introduction, the bulk of the thesis concerns the study of a declarative semantics for logic programs. The main original contributions are:
- WFSX (Well-Founded Semantics with eXplicit negation), a new semantics for logic programs with explicit negation (i.e. extended logic programs), which compares favourably in its properties with other extant semantics.
- A generic characterization schema that facilitates comparisons among a diversity of semantics of extended logic programs, including WFSX.
- An autoepistemic and a default logic corresponding to WFSX, which solve existing problems of the classical approaches to autoepistemic and default logics, and clarify the meaning of explicit negation in logic programs.
- A framework for defining a spectrum of semantics of extended logic programs based on the abduction of negative hypotheses. This framework allows for the characterization of different levels of scepticism/credulity, consensuality, and argumentation. One of the semantics of abduction coincides with WFSX.
- O-semantics, a semantics that uniquely adds more CWA hypotheses to WFSX. The techniques used for doing so are applicable as well to the well-founded semantics of normal logic programs.
- Two approaches for dealing with the contradiction that introducing explicit negation into logic programs may bring, together with a proof of their equivalence. One approach avoids contradiction, and is based on restrictions in the adoption of abductive hypotheses; the other removes contradiction, and is based on a transformation of contradictory programs into noncontradictory ones, guided by the reasons for contradiction.
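The role of explicit negation, and the contradictions it can introduce, can be shown with a toy example that is emphatically not WFSX itself: a naive fixpoint over rules whose heads may be explicitly negated literals, followed by a check for literals derived together with their explicit negations. The bird/penguin program below is a standard illustration, encoded with invented names.

```python
# Toy illustration of explicit negation in extended logic programs.
# Literals are strings; "-p" is the explicit negation of "p".
# Rules are (head, [body literals]); we compute a naive least fixpoint
# and then flag contradictions (both p and -p derived).

def fixpoint(rules):
    """Derive all literals supported by the rules, to a fixed point."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if all(b in derived for b in body) and head not in derived:
                derived.add(head)
                changed = True
    return derived

rules = [("fly", ["bird"]), ("-fly", ["penguin"]),
         ("bird", []), ("penguin", [])]
facts = fixpoint(rules)
contradictions = {lit for lit in facts if "-" + lit in facts}
print(contradictions)  # {'fly'}
```

A semantics such as WFSX must do something principled at exactly this point; the thesis's two approaches correspond to preventing the contradictory derivation from being adopted versus transforming the program so the contradiction disappears.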
Fine-grained access control via policy-carrying data
W. W. Vasconcelos acknowledges the support of the Engineering and Physical Sciences Research Council (EPSRC, UK) within the research project "Scrutable Autonomous Systems" (SAsSY, http://www.scrutable-systems.org, Grant ref. EP/J012084/1). Also in: ACM Transactions on Reconfigurable Technology and Systems (TRETS), Special Section on FCCM 2016 and Regular Papers, Volume 11, Issue 1, March 2018, Article No. 31, ACM, New York, NY, USA. Peer-reviewed postprint.
αCheck: a mechanized metatheory model-checker
The problem of mechanically formalizing and proving metatheoretic properties of programming language calculi, type systems, operational semantics, and related formal systems has received considerable attention recently. However, the dual problem of searching for errors in such formalizations has attracted comparatively little attention. In this article, we present αCheck, a bounded model-checker for metatheoretic properties of formal systems specified using nominal logic. In contrast to the current state of the art for metatheory verification, our approach is fully automatic, does not require expertise in theorem proving on the part of the user, and produces counterexamples in the case that a flaw is detected. We present two implementations of this technique, one based on negation-as-failure and one based on negation elimination, along with experimental results showing that these techniques are fast enough to be used interactively to debug systems as they are developed. (Under consideration for publication in Theory and Practice of Logic Programming (TPLP).)
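The bounded-counterexample idea can be sketched in miniature: enumerate every term of a tiny language up to a size bound and test a stated metatheoretic property, reporting the first failure. The language, the deliberately buggy constant folder, and the "folding preserves value" conjecture below are all invented for illustration; αCheck itself works over nominal-logic specifications, not Python data.

```python
# Hypothetical sketch of bounded counterexample search: exhaustively
# enumerate small terms and check a conjectured property, reporting a
# counterexample if one exists within the bound.
from itertools import product

def terms(size):
    """Yield expression terms of the given nesting depth."""
    if size == 0:
        yield from (("lit", 0), ("lit", 1))
    else:
        for l, r in product(terms(size - 1), repeat=2):
            yield ("add", l, r)

def eval_(t):
    return t[1] if t[0] == "lit" else eval_(t[1]) + eval_(t[2])

def fold(t):
    """Constant folder with a planted bug, standing in for a flawed system."""
    if t[0] == "lit":
        return t
    l, r = fold(t[1]), fold(t[2])
    if l[0] == "lit" and r[0] == "lit":
        return ("lit", l[1] * r[1])   # bug: * instead of +
    return ("add", l, r)

def check(bound):
    """Test the conjecture eval(fold(t)) == eval(t) up to the bound."""
    for n in range(bound + 1):
        for t in terms(n):
            if eval_(fold(t)) != eval_(t):
                return t              # counterexample found
    return None

print(check(2))  # ('add', ('lit', 0), ('lit', 1))
```

As in the article's setting, the payoff is a concrete counterexample rather than a failed proof attempt, which makes the flaw immediately debuggable.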
Program conversing in Portuguese providing a library service
TUGA is a program which converses in Portuguese to provide a library service covering the field of Artificial Intelligence. The objective of designing TUGA was the development of a feasible method for consulting and creating databases in natural Portuguese. The resulting program allows dialogues in which the program and its users behave the way humans normally do in a dialogue setting. The program can answer and question in pre-defined scenarios. Users can question, answer and issue commands in a natural and convenient way, without bothering excessively with the form of the dialogues and sentences. The original contributions of this work are: the treatment of dialogues, the adaptation of Colmerauer's natural language framework to Portuguese, the particular method for evaluating the logical structures involved in Colmerauer's framework, and the library service application itself. The program is implemented in Prolog, a simple and surprisingly powerful programming language essentially identical in syntax and semantics to a subset of predicate calculus in clausal form.
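The closing remark about clausal form can be made concrete with a toy backward-chainer over Horn clauses with variables, the fragment of predicate calculus that Prolog embodies. The library fact and rule below are invented for the example, and bodies are restricted to a single subgoal to keep the sketch short.

```python
# Toy Prolog-style backward chaining over Horn clauses.
# Terms are tuples; variables are strings starting with '?'.

def unify(a, b, subst):
    """Try to unify terms a and b under subst; return extended subst or None."""
    a, b = subst.get(a, a), subst.get(b, b)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def solve(goal, clauses, subst=None):
    """Yield substitutions under which goal follows from the clauses
    (clause bodies limited to at most one subgoal in this sketch)."""
    for head, body in clauses:
        s = unify(goal, head, dict(subst or {}))
        if s is None:
            continue
        if not body:
            yield s
        else:
            yield from solve(body[0], clauses, s)

def walk(term, subst):
    """Chase variable bindings to read off an answer."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

clauses = [
    (("about", "tuga_paper", "ai"), []),               # a fact
    (("recommend", "?b"), [("about", "?b", "ai")]),    # a rule
]
answer = next(solve(("recommend", "?x"), clauses))
print(walk("?x", answer))  # tuga_paper
```

Reading each clause as an implication from body to head is exactly the "subset of predicate calculus in clausal form" the abstract alludes to; the interpreter merely performs resolution over it.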