PriCL: Creating a Precedent, a Framework for Reasoning about Privacy Case Law
We introduce PriCL: the first framework for expressing and automatically
reasoning about privacy case law by means of precedent. PriCL is parametric in
an underlying logic for expressing world properties, and provides support for
court decisions, their justification, the circumstances in which the
justification applies, as well as court hierarchies. Moreover, the framework
offers a tight connection between privacy case law and the notion of norms that
underlies existing rule-based privacy research. In terms of automation, we
identify the major reasoning tasks for privacy cases such as deducing legal
permissions or extracting norms. For solving these tasks, we provide generic
algorithms that have particularly efficient realizations within an expressive
underlying logic. Finally, we derive a definition of deducibility based on
legal concepts and subsequently propose an equivalent characterization in terms
of logic satisfiability.
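The deducibility-as-satisfiability characterization can be illustrated with a toy propositional checker; all formulas, atom names, and the precedent below are invented for illustration (PriCL itself is parametric in a richer underlying logic):

```python
from itertools import product

# Toy illustration: a goal is deducible from the premises iff
# premises AND (not goal) is unsatisfiable.  Formulas are nested tuples
# over invented propositional atoms.
def holds(f, v):
    if isinstance(f, str):
        return v[f]
    op, *args = f
    if op == "not":
        return not holds(args[0], v)
    if op == "and":
        return holds(args[0], v) and holds(args[1], v)
    if op == "->":
        return (not holds(args[0], v)) or holds(args[1], v)
    raise ValueError(op)

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*(atoms(x) for x in f[1:]))

def deducible(premises, goal):
    fs = list(premises) + [("not", goal)]
    names = sorted(set().union(*(atoms(f) for f in fs)))
    # brute-force SAT check over all valuations of the atoms
    return not any(all(holds(f, dict(zip(names, vs))) for f in fs)
                   for vs in product([False, True], repeat=len(names)))

# A precedent's justification as a rule, plus the facts of a new case:
precedent = ("->", "data_subject_is_minor", "processing_forbidden")
assert deducible([precedent, "data_subject_is_minor"], "processing_forbidden")
assert not deducible([precedent], "processing_forbidden")
```

A real realization would hand the unsatisfiability check to a dedicated reasoner rather than enumerate valuations.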
Philosophy and the practice of Bayesian statistics
A substantial school in the philosophy of science identifies Bayesian
inference with inductive inference and even rationality as such, and seems to
be strengthened by the rise and practical success of Bayesian statistics. We
argue that the most successful forms of Bayesian statistics do not actually
support that particular philosophy but rather accord much better with
sophisticated forms of hypothetico-deductivism. We examine the actual role
played by prior distributions in Bayesian models, and the crucial aspects of
model checking and model revision, which fall outside the scope of Bayesian
confirmation theory. We draw on the literature on the consistency of Bayesian
updating and also on our experience of applied work in social science.
Clarity about these matters should benefit not just philosophy of science,
but also statistical practice. At best, the inductivist view has encouraged
researchers to fit and compare models without checking them; at worst,
theorists have actively discouraged practitioners from performing model
checking because it does not fit into their framework.
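The kind of model checking the authors defend can be sketched as a posterior predictive check; the data, the N(mean, 1) model with a flat prior, and the test statistic below are all invented for illustration:

```python
import random
import statistics

# Posterior predictive check sketch: simulate replicated data from the
# fitted model and ask whether a test statistic of the observed data is
# extreme relative to the replications.
random.seed(0)
y = [random.gauss(0, 1) for _ in range(50)] + [8.0]  # one gross outlier
n = len(y)
ybar = statistics.mean(y)

def ppc_pvalue(T, draws=2000):
    # Posterior for the mean under known unit variance: mean | y ~ N(ybar, 1/n)
    exceed = 0
    for _ in range(draws):
        mu = random.gauss(ybar, (1 / n) ** 0.5)          # posterior draw
        y_rep = [random.gauss(mu, 1) for _ in range(n)]  # replicated data
        if T(y_rep) >= T(y):
            exceed += 1
    return exceed / draws

p = ppc_pvalue(max)
# An extreme p-value flags misfit that Bayesian updating alone never reveals.
```

The point of the sketch is the workflow, not the particular model: fitting and comparing models without a check like this is exactly what the paper warns against.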
Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support
A framework and methodology---termed LogiKEy---for the design and engineering
of ethical reasoners, normative theories and deontic logics is presented. The
overall motivation is the development of suitable means for the control and
governance of intelligent autonomous systems. LogiKEy's unifying formal
framework is based on semantical embeddings of deontic logics, logic
combinations and ethico-legal domain theories in expressive classical
higher-order logic (HOL). This meta-logical approach enables the provision of
powerful tool support in LogiKEy: off-the-shelf theorem provers and model
finders for HOL assist the LogiKEy designer of ethical intelligent agents in
flexibly experimenting with underlying logics and their combinations, with
ethico-legal domain theories, and with concrete examples---all at the same
time. Continuous improvements of these off-the-shelf provers directly
improve reasoning performance in LogiKEy. Case studies, in which the
LogiKEy framework and methodology have been applied and tested, give evidence
that HOL's undecidability often does not hinder efficient experimentation.
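A semantical embedding in the spirit of LogiKEy can be mimicked in miniature; the worlds, accessibility relation, and proposition below are illustrative stand-ins for the actual HOL embedding:

```python
# Toy shallow embedding of standard deontic logic: propositions are
# functions from worlds to booleans, and O(p) holds at w iff p holds at
# every world ideal-accessible from w.  All names here are invented.
WORLDS = range(4)
IDEAL = {0: {1, 2}, 1: {1}, 2: {2}, 3: {1, 2}}  # w -> its ideal worlds

def Neg(p):     return lambda w: not p(w)
def Impl(p, q): return lambda w: (not p(w)) or q(w)
def O(p):       return lambda w: all(p(v) for v in IDEAL[w])  # obligation
def valid(p):   return all(p(w) for w in WORLDS)

keep_data_private = lambda w: w in {1, 2}
assert valid(O(keep_data_private))            # the obligation holds everywhere
assert not valid(O(Neg(keep_data_private)))   # its negation is not obligatory
```

In LogiKEy the same move is made inside HOL itself, so off-the-shelf HOL provers can discharge such validity checks automatically.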
“Efficiency Flooding”: Black-Box Frontiers and Policy Implications
This research aims to contribute to the discussion on the importance of theoretically consistent modelling for stochastic efficiency analysis. The robustness of policy suggestions based on inferences from efficiency measures crucially depends on theoretically well-founded estimates. The theoretical consistency of recently published technical efficiency estimates for different sectors and countries is critically reviewed. The results confirm the need for a posteriori checking of the regularity of the estimated frontier by the researcher and, if necessary, the a priori imposition of the theoretical requirements.
Keywords: Efficiency Analysis, Functional Form, Mathematical Modelling
The Need for Theoretically Consistent Efficiency Frontiers
The availability of efficiency estimation software, freely distributed via the internet and relatively easy to use, has recently inflated the number of corresponding applications. The resulting efficiency estimates are often used without a critical assessment with respect to the literature on theoretical consistency, flexibility and the choice of the appropriate functional form. The robustness of policy suggestions based on inferences from efficiency measures nevertheless crucially depends on theoretically well-founded estimates. This paper addresses stochastic efficiency measurement by critically reviewing the theoretical consistency of recently published technical efficiency estimates. The results confirm the need for a posteriori checking of the regularity of the estimated frontier by the researcher and, if necessary, the a priori imposition of the theoretical requirements.
Keywords: functional form, stochastic efficiency analysis, theoretical consistency, Research and Development/Tech Change/Emerging Technologies, C51, D24, Q12
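The a posteriori regularity check these papers call for can be sketched for a single-input translog frontier; the coefficients and data points are invented for the sketch, not estimates from any reviewed study:

```python
import math

# Illustrative translog frontier: ln y = b0 + b1*ln x + 0.5*b11*(ln x)^2.
# Monotonicity requires the input elasticity b1 + b11*ln x to be
# non-negative at every observation.
b0, b1, b11 = 0.2, 0.6, -0.15   # made-up coefficient "estimates"

def elasticity(x):
    return b1 + b11 * math.log(x)

inputs = [0.5, 1.0, 5.0, 20.0, 120.0]          # made-up observations
violations = [x for x in inputs if elasticity(x) < 0]
# The share of violating observations signals whether the theoretical
# requirements should instead be imposed a priori during estimation.
share_violating = len(violations) / len(inputs)
```

Here monotonicity fails at the largest input, which is exactly the situation where reporting the frontier without a regularity check would mislead policy inferences.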
A Systematic Aspect-Oriented Refactoring and Testing Strategy, and its Application to JHotDraw
Aspect oriented programming aims at achieving better modularization for a
system's crosscutting concerns in order to improve its key quality attributes,
such as evolvability and reusability. Consequently, the adoption of
aspect-oriented techniques in existing (legacy) software systems is of interest
to remediate software aging. The refactoring of existing systems to employ
aspect-orientation will be considerably eased by a systematic approach that
will ensure a safe and consistent migration.
In this paper, we propose a refactoring and testing strategy that supports
such an approach and consider issues of behavior conservation and (incremental)
integration of the aspect-oriented solution with the original system. The
strategy is applied to the JHotDraw open source project and illustrated on a
group of selected concerns. Finally, we abstract from the case study and
present a number of generic refactorings which contribute to an incremental
aspect-oriented refactoring process and associate particular types of
crosscutting concerns to the model and features of the employed aspect
language. The contributions of this paper are both in the area of supporting
migration towards aspect-oriented solutions and supporting the development of
aspect languages that are better suited for such migrations.
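Although the paper works with AspectJ and JHotDraw, the core idea of extracting a crosscutting concern while conserving behavior can be sketched language-neutrally with a Python decorator; all names below are invented:

```python
import functools

# A decorator plays the role of an extracted crosscutting concern
# (here: call tracing), analogous to advice woven around a method.
trace_log = []

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        trace_log.append(fn.__name__)   # the concern, now in one place
        return fn(*args, **kwargs)
    return wrapper

# The original "scattered" version and the refactored version must agree,
# which a regression test can check (behavior conservation).
def move_figure(dx, dy):
    return (dx, dy)

move_figure_refactored = traced(move_figure)
assert move_figure_refactored(1, 2) == move_figure(1, 2)  # same behavior
assert trace_log == ["move_figure"]
```

The paper's strategy adds exactly this kind of test-backed check at each incremental refactoring step.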
Approximate reasoning using terminological models
Term Subsumption Systems (TSSs) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have a very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in a TSS. Also, as definitional knowledge is explicit and separate from the heuristic knowledge used for plausible inferences, the maintainability of expert systems could be improved.
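One concrete way to attach certainty values to uncertain propositions and combine parallel rule support is a MYCIN-style certainty-factor scheme, shown here purely as an illustration; the paper's own approximate reasoning model may differ:

```python
# MYCIN-style combination of certainty factors in [-1, 1]: two rules that
# each lend partial support to the same uncertain proposition reinforce
# each other; conflicting evidence partially cancels.
def combine(cf1, cf2):
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two plausible rules each support "concept A subsumes this instance":
cf = combine(0.6, 0.5)   # reinforcing evidence yields 0.8
```

A justification mechanism, as the abstract describes, would then record which rules contributed each value so that inheritance conflicts can be traced and resolved.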