CQE in OWL 2 QL: A "Longest Honeymoon" Approach (extended version)
Controlled Query Evaluation (CQE) has recently been studied in the context of Semantic Web ontologies. The goal of CQE is to conceal some query answers so as to prevent external users from inferring confidential information. In general, there exist multiple, mutually incomparable ways of concealing answers, and previous CQE approaches choose in advance which answers are visible and which are not. In this paper, instead, we study a dynamic CQE method: we propose to alter the answer to the current query based on the evaluation of previous ones. We aim at a system that, besides being able to protect confidential data, is maximally cooperative, which intuitively means that it answers affirmatively to as many queries as possible; it achieves this goal by delaying answer modifications as much as possible. We also show that the behavior we get cannot be intensionally simulated through a static approach, independent of query history. Interestingly, for OWL 2 QL ontologies and policies expressed through denials, query evaluation under our semantics is first-order rewritable, and thus in AC0 in data complexity. This paves the way for the development of practical algorithms, which we also preliminarily discuss in the paper.

Comment: This paper is the extended version of "P. Bonatti, G. Cima, D. Lembo, L. Marconi, R. Rosati, L. Sauro, and D. F. Savo. Controlled query evaluation in OWL 2 QL: A "Longest Honeymoon" approach", accepted for publication at ISWC 2022.
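The history-dependent behavior described above can be illustrated with a small sketch. The following Python fragment is a minimal, hypothetical censor (the names and the entailment check are illustrative assumptions, not the paper's algorithm): it answers each query truthfully for as long as possible, and starts concealing answers only when the accumulated honest answers would let the user derive a secret.

```python
# Minimal sketch of a "longest honeymoon" style dynamic censor.
# `entails` is a toy placeholder: a conjunctive secret "x&y" counts as
# entailed when both atoms have been disclosed. A real system would
# invoke an OWL 2 QL reasoner here (an assumption, not the paper's code).

def entails(known: set, secret: str) -> bool:
    return all(atom in known for atom in secret.split("&"))

class DynamicCensor:
    def __init__(self, dataset: set, secrets: set):
        self.dataset = dataset       # the protected data
        self.secrets = secrets       # denial policy
        self.disclosed = set()       # answers confirmed so far (history)

    def ask(self, query: str) -> bool:
        """Answer truthfully as long as possible; alter the answer only
        when confirming it would let the user derive a secret from the
        history of previously confirmed answers."""
        if query not in self.dataset:
            return False             # honest negative answers are safe here
        tentative = self.disclosed | {query}
        if any(entails(tentative, s) for s in self.secrets):
            return False             # conceal: pretend the fact is absent
        self.disclosed.add(query)
        return True

# The answer to a query depends on the order queries are asked in:
censor = DynamicCensor(dataset={"a", "b"}, secrets={"a&b"})
print(censor.ask("a"))  # True  -- disclosing "a" alone is harmless
print(censor.ask("b"))  # False -- together with "a" it would reveal a&b
```

Note that swapping the two queries flips which answer is concealed, which is exactly the order-dependence a static, history-independent censor cannot reproduce.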
A novel approach to controlled query evaluation in DL-Lite
In Controlled Query Evaluation (CQE), confidential data are protected through a declarative policy and an (optimal) censor, which guarantees that answers to queries are maximized without disclosing secrets. In this paper we consider CQE over Description Logic ontologies and study query answering over all optimal censors. We establish the data complexity of the problem for ontologies specified in DL-Lite_R and for variants of the censor language, i.e., the language used by the censor to enforce the policy. In our investigation we also analyze the relationship between CQE and the problem of Consistent Query Answering.
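As a rough illustration of the skeptical flavor of this semantics, the sketch below (hypothetical names, not from the paper) returns only those answers confirmed by every optimal censor, i.e., the intersection of their answer sets.

```python
# Sketch: skeptical query answering over all optimal censors.
# `optimal_censors` would be computed from the ontology and policy;
# here each censor is modeled simply as the set of facts it discloses.

def certain_answers(optimal_censors: list, query_atoms: set) -> set:
    """Return the query answers granted by *every* optimal censor."""
    answers = set(query_atoms)
    for censor in optimal_censors:
        answers &= censor
    return answers

# Two incomparable optimal censors protecting the secret {a, b}:
# one hides a, the other hides b. Only c survives skeptical answering.
censors = [{"b", "c"}, {"a", "c"}]
print(certain_answers(censors, {"a", "b", "c"}))  # {'c'}
```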
Comparative analysis of Fuzzy Tsukamoto's membership functions for determining irrigated rice field feasibility status
The shape of a fuzzy set's membership curve (trapezoidal, triangular, or linear) plays an important role in a fuzzy logic system: the chosen shape determines the membership function and affects the fuzzy output value. Previous studies generally reused the curves employed in earlier work without explaining the choice. This is problematic because there is no guide for selecting an appropriate membership function model for the parameters used in the fuzzy process, so most researchers simply adopt the membership functions commonly used in prior studies on similar cases. The purpose of this study was to determine the effect of selecting trapezoidal versus triangular curves on the performance of Tsukamoto fuzzy logic for determining the suitability status of rice fields. The research methodology comprised three main stages. The first stage was data collection: soil pH, soil moisture, and air temperature values were gathered in rice fields. The second stage was the implementation of Tsukamoto fuzzy inference, once with trapezoidal and once with triangular membership functions. The third stage was a comparative analysis of the Tsukamoto fuzzy outputs for the two curve shapes. The results indicate no significant performance difference between the two membership functions: the trapezoidal membership function achieved a slightly higher accuracy of 93%, versus 90% for the triangular membership function.
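For readers unfamiliar with the method, the sketch below shows the two membership shapes being compared and a minimal Tsukamoto-style inference step. All ranges, rules, and consequents are illustrative assumptions; the study's actual rule base is not reproduced here.

```python
# Sketch: triangular vs. trapezoidal membership functions and a
# minimal Tsukamoto inference step. Parameters are made up for
# illustration only.

def triangular(x, a, b, c):
    """Membership rises linearly on [a, b] and falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Membership rises on [a, b], stays 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def z_increasing(alpha, zmin, zmax):
    """Invert an increasing linear consequent at firing strength alpha."""
    return zmin + alpha * (zmax - zmin)

def z_decreasing(alpha, zmin, zmax):
    return zmax - alpha * (zmax - zmin)

def tsukamoto(rules):
    """Tsukamoto defuzzification: average each rule's crisp consequent
    z_i weighted by its firing strength alpha_i."""
    num = sum(alpha * z for alpha, z in rules)
    den = sum(alpha for alpha, _ in rules)
    return num / den if den else 0.0

# Example: suitability (0-100) from soil pH using two rules.
ph = 6.2
alpha_acidic  = triangular(ph, 3.0, 4.5, 6.5)   # "pH is acidic"
alpha_neutral = triangular(ph, 5.5, 7.0, 8.5)   # "pH is neutral"
score = tsukamoto([
    (alpha_acidic,  z_decreasing(alpha_acidic, 0.0, 100.0)),   # acidic -> low
    (alpha_neutral, z_increasing(alpha_neutral, 0.0, 100.0)),  # neutral -> high
])
print(f"suitability score: {score:.1f}")
```

Swapping `triangular` for `trapezoidal` in the antecedents (with a fourth breakpoint per fuzzy set) is the only change needed to reproduce the study's comparison setup.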
Ubik--a framework for the development of distributed organizations
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989. Includes bibliographical references (leaves 206-210). By Stephen Peter de Jong.
OPTIMIZATION OF NONSTANDARD REASONING SERVICES
The increasing adoption of semantic technologies and the corresponding increasing complexity of application requirements are motivating extensions to the standard reasoning paradigms and services supported by such technologies. This thesis focuses on two such extensions: nonmonotonic reasoning and inference-proof access control.
Expressing knowledge via general rules that admit exceptions is an approach that has been commonly adopted for centuries in areas such as law and science, and more recently in object-oriented programming and computer security. The experiences in developing complex biomedical knowledge bases reported in the literature show that direct support for defeasible properties and exceptions would be of great help.
On the other hand, there is ample evidence of the need for knowledge confidentiality measures. Ontology languages and Linked Open Data are increasingly being used to encode the private knowledge of companies and public organizations. Semantic Web techniques facilitate merging different knowledge sources and extracting implicit information, thereby putting at risk the security and privacy of individuals. The same reasoning capabilities, however, can be exploited to protect the confidentiality of knowledge.
Both nonmonotonic inference and secure knowledge base access rely on nonstandard reasoning procedures. The design and realization of these algorithms in a scalable way (appropriate to the ever-increasing size of ontologies and knowledge bases) is carried out by means of a diversified range of optimization techniques, such as module extraction and incremental reasoning. Extensive experimental evaluation shows the efficiency of the developed optimization techniques: (i) for the first time, performance compatible with real-time reasoning is obtained for large nonmonotonic ontologies, and (ii) the secure ontology access control proves to be already compatible with practical use in the e-health application scenario.
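As a toy illustration of general rules that admit exceptions, the sketch below lets a more specific rule override a more general one. This encoding is purely illustrative and does not reproduce the thesis's description-logic machinery.

```python
# Toy sketch of defeasible rules with exceptions: more specific rules
# (higher priority) override more general ones.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]  # precondition on an individual
    conclusion: bool                 # e.g., answer to "can it fly?"
    priority: int                    # higher = more specific

def conclude(individual: dict, rules: list) -> Optional[bool]:
    """Apply the highest-priority rule whose precondition holds."""
    applicable = [r for r in rules if r.applies(individual)]
    if not applicable:
        return None
    return max(applicable, key=lambda r: r.priority).conclusion

rules = [
    Rule("birds normally fly", lambda x: x.get("bird", False), True, 1),
    Rule("penguins do not fly", lambda x: x.get("penguin", False), False, 2),
]
print(conclude({"bird": True}, rules))                   # True
print(conclude({"bird": True, "penguin": True}, rules))  # False (exception)
```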
ICE System: Interruptible control expert system
The Interruptible Control Expert (ICE) System is based on an architecture designed to provide a strong foundation for real-time production rule expert systems. Three principles guide the development of ICE. First, a practical delivery platform must be provided: no specialized hardware may be used to compensate for deficiencies in the software design. Second, knowledge of the environment and the rule base is exploited to improve the performance of a delivered system. Third, the system responds to the most critical event, at the expense of more trivial tasks. Minimal time is spent classifying the potential importance of environmental events, with the majority of the time used for finding responses. A feature of the system, derived from all three principles, is the lack of working memory. By using a priori information, a fixed amount of memory can be specified for the hardware platform. The absence of working memory removes the dangers of garbage collection during continuous operation of the controller.
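A rough sketch of the fixed-memory, priority-first dispatch idea follows. It illustrates the stated principles only; the table size, slot layout, and handler names are assumptions, not the ICE implementation.

```python
# Sketch of priority-first dispatch over a preallocated event table.
# Fixed memory is reserved up front, so the controller creates no
# garbage while running.

MAX_EVENTS = 8  # sized a priori from knowledge of the environment

# Preallocated slots: [occupied, priority, handler].
slots = [[False, 0, None] for _ in range(MAX_EVENTS)]

def post(priority, handler):
    """Record an event in a free slot; fail fast if the table is full."""
    for slot in slots:
        if not slot[0]:
            slot[0], slot[1], slot[2] = True, priority, handler
            return True
    return False  # table full: capacity was fixed by design

def dispatch_one():
    """Classify cheaply (one max scan), then spend the time budget on
    the response to the most critical pending event."""
    best = max((s for s in slots if s[0]), key=lambda s: s[1], default=None)
    if best is not None:
        best[0] = False  # free the slot before running the handler
        best[2]()        # the most critical event wins

post(1, lambda: print("log routine reading"))
post(9, lambda: print("shut valve NOW"))
dispatch_one()  # -> "shut valve NOW": critical event handled first
```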
A study of instance-based algorithms for supervised learning tasks: mathematical, empirical, and psychological evaluations
This dissertation introduces a framework for specifying instance-based algorithms that can solve supervised learning tasks. These algorithms input a sequence of instances and yield a partial concept description, which is represented by a set of stored instances and associated information. This description can be used to predict values for subsequently presented instances. The thesis of this framework is that extensional concept descriptions and lazy generalization strategies can support efficient supervised learning behavior.
The instance-based learning framework consists of three components. The pre-processor component transforms an instance into a more palatable form for the performance component, which computes the instance's similarity to a set of stored instances and yields a prediction for its target value(s). The similarity and prediction functions thus impose generalizations on the stored instances to inductively derive predictions. The learning component assesses the accuracy of these predictions and updates the partial concept descriptions to improve their predictive accuracy.
This framework is evaluated in four ways. First, its generality is evaluated by mathematically determining the classes of symbolic concepts and numeric functions that can be closely approximated by IB1, a simple algorithm specified by this framework. Second, the framework is empirically evaluated for its ability to specify algorithms that improve IB1's learning efficiency: significant efficiency improvements are obtained by instance-based algorithms that reduce storage requirements, tolerate noisy data, and learn domain-specific similarity functions, respectively. Alternative component definitions for these algorithms are empirically analyzed in a set of five high-level parameter studies. Third, the framework is evaluated for its ability to specify psychologically plausible process models for categorization tasks; results from subject experiments indicate a positive correlation between a model's ability to utilize attribute correlation information and its ability to explain psychological phenomena. Finally, the framework is evaluated for its ability to explain and relate a dozen prominent instance-based learning systems. The survey shows that this framework requires only slight modifications to fit these highly diverse systems. Relationships with edited nearest neighbor algorithms, case-based reasoners, and artificial neural networks are also described.
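A minimal sketch of the IB1 idea (nearest-neighbor prediction over stored instances, with lazy incremental storage) is shown below. It is simplified from the framework's description: the pre-processor is omitted and the similarity function is assumed to be Euclidean.

```python
# Minimal sketch of an IB1-style instance-based learner: predict the
# class of the most similar stored instance, then store the new one.
# Simplified illustration; the dissertation's framework adds a
# pre-processor and richer learning components.

import math

class IB1:
    def __init__(self):
        self.stored = []  # the partial concept description: (x, label)

    @staticmethod
    def similarity(a, b):
        """Negative Euclidean distance: larger means more similar."""
        return -math.dist(a, b)

    def predict(self, x):
        """Performance component: label of the most similar instance."""
        if not self.stored:
            return None
        _, label = max(self.stored, key=lambda s: self.similarity(x, s[0]))
        return label

    def train(self, x, label):
        """Learning component (lazy): assess, then store the instance."""
        _ = self.predict(x)  # accuracy bookkeeping would happen here
        self.stored.append((x, label))

learner = IB1()
for x, y in [((0.0, 0.0), "A"), ((1.0, 1.0), "B"), ((0.1, 0.2), "A")]:
    learner.train(x, y)
print(learner.predict((0.9, 0.8)))  # -> "B"
```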