Implementing imperfect information in fuzzy databases
Information in real-world applications is often vague, imprecise and uncertain. Ignoring this inherently imperfect nature of real-world data distorts human perception of the real world and may discard substantial information that could be very useful in data-intensive applications. In the database context, several fuzzy database models have been proposed, introducing fuzziness at different levels; common to all these proposals is support for fuzziness at the attribute level. This paper first proposes a rich set of data types devoted to modelling the different kinds of imperfect information, and then proposes a formal approach to implementing these data types. The proposed approach was implemented within an object-relational database model, but it is generic enough to be incorporated into other database models.
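As a rough illustration of the kind of data type the abstract refers to (this is a generic sketch, not the paper's actual type system), an imprecise attribute value can be represented as a possibility distribution: a mapping from candidate values to possibility degrees in [0, 1].

```python
class PossibilisticValue:
    """An imprecise attribute value, e.g. an age that is "about 30"."""

    def __init__(self, distribution):
        # distribution: dict mapping candidate value -> possibility degree in [0, 1]
        self.distribution = dict(distribution)

    def possibility(self, value):
        # Degree to which `value` is a possible actual value (0 if unlisted).
        return self.distribution.get(value, 0.0)

    def possibility_of_set(self, values):
        # Possibility that the actual value lies in `values`:
        # the maximum over the candidates (the standard possibility measure).
        return max((self.possibility(v) for v in values), default=0.0)


# "about 30": 30 is fully possible, neighbouring ages less so
age = PossibilisticValue({29: 0.7, 30: 1.0, 31: 0.7, 32: 0.3})
print(age.possibility(30))               # 1.0
print(age.possibility_of_set({31, 32}))  # 0.7
```

A fuzzy query such as "employees whose age is possibly over 30" can then rank rows by `possibility_of_set` rather than filtering with a crisp predicate.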
Some notes on an extended query language for FSM
FSM is a database model recently proposed by the authors. FSM uses the basic concepts of classification, generalization, aggregation and association that are commonly used in semantic modelling, and supports the fuzziness of the real world at the attribute, entity, class, and intra- and inter-class relation levels. Hence, it provides tools to formalize and conceptualize the real world in a manner adapted to human perception of, and reasoning about, it. In this paper we briefly review the basic concepts of FSM and provide some notes on an extended query language adapted to it.
Conceptual design and implementation of the fuzzy semantic model
FSM is one of the few database models that support fuzziness, uncertainty and impreciseness of the real world at the class-definition level. FSM allows an entity to be a partial member of its class, according to a membership degree that reflects the extent to which the entity satisfies the defining properties of the class. This paper deals with the conceptual design of FSM and addresses some implementation issues.
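The partial-membership idea can be sketched as follows. This is not FSM's actual formalism: the per-property satisfaction functions and the combination rule (here the minimum, a common t-norm choice) are illustrative assumptions.

```python
def membership_degree(entity, properties):
    """Degree of membership of `entity` in a fuzzy class.

    properties: list of functions mapping the entity to a satisfaction
    degree in [0, 1]. Here degrees are combined with min (a t-norm);
    other models use other combinations.
    """
    return min(p(entity) for p in properties)


# Hypothetical fuzzy class "YoungEmployee" with two defining properties.
def young(e):
    # Fully young below 25, not young at all above 40, linear in between.
    return max(0.0, min(1.0, (40 - e["age"]) / 15))

def experienced(e):
    # Full satisfaction after one year of service.
    return min(1.0, e["years_service"] / 1.0)

alice = {"age": 28, "years_service": 3}
print(membership_degree(alice, [young, experienced]))  # 0.8
```

An entity with degree 0.8 would thus be stored as a partial member of the class rather than being crisply included or excluded.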
Ontology-Based Quality Evaluation of Value Generalization Hierarchies for Data Anonymization
In privacy-preserving data publishing, approaches using Value Generalization
Hierarchies (VGHs) form an important class of anonymization algorithms. VGHs
play a key role in the utility of published datasets as they dictate how the
anonymization of the data occurs. For categorical attributes, it is imperative
to preserve the semantics of the original data in order to achieve a higher
utility. Despite this, semantics have not been formally considered in the
specification of VGHs. Moreover, there are no methods that allow the users to
assess the quality of their VGH. In this paper, we propose a measurement
scheme, based on ontologies, to quantitatively evaluate the quality of VGHs, in
terms of semantic consistency and taxonomic organization, with the aim of
producing higher-quality anonymizations. We demonstrate, through a case study,
how our evaluation scheme can be used to compare the quality of multiple VGHs
and can help to identify faulty VGHs.Comment: 18 pages, 7 figures, presented in the Privacy in Statistical
Databases Conference 2014 (Ibiza, Spain
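A Value Generalization Hierarchy for a categorical attribute can be pictured as a tree of increasingly general terms; the sketch below (with illustrative values, not taken from the paper) stores it as child-to-parent links. The paper's point is that a semantically inconsistent hierarchy, e.g. one placing "salmon" under "Meat", degrades the utility of the anonymized data.

```python
# Child -> parent links of a small, hypothetical VGH for a "food" attribute.
VGH = {
    "chicken": "Meat", "beef": "Meat",
    "salmon": "Fish", "tuna": "Fish",
    "Meat": "Animal product", "Fish": "Animal product",
    "Animal product": "Food",
}

def generalize(value, levels=1):
    """Replace a value by its ancestor `levels` steps up (stops at the root).

    This is the basic operation an anonymization algorithm applies to
    make records indistinguishable.
    """
    for _ in range(levels):
        if value not in VGH:
            break  # already at the root
        value = VGH[value]
    return value

print(generalize("salmon"))            # Fish
print(generalize("salmon", levels=2))  # Animal product
```

The proposed measurement scheme would score such a hierarchy against an ontology: each child–parent link is checked for semantic consistency, so a faulty link like `"salmon": "Meat"` would lower the VGH's quality score.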
On Fuzzy Concepts
In this paper we try to combine two approaches. One is the theory of knowledge graphs in which concepts are represented by graphs. The other is the axiomatic theory of fuzzy sets (AFS).
The discussion will focus on the idea of a fuzzy concept. It will be argued that the fuzziness of a concept in natural language is mainly due to the differences in interpretation that people give to a certain word. As different interpretations lead to different knowledge graphs, the notion of a fuzzy concept should be describable in terms of sets of graphs. This leads to a natural introduction of membership values for elements of graphs. Using these membership values, we apply AFS theory as well as an alternative approach to calculate fuzzy decision trees, which can be used to determine the most relevant elements of a concept.
A look ahead approach to secure multi-party protocols
Secure multi-party protocols have been proposed to enable non-colluding parties to cooperate without a trusted server. Even though such protocols prevent information disclosure other than the objective function, they are quite costly
in computation and communication. Therefore, the high overhead makes it necessary for parties to estimate the utility that can be achieved as a result of the protocol beforehand. In this paper, we propose a look ahead approach, specifically for secure multi-party protocols to achieve distributed
k-anonymity, which helps parties to decide whether the utility benefit from the protocol is within an acceptable range before initiating it. The look-ahead operation is highly localized, and its accuracy depends on the amount of information the parties are willing to share. Experimental results show
the effectiveness of the proposed methods.
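For context, the property the protocol targets can be stated compactly: a table is k-anonymous when every combination of quasi-identifier values is shared by at least k records. The sketch below checks that property on a plaintext table; it is only the definition, not the secure multi-party protocol itself, which computes over the parties' data without revealing individual records.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier group contains at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical generalized table: zip and age range are quasi-identifiers.
table = [
    {"zip": "130**", "age": "20-29", "disease": "flu"},
    {"zip": "130**", "age": "20-29", "disease": "cold"},
    {"zip": "148**", "age": "30-39", "disease": "flu"},
]
print(is_k_anonymous(table, ["zip", "age"], k=2))  # False: the 148** group has one record
```

The look-ahead idea is to estimate, before paying the cost of the cryptographic protocol, how much a party's own k-anonymous groups would grow (and thus how much utility would be gained) by pooling data with the others.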
Data degradation to enhance privacy for the Ambient Intelligence
Increasing research into ubiquitous computing techniques towards the development of an Ambient Intelligence raises issues regarding privacy. To gather the data needed for applications in this Ambient Intelligence to offer smart services to users, sensors will monitor users' behavior to fill personal context histories. These context histories will be stored on database/information systems that we consider honest: they can be trusted now, but might be subject to attacks in the future. Under this assumption, protecting context histories by means of access control alone might not be enough. To reduce the impact of possible attacks, we propose to use limited retention techniques. In our approach, applications are presented with a degraded set of data, with a retention delay attached to it, that matches both application requirements and users' privacy wishes. Data degradation can be twofold: the accuracy of context data can be lowered so that only the less privacy-sensitive parts are retained, and context data can be transformed so that only particular abilities for applications remain available. Retention periods can be specified to trigger irreversible removal of the context data from the system.
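The accuracy-lowering form of degradation can be sketched as a ladder of increasingly coarse representations, each with its own retention period; the level names, example values and delays below are illustrative assumptions, not the paper's design. After each retention period elapses, the stored value is irreversibly replaced by the next, less precise level, until it is removed entirely.

```python
# (level name, example stored value, days retained at this precision)
DEGRADATION_LADDER = [
    ("gps",     "52.2215,6.8937", 1),
    ("street",  "Drienerlolaan",  7),
    ("city",    "Enschede",       30),
    ("country", "NL",             365),
]

def stored_value(age_days):
    """Return the (level, value) visible for a context datum of the given age,
    or None once all retention periods have elapsed and the datum is removed."""
    elapsed = 0
    for level, value, retention in DEGRADATION_LADDER:
        elapsed += retention
        if age_days < elapsed:
            return level, value
    return None  # irreversibly removed from the system

print(stored_value(0))    # ('gps', '52.2215,6.8937')
print(stored_value(5))    # ('street', 'Drienerlolaan')
print(stored_value(500))  # None
```

A fresh reading is available at full GPS precision; a week-old reading only reveals the street; after roughly a year, nothing remains, which bounds what a future attacker can extract from the context history.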