1,100 research outputs found
An Answer Explanation Model for Probabilistic Database Queries
With the availability of huge amounts of uncertain data in recent years, coming from a diverse range of applications such as sensors, machine learning and mining approaches, and information extraction and integration, we have seen a revival of interest in probabilistic databases. Queries over these databases return probabilistic answers. Since the process of arriving at these answers is based on the underlying stored uncertain data, we argue that, from the standpoint of an end user, it is helpful for such a system to explain how it arrives at an answer and on which uncertainty assumptions the derived answer is based. In this way, users can combine the explanation with their own knowledge to decide how much confidence to place in the probabilistic answer.
The aim of this paper is to design such an answer explanation model for probabilistic database queries. We report our design principles and present methods to compute the answer explanations. One of the main contributions of our model is that it fills the gap between giving only the answer probability and giving the full derivation. Furthermore, we show how to balance the verifiability and the influence of explanation components through the concept of verifiable views. The behavior of the model and its computational efficiency are demonstrated through an extensive performance study.
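In the common tuple-independent model, the probability of a query answer is determined by the lineage of the answer over the uncertain base tuples, and an explanation can expose that lineage rather than just the bare probability. The following sketch is not the paper's actual model; the tuple names, probabilities, and lineage formula are invented for illustration. It computes an answer probability by possible-world enumeration and lists each lineage clause with its individual contribution:

```python
from itertools import product

# Hypothetical tuple-independent database: each base tuple t has an
# independent marginal probability p(t).
probs = {"t1": 0.9, "t2": 0.6, "t3": 0.5}

# Lineage of one query answer as a DNF over base tuples:
# the answer holds if (t1 AND t2) OR t3.
lineage = [("t1", "t2"), ("t3",)]

def answer_probability(lineage, probs):
    """Exact answer probability by summing over all possible worlds."""
    tuples = sorted(probs)
    total = 0.0
    for world in product([False, True], repeat=len(tuples)):
        present = dict(zip(tuples, world))
        w_prob = 1.0
        for t in tuples:
            w_prob *= probs[t] if present[t] else 1 - probs[t]
        # The answer is in this world if some lineage clause is satisfied.
        if any(all(present[t] for t in clause) for clause in lineage):
            total += w_prob
    return total

# An "explanation" between the bare probability and the full derivation:
# each lineage clause with its own contribution.
for clause in lineage:
    contrib = 1.0
    for t in clause:
        contrib *= probs[t]
    print(clause, contrib)

print(round(answer_probability(lineage, probs), 4))  # → 0.77
```

Enumerating possible worlds is exponential in the number of base tuples, which is why practical systems manipulate lineage formulas symbolically; the sketch only illustrates the semantics that an explanation component would expose.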
Privacy-preserving targeted advertising scheme for IPTV using the cloud
In this paper, we present a privacy-preserving scheme for targeted advertising via Internet Protocol TV (IPTV). The scheme uses a communication model involving a collection of viewers/subscribers, a content provider (IPTV), an advertiser, and a cloud server. To provide a high-quality targeted advertising service, the advertiser can utilize not only demographic information of subscribers but also their watching habits. The latter include watching history, preferences for IPTV content, and watching rate, which are published on the cloud server periodically (e.g. weekly) along with anonymized demographics. Since the published data may leak sensitive information about subscribers, it is safeguarded using cryptographic techniques in addition to the anonymization of demographics. The techniques used by the advertiser, which are manifested in its queries to the cloud, are considered (trade) secrets and therefore are protected as well. The cloud is oblivious to the published data, the queries of the advertiser, and its own responses to these queries. Only a legitimate advertiser, endorsed with a so-called trapdoor by the IPTV, can query the cloud and utilize the query results. The performance of the proposed scheme is evaluated with experiments, which show that the scheme is suitable for practical usage.
Class Association Rules Mining based Rough Set Method
This paper investigates the mining of class association rules with a rough set approach. In data mining, an association occurs between two sets of elements when one element set happens together with another. A class association rule set (CARs) is a subset of association rules with classes specified as their consequents. We present an efficient algorithm, inspired by the Apriori algorithm, for mining the finest class rule set, where support and confidence are computed based on the elementary sets of the lower approximation from rough set theory. Our proposed approach has been shown to be very effective, and the rough set approach to class association discovery is much simpler than the classic association method.
Comment: 10 pages, 2 figures
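To make the role of the lower approximation concrete, the following sketch uses a toy decision table with invented values (it is not the paper's algorithm). The lower approximation of a class keeps only those rows whose indiscernibility class falls entirely inside the target class, and support/confidence for a candidate class rule are then computed from it rather than from the raw, possibly inconsistent rows:

```python
from collections import defaultdict

# Hypothetical decision table: each row = (condition attribute values, class label).
# The first condition attribute plays the role of "weather".
rows = [
    (("sunny", "hot"), "no"),
    (("sunny", "hot"), "no"),
    (("rainy", "mild"), "yes"),
    (("rainy", "mild"), "no"),   # conflicts with the row above
    (("overcast", "hot"), "yes"),
]

def lower_approximation(rows, label):
    """Indices of rows whose indiscernibility class lies fully in the target class."""
    blocks = defaultdict(list)
    for i, (cond, _) in enumerate(rows):
        blocks[cond].append(i)
    lower = set()
    for cond, idxs in blocks.items():
        if all(rows[i][1] == label for i in idxs):
            lower.update(idxs)
    return lower

lower_yes = lower_approximation(rows, "yes")  # the conflicting rainy rows drop out

# Support/confidence of the candidate rule (weather=overcast -> yes),
# with the rule body matched inside the lower approximation.
matching = [i for i in lower_yes if rows[i][0][0] == "overcast"]
support = len(matching) / len(rows)
confidence = len(matching) / max(1, sum(1 for cond, _ in rows if cond[0] == "overcast"))
print(lower_yes, support, confidence)
```

The two conflicting "rainy, mild" rows are excluded from every lower approximation, which is exactly how the rough-set computation sidesteps inconsistency in the data.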
Revisiting the formal foundation of Probabilistic Databases
One of the core problems in soft computing is dealing with uncertainty in data. In this paper, we revisit the formal foundation of a class of probabilistic databases in order to (1) obtain data model independence, (2) separate metadata on uncertainty and probabilities from the raw data, (3) better understand aggregation, and (4) create more opportunities for optimization. The paper presents the formal framework and validates data model independence by showing how to obtain a probabilistic Datalog as well as a probabilistic relational algebra by applying the framework to their non-probabilistic counterparts. We conclude with a discussion of the latter three goals.
Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art
Information extraction, data integration, and uncertain data management are distinct areas of research that have received vast attention in the last two decades. Much research has tackled these areas individually. However, information extraction systems should be integrated with data integration methods to make use of the extracted information. Handling uncertainty in the extraction and integration process is an important issue for enhancing the quality of the data in such integrated systems. This article presents the state of the art of these research areas, shows their common ground, and discusses how to integrate information extraction and data integration under an uncertainty management umbrella.
Uncertainty-sensitive reasoning for inferring sameAs facts in linked data
Discovering whether or not two URIs described in Linked Data -- in the same or different RDF datasets -- refer to the same real-world entity is crucial for building applications that exploit the cross-referencing of open data. A major challenge in data interlinking is to design tools that effectively deal with incomplete and noisy data and exploit uncertain knowledge. In this paper, we model data interlinking as a reasoning problem with uncertainty. We introduce a probabilistic framework for modelling and reasoning over uncertain RDF facts and rules that is based on the semantics of probabilistic Datalog. We have designed an algorithm, ProbFR, based on this framework. Experiments on real-world datasets have shown the usefulness and effectiveness of our approach for data linkage and disambiguation.
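In the spirit of probabilistic Datalog -- though this is not the ProbFR algorithm itself, and the facts, rule weights, and independence assumption below are invented for illustration -- a sameAs inference over uncertain RDF facts can be sketched as rules that fire when two URIs share a value for a discriminating property, with independent derivations combined by a noisy-or:

```python
# Hypothetical uncertain RDF facts: (subject, predicate, object) -> probability.
facts = {
    ("u1", "email", "a@x.org"): 0.9,
    ("u2", "email", "a@x.org"): 0.8,
    ("u1", "name", "Ann Lee"): 0.7,
    ("u2", "name", "Ann Lee"): 0.7,
}

# Rules: if two URIs share a value for this property, infer sameAs
# with the rule's own probability (email is more discriminating than name).
rules = [("email", 0.95), ("name", 0.5)]

def same_as_probability(facts, rules, x, y):
    """Combine independent derivations of sameAs(x, y) with a noisy-or."""
    p_not = 1.0
    for prop, rule_p in rules:
        for (s1, p1, o1), f1 in facts.items():
            if s1 != x or p1 != prop:
                continue
            for (s2, p2, o2), f2 in facts.items():
                if s2 == y and p2 == prop and o2 == o1:
                    # One derivation: both facts and the rule must hold,
                    # assumed independent of all other derivations.
                    p_not *= 1.0 - f1 * f2 * rule_p
    return 1.0 - p_not

print(round(same_as_probability(facts, rules, "u1", "u2"), 5))  # → 0.76142
```

The shared email alone yields 0.9 * 0.8 * 0.95 = 0.684; the shared name adds a weaker derivation, and the noisy-or lifts the combined belief above either one.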
Preference fusion and Condorcet's Paradox under uncertainty
Facing an unknown situation, a person may not be able to firmly elicit his/her preferences over different alternatives, and so tends to express uncertain preferences. Given a community of persons expressing their preferences over a set of alternatives under uncertainty, a preference fusion process is required to obtain a collective, representative opinion of the whole community. The aim of this work is to propose a preference fusion method that copes with uncertainty and escapes the Condorcet paradox. To model preferences under uncertainty, we develop a model based on belief function theory that accurately describes and captures the uncertainty associated with individual or collective preferences. This work improves and extends the contribution presented in a previous work. The benefits of our contribution are twofold. On the one hand, we propose a qualitative and expressive preference modeling strategy based on belief function theory which scales better with the number of sources. On the other hand, we propose an incremental distance-based algorithm (using the Jousselme distance) for constructing the collective preference order while avoiding the Condorcet paradox.
Comment: International Conference on Information Fusion, Jul 2017, Xi'an, China
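The Jousselme distance used by such distance-based algorithms is a standard metric between mass functions, d(m1, m2) = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)) with D[A][B] = |A ∩ B| / |A ∪ B| over the focal sets. A minimal sketch over an invented three-alternative frame (the paper's actual frames and mass assignments are not reproduced here):

```python
from itertools import combinations
import math

# Frame of discernment: three hypothetical alternatives.
elems = sorted({"a", "b", "c"})

# All nonempty subsets of the frame, the possible focal elements.
subsets = []
for r in range(1, len(elems) + 1):
    subsets.extend(frozenset(c) for c in combinations(elems, r))

def jousselme_distance(m1, m2):
    """d(m1, m2) = sqrt(0.5 * diff^T D diff), D[A][B] = |A & B| / |A | B|."""
    diff = [m1.get(s, 0.0) - m2.get(s, 0.0) for s in subsets]
    total = 0.0
    for i, A in enumerate(subsets):
        for j, B in enumerate(subsets):
            total += diff[i] * (len(A & B) / len(A | B)) * diff[j]
    return math.sqrt(0.5 * total)

# Two sources: one fully committed to alternative a, one hesitant between a and b.
m1 = {frozenset({"a"}): 1.0}
m2 = {frozenset({"a", "b"}): 1.0}
print(round(jousselme_distance(m1, m2), 4))  # → 0.7071
```

Unlike a plain Euclidean distance on mass vectors, the matrix D credits partial overlap between focal sets, so the hesitant source above is closer to the committed one than a source committed to a disjoint alternative would be.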
TOPYDE: A Tool for Physical Database Design
We describe a tool for physical database design based on a combination of theoretical and pragmatic approaches. The tool takes as input a relational schema, the workload defined on the schema, and some additional database characteristics, and produces as output a physical schema. For the time being, the tool is tuned towards Ingres.
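TOPYDE's actual design procedure is not described in the abstract. Purely as a hypothetical illustration of workload-driven physical design, the following sketch (all table names, columns, frequencies, and the threshold are invented) weights the columns referenced in selections by query frequency and proposes index candidates:

```python
from collections import Counter

# Hypothetical workload: (query frequency, columns referenced in selections).
workload = [
    (50, ["orders.customer_id"]),
    (30, ["orders.order_date", "orders.customer_id"]),
    (5,  ["customers.region"]),
]

def suggest_indexes(workload, min_weight=20):
    """Crude heuristic: propose an index on each column whose
    frequency-weighted selection count reaches the threshold."""
    weights = Counter()
    for freq, cols in workload:
        for col in cols:
            weights[col] += freq
    return [col for col, w in weights.most_common() if w >= min_weight]

print(suggest_indexes(workload))  # → ['orders.customer_id', 'orders.order_date']
```

A real physical design tool would additionally weigh update costs, storage structures, and statistics such as column selectivity, which is where the "theoretical and pragmatic" combination mentioned above comes in.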
- …