DataNet: An emerging cyberinfrastructure for sharing, reusing and preserving digital data for scientific discovery and learning
No abstract. Peer Reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/64341/1/12085_ftp.pd
Provenance Views for Module Privacy
Scientific workflow systems increasingly store provenance information about the module executions used to produce a data item, as well as the parameter settings and intermediate data items passed between module executions. However, the authors or owners of workflows may wish to keep some of this information confidential. In particular, a module may be proprietary, and users should not be able to infer its behavior by seeing mappings between all of its data inputs and outputs. The problem we address in this paper is the following: given a workflow, abstractly modeled by a relation R, a privacy requirement Γ, and costs associated with data, the owner of the workflow decides which data (attributes) to hide and provides the user with a view R' that is the projection of R over the attributes that have not been hidden. The goal is to minimize the cost of the hidden data while guaranteeing that individual modules are Γ-private. We call this the "secure-view" problem. We formally define the problem, study its complexity, and offer algorithmic solutions.
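To make the optimization concrete, here is a minimal brute-force sketch of the secure-view problem as stated in the abstract: enumerate subsets of attributes to hide, keep only those whose projected view passes a privacy test for every module, and return the cheapest. The `is_gamma_private` predicate is a stub standing in for the Γ-privacy condition, and the exhaustive search is a baseline for illustration, not the paper's algorithms.

```python
from itertools import combinations
from typing import Callable, Dict, FrozenSet, List, Tuple

Row = Tuple  # one tuple of attribute values; the relation R is a list of rows

def project(rows: List[Row], visible: FrozenSet[int]) -> List[Row]:
    """The view R': keep only the attributes that have NOT been hidden."""
    keep = sorted(visible)
    return [tuple(r[i] for i in keep) for r in rows]

def cheapest_secure_view(
    rows: List[Row],
    costs: Dict[int, float],                        # cost of hiding each attribute
    is_gamma_private: Callable[[List[Row]], bool],  # stub for the Γ-privacy test
) -> FrozenSet[int]:
    """Exhaustively search subsets of attributes to hide; return the cheapest
    subset whose projection satisfies the privacy check. Exponential time, so
    this is a reference baseline only."""
    attrs = frozenset(range(len(rows[0])))
    best_hidden, best_cost = None, float("inf")
    for k in range(len(attrs) + 1):
        for hidden in map(frozenset, combinations(attrs, k)):
            cost = sum(costs[a] for a in hidden)
            if cost < best_cost and is_gamma_private(project(rows, attrs - hidden)):
                best_hidden, best_cost = hidden, cost
    return best_hidden
```

The paper's contribution lies precisely in avoiding this exponential search; the sketch only fixes the objective (minimum hiding cost subject to per-module privacy) that those algorithms optimize.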
Verification in Privacy Preserving Data Publishing
Privacy-preserving data publication is a major concern for both the owners of data and the data publishers. Principles such as k-anonymity and l-diversity have been proposed to reduce privacy violations. On the other hand, little attention has been paid to verifying the anonymized data in terms of adversarial breaches and anonymity levels. Anonymized data remains prone to attacks because of dependencies between quasi-identifiers and sensitive attributes. This paper presents a novel framework to detect the existence of those dependencies and a solution to reduce them. The advantages of our approach are that (i) privacy violations can be detected, (ii) the extent of privacy risk can be measured, and (iii) vulnerable blocks of data can be re-anonymized. The work is further extended to show how an adversary's breach knowledge grows as new tuples are added, and an on-the-fly solution to reduce it is discussed. Experimental results are reported and analyzed.
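As a rough illustration of the kind of verification the abstract argues for, the sketch below groups a published table into equivalence classes on its quasi-identifiers, then flags blocks that either violate k-anonymity or whose sensitive values are nearly homogeneous, i.e., where the quasi-identifiers almost determine the sensitive attribute. The column names, the `max_share` threshold, and the flagging rule are illustrative assumptions, not the paper's framework.

```python
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

def verify_anonymized(
    rows: List[Dict[str, str]],
    quasi_ids: List[str],    # assumed quasi-identifier columns
    sensitive: str,          # assumed sensitive column
    k: int = 3,              # k-anonymity requirement
    max_share: float = 0.8,  # flag a block if one sensitive value dominates it
):
    """Return blocks that are too small (k-anonymity violation) or whose
    sensitive values are near-homogeneous (a QI -> sensitive dependency),
    as candidates for re-anonymization."""
    blocks: Dict[Tuple[str, ...], List[Dict[str, str]]] = defaultdict(list)
    for r in rows:
        blocks[tuple(r[q] for q in quasi_ids)].append(r)

    vulnerable = []
    for key, members in blocks.items():
        counts = Counter(m[sensitive] for m in members)
        top_share = counts.most_common(1)[0][1] / len(members)
        if len(members) < k or top_share > max_share:
            vulnerable.append((key, len(members), top_share))
    return vulnerable

# Tiny usage example with made-up records: the first block is k-anonymous
# but homogeneous in "disease", so it is still flagged as vulnerable.
table = [
    {"zip": "190xx", "age": "30-39", "disease": "flu"},
    {"zip": "190xx", "age": "30-39", "disease": "flu"},
    {"zip": "190xx", "age": "30-39", "disease": "flu"},
    {"zip": "191xx", "age": "40-49", "disease": "cold"},
]
print(verify_anonymized(table, ["zip", "age"], "disease"))
```

This mirrors the abstract's point that satisfying k-anonymity alone is not enough: dependencies within a block can still leak the sensitive value, and detecting them tells the publisher which blocks to re-anonymize.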