Secure Querying of Recursive XML Views: A Standard XPath-based Technique
Most state-of-the-art approaches for securing XML documents allow users to
access data only through authorized views defined by annotating an XML grammar
(e.g. DTD) with a collection of XPath expressions. To prevent improper
disclosure of confidential information, user queries posed on these views need
to be rewritten into equivalent queries on the underlying documents. This
rewriting enables us to avoid the overhead of view materialization and
maintenance. A major concern here is that query rewriting for recursive XML
views is still an open problem. To overcome it, some works have proposed
translating XPath queries into non-standard ones, called Regular
XPath queries. However, query rewriting under Regular XPath can be of
exponential size as it relies on an automaton model. Most importantly, Regular
XPath remains a theoretical achievement. Indeed, it is not commonly used in
practice as translation and evaluation tools are not available. In this paper,
we show that query rewriting is always possible for recursive XML views using
only the expressive power of the standard XPath. We investigate the extension
of the downward class of XPath, composed only of the child and descendant axes,
with additional axes and operators, and we propose a general approach to rewriting
queries under recursive XML views. Unlike Regular XPath-based works, we provide
a rewriting algorithm which processes the query only over the annotated DTD
grammar and which can run in linear time in the size of the query. An
experimental evaluation demonstrates that our algorithm is efficient and scales
well.
Comment: (2011)
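The rewriting idea above can be illustrated with a toy sketch: a security view hides an intermediate element, and a child-only query posed on the view is translated, step by step against the view definition, into an equivalent query on the underlying document. The schema, view mapping, and query below are illustrative assumptions, not the paper's algorithm.

```python
import xml.etree.ElementTree as ET

DOC = """
<hospital>
  <patient><name>Ann</name><billing><insurer>ACME</insurer></billing></patient>
  <patient><name>Bob</name><billing><insurer>Zeta</insurer></billing></patient>
</hospital>
"""

# Hypothetical view definition: (parent, child-in-view) -> path in the document.
# The view hides <billing>, so <insurer> appears directly under <patient>.
VIEW_STEPS = {
    ("hospital", "patient"): "patient",
    ("patient", "name"): "name",
    ("patient", "insurer"): "billing/insurer",  # hidden node spliced back in
}

def rewrite(view_query, parent="hospital"):
    """Rewrite a child-only view query like 'patient/insurer' to a document query."""
    out = []
    for step in view_query.split("/"):
        out.append(VIEW_STEPS[(parent, step)])
        parent = step
    return "/".join(out)

doc = ET.fromstring(DOC)
q = rewrite("patient/insurer")           # query as posed on the view
print(q)                                 # patient/billing/insurer
print([e.text for e in doc.findall(q)])  # ['ACME', 'Zeta']
```

Because the rewriting walks the view definition once per query step, its cost is linear in the query size, matching the flavor of the complexity claim above.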
A General Approach for Securely Querying and Updating XML Data
Over the past years, several works have proposed access control models for XML
data where only read-access rights over non-recursive DTDs are considered. Few
works have studied access rights for updates. In this paper,
we present a general model for specifying access control on XML data in the
presence of the update operations of the W3C XQuery Update Facility. Our approach for
enforcing such update specifications is based on the notion of query rewriting,
where each update operation defined over an arbitrary DTD (recursive or not) is
rewritten to a safe one in order to be evaluated only over XML data which can
be updated by the user. In the second part of this report, we investigate the
security of XML updating in the presence of read-access rights specified by
security views. For an XML document, a security view represents, for each class
of users, all and only the parts of the document these users are able to see. We
show that an update operation defined over a security view can cause disclosure
of sensitive data hidden by this view if it is not thoroughly rewritten with
respect to both read and update access rights. Finally, we propose a security
view based approach for securely updating XML in order to preserve the
confidentiality and integrity of XML data.
Comment: No. RR-7870 (2012)
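As a toy illustration of the safe-rewriting idea (not the report's actual formalism), an update such as "delete all note elements" can be restricted so that it only touches nodes the user is allowed to modify; the access rule below is a hypothetical stand-in for annotated-DTD update rights.

```python
import xml.etree.ElementTree as ET

DOC = ET.fromstring(
    "<clinic>"
    "<patient ward='open'><note>a</note></patient>"
    "<patient ward='restricted'><note>b</note></patient>"
    "</clinic>"
)

def may_update(patient):
    # Hypothetical access rule: only the open ward is updatable by this user.
    return patient.get("ward") == "open"

def safe_delete_notes(root):
    """The user's update 'delete //note', rewritten to respect update rights."""
    for patient in root.findall("patient"):
        if may_update(patient):          # the safety filter added by rewriting
            for note in patient.findall("note"):
                patient.remove(note)

safe_delete_notes(DOC)
print([n.text for n in DOC.findall(".//note")])  # ['b']: protected note survives
```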
Static analysis of XML security views and query rewriting
In this paper, we revisit the view-based security framework for XML without imposing any of the previously considered restrictions on the class of queries, the class of DTDs, or the type of annotations used to define the view. First, we study query rewriting with views when the classes used to define queries and views are Regular XPath and MSO. Next, we investigate problems of static analysis of security access specifications (SAS): we introduce the novel class of interval-bounded SAS and define three different ways to compare views (i.e., queries) from a security point of view. We provide a systematic study of the complexity of deciding these three comparisons when the depth of the XML documents is bounded, when documents may have arbitrary depth but the queries defining the views are restricted to guarantee the interval-bounded property, and in the general setting without restrictions on queries and documents.
From Relations to XML: Cleaning, Integrating and Securing Data
While relational databases are still the preferred approach for storing data, XML is emerging
as the primary standard for representing and exchanging data. Consequently, it has
become increasingly important to provide a uniform XML interface to various data sources—
integration; and critical to protect sensitive and confidential information in XML data —
access control. Moreover, it is preferable to first detect and repair the inconsistencies in
the data to avoid the propagation of errors to other data processing steps. In response to
these challenges, this thesis presents an integrated framework for cleaning, integrating and
securing data.
The framework contains three parts. First, the data cleaning sub-framework makes
use of a new class of constraints specially designed for improving data quality, referred
to as conditional functional dependencies (CFDs), to detect and remove inconsistencies in
relational data. Both batch and incremental techniques are developed for detecting CFD
violations efficiently in SQL and repairing them based on a cost model. The cleaned relational
data, together with other non-XML data, is then converted to XML format by using
widely deployed XML publishing facilities. Second, the data integration sub-framework
uses a novel formalism, XML integration grammars (XIGs), to integrate multi-source XML
data which is either native or published from traditional databases. XIGs automatically
support conformance to a target DTD, and allow one to build a large, complex integration
via composition of component XIGs. To efficiently materialize the integrated data, algorithms
are developed for merging XML queries in XIGs and for scheduling them. Third, to
protect sensitive information in the integrated XML data, the data security sub-framework
allows users to access the data only through authorized views. User queries posed on these
views need to be rewritten into equivalent queries on the underlying document to avoid the
prohibitive cost of materializing and maintaining a large number of views. Two algorithms
are proposed to support virtual XML views: a rewriting algorithm that characterizes the
rewritten queries as a new form of automata and an evaluation algorithm to execute the
automata-represented queries. They allow the security sub-framework to answer queries
on views in linear time.
Using both relational and XML technologies, this framework provides a uniform approach
to clean, integrate and secure data. The algorithms and techniques in the framework
have been implemented, and an experimental study verifies their effectiveness and efficiency.
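The CFD-based cleaning step above can be sketched with plain SQL: a constant CFD can be violated by a single tuple, while a variable CFD needs a self-join to expose disagreeing tuple pairs. The table and dependencies below are illustrative, not taken from the thesis.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cust (name TEXT, zip TEXT, city TEXT);
INSERT INTO cust VALUES
  ('Ann', '07974', 'Murray Hill'),
  ('Bob', '07974', 'Summit'),
  ('Eve', '10001', 'New York'),
  ('Joe', '10001', 'New York');
""")

# Constant CFD (zip = '07974') -> (city = 'Murray Hill'):
# a single tuple can violate it on its own.
const_violations = conn.execute(
    "SELECT name FROM cust WHERE zip = '07974' AND city <> 'Murray Hill'"
).fetchall()

# Variable CFD zip -> city: two tuples sharing a zip must agree on city.
var_violations = conn.execute(
    "SELECT DISTINCT c1.name FROM cust c1 "
    "JOIN cust c2 ON c1.zip = c2.zip AND c1.city <> c2.city"
).fetchall()

print(const_violations)        # [('Bob',)]
print(sorted(var_violations))  # [('Ann',), ('Bob',)]
```

Both checks run as set-oriented SQL, which is what makes batch detection over large relations feasible.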
XML Security Views Revisited
In this paper, we revisit the view-based security framework for XML without imposing any of the previously considered restrictions on the class of queries, the class of DTDs, or the type of annotations used to define the view. First, we show that the full class of Regular XPath queries is closed under query rewriting. Next, we address the problem of constructing a DTD that describes the view schema, which in general need not be regular. We propose three different methods of approximating the view schema and show that the produced DTDs are indistinguishable from the exact schema (with queries from a class specific to each method). Finally, we investigate problems of static analysis of security access specifications.
Repetitive querying of large random heterogeneous datasets in RDBMS using materialized views
A methodology has been developed to increase the time efficiency of repetitively querying large heterogeneous datasets by applying materialized views to repetitive complex queries. Additionally, a simple user interface is provided to demonstrate the utility of this research methodology. The programs demonstrate that the core design can be used to deploy a complete system applicable in different domains. The methodology developed in this research is presented as an experimental proof-of-concept prototype based on an abstract design.
Repairing Inconsistent XML Write-Access Control Policies
XML access control policies involving updates may contain security flaws,
here called inconsistencies, in which a forbidden operation may be simulated by
performing a sequence of allowed operations. This paper investigates the
problem of deciding whether a policy is consistent, and if not, how its
inconsistencies can be repaired. We consider policies expressed in terms of
annotated DTDs defining which operations are allowed or denied for the XML
trees that are instances of the DTD. We show that consistency is decidable in
PTIME for such policies and that consistent partial policies can be extended to
unique "least-privilege" consistent total policies. We also consider repair
problems based on deleting privileges to restore consistency, show that finding
minimal repairs is NP-complete, and give heuristics for finding repairs.
Comment: 25 pages. To appear in Proceedings of DBPL 200
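A minimal sketch of the kind of inconsistency studied here: a delete forbidden on a child element can nonetheless be simulated by deleting and re-inserting its parent when both of those operations are allowed. The policy encoding below is hypothetical, not the paper's formalism.

```python
# allowed[(element, operation)] -> whether the policy permits it (hypothetical).
allowed = {
    ("record", "delete"): True,   # may delete a whole <record> ...
    ("record", "insert"): True,   # ... and insert a fresh one,
    ("note", "delete"): False,    # but not delete its <note> child.
}
parent_of = {"note": "record"}

def inconsistent(elem):
    """A forbidden delete of `elem` is simulable by delete+reinsert of its parent."""
    p = parent_of.get(elem)
    return (not allowed[(elem, "delete")]
            and p is not None
            and allowed[(p, "delete")]
            and allowed[(p, "insert")])

print(inconsistent("note"))  # True: the policy has a flaw of this kind
```

Repairing this example means revoking one of the parent's privileges; choosing which revocations are minimal is the NP-complete problem the abstract mentions.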
Query Evaluation in the Presence of Fine-grained Access Control
Access controls are mechanisms to enhance security by protecting
data from unauthorized accesses. In contrast to traditional access
controls that grant access rights at the granularity of the whole
tables or views, fine-grained access controls specify access
controls at finer granularity, e.g., individual nodes in XML
databases and individual tuples in relational databases.
While there is a voluminous literature on specifying and modeling
fine-grained access controls, less work has been done to address
the performance issues of database systems with fine-grained
access controls. This thesis addresses the performance issues of
fine-grained access controls and proposes corresponding solutions.
In particular, the following issues are addressed: effective
storage of massive access controls, efficient query planning for
secure query evaluation, and accurate cardinality estimation for
access controlled data.
Because fine-grained access controls specify access rights from
each user to each piece of data in the system, they are
effectively a massive matrix whose size is the product of the
number of users and the size of the data. Therefore, fine-grained
access controls require a very compact encoding to be feasible.
The proposed storage system in this thesis achieves an
unprecedented level of compactness by leveraging the high
correlation of access controls found in real system data. This
correlation comes from two sides: the structural similarity of
access rights between data, and the similarity of access patterns
from different users. This encoding can be embedded into a
linearized representation of XML data such that a query evaluation
framework is able to compute the answer to the access controlled
query with minimal disk I/O to the access controls.
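The compactness argument can be sketched as simple dictionary encoding: nodes with identical per-user permission vectors share one stored vector. The thesis' actual encoding, embedded in a linearized XML representation, is more elaborate; this only shows why correlation compresses well.

```python
# acl[node] = per-user permission bits; correlated data shares few distinct vectors.
acl = {
    "patient/1/name":    (1, 1, 1, 1),
    "patient/1/billing": (1, 0, 0, 0),
    "patient/2/name":    (1, 1, 1, 1),
    "patient/2/billing": (1, 0, 0, 0),
}

def compress(acl):
    """Dictionary-encode the ACL: store unique vectors once, one small id per node."""
    vectors, ids, node_to_id = [], {}, {}
    for node, vec in acl.items():
        if vec not in ids:
            ids[vec] = len(vectors)
            vectors.append(vec)
        node_to_id[node] = ids[vec]
    return vectors, node_to_id

vectors, node_to_id = compress(acl)
print(len(vectors))  # 2 distinct vectors stored instead of 4
print(node_to_id["patient/1/billing"] == node_to_id["patient/2/billing"])  # True
```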
Query optimization is a crucial component for database systems.
This thesis proposes an intelligent query plan caching mechanism
that has lower amortized cost for query planning in the presence
of fine-grained access controls. The rationale behind this query
plan caching mechanism is that the queries, customized by
different access controls from different users, may share common
upper-level join trees in their optimal query plans. Since join
plan generation is an expensive step in query optimization,
reusing the upper-level join trees reduces query optimization time
significantly. The proposed caching mechanism is able to match
efficient cached plans to incoming access-controlled queries with
minimal runtime cost.
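The caching rationale can be sketched as follows: two users' queries that differ only in their access-control filters share the same join structure, so the expensive join enumeration runs once and is reused. The names and the stand-in "plan" below are illustrative assumptions, not the thesis' implementation.

```python
import functools

optimizer_calls = 0

@functools.lru_cache(maxsize=None)
def plan_joins(join_signature):
    """Stand-in for expensive join enumeration; cached by join structure only."""
    global optimizer_calls
    optimizer_calls += 1
    return f"hash_join{join_signature}"

def plan(query_tables, access_filter):
    key = tuple(sorted(query_tables))        # access filters excluded from the key
    return plan_joins(key), access_filter    # filter applied below the shared joins

p1 = plan({"orders", "customers"}, "owner = 'alice'")
p2 = plan({"customers", "orders"}, "owner = 'bob'")
print(p1[0] == p2[0], optimizer_calls)  # True 1: join plan computed only once
```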
In case of a query plan cache miss, the optimizer needs to
optimize an access controlled query from scratch. This depends on
accurate cardinality estimation of the sizes of intermediate
query results. This thesis proposes a novel sampling scheme that
has better accuracy than traditional cardinality estimation
techniques.
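The principle behind sampling-based cardinality estimation can be shown in a few lines: estimate the selectivity of the access predicate on a uniform sample and scale up. The thesis' scheme is more sophisticated; this is only the baseline idea, on synthetic data.

```python
import random

random.seed(7)
N = 100_000
# Synthetic table: (row id, whether this user may access the row).
table = [(i, i % 4 == 0) for i in range(N)]   # true accessible cardinality: 25_000

sample = random.sample(table, 1_000)           # uniform sample of the table
matches = sum(1 for _, accessible in sample if accessible)
estimate = matches / len(sample) * N           # scale the sample count up
print(round(estimate))                         # close to the true 25_000
```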
Processing Rank-Aware Queries in Schema-Based P2P Systems
In recent years, there has been considerable research with respect to query
processing in data integration and P2P systems. Conventional data
integration systems consist of multiple sources with possibly different
schemas, adhere to a hierarchical structure, and have a central component
(mediator) that manages a global schema. Queries are formulated against
this global schema and the mediator processes them by retrieving relevant
data from the sources transparently to the user. Arising from these
systems, eventually Peer Data Management Systems (PDMSs), or schema-based
P2P systems respectively, have attracted attention. Peers participating in
a PDMS can act both as a mediator and as a data source, are autonomous, and
might leave or join the network at will. For these reasons, peers often
hold incomplete or erroneous data sets and mappings. The possibly huge
amount of data available in such a network often results in large query
result sets that are hard to manage. Consequently, retrieving the
complete result set is in most cases difficult or even impossible. Applying
rank-aware query operators such as top-N and skyline, possibly in
conjunction with approximation techniques, is a remedy to these problems as
these operators select only those result records that are most relevant to
the user. Being aware that in most cases only a small fraction of the
complete result set is actually output to the user, retrieving the complete
set before evaluating such operators is obviously inefficient.
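The two rank-aware operators mentioned above can be sketched directly: top-N sorts by a user-defined ranking function, while the skyline keeps the records not dominated in every dimension. Data and the scoring function are illustrative.

```python
hotels = [  # (price, distance_to_beach) -- illustrative data
    (50, 8), (60, 3), (80, 2), (90, 1), (95, 9),
]

def top_n(records, n, score):
    """Top-N under a user-defined ranking function (lower score = better)."""
    return sorted(records, key=score)[:n]

def skyline(records):
    """Keep records not dominated (<= in every dimension, < in one) by another."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [r for r in records if not any(dominates(o, r) for o in records)]

print(top_n(hotels, 2, lambda h: h[0] + 10 * h[1]))  # [(60, 3), (80, 2)]
print(sorted(skyline(hotels)))  # (95, 9) is dominated by (50, 8) and drops out
```

In a PDMS these operators matter because a peer only needs to ship the records that can still enter the top-N or the skyline, not its complete result set.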
Therefore, the questions we want to answer in this dissertation are how to
compute such queries in PDMSs and how to do that efficiently. We propose
strategies for efficient query processing in PDMSs that exploit the
characteristics of rank-aware queries and optionally apply approximation
techniques. A peer's relevance is determined on two levels: on schema-level
and on data-level. According to its relevance a peer is either considered
for query processing or not. Because of this heterogeneity, queries need to be
rewritten, enabling cooperation between peers that use different schemas.
As existing query rewriting techniques mostly consider conjunctive queries
only, we present an extension that allows for rewriting queries involving
rank-aware query operators. As PDMSs are dynamic systems and peers might
update their local data, this dissertation addresses not only the problem
of considering such structures within a query processing strategy but also
the problem of keeping them up-to-date. Finally, we provide a system-level
evaluation by presenting SmurfPDMS (SiMUlating enviRonment For Peer Data
Management Systems) -- a system created in the context of this dissertation
implementing all presented techniques.