Extending dependencies for improving data quality
This doctoral thesis presents the results of my work on extending dependencies for
improving data quality, both in a centralized environment with a single database and
in a data exchange and integration environment with multiple databases.
The first part of the thesis proposes five classes of data dependencies, referred to as
CINDs, eCFDs, CFDcs, CFDps and CINDps, to capture data inconsistencies commonly
found in practice in a centralized environment. For each class of these dependencies,
we investigate two central problems: the satisfiability problem and the implication
problem. The satisfiability problem is to determine, given a set Σ of dependencies
defined on a database schema R, whether or not there exists a nonempty database D of
R that satisfies Σ. The implication problem is to determine whether or not a set Σ
of dependencies defined on a database schema R entails another dependency φ on R;
that is, whether every database D of R that satisfies Σ also satisfies φ. These are
important for the validation and optimization of data-cleaning processes. We establish
the complexity of the satisfiability and implication problems for all five classes
of dependencies, both in the absence of finite-domain attributes and in
the general setting with finite-domain attributes. Moreover, SQL-based techniques are
developed to detect data inconsistencies for each class of the proposed dependencies,
which can be easily implemented on top of current database management systems.
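To make this concrete, the sketch below checks one hypothetical CFD on a toy customer relation using SQLite from Python; the table, attributes, pattern, and query are invented for illustration and need not match the detection queries developed in the thesis.

```python
# Minimal sketch (illustrative only): SQL-based detection of violations of one
# hypothetical CFD on a relation cust(cc, zip, street):
#   ([cc, zip] -> [street], (44, _ || _))
# i.e. for tuples with country code 44, zip determines street.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cust (cc TEXT, zip TEXT, street TEXT);
    INSERT INTO cust VALUES
        ('44', 'EH8 9AB', 'Crichton St'),
        ('44', 'EH8 9AB', 'Mayfield Rd'),  -- violates the CFD together with the row above
        ('01', '10001',   '5th Ave');      -- the pattern cc = 44 does not apply here
""")

# Multi-tuple violations: groups of tuples that match the pattern and agree on the
# antecedent attributes (cc, zip) but disagree on the consequent attribute (street).
violations = conn.execute("""
    SELECT cc, zip, COUNT(DISTINCT street) AS n_streets
    FROM cust
    WHERE cc = '44'
    GROUP BY cc, zip
    HAVING COUNT(DISTINCT street) > 1
""").fetchall()

print(violations)  # [('44', 'EH8 9AB', 2)]
```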
The second part of the thesis studies three important topics for data cleaning in a
data exchange and integration environment with multiple databases.
One is the dependency propagation problem, which is to determine, given a view
defined on data sources and a set of dependencies on the sources, whether another
dependency is guaranteed to hold on the view. We investigate dependency propagation
for views defined in various fragments of relational algebra, conditional functional
dependencies (CFDs) [FGJK08] as view dependencies, and for source dependencies
given as either CFDs or traditional functional dependencies (FDs). We establish
matching lower and upper bounds, ranging from PTIME to undecidable. These bounds not
only provide the first results for CFD propagation, but also extend the classical work
of FD propagation by giving new complexity bounds in the presence of attributes with
finite domains. We finally provide the first algorithm for computing a minimal cover of
all CFDs propagated via SPC views. The algorithm has the same complexity as one of
the most efficient algorithms for computing a cover of FDs propagated via a projection
view, despite the increased expressive power of CFDs and SPC views.
Another one is matching records from unreliable data sources. A class of matching
dependencies (MDs) is introduced for specifying the semantics of unreliable data. As
opposed to static constraints for schema design such as FDs, MDs are developed for
record matching, and are defined in terms of similarity metrics and a dynamic semantics. We identify a special case of MDs, referred to as relative candidate keys (RCKs),
to determine what attributes to compare and how to compare them when matching
records across possibly different relations. We also propose a mechanism for inferring MDs with a sound and complete system, a departure from traditional implication
analysis, such that when we cannot match records by comparing attributes that contain
errors, we may still find matches by using other, more reliable attributes. We finally
provide a quadratic time algorithm for inferring MDs, and an effective algorithm for
deducing quality RCKs from a given set of MDs.
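As a flavour of how an MD differs from a static constraint, the sketch below applies one invented MD to two records; the attributes, the similarity metric, and the threshold are assumptions made for illustration rather than definitions from the thesis.

```python
# Minimal sketch of applying a hypothetical matching dependency (MD):
#   if name ~ name (similarity) and phone = phone, then the two addr values
#   (and the records themselves) should be identified.
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Stand-in similarity metric; a real MD fixes the metric per attribute."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def md_applies(r1: dict, r2: dict) -> bool:
    # Antecedent of the assumed MD: names are similar AND phone numbers are equal.
    return similar(r1["name"], r2["name"]) and r1["phone"] == r2["phone"]

billing = {"name": "Jon Smith",   "phone": "555-0137", "addr": "10 Main St"}
shipping = {"name": "John Smith", "phone": "555-0137", "addr": "10 Main Street"}

if md_applies(billing, shipping):
    # Dynamic semantics: the consequent says the addr values should be identified;
    # how they are merged (e.g. preferring the more reliable source) is a separate step.
    print("match: the two records refer to the same person; identify their addr values")
```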
The last one is finding certain fixes for data monitoring [CGGM03, SMO07], which
is to find and correct errors in a tuple when it is created, either entered manually or
generated by some process. That is, we want to ensure that a tuple t is clean before it
is used, to prevent errors introduced by adding t. As noted by [SMO07], it is far less
costly to correct a tuple at the point of entry than to fix it afterward.
Data repairing based on integrity constraints may not find certain fixes that are
absolutely correct, and worse, may introduce new errors when repairing the data. We
propose a method for finding certain fixes, based on master data, a notion of certain
regions, and a class of editing rules. A certain region is a set of attributes that are
assured correct by the users. Given a certain region and master data, editing rules tell
us what attributes to fix and how to update them. We show how the method can be used
in data monitoring and enrichment. We develop techniques for reasoning about editing
rules, to decide whether they lead to a unique fix and whether they are able to fix all
the attributes in a tuple, relative to master data and a certain region. We also provide
an algorithm to identify minimal certain regions, such that a certain fix is warranted by
editing rules and master data, as long as one of the regions is correct.
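The sketch below illustrates the flavour of such a rule; the master relation, the certain region {zip}, and the editing rule itself are invented for the example and are not the thesis's actual rules.

```python
# Minimal sketch of one hypothetical editing rule applied with master data:
# if the input tuple agrees with a master tuple on zip, and zip lies in the
# certain region (asserted correct by the user), fix city and area_code from master.
master = [
    {"zip": "EH8 9AB", "city": "Edinburgh", "area_code": "0131"},
    {"zip": "NW1 2DB", "city": "London",    "area_code": "020"},
]

def apply_editing_rule(t: dict, certain_region: set) -> dict:
    if "zip" not in certain_region:
        return t  # the rule only fires when zip is assured to be correct
    for m in master:
        if m["zip"] == t["zip"]:
            fixed = dict(t)
            fixed["city"] = m["city"]            # attributes the rule tells us to fix
            fixed["area_code"] = m["area_code"]  # ... and how to update them
            return fixed
    return t

t = {"zip": "EH8 9AB", "city": "Edinbrugh", "area_code": "0113"}  # two input errors
print(apply_editing_rule(t, certain_region={"zip"}))
# {'zip': 'EH8 9AB', 'city': 'Edinburgh', 'area_code': '0131'}
```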
A revival of integrity constraints for data cleaning
Integrity constraints, a.k.a. data dependencies, are being widely used for improving the quality of schemas. Recently, constraints have enjoyed a revival for improving the quality of data. The tutorial aims to provide an overview of recent advances in constraint-based data cleaning.
Determinants Of Healthy Ageing For Older People In European Countries – A Spatio-Temporal Approach
The article aims to investigate the relationship between the length of further healthy life for men and women aged 65 and selected factors in European countries over the period 2005-2012. For this purpose, the following methods were used: 1) analysis of the spatial distribution of the characteristics and of their rates of change between the selected years 2005 and 2012; 2) tests for dependencies using correlograms and Spearman's rank correlation coefficients; 3) cluster analysis, in which spatial similarities among countries were identified using Ward's method. The Eurostat database was used as the source of data.
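As a rough sketch of two of the analysis steps named above (Spearman's rank correlation and Ward's clustering), the snippet below runs them on made-up values for a handful of countries; the indicators and numbers are illustrative only and are not the article's data.

```python
# Minimal sketch: Spearman rank correlation between a health indicator and one
# candidate determinant, followed by Ward hierarchical clustering of countries.
# All values are made up for illustration.
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster

countries = ["AT", "BE", "DE", "PL", "SE"]
healthy_life_years_65 = np.array([8.1, 10.6, 6.9, 7.6, 13.8])      # hypothetical values
health_spending_pct_gdp = np.array([10.3, 10.2, 11.1, 6.4, 11.0])  # hypothetical values

rho, p_value = spearmanr(healthy_life_years_65, health_spending_pct_gdp)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")

# Ward's method on the standardised indicators; cut the dendrogram into 2 clusters.
X = np.column_stack([healthy_life_years_65, health_spending_pct_gdp])
X = (X - X.mean(axis=0)) / X.std(axis=0)
labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
print(dict(zip(countries, labels)))
```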
A Product Line Systems Engineering Process for Variability Identification and Reduction
Software Product Line Engineering has attracted attention in the last two
decades due to its promising capabilities to reduce costs and time to market
through reuse of requirements and components. In practice, developing system
level product lines in a large-scale company is not an easy task as there may
be thousands of variants and multiple disciplines involved. The manual reuse of
legacy system models at domain engineering to build reusable system libraries
and configurations of variants to derive target products can be infeasible. To
tackle this challenge, a Product Line Systems Engineering process is proposed.
Specifically, the process extends research in the System Orthogonal Variability
Model to support hierarchical variability modeling with formal definitions;
utilizes Systems Engineering concepts and legacy system models to build the
hierarchy for the variability model and to identify essential relations between
variants; and finally, analyzes the identified relations to reduce the number
of variation points. The process, which is automated by computational
algorithms, is demonstrated through an illustrative example on generalized
Rolls-Royce aircraft engine control systems. To evaluate the effectiveness of
the process in the reduction of variation points, it is further applied to case
studies in different engineering domains at different levels of complexity.
Subject to system model availability, reductions of 14% to 40% in the number of
variation points are demonstrated in the case studies.
Comment: 12 pages, 6 figures, 2 tables; submitted to the IEEE Systems Journal on 3rd June 201
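As a loose illustration of the reduction idea only (not the paper's actual analysis), the sketch below drops a variation point whose choice is fully determined by another variation point through assumed "requires" relations between variants.

```python
# Highly simplified, hypothetical sketch: if every variant of one variation point
# requires exactly one distinct variant of another, choosing the first determines
# the second, so the second variation point can be removed from the decision space.
variation_points = {
    "fuel_metering": {"single_pump", "dual_pump"},
    "pump_control":  {"ctrl_a", "ctrl_b"},
}
requires = {  # assumed variant-to-variant relations
    "single_pump": "ctrl_a",
    "dual_pump":   "ctrl_b",
}

def reducible(determining_vp: str, dependent_vp: str) -> bool:
    targets = [requires.get(v) for v in variation_points[determining_vp]]
    return (all(t in variation_points[dependent_vp] for t in targets)
            and len(set(targets)) == len(variation_points[dependent_vp]))

if reducible("fuel_metering", "pump_control"):
    del variation_points["pump_control"]  # its choice is implied, no longer a decision
print(variation_points)  # {'fuel_metering': {'single_pump', 'dual_pump'}}
```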