6 research outputs found
Supporting the Production of High-Quality Data in Concurrent Plant Engineering Using a MetaDataRepository
In recent years, several process models for data quality management have been proposed. Because data quality problems are highly application-specific, these models must remain abstract, leaving unanswered the question of what exactly to do in a given situation. The task of implementing a data quality process is usually delegated to data quality experts, who rely heavily on input from domain experts, especially regarding data quality rules. In large engineering projects, however, the number of rules is very large, and different domain experts may have different data quality needs, which considerably complicates the data quality experts' task. At the same time, domain experts need quality measures to support their decisions about which data quality problems to solve most urgently. In this paper, we propose a MetaDataRepository architecture that allows domain experts to model their quality expectations without help from technical experts. It balances three conflicting goals: non-intrusiveness, simple and easy use for domain experts, and sufficient expressive power to handle the most common data quality problems in a large concurrent engineering environment.
Data and Information Quality Dimensions in Engineering Construction Projects
Poor data and information quality (DQ/IQ) causes delays and cost overruns in engineering construction projects, yet little DQ/IQ research has been performed in this context. This paper explores quality dimensions in the context of engineering construction projects. The most important dimensions identified by Ge et al. (2011) are used as a basis and compared with the dimensions used in 12 large engineering construction projects in one organization. The findings show that six of these dimensions are in use in those projects: accessibility, security, relevancy, completeness, consistency, and timeliness. In addition, the findings indicate another dimension that is also very important in this context: logical coherence. The logical coherence dimension compares different data values and determines whether there is any illogicality between them. Three dimensions are monitored by rules provided by a DQ/IQ tool, and we discuss the contributions such a tool can make to an engineering construction firm.
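The logical-coherence dimension described above can be made concrete with a small sketch. The field names and thresholds below are hypothetical illustrations, not taken from the paper: a coherence rule compares two values in the same record and flags combinations that cannot both be true.

```python
from datetime import date

# Hypothetical logical-coherence rules for an engineering construction record:
# a component cannot be installed before it was delivered, and a pipe's inner
# diameter cannot equal or exceed its outer diameter.
def check_logical_coherence(record: dict) -> list[str]:
    violations = []
    if record["installed_on"] < record["delivered_on"]:
        violations.append("installed before delivered")
    if record["inner_diameter_mm"] >= record["outer_diameter_mm"]:
        violations.append("inner diameter not smaller than outer diameter")
    return violations

record = {
    "delivered_on": date(2023, 5, 10),
    "installed_on": date(2023, 4, 1),   # illogical: precedes delivery
    "inner_diameter_mm": 90.0,
    "outer_diameter_mm": 100.0,
}
print(check_logical_coherence(record))  # → ['installed before delivered']
```

Each individual value here is plausible on its own; only the comparison between values reveals the problem, which is exactly what distinguishes logical coherence from single-field dimensions such as completeness.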
Managing data and information quality in construction engineering: a system design approach
Following the ADR method made it possible not only to develop a tool, but also to
formulate design principles and abstract the findings to a generalizable level.
This study contributes to knowledge and practice in several ways. First, I offer five design
principles for DQ/IQ assessment tools. These design principles are specifically aimed at
mitigating the unavoidable challenges and their consequences in construction engineering
projects. By accepting these unavoidable challenges and consequences and subsequently
providing means for managing the results in a controlled manner, these principles make
it possible to avoid project delays while still reaching a sufficient level of DQ/IQ in the end.
Second, the development and implementation of a tool in which these design principles
are embedded demonstrates the effectiveness of the design principles. A formal
evaluation, comparing a project that used the tool with two projects that did not,
showed a significantly higher level of DQ/IQ in the project using the tool.
Third, as a result of implementing the tool in a total of 12 construction engineering
projects, it was possible to determine three necessary and sufficient quality dimensions for
rule-based assessment. This finding offers valuable information to theory as well as to
practitioners aiming at assessing DQ/IQ in their projects.
Fourth, by revealing the relationship between unavoidable challenges and their
consequences in construction engineering, this thesis offers unique insights into the nature
of projects in that field, insights that are much needed when performing DQ/IQ assessment.
These insights will help DQ/IQ researchers enhance their understanding of a very
complex and under-researched context.
Fifth, by providing a ranked list of DQ/IQ problems experienced at EUMEC, this thesis
offers a more detailed explanation of DQ/IQ problems causing delays and cost overruns
than is the case in previous research.
All in all, this research narrows a gap in the existing literature: the scarcity of
DQ/IQ research on construction engineering. The complexity of this industry makes it
difficult and time-consuming for an information systems researcher to fully understand
the nature of construction engineering, which may explain the scarcity of research in
this cross-disciplinary field; this thesis helps reduce that gap.
Design principles for data quality tools
Data quality is an essential aspect of organizational data management and can facilitate accurate decision-making and build competitive advantages. Numerous data quality tools aim to support data quality work by offering automation for different activities, such as data profiling or validation. However, despite a long history of tools and research, a lack of data quality remains an issue for many organizations. Data quality tools face changes in the organizational (e.g., evolving data architectures) and technical (e.g., big data) environment. Established tools cannot fully accommodate these changes, and limited prescriptive design knowledge on creating adequate tools is available. In this cumulative dissertation, we summarize the findings of nine individual studies on the objectives and design of data quality tools. Most importantly, we conducted four case studies on implementing data quality tools in real-world scenarios. In each case, we designed and implemented a separate data quality tool and abstracted the essential design elements. A subsequent cross-case analysis helped us accumulate the available design knowledge, resulting in the proposal of 13 generalized design principles. With this empirically grounded design knowledge, the dissertation contributes to the managerial and scientific communities. Managers can use our results to create customized data quality tools and assess offerings on the market. Scientifically, we address the lack of prescriptive design knowledge for data quality tools and offer many opportunities to extend our research in multiple directions. Continued work on data quality tools will help them become more successful in ensuring that data fulfills high-quality standards for the benefit of businesses and society.
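The kind of validation automation such tools offer can be sketched in a few lines. The rule names, records, and reporting format below are illustrative assumptions, not taken from the dissertation: each rule is a named predicate, and the tool reports a pass rate per rule over a data set.

```python
# Minimal sketch of rule-based validation in a data quality tool: each rule is
# a named predicate over a record; the assessment reports, per rule, the
# fraction of records that pass. All names and data are illustrative.
rules = {
    "completeness: name present": lambda r: bool(r.get("name")),
    "validity: weight positive": lambda r: r.get("weight_kg", 0) > 0,
}

records = [
    {"name": "Pump P-101", "weight_kg": 120.5},
    {"name": "", "weight_kg": 80.0},        # fails completeness
    {"name": "Valve V-20", "weight_kg": -1.0},  # fails validity
]

def assess(records, rules):
    # Pass rate per rule: share of records for which the predicate holds.
    return {name: sum(rule(r) for r in records) / len(records)
            for name, rule in rules.items()}

for name, rate in assess(records, rules).items():
    print(f"{name}: {rate:.0%}")
```

Keeping rules as data rather than hard-coded checks is one way a tool can stay adaptable as data architectures evolve, which is one of the environmental changes the abstract highlights.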
Integration of heterogeneous product data in distributed, complex development processes
Modern products, such as automobiles or machines, are becoming more complex as a result of increasing digitalization. In addition to mechanical parts, they comprise numerous mechatronic, electronic, and electrical components. To serve different customer needs, country-specific
characteristics, or legal requirements, these products must support a high degree of variability.
Product development is typically system- and component-oriented and is carried out using concurrent engineering methods. The differing requirements and tasks of product developers lead to an autonomous, heterogeneous IT landscape consisting of established information systems, such as product data management systems, as well as department-specific solutions. While exchange interfaces
often exist between the established information systems, the reconciliation of product data from these systems with department-specific solutions is frequently performed manually or not at all. In addition, the IT landscape of product development is subject to constant change, so
exchange interfaces must be continuously adapted and extended.
While the independent development of systems and components reduces development time, it becomes necessary at various points during product development to synchronize the autonomous, heterogeneous product data. Erroneous and inconsistent product data in late development phases lead to considerable costs, so the completeness and consistency of product data should be checked as early as possible
to ensure high product quality.
The subject of this thesis is the PROactive Consistency for E/E product Data management (PROCEED) framework, which enables the integration of autonomous, heterogeneous product data.
PROCEED supports the entire life cycle of product data integration, from the initial integration through the control and monitoring of the integration process to the handling of schema and data changes.
To overcome the structural heterogeneity of product data, information systems are abstracted into so-called product ontologies. These product ontologies are then merged into a common view by means of mapping rules and actions. Based on this model, integration quality metrics such as consistency and completeness are defined. In addition, the dynamic behavior in response to schema and data changes of the product ontologies is described. Finally, the PROCEED framework is implemented as a prototype and applied in a case study.
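The two integration quality metrics named above can be illustrated with a toy sketch. This is not the PROCEED implementation; the system names, product IDs, and metric definitions are simplifying assumptions: completeness is taken as the share of products present in both source systems, and consistency as the share of shared products whose mapped attribute values agree.

```python
# Two source systems mapped into a common view, keyed by product ID.
# Data and metric definitions are illustrative, not from the thesis.
mech_system = {"P1": {"mass_kg": 2.5}, "P2": {"mass_kg": 1.0}}
ee_system   = {"P1": {"mass_kg": 2.5}, "P3": {"mass_kg": 4.2}}

def completeness(a, b):
    # Fraction of all known product IDs that appear in both systems.
    return len(set(a) & set(b)) / len(set(a) | set(b))

def consistency(a, b, attr):
    # Fraction of shared products whose value for `attr` agrees.
    shared = set(a) & set(b)
    if not shared:
        return 1.0
    return sum(a[p][attr] == b[p][attr] for p in shared) / len(shared)

print(completeness(mech_system, ee_system))            # 1 of 3 IDs shared
print(consistency(mech_system, ee_system, "mass_kg"))  # shared values agree
```

Computing such metrics over the common view, rather than inside any one system, is what makes them usable for monitoring an ongoing integration process across autonomous sources.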