    Requirements For Incentive Mechanisms In Industrial Data Ecosystems

    In the increasingly interconnected business world, economic value is created less and less by one company alone and increasingly through the combination and enrichment of data by various actors in so-called data ecosystems. The research field around data ecosystems is, however, still in its infancy. In particular, the lack of knowledge about the actual benefits of inter-organisational data sharing is seen as one of the main reasons why companies are currently not motivated to engage in data ecosystems. This is especially evident in traditional sectors, such as production or logistics, where data is still shared comparatively rarely. However, there is also consensus in these sectors that cross-company data-driven services, such as collaborative condition monitoring, can generate major value for all actors involved. One reason for this discrepancy is that it is often unclear which incentives exist for data providers and how they can generate added value from offering their data to other actors in an ecosystem. Fair and appropriate incentive and revenue-sharing mechanisms are needed to ensure reliable cooperation and sustainable ecosystem development. To address this research gap and contribute to a deeper understanding, we conduct a literature review and identify requirements for incentive mechanisms in industrial data ecosystems. The results show, among other things, that technical requirements, such as enabling data usage control, as well as economic aspects, for instance the fair monetary valuation of data, play an important role in incentive mechanisms in industrial data ecosystems. Understanding these requirements can help practitioners better comprehend the incentive mechanisms of the ecosystems in which their organisations participate and can ultimately help to create new data-driven products and services.
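    The "fair monetary valuation of data" mentioned above is often operationalised with cooperative game theory. As an illustration only (the paper identifies requirements and does not prescribe a mechanism), the sketch below uses the Shapley value, a standard fair-division technique, to split ecosystem revenue among data providers; the revenue table and the three providers are hypothetical.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, coalition_value):
    """Each player's average marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = coalition_value(frozenset(coalition))
            coalition.add(p)
            totals[p] += coalition_value(frozenset(coalition)) - before
    return {p: totals[p] / factorial(len(players)) for p in players}

# Hypothetical revenue (arbitrary units) each coalition of data providers
# could generate with a joint service such as collaborative condition
# monitoring; all values are made up for illustration.
REVENUE = {
    frozenset(): 0, frozenset({"A"}): 10, frozenset({"B"}): 20,
    frozenset({"C"}): 10, frozenset({"A", "B"}): 60,
    frozenset({"A", "C"}): 40, frozenset({"B", "C"}): 50,
    frozenset({"A", "B", "C"}): 120,
}

print(shapley_values(["A", "B", "C"], REVENUE.get))
# The shares sum to the grand-coalition revenue of 120, one common
# formalisation of a "fair" revenue-sharing rule.
```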

    A Conceptual Design Model of an Operational Data Store for Business Intelligence Applications

    The development of business intelligence (BI) applications, involving data sources, a Data Warehouse (DW), Data Marts (DM) and an Operational Data Store (ODS), poses a major challenge to BI developers. This is mainly due to the lack of established models, guidelines and techniques in the development process, compared with system development in the discipline of software engineering. Furthermore, present BI applications emphasize the development of strategic information rather than operational and tactical information. Therefore, the main aim of this study is to propose a conceptual design model for BI applications using an ODS (CoDMODS). Through expert validation, the proposed conceptual design model, developed by means of a design science research approach, was found to satisfy nine quality model dimensions: it is easy to understand, covers clear steps, and is relevant, timeless, flexible, scalable, accurate, complete and consistent. Additionally, the two prototypes developed based on CoDMODS, for a water supply service (iUBIS) and telecommunication maintenance (iPMS), recorded a high average usability score (mean of 5.912) on the Computer System Usability Questionnaire (CSUQ) instrument. The outcomes of this study, particularly the proposed model, contribute to the analysis and design methods for developing operational and tactical information in BI applications. The model can serve as a guideline for BI developers, and the prototypes developed in the case studies can assist organizations in using quality information for business operations.

    Quality Assessment of Healthcare Databases

    The assessment of data quality and suitability plays an important role in improving the validity and generalisability of the results of studies based on secondary use of health databases. The availability of increasingly updated and valid information on data quality and suitability gives data users and researchers a useful tool to optimize their activities. In this paper, we summarize and synthesize the main aspects of Data Quality Assessment (DQA) applied in the field of secondary use of healthcare databases, with the aim of drawing attention to the critical aspects that must be considered and developed to improve the correct and effective use of secondary sources. Four areas for development are identified: standardizing DQA methods, reporting DQA methods and results, synergy between data managers and data users, and the role of institutions. Interdisciplinarity, multi-professionality and connections between government institutions, regulatory bodies, universities and the scientific community will provide the "toolbox" for i) developing standardized and shared DQA methods for health databases, and ii) defining the best strategies for disseminating DQA information and results.
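    The paper surveys DQA practice rather than prescribing code, but the kind of standardized, reportable quality check it calls for might look like the minimal sketch below; the fields, rules and records are hypothetical.

```python
from datetime import date

# Hypothetical records drawn from a healthcare database used for
# secondary purposes (e.g. an administrative claims extract).
records = [
    {"patient_id": "P1", "birth_date": date(1980, 5, 1), "icd10": "E11.9"},
    {"patient_id": "P2", "birth_date": None, "icd10": "I10"},
    {"patient_id": "P3", "birth_date": date(2030, 1, 1), "icd10": ""},
]

def completeness(records, field):
    """Share of records with a non-empty value for `field`."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def birth_date_plausibility(records, today=date(2024, 1, 1)):
    """Share of recorded birth dates that are not in the future."""
    dates = [r["birth_date"] for r in records if r["birth_date"] is not None]
    return sum(1 for d in dates if d <= today) / len(dates)

# A small, reproducible DQA report that could be published alongside
# study results, in line with the paper's "reporting" recommendation.
report = {
    "completeness(birth_date)": completeness(records, "birth_date"),
    "completeness(icd10)": completeness(records, "icd10"),
    "plausibility(birth_date)": birth_date_plausibility(records),
}
print(report)
```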

    Exploiting Context-Dependent Quality Metadata for Linked Data Source Selection

    The traditional Web is evolving into the Web of Data, which consists of huge collections of structured data spread over poorly controlled, distributed data sources. Live queries are needed to get current information out of this global data space. In live query processing, source selection deserves attention since it allows us to identify the sources that are likely to contain the relevant data. The thesis proposes a source selection technique, in the context of live query processing on Linked Open Data, that takes into account the context of the request and the quality of the data contained in the sources, in order to enhance the relevance of the answers (since the context enables a better interpretation of the request) and their quality (the answers being obtained by processing the request on the selected sources). Specifically, the thesis proposes an extension of the QTree indexing structure, originally proposed as a data summary to support source selection based on source content, to take quality and contextual information into account. With reference to a specific case study, the thesis also contributes an approach, relying on the Luzzu framework, to assess the quality of a source with respect to a given context (according to different quality dimensions). An experimental evaluation of the proposed techniques is also provided.
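    The abstract does not detail the extended QTree, so the sketch below only illustrates the underlying idea of context-dependent, quality-aware source selection: rank sources by combining an estimated relevance (the kind of content-based estimate a QTree-style data summary would supply) with quality scores weighted according to the current context. All source names, quality dimensions and weights are hypothetical.

```python
# Hypothetical per-source metadata: an estimated relevance to the query
# (as a content-based data summary would provide) plus quality scores
# per dimension, all on a 0..1 scale.
SOURCES = {
    "src1": {"relevance": 0.9, "quality": {"timeliness": 0.4, "accuracy": 0.9}},
    "src2": {"relevance": 0.6, "quality": {"timeliness": 0.9, "accuracy": 0.7}},
    "src3": {"relevance": 0.8, "quality": {"timeliness": 0.2, "accuracy": 0.5}},
}

def select_sources(sources, context_weights, top_k=2):
    """Rank sources by relevance times context-weighted quality."""
    total_w = sum(context_weights.values())

    def score(meta):
        quality = sum(w * meta["quality"].get(dim, 0.0)
                      for dim, w in context_weights.items()) / total_w
        return meta["relevance"] * quality

    return sorted(sources, key=lambda s: score(sources[s]), reverse=True)[:top_k]

# A context such as "live traffic information" values timeliness over
# accuracy, so src2 outranks src1 despite its lower estimated relevance.
print(select_sources(SOURCES, {"timeliness": 0.8, "accuracy": 0.2}))
```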

    Class Density and Dataset Quality in High-Dimensional, Unstructured Data

    We provide a definition of class density that can be used to measure the aggregate similarity of the samples within each of the classes in a high-dimensional, unstructured dataset. We then put forth several candidate methods for calculating class density and analyze the correlation between the values each method produces and the corresponding individual class test accuracies achieved on a trained model. Additionally, we propose a definition of dataset quality for high-dimensional, unstructured data and show that datasets meeting a certain quality threshold (experimentally demonstrated to be > 10 for the datasets studied) were candidates for eliding redundant data based on the individual class densities.
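    The abstract does not reproduce the candidate density methods, so the sketch below shows one plausible candidate: class density as the mean pairwise cosine similarity of sample embeddings within each class. The synthetic embeddings and labels are illustrative only.

```python
import numpy as np

def class_density(X, y):
    """Mean pairwise cosine similarity of the samples in each class.

    X: (n_samples, n_features) embeddings; y: class labels. One candidate
    measure of aggregate intra-class similarity; tighter classes score higher.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    densities = {}
    for label in np.unique(y):
        Z = Xn[y == label]
        n = len(Z)
        sims = Z @ Z.T  # pairwise cosine similarities, self-similarity = 1
        densities[int(label)] = (sims.sum() - n) / (n * (n - 1))
    return densities

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.1, (50, 64)),   # tight class 0
               rng.normal(0.0, 1.0, (50, 64))])  # diffuse class 1
y = np.array([0] * 50 + [1] * 50)
print(class_density(X, y))  # class 0 scores markedly higher than class 1
```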

    The Data Repurposing Challenge: New Pressures from Data Analytics

    When data is collected for the first time, the data collector has in mind the data quality requirements that must be satisfied before the data can be used successfully; that is, the data collector ensures "fitness for use", the commonly agreed-upon definition of data quality [Wang and Strong 1996]. However, data that is repurposed [Woodall and Wainman 2015], as opposed to reused, must be managed with multiple different fitness-for-use requirements in mind, which complicates any data quality enhancement [Ballou and Pazer 1985].
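    A minimal sketch of the repurposing problem, under assumed metrics: the same measured dataset quality can satisfy one use's fitness-for-use profile while failing another's, so repurposed data must be managed against several requirement sets at once. All metrics, thresholds and values are hypothetical.

```python
# Measured quality of one dataset, e.g. obtained by profiling
# (hypothetical values).
measured = {"completeness": 0.97, "timeliness_days": 30, "accuracy": 0.90}

# Fitness-for-use profiles: each (re)purpose imposes its own minimum
# requirements on the very same data.
PROFILES = {
    "monthly_reporting": {"completeness": 0.95, "timeliness_days": 31},
    "realtime_alerting": {"completeness": 0.99, "timeliness_days": 1},
}

def fitness_failures(measured, profile):
    """Return the metrics on which the data fails one profile."""
    failures = []
    for metric, required in profile.items():
        value = measured[metric]
        # Timeliness is "smaller is better"; the others "larger is better".
        ok = value <= required if metric == "timeliness_days" else value >= required
        if not ok:
            failures.append(metric)
    return failures

for use, profile in PROFILES.items():
    failures = fitness_failures(measured, profile)
    print(use, "fit for use" if not failures else f"fails on: {failures}")
```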