314 research outputs found

    A unified view of data-intensive flows in business intelligence systems : a survey

    Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time and operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and the challenges of moving towards next-generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing the challenges that still need to be addressed and how current solutions can be applied to address them.
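
    A minimal sketch of the batched ETL flow this survey discusses, in Python; the source rows, the transformation rules, and the fact_sales target table are hypothetical, chosen only to illustrate the extract-transform-load pattern.

        # Minimal sketch of a batched ETL flow; all source data, schema,
        # and the target table name are hypothetical.
        import sqlite3

        def extract(rows):
            # A real BI system would pull from heterogeneous sources;
            # here the input is an in-memory list of (id, amount) pairs.
            return rows

        def transform(rows):
            # Deliver data in an "analysis-ready" format: drop invalid
            # rows and normalize amounts to two decimal places.
            return [(i, round(a, 2)) for i, a in rows if a is not None]

        def load(rows, conn):
            conn.execute("CREATE TABLE IF NOT EXISTS fact_sales (id INTEGER, amount REAL)")
            conn.executemany("INSERT INTO fact_sales VALUES (?, ?)", rows)
            conn.commit()

        conn = sqlite3.connect(":memory:")
        load(transform(extract([(1, 10.456), (2, None), (3, 7.1)])), conn)
        print(conn.execute("SELECT * FROM fact_sales").fetchall())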

    Workflow repository for providing configurable workflow in ERP

    Workflows in an ERP system that covers a large functional domain are prone to duplication. This research builds a workflow repository that stores workflow variants from ERP business processes and supports composing new workflows to meet the needs of a new tenant. The proposed method consists of two stages: preprocessing and processing. The preprocessing stage identifies the common workflow and its sub-variants from existing workflow variants; the variants stored by users are Procure-to-Pay workflows. The variants are selected by similarity filtering and then merged to derive the common workflow and its sub-variants, which are stored as metadata mapped onto a relational database. Detection of common and sub-variant workflows achieves 92% accuracy; the common workflow consists of 3 common workflows derived from 8 workflow variants and has 10% lower complexity than the previous model. The processing stage provides the configurable workflow: the user submits a query model to find the desired workflow, and similarity filtering returns the possible common and/or sub-variant workflows. Through a workflow designer, the user can then recompose the common workflow. Provision of configurable workflows by the ERP reaches 100%, meaning that any workflow a user requests can be supplied by the ERP, either directly or as the basis for composing another workflow. The experiments show that the workflow repository can be built with the proposed architecture and is able to store and provide workflows, to detect whether a workflow is common or a sub-variant, and to provide configurable workflows in which users employ the common and sub-variant workflows as the basis for composing the ones they need.
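
    A sketch of the similarity-filtering step described above, using Jaccard similarity over activity sets; the Procure-to-Pay variants, the query model, and the 0.6 threshold are invented for illustration and are not taken from the paper.

        # Similarity filtering over workflow variants: keep variants whose
        # Jaccard similarity to the query model exceeds a threshold.
        def jaccard(a, b):
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 1.0

        variants = {
            "v1": ["create_pr", "approve_pr", "create_po", "receive_goods", "pay"],
            "v2": ["create_pr", "approve_pr", "create_po", "pay"],
            "v3": ["create_po", "receive_goods", "inspect_goods", "pay"],
        }

        query = ["create_pr", "create_po", "pay"]
        THRESHOLD = 0.6  # hypothetical cut-off
        scores = {k: jaccard(v, query) for k, v in variants.items()}
        candidates = [k for k, s in scores.items() if s >= THRESHOLD]
        print(scores, candidates)  # v1 and v2 pass, v3 is filtered out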

    Semantic Business Process Modeling

    This book presents a process-oriented business modeling framework based on semantic technologies. The framework consists of modeling languages, methods, and tools that allow for semantic modeling of business motivation, business policies and rules, and business processes. The quality of the proposed modeling framework is evaluated against the modeling content of SAP Solution Composer and several real-world business scenarios.

    Blueprint model and language for engineering cloud applications

    The research presented in this thesis is positioned within the domain of engineering cloud service-based applications (CSBAs). Its contribution is twofold: (1) a uniform specification language, called the Blueprint Specification Language (BSL), for specifying cloud services across several cloud vendors, and (2) a set of associated techniques, called the Blueprint Manipulation Techniques (BMTs), for publishing, querying, and composing cloud service specifications with the aim of supporting the flexible design and configuration of a CSBA.
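
    A hypothetical sketch of what a cloud-service blueprint might capture; the abstract does not show BSL syntax, so the field names and the query helper below are illustrative stand-ins, not the actual language or the BMTs.

        # Illustrative blueprint for one service; field names are invented.
        blueprint = {
            "name": "order-service",
            "offers": [{"type": "rest-api", "port": 8080}],
            "requires": [
                {"type": "sql-database", "vendor": "any"},   # bound at composition time
                {"type": "message-queue", "vendor": "any"},
            ],
            "policies": {"min_instances": 2, "region": "eu-west"},
        }

        def unresolved_requirements(bp):
            # A trivial "query" over a blueprint, in the spirit of the BMTs:
            # list requirements not yet bound to a concrete vendor offering.
            return [r for r in bp["requires"] if r.get("vendor") == "any"]

        print(unresolved_requirements(blueprint))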

    Strategies for Managing Linked Enterprise Data

    Data, information, and knowledge have become key assets of our 21st-century economy. As a result, data and knowledge management have become key tasks for sustainable development and business success. Often, knowledge is not explicitly represented, residing instead in the minds of people or scattered among a variety of data sources. Knowledge is inherently associated with semantics that convey its meaning to a human or machine agent. The Linked Data concept facilitates the semantic integration of heterogeneous data sources. However, we still lack an effective knowledge-integration strategy applicable to enterprise scenarios that balances the large amounts of data stored in legacy information systems and data lakes with tailored domain-specific ontologies that formally describe real-world concepts. In this thesis we investigate strategies for managing linked enterprise data, analyzing how actionable knowledge can be derived from enterprise data by leveraging knowledge graphs. Actionable knowledge provides valuable insights, supports decision makers with clear, interpretable arguments, and keeps its inference processes explainable. The benefits of employing actionable knowledge and a coherent strategy for managing it span from a holistic semantic representation layer of enterprise data, i.e., representing numerous data sources as one consistent, integrated knowledge source, to unified interaction mechanisms with other systems that can effectively and efficiently leverage such actionable knowledge. Several challenges have to be addressed on different conceptual levels in pursuit of this goal, i.e., means for representing knowledge, semantic integration of raw data sources and subsequent knowledge extraction, communication interfaces, and implementation. To tackle these challenges we present the concept of Enterprise Knowledge Graphs (EKGs) and describe their characteristics and advantages compared to existing approaches. We study each challenge with regard to using EKGs and demonstrate their efficiency. In particular, EKGs are able to reduce the semantic data-integration effort when processing large-scale heterogeneous datasets. Then, having built a consistent logical integration layer with heterogeneity behind the scenes, EKGs unify query processing and enable effective communication interfaces for other enterprise systems. The achieved results allow us to conclude that strategies for managing linked enterprise data based on EKGs exhibit reasonable performance, comply with enterprise requirements, and ensure integrated data and knowledge management throughout the life cycle.
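
    A minimal sketch of the "one integrated knowledge source" idea behind EKGs, assuming the rdflib Python package is available; the ex: vocabulary and the supplier facts are hypothetical, invented only to show how a single SPARQL query spans data that used to live in separate systems.

        # Integrate facts from two hypothetical legacy sources into one
        # RDF graph, then query across them uniformly.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/ekg/")
        g = Graph()

        g.add((EX.acme, RDF.type, EX.Supplier))
        g.add((EX.acme, EX.locatedIn, Literal("Berlin")))  # e.g., from an ERP
        g.add((EX.acme, EX.riskScore, Literal(0.2)))       # e.g., from a data lake

        # One SPARQL query over what used to be separate sources.
        for row in g.query("""
            PREFIX ex: <http://example.org/ekg/>
            SELECT ?s ?city ?risk WHERE {
                ?s a ex:Supplier ; ex:locatedIn ?city ; ex:riskScore ?risk .
            }"""):
            print(row)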

    Tuple-based morphisms for interoperability establishment of financial information models

    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Master in Electrical and Computer Engineering. The current financial crisis has demonstrated a need for financial accounting data in a format that can be rapidly analyzed and exchanged. The appearance of XBRL in 2000 helped create a de facto standard data format for the exchange of financial information. However, XBRL by itself cannot ensure common semantics for the exchange of accounting information. Additionally, the existence of different accounting standards in different countries hinders efficient analysis and evaluation of companies by international analysts and investors. There is therefore a need not only for a more advanced data format, but also for tools that facilitate the exchange of accounting data, in particular when different accounting standards are used. This dissertation presents a tuple-based semantic and structural mapping for interoperability establishment of financial information models, based on the use of ontologies and a ‘Communication Mediator’. It allows mappings between the accounting concepts of different accounting standards to be stored in the ‘Communication Mediator’. Each stored mapping contains an ATL code expression which, with the aid of model transformation tools, can be used to perform the transformation between two different accounting models.
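
    A sketch of the tuple-based mapping idea described above; the concept names and the conversion functions are invented for illustration, and the plain Python callables stand in for the ATL expressions the dissertation stores in the Communication Mediator.

        # Each mapping is a tuple: (source standard, source concept,
        # target standard, target concept, transformation).
        mappings = [
            ("STD-A", "SalesRevenue", "IFRS", "Revenue", lambda v: v),
            ("STD-A", "NetResult", "IFRS", "NetIncome", lambda v: v),
        ]

        def translate(standard, concept, value):
            # Look up the stored tuple and apply its transformation.
            for src_std, src, tgt_std, tgt, fn in mappings:
                if (src_std, src) == (standard, concept):
                    return tgt_std, tgt, fn(value)
            raise KeyError(f"no mapping for {standard}:{concept}")

        print(translate("STD-A", "SalesRevenue", 125000.0))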

    Graphical Database Architecture For Clinical Trials

    The general area of this research is health informatics. The research focuses on creating an innovative and novel solution to manage and analyze clinical trials data. It constructs a Graphical Database Architecture (GDA) for Clinical Trials (CT) using Neo4j as a robust, scalable, and high-performance graph database. The purpose of the project is to develop concepts and techniques, based on this architecture, that accelerate clinical data navigation at lower cost. The research design uses a positivist approach to empirical research. The research is significant because it proposes a new approach to clinical trials through graph theory and designs a responsive structure for clinical data that can be deployed across the health informatics landscape. It uniquely contributes to the scholarly literature on Not-only-SQL (NoSQL) graph databases, mainly Neo4j in CT, for future research in clinical informatics. A prototype is created and examined to validate the concepts, taking advantage of Neo4j’s high availability, scalability, and powerful graph query language (Cypher). The study finds that integrating search methodologies and information retrieval with the graphical database provides a solid starting point for managing, querying, and analyzing clinical trials data; furthermore, the design and development of the prototype demonstrate the study’s conceptual model. Likewise, the proposed clinical trials ontology (CTO) incorporates all data elements of a standard clinical study, which facilitates a heuristic overview of the treatments, interventions, and outcome results of these studies.
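
    A sketch of querying a clinical-trials graph with Cypher through the official neo4j Python driver; it assumes a local Neo4j instance, and the connection details and the Trial/Intervention/Condition schema are hypothetical rather than the paper’s actual CTO.

        # Run a Cypher query against a hypothetical clinical-trials graph.
        from neo4j import GraphDatabase

        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "password"))

        CYPHER = """
        MATCH (t:Trial)-[:TESTS]->(i:Intervention)-[:TREATS]->(c:Condition)
        WHERE c.name = $condition
        RETURN t.id AS trial, i.name AS intervention
        """

        with driver.session() as session:
            for record in session.run(CYPHER, condition="diabetes"):
                print(record["trial"], record["intervention"])
        driver.close()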

    Semantic Model Alignment for Business Process Integration

    Business process models describe an enterprise’s way of conducting business and in this form are the basis for shaping the organization and engineering the appropriate supporting or even enabling IT. A major task in working with models is their analysis and comparison for the purpose of aligning them. As models can differ semantically not only in the modeling languages used, but even more in the way natural language has been applied to label model elements, correctly identifying the intended meaning of a legacy model is a non-trivial task that thus far has only been solved by humans. In particular at times of reorganizations, the set-up of B2B collaborations, or mergers and acquisitions, the semantic analysis of models of different origin that need to be consolidated is a manual effort that is not only tedious and error-prone but also time-consuming, costly, and often repetitive. To facilitate automating this task by means of IT, this thesis presents the new method of Semantic Model Alignment. Applying it makes it possible to extract and formalize the semantics of models, relating them based on the modeling language used and determining similarities based on the natural language in model element labels. The resulting alignment supports model-based semantic business process integration. The research conducted follows a design-science-oriented approach, and the method has been developed together with all its enabling artifacts. These results were published as the research progressed and are presented in this thesis through a selection of peer-reviewed publications that comprehensively describe the various aspects.
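
    A sketch of the label-similarity step in aligning two process models; difflib’s character-based ratio is a simple stand-in for whatever linguistic similarity measure the thesis actually develops, and the model labels are invented.

        # Match each label in model A to its most similar label in model B.
        from difflib import SequenceMatcher

        def label_similarity(a, b):
            # Normalize labels before comparing: lowercase, keep letters,
            # digits, and spaces only.
            norm = lambda s: "".join(c for c in s.lower() if c.isalnum() or c == " ")
            return SequenceMatcher(None, norm(a), norm(b)).ratio()

        model_a = ["Check customer order", "Ship goods", "Send invoice"]
        model_b = ["Verify order of customer", "Dispatch goods", "Issue invoice"]

        for la in model_a:
            best = max(model_b, key=lambda lb: label_similarity(la, lb))
            print(f"{la!r} -> {best!r} ({label_similarity(la, best):.2f})")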