
    Towards Next Generation Business Process Model Repositories – A Technical Perspective on Loading and Processing of Process Models

    Business process management repositories manage large collections of process models, often numbering in the thousands. In addition, they provide management functions such as mining, querying, merging, and variants management for process models. However, most current business process management repositories are built on top of relational database management systems (RDBMS), which leads to performance issues. These issues stem from the relational algebra, the mismatch between relational tables and object-oriented programming (the impedance mismatch), and technological developments of the last 30 years, such as cheaper and more plentiful disk and memory space, clusters, and clouds. The goal of this paper is to present current paradigms for overcoming the performance problems inherent in RDBMS. To this end, we fuse research on data modeling and database technologies with research on algorithm design and parallelization for today's technology paradigms. Based on these research streams, we show how the performance of business process management repositories can be improved in terms of loading process models (e.g., from disk) and computing management techniques, resulting in faster application of such techniques. Example applications of the compiled paradigms are presented to demonstrate their applicability.
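
    The loading argument can be made concrete: in a normalized RDBMS layout a process model must be reassembled from several tables on every load, while a document-oriented layout stores the model as one aggregate. The sketch below illustrates the contrast; the node/edge schema, table names, and the in-memory document store are hypothetical, not taken from the paper.

    ```python
    import json
    import sqlite3

    # Relational layout (hypothetical schema): the model is scattered across
    # normalized tables and must be reassembled with several queries per load.
    def load_model_relational(conn: sqlite3.Connection, model_id: int) -> dict:
        nodes = conn.execute(
            "SELECT id, label FROM node WHERE model_id = ?", (model_id,)
        ).fetchall()
        edges = conn.execute(
            "SELECT source, target FROM edge WHERE model_id = ?", (model_id,)
        ).fetchall()
        return {
            "nodes": [{"id": n, "label": lbl} for n, lbl in nodes],
            "edges": [{"source": s, "target": t} for s, t in edges],
        }

    # Document layout: the whole model is one aggregate, so loading is a
    # single key lookup plus deserialization, with no joins and no
    # object-relational mapping layer in between.
    def load_model_document(store: dict, model_id: int) -> dict:
        return json.loads(store[model_id])
    ```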

    Beyond OAIS: towards a reliable and consistent digital preservation implementation framework

    Current work in digital preservation (DP) is dominated by the "Open Archival Information System" (OAIS) reference framework specified by the international standard ISO 14721:2003. This is a useful aid to understanding the concepts, main functional components, and basic data flows within a DP system, but it does not give specific guidance on implementation-level issues. In this paper we suggest that there is a need for a reference architecture which goes beyond OAIS to address such implementation-level issues: to specify minimum requirements in respect of the policies, processes, and metadata required to measure and validate repository trustworthiness in respect of the authenticity, integrity, renderability, meaning, and retrievability of the digital materials preserved. The suggestion is not that a particular way of implementing OAIS be specified but, rather, that general guidelines on implementation are required if the term 'OAIS-compliant' is to be meaningful in the sense of giving an assurance of attaining and maintaining an operationally adequate or better level of long-term reliability, consistency, and cross-compatibility in implemented DP systems that is measurable, verifiable, manageable, and (as far as possible) future-proofed.
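
    One implementation-level requirement the paper argues for, validating the integrity of preserved materials, can be made concrete with a fixity audit. The sketch below is illustrative rather than a prescribed OAIS mechanism: checksums recorded at ingest are recomputed and any mismatch is reported.

    ```python
    import hashlib
    from pathlib import Path

    def fixity(path: Path, algorithm: str = "sha256") -> str:
        """Compute a checksum for one preserved file, streamed in chunks."""
        digest = hashlib.new(algorithm)
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def audit(manifest: dict[str, str], root: Path) -> list[str]:
        """Return files whose current checksum no longer matches the value
        recorded at ingest: an integrity failure a trustworthy repository
        must be able to detect and report."""
        return [
            name for name, recorded in manifest.items()
            if fixity(root / name) != recorded
        ]
    ```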

    Quality measures for ETL processes: from goals to implementation

    Extraction-transformation-loading (ETL) processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and a more human-centric approach is needed to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model of ETL process quality characteristics and quantitative measures for each characteristic, based on the existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, where we employ a goal model that includes quantitative components (i.e., indicators) for evaluating and analysing alternative design decisions. Peer reviewed. Postprint (author's final draft).
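
    As an illustration of quantitative indicators attached to quality characteristics, the sketch below scores two hypothetical ETL design alternatives against reliability and timeliness. The characteristics, formulas, and weights are invented for illustration; the paper derives its actual model from the literature.

    ```python
    from dataclasses import dataclass

    @dataclass
    class DesignAlternative:
        """One candidate ETL design plus raw numbers from a test run."""
        name: str
        rows_loaded: int
        rows_rejected: int
        load_seconds: float
        window_seconds: float

        def reliability(self) -> float:
            # Share of rows that loaded cleanly.
            total = self.rows_loaded + self.rows_rejected
            return self.rows_loaded / total if total else 0.0

        def timeliness(self) -> float:
            # Slack left in the allotted load window (0 = window blown).
            return max(0.0, 1.0 - self.load_seconds / self.window_seconds)

    def score(alt: DesignAlternative, weights: dict[str, float]) -> float:
        """Weighted aggregate of indicator values, used to rank alternatives."""
        return (weights["reliability"] * alt.reliability()
                + weights["timeliness"] * alt.timeliness())

    alternatives = [
        DesignAlternative("incremental load", 980_000, 2_000, 1800, 3600),
        DesignAlternative("full reload", 1_000_000, 0, 3400, 3600),
    ]
    best = max(alternatives, key=lambda a: score(a, {"reliability": 0.6,
                                                     "timeliness": 0.4}))
    print(best.name)
    ```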

    Managing Metadata in Data Warehouses: Pitfalls and Possibilities

    This paper motivates a comprehensive academic study of metadata and the roles that metadata plays in organizational information systems. While the benefits of metadata and the challenges of implementing metadata solutions are widely addressed in practitioner publications, explicit discussion of metadata in the academic literature is rare. Metadata, when discussed, is perceived primarily as a technology solution; the integrated management of metadata and its business value are not well addressed. This paper discusses both the benefits offered by and the challenges associated with integrating metadata, and it describes solutions for addressing some of these challenges. The inherent complexity of an integrated metadata repository is demonstrated by reviewing the metadata functionality required in a data warehouse: a decision support environment where its importance is acknowledged. Comparing this required functionality with the metadata management functionality offered by data warehousing software products identifies crucial gaps. Based on these analyses, topics for further research on metadata are proposed.
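
    A minimal sketch of what "integrated" means here: technical metadata (source, lineage) and business metadata (definition, owner) for a warehouse element kept in one record, traceable from the same entry. All names below are illustrative and not drawn from any particular product.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class MetadataEntry:
        """Technical and business metadata kept together in one record."""
        element: str                      # e.g. "sales_fact.revenue"
        source: str                       # upstream system or ETL job
        definition: str                   # business meaning, in plain language
        owner: str                        # accountable business steward
        lineage: list[str] = field(default_factory=list)

    repo: dict[str, MetadataEntry] = {}

    def register(entry: MetadataEntry) -> None:
        repo[entry.element] = entry

    def trace(element: str) -> list[str]:
        """Look up the recorded lineage of a warehouse element."""
        entry = repo.get(element)
        return entry.lineage if entry else []

    register(MetadataEntry(
        element="sales_fact.revenue",
        source="etl.load_sales",
        definition="Invoiced revenue net of returns, in EUR",
        owner="finance",
        lineage=["crm.orders.amount", "erp.returns.amount"],
    ))
    print(trace("sales_fact.revenue"))
    ```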

    Digital Preservation Services: State of the Art Analysis

    Research report funded by the DC-NET project. An overview of the state of the art in service provision for digital preservation and curation. Its focus is on the areas where gaps need to be bridged between e-Infrastructures and efficient, forward-looking digital preservation services. Based on a desktop study and a rapid analysis of some 190 currently available tools and services for digital preservation, the deliverable provides a high-level view of the range of instruments currently on offer to support various functions within a preservation system. European Commission, FP7. Peer reviewed.

    Semantic technologies: from niche to the mainstream of Web 3? A comprehensive framework for web information modelling and semantic annotation

    Context: Web information technologies developed and applied in the last decade have considerably changed the way web applications operate and have revolutionised information management and knowledge discovery. Social technologies, user-generated classification schemes, and formal semantics have a far-reaching sphere of influence: they promote collective intelligence, support interoperability, enhance sustainability, and instigate innovation. Contribution: The research carried out and the consequent publications follow the various paradigms of semantic technologies, assess each approach, evaluate its efficiency, identify the challenges involved, and propose a comprehensive framework for web information modelling and semantic annotation, which is the thesis's original contribution to knowledge. The proposed framework assists web information modelling, facilitates semantic annotation and information retrieval, enables system interoperability, and enhances information quality. Implications: Semantic technologies coupled with social media and end-user involvement can instigate innovation with wide organisational implications that can benefit a considerable range of industries. The scalable and sustainable business models of social computing and the collective intelligence of organisational social media can be resourcefully paired with internal research and knowledge from interoperable information repositories, back-end databases, and legacy systems. Semantified information assets can free human resources to better serve business development, support innovation, and increase productivity.
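
    The pairing of formal semantics with user-generated classification that such a framework addresses can be sketched in RDF: formally typed triples and free-form user tags attached to the same resource, so both remain jointly queryable. The namespace and property names below are placeholders, and the example uses the rdflib library rather than any tooling named in the thesis.

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    EX = Namespace("http://example.org/annotation/")  # placeholder namespace

    g = Graph()
    doc = URIRef("http://example.org/docs/42")        # resource being annotated

    # Formal annotation: typed triples a machine can reason over.
    g.add((doc, RDF.type, EX.WebResource))
    g.add((doc, DCTERMS.subject, EX.SemanticWeb))

    # Folksonomy-style annotation: a free user tag stored alongside the
    # formal terms, so social and formal semantics stay jointly queryable.
    g.add((doc, EX.userTag, Literal("web3")))

    for _, p, o in g.triples((doc, None, None)):
        print(p, o)
    ```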

    Knowledge Reuse Through Electronic Knowledge Repositories: An Empirical Study And Ontological Improvement Effort For The Manufacturing Industry

    Knowledge management adoption is growing, and it will continue to grow in no small part because of its recent inclusion in the ISO 9001 quality standard. As organizations look for ways to manage their knowledge, the codification of explicit knowledge through Knowledge Management Systems (KMS) and Electronic Knowledge Repositories (EKRs) will undoubtedly gain more interest. An EKR is a form of KMS that emphasizes the codification and storage of organizational expertise for the purpose of Knowledge Reuse (KRU). Unfortunately, the factors surrounding KRU are not well understood. While previous studies have viewed EKR usage from a narrow perspective, a broader and interconnected view of KRU via EKRs has yet to emerge. Additionally, while numerous benefits have been linked to EKRs, issues remain that limit their utility, particularly in the manufacturing arena, where information complexity and geography have made it increasingly difficult to share knowledge. Hence, this research employed a two-pronged approach. First, using a multi-theoretical perspective to model KRU via EKRs, a quantitative study was conducted that identified several socio-technical factors predicting greater KRU. These factors had not previously been modeled within the context of KRU via EKRs and hence add to both the theoretical and the practical implications of the domain. Additionally, the KRU construct was tied to a downstream outcome view informed by the Expectation Confirmation Model (ECM). Through this view, the research quantitatively validated that KRU not only predicted greater performance but also led to greater knowledge sharing and continuance of use. This ancillary benefit reinforces the importance of EKRs in that additional gains are manifested alongside the core component of KRU. Second, the research extended the capability of manufacturing EKRs by developing a holistic design- and process-based ontology that connects key concepts within these domains to provide an overall interconnected view. To ensure the relevance of the ontology, a mature and globally recognized industry standard was used as the basis for its development. The ontology was then formalized and tested via Semantic Web tools: Protege, RDF, and SPARQL. The results demonstrate an improved approach to knowledge recall, providing rich and accurate query returns. The ability to use standalone and federated queries to effectively cut through the complexity of this interconnected domain is an enhancement over keyword-based and traditional relational database approaches. Additionally, to assist with greater industry adoption, a systematic and constructive approach for developing and operationalizing the ontology is provided. Finally, in the spirit of the program in which this dissertation is presented, the research effort is rounded out with broader organizational management recommendations for overall knowledge management. Referencing industry-targeted literature and syncing it with the findings from these two research efforts, several pragmatic and sequentially logical approaches to knowledge management are offered.
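
    The kind of ontology testing the dissertation describes (RDF plus SPARQL) can be illustrated with a toy fragment: a manufacturing process linked to the part design it produces, queried with a standalone SPARQL SELECT. The mfg namespace, class names, and triples below are hypothetical stand-ins, not the dissertation's actual ontology.

    ```python
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    MFG = Namespace("http://example.org/mfg#")  # hypothetical namespace

    g = Graph()
    # Toy stand-in for a design-and-process ontology: one process linked
    # to the part design it produces.
    g.add((MFG.SpotWeld, RDF.type, MFG.Process))
    g.add((MFG.SpotWeld, MFG.produces, MFG.BracketA))
    g.add((MFG.BracketA, RDFS.label, Literal("Mounting bracket, rev A")))

    # Standalone SPARQL query: structured recall of every process/part pair,
    # something a keyword search over documents cannot return directly.
    results = g.query("""
        PREFIX mfg: <http://example.org/mfg#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?process ?partLabel WHERE {
            ?process a mfg:Process ;
                     mfg:produces ?part .
            ?part rdfs:label ?partLabel .
        }
    """)
    for process, label in results:
        print(process, label)
    ```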

    Enhancing Analysts’ Mental Models for Improving Requirements Elicitation: A Two-stage Theoretical Framework and Empirical Results

    Research has extensively documented the importance of accurate system requirements in avoiding project delays, cost overruns, and system malfunctions. Requirements elicitation (RE) is a critical step in determining system requirements. While much research on RE has emerged, a deeper understanding of three aspects could significantly improve RE: 1) insights about the role and impact of support tools in the RE process, 2) the impact of using support tools in multiple stages of the RE process, and 3) a clear focus on the multiplicity of perspectives in assessing RE outcomes. To understand how using support tools could improve RE, we rely on the theoretical lens of mental models (MMs) to develop a dynamic conceptual model, arguing that analysts form MMs of the system during RE and that these MMs affect their outcome performance. We posit that analysts' MMs can be enhanced by using a knowledge-based repository (KBR) of components and services embodying domain knowledge specific to the target application during two key stages of RE, resulting in improved RE outcomes. We measured RE outcomes from both user and analyst perspectives. The knowledge-based component repository used in this research (developed in collaboration with a multinational company) focused on insurance claim processing. The repository served as the support tool in a multi-period lab experiment with multiple teams of analysts. The results supported the conceptualized model and showed the significant impact of such tools in supporting analysts and their performance outcomes at two stages of RE. This work makes multiple contributions: it offers a theoretical framework for understanding and enhancing the RE process, develops measures for analysts' mental models and RE performance outcomes, and shows how one can improve analysts' RE performance through access to a KBR of components at two key stages of the RE process.

    A unified view of data-intensive flows in business intelligence systems: a survey

    Data-intensive flows are central processes in today's business intelligence (BI) systems, deploying different technologies to deliver data from a multitude of data sources in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources and more real-time, operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and of the challenges of moving towards next-generation BI environments. In this paper we present a survey of today's research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing the challenges that remain to be addressed and how current solutions can be applied to address them. Peer reviewed. Postprint (author's final draft).
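
    The combination the survey centres on, batched ETL plus operational flows applying the same transformations at runtime, can be sketched as follows. The function names and row model are assumptions for illustration only.

    ```python
    from typing import Callable, Iterable, Iterator

    Row = dict
    Transform = Callable[[Row], Row]

    def batch_etl(source: Iterable[Row], transforms: list[Transform]) -> list[Row]:
        """Traditional batched flow: materialize the full load for the DW."""
        out = []
        for row in source:
            for t in transforms:
                row = t(row)
            out.append(row)
        return out

    def operational_flow(source: Iterable[Row],
                         transforms: list[Transform]) -> Iterator[Row]:
        """Operational flow: the same transformations applied row by row at
        runtime, yielding analysis-ready records without a batch window."""
        for row in source:
            for t in transforms:
                row = t(row)
            yield row

    normalize = lambda r: {**r, "amount": float(r["amount"])}
    rows = [{"amount": "12.5"}, {"amount": "3"}]
    print(batch_etl(rows, [normalize]))
    print(list(operational_flow(rows, [normalize])))
    ```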