
    How To Build Enterprise Data Models To Achieve Compliance To Standards Or Regulatory Requirements (and share data).

    Sharing data between organizations is challenging because it is difficult to ensure that those consuming the data interpret it accurately. The promise of the next-generation WWW, the semantic Web, is that the semantics of shared data will be represented in ontologies and available for automatic and accurate machine processing. Thus, there is inter-organizational business value in developing applications that have ontology-based enterprise models at their core. In an ontology-based enterprise model, business rules and definitions are represented as formal axioms, which are applied to enterprise facts to automatically infer facts not explicitly represented. If the proposition to be inferred is a requirement from, say, ISO 9000 or Sarbanes-Oxley, inference constitutes a model-based proof of compliance. In this paper, we detail the development and application of the TOVE ISO 9000 Micro-Theory, a model of ISO 9000 developed using ontologies for quality management (measurement, traceability, and quality management system ontologies). In so doing, we demonstrate that when enterprise models are developed using ontologies, they can be leveraged to support business analytics problems, compliance evaluation in particular, and are shareable.
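
    To make the inference step concrete, here is a minimal forward-chaining sketch in Python. The facts, the traceability rule, and the compliance goal are illustrative placeholders of my own, not the actual TOVE ISO 9000 Micro-Theory axioms; the point is only that deriving the required proposition from the rules and facts constitutes the model-based proof of compliance the abstract describes.

```python
# Minimal forward-chaining sketch: business rules applied to enterprise
# facts until no new facts can be derived. All predicates and the rule
# below are illustrative placeholders, not the TOVE ISO 9000 axioms.

# Enterprise facts as (predicate, subject, object) triples.
facts = {
    ("produced_by", "batch42", "line1"),
    ("inspected", "batch42", "qc_report7"),
}

def traceability_rule(facts):
    """If a batch's production line and inspection record are both
    recorded, infer that the batch is traceable."""
    produced = {f[1] for f in facts if f[0] == "produced_by"}
    inspected = {f[1] for f in facts if f[0] == "inspected"}
    return {("traceable", b) for b in produced & inspected}

rules = [traceability_rule]

# Forward chaining: apply every rule until a fixed point is reached.
changed = True
while changed:
    new = set().union(*(rule(facts) for rule in rules)) - facts
    facts |= new
    changed = bool(new)

# If the requirement (say, a traceability clause) is among the inferred
# facts, this run is a model-based proof of compliance for that clause.
print(("traceable", "batch42") in facts)  # True
```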

    Big Data Research in Information Systems: Toward an Inclusive Research Agenda

    Big data has received considerable attention from the information systems (IS) discipline over the past few years, with several recent commentaries, editorials, and special issue introductions on the topic appearing in leading IS outlets. These papers present varying perspectives on promising big data research topics and highlight some of the challenges that big data poses. In this editorial, we synthesize and contribute further to this discourse. We offer a first step toward an inclusive big data research agenda for IS by focusing on the interplay between big data’s characteristics, the information value chain encompassing people, process, and technology, and the three dominant IS research traditions (behavioral, design, and economics of IS). We view big data as a disruption to the value chain with widespread impacts, which include but are not limited to changing the way academics conduct scholarly work. Importantly, we critically discuss the opportunities and challenges for behavioral, design science, and economics of IS research, and the emerging implications for theory and methodology arising from big data’s disruptive effects.

    Semi-Automated Development of Conceptual Models from Natural Language Text

    The process of converting natural language specifications into conceptual models requires detailed analysis of natural language text, and designers frequently make mistakes when undertaking this transformation manually. Although many approaches have been used to help designers translate natural language text into conceptual models, each has its limitations. One of the main limitations is the lack of a domain-independent ontology that can be used as a repository for entities and relationships to guide the transition from natural language processing to a conceptual model. Such an ontology is not currently available because it would be very difficult and time consuming to produce. In this thesis, a semi-automated system for mapping natural language text into conceptual models is proposed. The model, called SACMES, combines a linguistic approach with an ontological approach and human intervention to achieve the task. The model learns from the natural language specifications that it processes, stores what it learns in a conceptual model ontology and a user history knowledge database, and then uses the stored information to improve performance and reduce the need for human intervention. The evaluation conducted on SACMES demonstrates that (1) designers create better conceptual models when using the system than when using no system, and that (2) the performance of the system improves as it processes more natural language requirements, thus reducing the need for human intervention. These advantages may be improved further through development of the learning and retrieval techniques used by the system.
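
    SACMES itself is not reproduced here, but the following toy Python sketch illustrates the kind of linguistic step such systems automate: heuristically mapping known domain nouns in a requirements sentence to candidate entities, and the verb connecting them to a candidate relationship. The noun list and the naive "noun verb noun" scan are deliberately simplistic assumptions of this sketch, standing in for the thesis's combination of linguistic analysis, ontology lookup, and human review.

```python
# Toy sketch of heuristic entity/relationship extraction from a
# requirements sentence. ENTITY_HINTS and the scan pattern are
# illustrative placeholders, not SACMES's actual technique.

import re

ENTITY_HINTS = {"customer", "order", "product", "invoice"}

def extract_candidates(sentence):
    """Return (entity, relationship, entity) triples found by a naive
    scan for two known domain nouns joined by an intervening verb."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    triples = []
    for i, tok in enumerate(tokens):
        if tok not in ENTITY_HINTS:
            continue
        # Look ahead for another known noun with at least one word between.
        for j in range(i + 2, len(tokens)):
            if tokens[j] in ENTITY_HINTS:
                verb = tokens[i + 1]  # crude guess: word right after noun
                triples.append((tok, verb, tokens[j]))
                break
    return triples

print(extract_candidates("A customer places an order."))
# [('customer', 'places', 'order')]
```

    In a full system, each candidate triple would then be checked against the ontology and, where ambiguous, presented to the designer for confirmation, which is the human-intervention step the abstract describes.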

    Semi-automatic conceptual data modeling using entity and relationship instance repositories

    Conceptual modeling is the foundation of analysis and design methodologies for the development of information systems. It is challenging because it requires a clear understanding of an application domain and an ability to translate the requirement specifications into a data model. However, novice designers frequently lack experience and have incomplete knowledge of the application being designed. We propose new types of reusable artifacts, the Entity Instance Repository (EIR) and Relationship Instance Repository (RIR), which contain ER (Entity-Relationship) modeling patterns from prior designs and serve as knowledge-based repositories for conceptual modeling. We also select six data modeling rules and check whether they are comprehensive enough for creating quality conceptual models. This research aims to develop effective knowledge-based systems (KBSs) with EIR and RIR. Our proposed artifacts are likely to be useful for conceptual design in the following respects: (1) they contain knowledge about a domain; (2) automatic generation of EIR and RIR overcomes a major problem of inefficient manual approaches that depend on experienced modeling designers and domain experts; and (3) they are domain-specific and therefore easier to understand and reuse. Two KBSs were developed in this study: the Heuristic-Based Technique (HBT) and Entity Instance Pattern WordNet (EIPW). The goals of this study are (1) to find effective approaches that improve novice designers’ performance in developing conceptual models by integrating pattern-based techniques with various modeling techniques, (2) to evaluate whether the six selected modeling rules are effective in HBT, and (3) to validate whether the proposed KBSs are effective in creating quality conceptual models. To assess the potential of the KBSs to benefit practice, empirical testing was conducted on tasks of different sizes. The empirical results indicate that novice designers’ overall performance increased by 30.9–46.0% when using EIPW and by 33.5–34.9% when using HBT, compared with using no tools.
    Ph.D., Information Studies, Drexel University, 201
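
    As a rough illustration of how EIR and RIR lookups could assist a novice designer, the Python sketch below keys reusable ER patterns by domain term and suggests a relationship recorded in prior designs. The pattern data and the suggest() helper are hypothetical inventions for this sketch; per the abstract, the actual repositories are generated automatically from prior designs rather than hand-written.

```python
# Hypothetical sketch of querying an Entity Instance Repository (EIR)
# and Relationship Instance Repository (RIR) during conceptual design.
# All pattern data and the API shape are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class ERPattern:
    entity: str
    attributes: tuple

# EIR: domain term -> entity pattern mined from prior ER designs.
EIR = {
    "customer": ERPattern("Customer", ("customer_id", "name")),
    "order": ERPattern("Order", ("order_id", "date", "total")),
}

# RIR: (term, term) -> relationship name and cardinality from prior designs.
RIR = {
    ("customer", "order"): ("places", "1:N"),
}

def suggest(term_a, term_b):
    """Suggest entity patterns and a prior-design relationship for two
    terms found in the requirements, if the repositories cover them."""
    a, b = EIR.get(term_a), EIR.get(term_b)
    rel = RIR.get((term_a, term_b)) or RIR.get((term_b, term_a))
    return a, rel, b

print(suggest("customer", "order"))
# (ERPattern('Customer', ...), ('places', '1:N'), ERPattern('Order', ...))
```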