
    Using Ontologies for the Design of Data Warehouses

    Obtaining an implementation of a data warehouse is a complex task that forces designers to acquire wide knowledge of the domain, thus requiring a high level of expertise and making it a failure-prone task. Based on our experience, we have identified a set of situations, encountered in real-world projects, in which we believe the use of ontologies will improve several aspects of data warehouse design. The aim of this article is to describe several shortcomings of current data warehouse design approaches and to discuss the benefits of using ontologies to overcome them. This work is a starting point for discussing the suitability of using ontologies in data warehouse design. Comment: 15 pages, 2 figures

    Change Mining in Adaptive Process Management Systems

    The widespread adoption of process-aware information systems has resulted in a wealth of computerized information about real-world processes. This data can be utilized for process performance analysis as well as for process improvement. In this context, process mining offers promising perspectives. So far, existing mining techniques have been applied to operational processes, i.e., knowledge is extracted from execution logs (process discovery), or execution logs are compared with an a-priori process model (conformance checking). However, execution logs constitute only one kind of data gathered during process enactment. In particular, adaptive processes provide additional information about process changes (e.g., ad-hoc changes of single process instances) which can be used to enable organizational learning. In this paper we present an approach for mining change logs in adaptive process management systems. The change process discovered through process mining provides an aggregated overview of all changes that have happened so far. This, in turn, can serve as a basis for all kinds of process improvement actions; e.g., it may trigger process redesign or better control mechanisms.
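    As a rough illustration of the change-mining idea, the sketch below aggregates per-instance change traces into directly-follows counts between change operations. The log format and operation names are hypothetical, and this aggregation is far simpler than the process discovery techniques the paper builds on.

```python
from collections import Counter

# Hypothetical change log: one list of change operations per process instance.
# Operation names and the log format are illustrative, not the paper's notation.
change_log = [
    ["insert(LabTest)", "delete(Inform)", "move(Prepare)"],
    ["insert(LabTest)", "move(Prepare)"],
    ["delete(Inform)", "insert(LabTest)", "move(Prepare)"],
]

def directly_follows(traces):
    """Count how often one change operation directly follows another."""
    pairs = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return pairs

# The aggregated pairs form the arcs of a (very simplified) "change process".
for (a, b), n in directly_follows(change_log).most_common():
    print(f"{a} -> {b}: {n}")
```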

    Designing Improved Sediment Transport Visualizations

    Monitoring, or more commonly, modeling of sediment transport in the coastal environment is a critical task with relevance to coastline stability, beach erosion, tracking environmental contaminants, and safety of navigation. Increased intensity and regularity of storms such as Superstorm Sandy heighten the importance of our understanding of sediment transport processes. A weakness of current modeling capabilities is the difficulty of visualizing the results in an intuitive manner. Many of the available visualization software packages display only a single variable at a time, usually as a two-dimensional, plan-view cross-section. With such limited display capabilities, sophisticated 3D models are undermined in both the interpretation of results and the dissemination of information to the public. Here we explore a subset of existing modeling capabilities (specifically, modeling scour around man-made structures) and visualization solutions, examine their shortcomings, and present a design for a 4D visualization for sediment transport studies that is based on perceptually-focused data visualization research and on recent and ongoing developments in multivariate displays. Vector and scalar fields are co-displayed, yet kept independently identifiable by exploiting human perception's separation of color, texture, and motion. Bathymetry, sediment grain-size distribution, and forcing hydrodynamics are a subset of the variables investigated for simultaneous representation. Direct interaction with field data is tested to support rapid validation of sediment transport model results. Our goal is a tight integration of both simulated data and real-world observations to support analysis and simulation of the impact of major sediment transport events such as hurricanes. We unite modeled results and field observations within a geodatabase designed as an application schema of the Arc Marine Data Model. Our real-world focus is the Redbird Artificial Reef Site, roughly 18 nautical miles offshore of Delaware Bay, Delaware, where repeated surveys have identified active scour and bedform migration in 27 m water depth amongst the more than 900 deliberately sunken subway cars and vessels. Coincidentally collected high-resolution multibeam bathymetry, backscatter, and side-scan sonar data from surface and autonomous underwater vehicle (AUV) systems, along with complementary sub-bottom, grab sample, bottom imagery, and wave and current (via ADCP) datasets, provide the basis for analysis. This site is particularly attractive due to overlap with the Delaware Bay Operational Forecast System (DBOFS), a model that provides historical and forecast oceanographic data that can be tested in hindcast against significant changes observed at the site during Superstorm Sandy and used to predict future changes through small-scale modeling around the individual reef objects.
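    A minimal sketch of the co-display idea follows: a scalar field carried by color and a vector field carried by glyph orientation, so the two remain independently identifiable. The synthetic bathymetry and current fields are stand-ins, and matplotlib is only one possible rendering backend, not the toolchain used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for model output; the real system uses gridded
# bathymetry and ADCP-derived currents (values here are illustrative).
x, y = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
bathymetry = -27 + 2 * np.sin(x) * np.cos(y)   # scalar field (m)
u, v = np.cos(y), np.sin(x)                    # vector field (m/s)

fig, ax = plt.subplots()
# Scalar field carried by color ...
depth = ax.pcolormesh(x, y, bathymetry, cmap="viridis", shading="auto")
fig.colorbar(depth, ax=ax, label="Depth (m)")
# ... vector field carried by glyph orientation/length, subsampled for clarity.
step = 4
ax.quiver(x[::step, ::step], y[::step, ::step],
          u[::step, ::step], v[::step, ::step], color="white")
ax.set_title("Scalar + vector co-display (illustrative)")
plt.show()
```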

    Verification of Agent-Based Artifact Systems

    Artifact systems are a novel paradigm for specifying and implementing business processes described in terms of interacting modules called artifacts. Artifacts consist of data and lifecycles, accounting respectively for the relational structure of the artifacts' states and their possible evolutions over time. In this paper we put forward artifact-centric multi-agent systems, a novel formalisation of artifact systems in the context of multi-agent systems operating on them. Differently from the usual process-based models of services, the semantics we give explicitly accounts for the data structures on which artifact systems are defined. We study the model checking problem for artifact-centric multi-agent systems against specifications written in a quantified version of temporal-epistemic logic expressing the knowledge of the agents in the exchange. We begin by noting that the problem is undecidable in general. We then identify two noteworthy restrictions, one syntactical and one semantical, that enable us to find bisimilar finite abstractions and therefore reduce the model checking problem to the instance on finite models. Under these assumptions we show that the model checking problem for these systems is EXPSPACE-complete. We then introduce artifact-centric programs, compact and declarative representations of the programs governing both the artifact system and the agents. We show that, while these in principle generate infinite-state systems, under natural conditions their verification problem can be solved on finite abstractions that can be effectively computed from the programs. Finally we exemplify the theoretical results of the paper through a mainstream procurement scenario from the artifact systems literature
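    The sketch below illustrates the basic structure the formalisation starts from: an artifact as relational data together with a lifecycle of states and transitions. The order lifecycle, attribute names, and class are hypothetical, and the sketch does not touch the temporal-epistemic verification itself.

```python
from dataclasses import dataclass, field

# Hypothetical artifact for a procurement order: relational data plus a
# lifecycle of states and transitions (names are illustrative only).
LIFECYCLE = {
    "created":   {"submit": "submitted"},
    "submitted": {"approve": "approved", "reject": "rejected"},
    "approved":  {},
    "rejected":  {},
}

@dataclass
class OrderArtifact:
    data: dict = field(default_factory=dict)   # relational state (attribute -> value)
    state: str = "created"                     # current lifecycle state

    def apply(self, event: str, **updates):
        """Fire a lifecycle transition and update the artifact's data."""
        next_state = LIFECYCLE[self.state].get(event)
        if next_state is None:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.data.update(updates)
        self.state = next_state

order = OrderArtifact({"item": "widgets", "quantity": 100})
order.apply("submit", submitted_by="agent_1")
print(order.state, order.data)
```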

    Personalized Training in Romanian SME’s ERP Implementation Projects

    Many practitioners and IS researchers have stated that the overwhelming majority of Enterprise Resource Planning (ERP) system implementations exceed their training budgets and time allocations. As a consequence, many Romanian SMEs that implement an ERP system are looking for new approaches to knowledge transfer and performance support that are better aligned with business goals, deliver measurable results, and are cost effective. We have therefore begun to analyze the training methods used in ERP implementations in order to provide a solution that could help maximize the efficiency of an ERP training program. We proposed the framework of an ERP Training module that can be integrated with a Romanian ERP system and that provides training management that is more personalized, more effective, and less expensive.
    Keywords: ERP Systems, Training Methods, Blended Learning

    Peer - Mediated Distributed Knowledge Management

    Distributed Knowledge Management is an approach to knowledge management based on the principle that the multiplicity (and heterogeneity) of perspectives within complex organizations is not to be viewed as an obstacle to knowledge exploitation, but rather as an opportunity that can foster innovation and creativity. Despite wide agreement on this principle, most current KM systems are based on the idea that all perspectival aspects of knowledge should be eliminated in favor of an objective and general representation of knowledge. In this paper we propose a peer-to-peer architecture (called KEx) which embodies the principle above in a quite straightforward way: (i) each peer (called a K-peer) provides all the services needed to create and organize "local" knowledge from an individual's or a group's perspective, and (ii) social structures and protocols of meaning negotiation are introduced to achieve semantic coordination among autonomous peers (e.g., when searching documents from other K-peers). A first version of the system is implemented as a knowledge exchange level on top of JXTA.
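    A minimal sketch of the query/answer flow between K-peers follows: each peer answers from its locally organised knowledge and forwards the query to its neighbours. The actual KEx prototype is built on JXTA, and the class and method names below are illustrative rather than the system's API; meaning negotiation is omitted entirely.

```python
# Hypothetical K-peer: holds "local" knowledge and queries neighbouring peers.
class KPeer:
    def __init__(self, name, documents):
        self.name = name
        self.documents = documents          # locally created and organised knowledge
        self.neighbours = []

    def search(self, keyword):
        """Answer from the local collection, then ask neighbouring K-peers."""
        hits = {self.name: [d for d in self.documents if keyword in d]}
        for peer in self.neighbours:
            hits[peer.name] = [d for d in peer.documents if keyword in d]
        return hits

a = KPeer("marketing", ["2003 budget plan", "campaign results"])
b = KPeer("research",  ["budget request form", "prototype notes"])
a.neighbours.append(b)
print(a.search("budget"))
```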

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, the fundamental design decisions involved in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution to a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL seems to be the right technology to fulfill its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
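    To make the four data models concrete, the sketch below shapes the same "user follows user" fact for each of them. The literals are purely illustrative; no particular NoSQL product or its API is implied.

```python
# One "user follows user" fact expressed in the four data models compared in
# the paper; plain Python literals, no store-specific API is implied.

# Document oriented: the whole entity nested in one self-describing document.
document = {"_id": "u42", "name": "Ada", "follows": [{"id": "u7", "since": 2021}]}

# Key value: an opaque value addressed only by its key; structure is the app's job.
key_value = {"user:u42": '{"name": "Ada", "follows": ["u7"]}'}

# Graph: entities as nodes, the relationship as a first-class edge.
graph = {
    "nodes": [{"id": "u42", "name": "Ada"}, {"id": "u7", "name": "Bob"}],
    "edges": [{"from": "u42", "to": "u7", "type": "FOLLOWS", "since": 2021}],
}

# Wide column: rows addressed by key, holding column families of sparse columns.
wide_column = {
    "u42": {"profile": {"name": "Ada"}, "follows": {"u7": {"since": 2021}}},
}
```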

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenge have not been recognized, and its own methodology has not yet been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the scientific paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing. Comment: 59 pages