
    Knowledge-Intensive Processes: Characteristics, Requirements and Analysis of Contemporary Approaches

    Engineering knowledge-intensive processes (KiPs) is far from mastered, since such processes are genuinely knowledge- and data-centric and require substantial flexibility at both design time and run time. In this work, starting from an analysis of the scientific literature on KiPs and from three real-world domains and application scenarios, we provide a precise characterization of KiPs. Furthermore, we devise general requirements for KiP management and execution. These requirements contribute to the definition of an evaluation framework for assessing current system support for KiPs. To this end, we present a critical analysis of a number of existing process-oriented approaches, discussing their efficacy against the requirements.

    Probabilistic Hybrid Action Models for Predicting Concurrent Percept-driven Robot Behavior

    This article develops Probabilistic Hybrid Action Models (PHAMs), a realistic causal model for predicting the behavior generated by modern percept-driven robot plans. PHAMs represent aspects of robot behavior that cannot be represented by most action models used in AI planning: the temporal structure of continuous control processes, their non-deterministic effects, several modes of their interference, and the achievement of triggering conditions in closed-loop robot plans. The main contributions of this article are: (1) PHAMs, a model of concurrent percept-driven behavior, its formalization, and proofs that the model generates probably, qualitatively accurate predictions; and (2) a resource-efficient inference method for PHAMs based on sampling projections from probabilistic action models and state descriptions. We show how PHAMs can be applied to planning the course of action of an autonomous robot office courier, based on analytical and experimental results.
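
    As a rough illustration of the sampling-based projection idea, the sketch below (Python) estimates a plan's success probability by drawing outcomes from a toy probabilistic action model. The navigation model, its parameters, and the deadline are illustrative assumptions, not the paper's formalization.

        import random

        # Toy probabilistic action model for a single continuous control
        # process; speeds and failure rates are invented for illustration.
        def sample_navigate(start, goal):
            distance = abs(goal - start)
            duration = random.gauss(mu=distance / 0.5, sigma=2.0)  # travel-time model
            at_goal = random.random() < 0.95                       # non-deterministic effect
            return max(duration, 0.0), at_goal

        # Sampling-based projection: estimate the probability that the plan
        # reaches its goal before an assumed deadline of 25 seconds.
        def project_plan(n_samples=1000):
            hits = 0
            for _ in range(n_samples):
                duration, at_goal = sample_navigate(start=0.0, goal=10.0)
                if at_goal and duration <= 25.0:
                    hits += 1
            return hits / n_samples

        if __name__ == "__main__":
            print(f"Estimated success probability: {project_plan():.2f}")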

    W-NINE: a two-stage emulation platform for mobile and wireless systems

    More and more applications and protocols now run on wireless networks. Testing the implementation of such applications and protocols is a real challenge, as the position of the mobile terminals and environmental effects strongly affect overall performance. Network emulation is often perceived as a good trade-off between experiments on operational wireless networks and discrete-event simulations in Opnet or ns-2. However, ensuring repeatability and realism in network emulation while taking mobility in a wireless environment into account is very difficult. This paper proposes a network emulation platform, called W-NINE, based on off-line computations preceding on-line pattern-based traffic shaping. The underlying concepts of repeatability, dynamicity, accuracy and realism are defined in the emulation context. Two simple case studies illustrate the validity of our approach with respect to these concepts.
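
    The sketch below (Python) illustrates the two-stage principle under simplified assumptions: an off-line stage supplies a time-indexed pattern of link conditions (hard-coded here; in W-NINE it would be precomputed, e.g. from simulating node mobility and radio propagation), and an on-line stage replays the pattern in real time by reconfiguring a traffic shaper. The pattern values and the shaping hook are invented for illustration.

        import time

        # Stage 1 (off-line): precomputed pattern of
        # (time offset in s, bandwidth in kbit/s, delay in ms).
        PATTERN = [
            (0.0, 2000, 10),   # terminal close to the access point
            (5.0, 800, 40),    # terminal moving away
            (10.0, 250, 120),  # edge of coverage
        ]

        def apply_shaping(bandwidth_kbps, delay_ms):
            # Placeholder: a real emulator would reconfigure a kernel-level
            # traffic shaper here (e.g. tc/netem on Linux, or Dummynet).
            print(f"shaping link: {bandwidth_kbps} kbit/s, {delay_ms} ms delay")

        def replay(pattern):
            # Stage 2 (on-line): apply each setting at its precomputed offset.
            start = time.monotonic()
            for offset, bandwidth, delay in pattern:
                time.sleep(max(0.0, offset - (time.monotonic() - start)))
                apply_shaping(bandwidth, delay)

        if __name__ == "__main__":
            replay(PATTERN)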

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, the fundamental design decisions in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution for a specific real-world problem, big data systems are no exception. The primary facet of big data storage is the storage infrastructure, and NoSQL seems to be the right technology to fulfill its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider when making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution, which brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings and possible use cases of the available big data file formats for Hadoop, which is the foundation of most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage; their challenges and future prospects are also discussed.
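
    As a schematic illustration (not taken from the paper), the snippet below expresses one and the same record in each of the four data models using plain Python structures; real stores such as MongoDB, Redis, Neo4j or Cassandra add indexing, partitioning and query layers on top of these basic shapes.

        # Document-oriented: self-contained, schema-flexible documents.
        document = {"_id": "user:42", "name": "Ada", "orders": [{"sku": "X1", "qty": 2}]}

        # Key-value: an opaque value addressed only by its key.
        key_value = {"user:42": b'{"name": "Ada"}'}

        # Graph: entities as nodes, relationships as first-class edges.
        nodes = {"user:42": {"name": "Ada"}, "sku:X1": {"title": "Widget"}}
        edges = [("user:42", "ORDERED", "sku:X1", {"qty": 2})]

        # Wide-column: rows keyed by a partition key, with sparse, dynamic columns.
        wide_column = {"user:42": {"profile:name": "Ada", "order:X1:qty": 2}}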

    Active Ontology: An Information Integration Approach for Dynamic Information Sources

    In this paper we describe an ontology-based information integration approach that is suitable for highly dynamic distributed information sources, such as those available in Grid systems. The main challenges addressed are: 1) information changes frequently, and information requests have to be answered quickly in order to provide up-to-date information; and 2) the most suitable information sources have to be selected from a set of distributed ones that can provide the information needed. To deal with the first challenge, we use an information cache that works with an update-on-demand policy. To deal with the second, we add an information source selection step to the usual architecture used for ontology-based information integration. To illustrate our approach, we have developed an information service that aggregates metadata available in hundreds of information services of the EGEE Grid infrastructure.
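
    A minimal sketch (Python) of an update-on-demand cache: an entry is refreshed from the live source only when a request finds it stale, rather than on a fixed polling schedule. The TTL value and the fetch callable are illustrative assumptions; the abstract does not specify the service's actual interfaces.

        import time

        class UpdateOnDemandCache:
            def __init__(self, fetch, ttl_seconds=30.0):
                self._fetch = fetch      # callable that queries the live source
                self._ttl = ttl_seconds  # assumed freshness window
                self._store = {}         # key -> (value, timestamp)

            def get(self, key):
                entry = self._store.get(key)
                if entry is not None and time.monotonic() - entry[1] < self._ttl:
                    return entry[0]       # fresh enough: answer from the cache
                value = self._fetch(key)  # stale or missing: update on demand
                self._store[key] = (value, time.monotonic())
                return value

        if __name__ == "__main__":
            cache = UpdateOnDemandCache(fetch=lambda k: f"metadata for {k}")
            print(cache.get("site:EXAMPLE"))  # first request triggers a fetch
            print(cache.get("site:EXAMPLE"))  # second request is served from the cache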

    DevOps Continuous Integration: Moving Germany’s Federal Employment Agency Test System into Embedded In-Memory Technology

    This paper describes the development of a continuous integration database test architecture for a large and highly important software application in the German public sector. We apply action design research and draw from two emerging areas of research, DevOps continuous integration practices and in-memory database development, to define the problem, design, build and implement the solution, analyze the challenges encountered, and make adjustments. The result is the transformation of a large test environment originally based on Oracle databases into a flexible and fast embedded in-memory architecture. The main challenges involved overcoming the differences between the SQL specifications supported by the development and production systems and optimizing test runtime performance. The paper contributes to theory and practice by presenting one of the first studies of a real-world implementation of a successful database test architecture that enables continuous integration, and by identifying technical design principles for database test architectures in general.
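
    The sketch below (Python) illustrates the general pattern rather than the agency's actual stack: each test gets a throwaway embedded in-memory database, which is what makes database tests fast and isolated enough for continuous integration. SQLite stands in for the unnamed embedded engine, and the schema and data are invented.

        import sqlite3
        import unittest

        class CustomerRepositoryTest(unittest.TestCase):
            def setUp(self):
                # ":memory:" creates a fresh embedded database that vanishes
                # with the test, so runs are isolated and fast.
                self.db = sqlite3.connect(":memory:")
                self.db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
                self.db.execute("INSERT INTO customer (name) VALUES ('Mustermann')")

            def tearDown(self):
                self.db.close()

            def test_lookup_by_name(self):
                row = self.db.execute(
                    "SELECT id FROM customer WHERE name = ?", ("Mustermann",)
                ).fetchone()
                self.assertIsNotNone(row)

        if __name__ == "__main__":
            unittest.main()

    As the abstract notes, the main practical hurdle with such a swap is the gap between the SQL dialects of the embedded test engine and the production database.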