
    A Distributed Solution to the PTE Problem

    Proceedings of: AAAI Spring Symposium on Predictive Toxicology, AAAI Press, Stanford, March 1999.
    A wide panoply of machine learning methods is available for application to the Predictive Toxicology Evaluation (PTE) problem. The authors have built four monolithic classification systems based on Tilde, Progol, C4.5 and naive Bayesian classification. These systems have been trained on the PTE dataset, and their accuracy has been tested using the unseen PTE1 dataset as test set. A Multi Agent Decision System (MADES) has been built that uses the aforementioned monolithic systems as classification agents. The MADES was trained and tested with the same data sets used with the monolithic systems. Results show that the accuracy of the MADES improves on the accuracies obtained by the monolithic systems. We believe that in most real-world domains the combination of several approaches is stronger than the individuals.
    Introduction: The Predictive Toxicology Evaluation (PTE) Challenge (Srinivasan et al. 1997) was devised by the Oxford University Computing Laboratory to test the suitability ...
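
    The abstract does not say how the MADES combines its agents' outputs; the sketch below shows one plausible combination scheme, majority voting over independently trained classifiers. The class names, the voting rule, and the constant stand-in models are illustrative assumptions, not the authors' design.

```python
from collections import Counter

class ClassificationAgent:
    """Wraps one trained monolithic classifier (e.g. C4.5 or naive Bayes)."""
    def __init__(self, name, model):
        self.name = name
        self.model = model

    def predict(self, example):
        # Delegate to the underlying classifier.
        return self.model.predict(example)

class MajorityVoteDecider:
    """Hypothetical stand-in for the MADES combination step:
    every agent votes and the most common label wins."""
    def __init__(self, agents):
        self.agents = agents

    def classify(self, example):
        votes = [agent.predict(example) for agent in self.agents]
        return Counter(votes).most_common(1)[0][0]

# Toy usage: constant "models" stand in for the four trained systems.
class ConstantModel:
    def __init__(self, label):
        self.label = label
    def predict(self, example):
        return self.label

agents = [ClassificationAgent(name, ConstantModel(label))
          for name, label in [("tilde", "toxic"), ("progol", "toxic"),
                              ("c4.5", "non-toxic"), ("bayes", "toxic")]]
print(MajorityVoteDecider(agents).classify({"compound": "X42"}))  # -> toxic
```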

    System-of-Systems Complexity

    The global availability of communication services makes it possible to interconnect independently developed systems, called constituent systems, to provide new synergistic services and more efficient economic processes. The characteristics of these new Systems-of-Systems (SoSs) are qualitatively different from those of classic monolithic systems. In the first part of this presentation we elaborate on these differences, particularly with respect to the autonomy of the constituent systems, dependability, continuous evolution, and emergence. In the second part we look at an SoS from the point of view of cognitive complexity. Cognitive complexity is seen as a relation between a model of an SoS and the observer. In order to understand the behavior of a large SoS we have to generate models of adequate simplicity, i.e., of a cognitive complexity that can be handled by the limited capabilities of the human mind. We discuss the importance of properly specifying and placing the relied-upon message interfaces between the constituent systems that form an open SoS, and discuss simplification strategies that help to reduce the cognitive complexity.
    Comment: In Proceedings AiSoS 2013, arXiv:1311.319
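
    As an illustration of what specifying a relied-upon message interface between constituent systems might look like in code, the sketch below fixes a hypothetical typed message contract plus a receiver-side check. The domain, field names, and validation rule are invented for the example, not taken from the presentation.

```python
from dataclasses import dataclass

# Hypothetical relied-upon message interface between two constituent
# systems: the contract fixes the message's fields and types, so each
# side can evolve independently as long as the interface is honored.
@dataclass(frozen=True)
class MeterReading:
    meter_id: str
    timestamp_s: int   # seconds since the epoch
    kwh: float         # cumulative energy reading

def accept(msg: MeterReading) -> bool:
    """Receiver-side check: reject messages that violate the contract."""
    return msg.timestamp_s > 0 and msg.kwh >= 0.0

print(accept(MeterReading("m-17", 1_700_000_000, 421.5)))  # True
```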

    From Monolithic Systems to Microservices: An Assessment Framework

    Context. Re-architecting monolithic systems into a Microservices-based architecture is a common trend. Various companies are migrating to Microservices for different reasons. However, a decision as important as re-architecting an entire system must be based on real facts and not only on gut feelings. Objective. The goal of this work is to propose an evidence-based decision-support framework for companies that need to migrate to Microservices, based on the analysis of a set of characteristics and metrics they should collect before re-architecting their monolithic system. Method. We designed this study with a mixed-methods approach, combining a Systematic Mapping Study with a survey conducted as interviews with professionals, to derive the assessment framework based on Grounded Theory. Results. We identified a set of information and metrics that companies can use to decide whether or not to migrate to Microservices. The proposed assessment framework, based on the aforementioned metrics, can help companies that need to migrate to Microservices avoid the risk of failing to consider important information.
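
    The paper derives its metric set empirically; purely as an illustration of the kind of pre-migration data collection the abstract describes, the sketch below records a few invented metrics and turns them into a coarse migration signal. The metric names and the threshold rule are assumptions for the example, not the paper's framework.

```python
from dataclasses import dataclass

@dataclass
class SystemMetrics:
    # Invented pre-migration measurements; the paper derives its real
    # metric set from a mapping study and practitioner interviews.
    modules: int
    avg_coupling: float      # mean inter-module dependencies per module
    deploys_per_month: int
    dev_teams: int

def migration_signal(m: SystemMetrics) -> str:
    """Toy rule: large, highly coupled systems with many teams and slow
    release cycles gain the most from independent deployability."""
    score = sum([m.modules > 50,
                 m.avg_coupling > 10.0,
                 m.deploys_per_month < 2,
                 m.dev_teams > 3])
    return "consider microservices" if score >= 3 else "re-assess later"

print(migration_signal(SystemMetrics(120, 14.2, 1, 6)))  # consider microservices
```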

    ILS Assessment: A Background Document

    This document is intended as a first step in evaluating the current environment with respect to Integrated Library Systems (ILSs). To date, ILSs have been proprietary, monolithic systems encompassing the major operations of the library: circulation, acquisitions, cataloguing, and a public catalogue or OPAC.

    Runtime Enforcement for Component-Based Systems

    Runtime enforcement is an increasingly popular and effective dynamic validation technique that aims to ensure the correct runtime behavior (w.r.t. a formal specification) of systems using a so-called enforcement monitor. In this paper we introduce runtime enforcement of specifications on component-based systems (CBS) modeled in the BIP (Behavior, Interaction and Priority) framework. BIP is a powerful and expressive component-based framework for the formal construction of heterogeneous systems. However, because of BIP's expressiveness, it remains difficult to enforce complex behavioral properties at design time. We first propose a theoretical runtime enforcement framework for CBS in which we delineate a hierarchy of sets of enforceable properties (i.e., properties that can be enforced) according to the number of observational steps a system is allowed to deviate from the property (the notion of k-step enforceability). To ensure observational equivalence between the correct executions of the initial system and the monitored system, we show that i) only stutter-invariant properties should be enforced on CBS with our monitors, and ii) safety properties are 1-step enforceable. Given an abstract enforcement monitor (as a finite-state machine) for some 1-step enforceable specification, we formally instrument (at relevant locations) a given BIP system to integrate the monitor. At runtime, the monitor observes and automatically avoids any error in the behavior of the system w.r.t. the specification. Our approach is fully implemented in an available tool that we used to i) avoid deadlock occurrences in a dining philosophers benchmark, and ii) ensure the correct placement of robots on a map.
    Comment: arXiv admin note: text overlap with arXiv:1109.5505 by other author
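
    To make the idea of a 1-step enforcement monitor concrete, here is a minimal sketch of a finite-state safety monitor that intercepts each proposed event and suppresses any event whose execution would violate the property. The state names and the toy property are invented for the example; this is plain Python, not BIP instrumentation.

```python
class EnforcementMonitor:
    """Finite-state monitor for a safety property: each proposed event is
    checked before execution, and events that would enter a bad state are
    suppressed (1-step enforcement: an offending step is avoided, never
    actually executed)."""
    def __init__(self, transitions, initial, bad_states):
        self.transitions = transitions  # (state, event) -> next state
        self.state = initial
        self.bad_states = bad_states

    def allow(self, event):
        nxt = self.transitions.get((self.state, event))
        if nxt is None or nxt in self.bad_states:
            return False            # block: executing would violate safety
        self.state = nxt            # commit the safe step
        return True

# Toy safety property: "no 'release' before 'acquire'".
monitor = EnforcementMonitor(
    transitions={("idle", "acquire"): "held",
                 ("held", "release"): "idle",
                 ("idle", "release"): "error"},
    initial="idle",
    bad_states={"error"},
)
print(monitor.allow("release"))  # False: suppressed
print(monitor.allow("acquire"))  # True
print(monitor.allow("release"))  # True
```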

    Extensible Technology-Agnostic Runtime Verification

    With numerous specialised technologies available to industry, it has become increasingly common for computer systems to be composed of heterogeneous components built over, and using, different technologies and languages. While this enables developers to use the appropriate technologies for specific contexts, it makes it more challenging to ensure the correctness of the overall system. In this paper we propose a framework to enable extensible, technology-agnostic runtime verification, and we present an extension of polyLarva, a runtime-verification tool able to handle the monitoring of heterogeneous-component systems. The approach is then applied to a case study of a component-based artefact using different technologies, namely C and Java.
    Comment: In Proceedings FESCA 2013, arXiv:1302.478
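
    The abstract does not describe polyLarva's internals; as a hypothetical illustration of technology-agnostic monitoring, the sketch below separates per-technology event adapters (which translate native events into a shared format) from a single language-neutral rule checked over the unified event stream. All names and the rule are invented for the example.

```python
# Hypothetical technology-agnostic monitoring: per-technology adapters
# translate native events into one shared format, and a single monitor
# checks a cross-component rule over the unified event stream.

def adapt_c_event(raw):
    # Pretend adapter for a C component's (name, value) event record.
    return {"source": "c", "name": raw[0], "value": raw[1]}

def adapt_java_event(raw):
    # Pretend adapter for a Java component's event object.
    return {"source": "java", "name": raw["method"], "value": raw["arg"]}

def no_free_before_close(trace):
    """Invented cross-technology rule: the C side must not 'free'
    before the Java side has issued 'close'."""
    closed = False
    for e in trace:
        if (e["source"], e["name"]) == ("java", "close"):
            closed = True
        elif (e["source"], e["name"]) == ("c", "free") and not closed:
            return False
    return True

class Monitor:
    def __init__(self, rule):
        self.rule, self.trace = rule, []

    def observe(self, event):
        self.trace.append(event)
        if not self.rule(self.trace):
            print(f"violation after {event}")

m = Monitor(no_free_before_close)
m.observe(adapt_java_event({"method": "close", "arg": 1}))
m.observe(adapt_c_event(("free", 0)))   # no violation: close came first
```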

    Learning Design and Service Oriented Architectures: a mutual dependency?

    This paper looks at how the concept of reusability has gained currency in e-learning. Initial attention focused on the reuse of content, but more recently attention has turned to reusable software tools and reusable activity structures. The former has led to the proposal of service-oriented architectures, and the latter has seen the development of the Learning Design specification. The authors suggest that there is a mutual dependency between the success of these two approaches, as complex Learning Designs require the ability to call on a range of tools while remaining technology neutral. The paper describes a project at the UK Open University, SLeD, which sought to develop a Learning Design player that would utilise the service-oriented approach. This acted both as a means of exploring some of the issues implicit in the two approaches and as a practical tool. The SLeD system was successfully implemented at a different university, Liverpool Hope, demonstrating some of the principles of reuse.
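
    The abstract's architectural claim, that a Learning Design player must call a range of tools while staying technology neutral, can be illustrated with a hypothetical capability-based service registry: the design names an abstract capability, and the registry binds it to whatever concrete service is deployed. The registry, capability names, and endpoint are invented for the example; SLeD's actual design is not given in the abstract.

```python
# Hypothetical service-oriented tool lookup for a Learning Design player:
# an activity references an abstract capability ("forum", "quiz"), and the
# registry resolves it to whichever concrete service implementation is
# currently deployed, keeping the design technology neutral.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, capability, endpoint):
        self._services[capability] = endpoint

    def invoke(self, capability, **params):
        endpoint = self._services.get(capability)
        if endpoint is None:
            raise LookupError(f"no service provides {capability!r}")
        return endpoint(**params)

registry = ServiceRegistry()
registry.register("forum", lambda topic: f"forum created for {topic}")

# A Learning Design activity names the capability, not the implementation.
print(registry.invoke("forum", topic="week 1 discussion"))
```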