
    Definition of Metric Dependencies for Monitoring the Impact of Quality of Services on Quality of Processes

    Service providers have to monitor the quality of the services they offer and to ensure compliance with the service levels that provider and requester agreed on. A service provider should therefore notify a service requester about violations of service level agreements (SLAs), and should point to the impacts on the affected processes in which the services are invoked. For that purpose, a model is needed that defines dependencies between the quality of processes and the quality of invoked services. In order to measure quality of services and to estimate impacts on the quality of processes, we focus on measurable metrics related to functional elements of processes, services, and the components implementing services. Based on functional dependencies between processes and services of a service-oriented architecture (SOA), we define metric dependencies for monitoring the impact of the quality of invoked services on the quality of affected processes. In this paper we discuss how to derive metric dependency definitions from functional dependencies by applying dependency patterns, and how to map metric and metric dependency definitions to an appropriate monitoring architecture.
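    To make the notion of a dependency pattern concrete, here is a minimal Python sketch (not the paper's model; all names, patterns, and values are illustrative assumptions): a hypothetical "sequence" pattern that derives process-level response time and availability from the measured metrics of sequentially invoked services.

    ```python
    # Hypothetical "sequence" dependency pattern: a process invokes its
    # services one after another, so process-level metrics derive from
    # service-level metrics. Illustrative only; not taken from the paper.

    def process_response_time(service_times_ms):
        """Response times of sequentially invoked services add up."""
        return sum(service_times_ms)

    def process_availability(service_availabilities):
        """The process is available only if every invoked service is."""
        result = 1.0
        for a in service_availabilities:
            result *= a
        return result

    # Example: three services invoked in sequence by one process.
    times = [120, 45, 80]            # measured response times in ms
    avail = [0.999, 0.995, 0.990]    # measured availabilities
    print(process_response_time(times))           # 245 ms for the process
    print(round(process_availability(avail), 4))  # ~0.9841 for the process
    ```

    A monitoring architecture could re-evaluate such derived definitions whenever fresh service measurements arrive, and raise an SLA notification when a process-level threshold is crossed.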

    A framework for the definition of metrics for actor-dependency models

    Actor-dependency models are a formalism aimed at providing intentional descriptions of processes as a network of dependency relationships among actors. This kind of model is currently widely used in the early phase of requirements engineering, as well as in other contexts such as organizational analysis and business process reengineering. In this paper, we are interested in the definition of a framework for the formulation of metrics over these models. These metrics are used to analyse the models with respect to properties that are of interest for the system being modelled, such as security, efficiency or accuracy. The metrics are defined in terms of the actors and dependencies of the model. We distinguish three different kinds of metrics, define them formally, and then apply the framework at two different layers of a meeting scheduler system.
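    As a rough illustration of metrics defined in terms of actors and dependencies, here is a small Python sketch in the spirit of the abstract's meeting scheduler example. It is not the paper's formal framework; the triples and the "criticality" metric are invented for illustration.

    ```python
    # An actor-dependency model as (depender, dependum, dependee) triples,
    # loosely inspired by a meeting scheduler; the contents are invented.
    model = [
        ("MeetingInitiator", "AttendsMeeting",   "Participant"),
        ("MeetingInitiator", "AvailableDates",   "Participant"),
        ("Participant",      "ProposedSchedule", "MeetingScheduler"),
        ("MeetingInitiator", "ScheduleMeeting",  "MeetingScheduler"),
    ]

    def criticality(actor, model):
        """Example metric: how many dependencies rest on this actor as dependee."""
        return sum(1 for _, _, dependee in model if dependee == actor)

    for actor in ("MeetingInitiator", "Participant", "MeetingScheduler"):
        print(actor, criticality(actor, model))  # 0, 2, 2
    ```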

    Static Analysis of Functional Programs

    In this paper, the static analysis of programs in the functional programming language Miranda is described, based on two graph models: a new control-flow graph model of Miranda definitions, and a model with four classes of callgraphs. Standard software metrics are applicable to these models. A Miranda front end has been developed for Prometrix, a tool for the automated analysis of flowgraphs and callgraphs; this front end produces the flowgraph and callgraph representations of Miranda programs. Some features of the metric analyser are illustrated with an example program. The tool provides promising access to standard metrics on functional programs.
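    As an example of a standard metric computed over such a flowgraph (a sketch under assumptions, not Prometrix's implementation), McCabe's cyclomatic complexity for a single connected flowgraph is E - N + 2:

    ```python
    # Cyclomatic complexity of a connected control-flow graph: E - N + 2.
    # The flowgraph below, for a definition with one conditional, is invented.

    def cyclomatic_complexity(nodes, edges):
        return len(edges) - len(nodes) + 2

    nodes = ["entry", "test", "then", "else", "exit"]
    edges = [("entry", "test"), ("test", "then"), ("test", "else"),
             ("then", "exit"), ("else", "exit")]
    print(cyclomatic_complexity(nodes, edges))  # 2: one decision point
    ```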

    Complexity, BioComplexity, the Connectionist Conjecture and Ontology of Complexity

    Get PDF
    This paper develops and integrates major ideas and concepts on complexity and biocomplexity: the Connectionist Conjecture, a universal ontology of complexity, the irreducible complexity of totality and inherent randomness, the perpetual evolution of information, the emergence of criticality, and the equivalence of symmetry and complexity. The paper introduces the Connectionist Conjecture, which states that the one and only representation of Totality is the connectionist one, i.e. in terms of nodes and edges. It also introduces the idea of a Universal Ontology of Complexity and develops concepts in that direction, along with ideas on the perpetual evolution of information and the irreducibility and computability of totality, all in the context of the Connectionist Conjecture. The paper indicates that control and communication are the prime functionals responsible for the symmetry and complexity of complex phenomena. It takes the stand that the phenomenon of life (including its evolution) is probably the nearest to what we can describe with the term “complexity”, and assumes that signaling and communication within the living world, and of the living world with the environment, create the connectionist structure of biocomplexity. With life and its evolution as the substrate, the paper develops ideas towards the ontology of complexity, introduces new complexity-theoretic interpretations of fundamental biomolecular parameters, and develops ideas on a methodology to determine the complexity of “true” complex phenomena.

    Using F-structures in machine translation evaluation

    Despite a growing interest in automatic evaluation methods for Machine Translation (MT) quality, most existing automatic metrics are still limited to surface comparison of translation and reference strings. In this paper we show how Lexical-Functional Grammar (LFG) labelled dependencies obtained from an automatic parse can be used to assess the quality of MT on a deeper linguistic level, yielding higher correlations with human judgements.
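    The general idea can be sketched as set comparison over labelled dependency triples (a minimal illustration, not the paper's system; the triples below are invented):

    ```python
    # Score an MT hypothesis against a reference by comparing labelled
    # dependency triples instead of surface strings. Invented example data.

    def dependency_f_score(hyp_deps, ref_deps):
        hyp, ref = set(hyp_deps), set(ref_deps)
        if not hyp or not ref:
            return 0.0
        matched = len(hyp & ref)
        precision, recall = matched / len(hyp), matched / len(ref)
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    ref = [("subj", "see", "john"), ("obj", "see", "mary"), ("tense", "see", "past")]
    hyp = [("subj", "see", "john"), ("obj", "see", "mary"), ("tense", "see", "pres")]
    print(round(dependency_f_score(hyp, ref), 3))  # 0.667: structure, not word order
    ```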

    Efficient Discovery of Ontology Functional Dependencies

    Poor data quality has become a pervasive issue due to the increasing complexity and size of modern datasets. Constraint-based data cleaning techniques rely on integrity constraints as a benchmark to identify and correct errors. Data values that do not satisfy the given set of constraints are flagged as dirty, and data updates are made to re-align the data and the constraints. However, many errors require user input to resolve, because domain expertise defines specific terminology and relationships. For example, in pharmaceuticals, 'Advil' is a brand name for 'ibuprofen', an is-a relationship that can be captured in a pharmaceutical ontology. While functional dependencies (FDs) have traditionally been used in existing data cleaning solutions to model syntactic equivalence, they cannot model broader relationships (e.g., is-a) defined by an ontology. In this paper, we take a first step towards extending the set of data quality constraints used in data cleaning by defining and discovering Ontology Functional Dependencies (OFDs). We lay out theoretical and practical foundations for OFDs, including a set of sound and complete axioms and a linear inference procedure. We then develop effective algorithms for discovering OFDs, and a set of optimizations that efficiently prune the search space. Our experimental evaluation using real data shows the scalability and accuracy of our algorithms.
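    To illustrate the core idea (a sketch under assumptions, not the paper's algorithms): a classical FD X -> Y requires tuples that agree on X to have equal Y-values, while an OFD relaxes equality on Y to equivalence under an ontology, such as the synonym relation between 'Advil' and 'ibuprofen'. The toy ontology and rows below are invented.

    ```python
    # A toy "ontology" of synonym sets, keyed by lowercase term.
    synonyms = {
        "ibuprofen": {"ibuprofen", "advil"},
        "advil":     {"ibuprofen", "advil"},
    }

    def ofd_holds(rows, x, y):
        """Check X -> Y where Y-values need only be ontology-equivalent."""
        seen = {}
        for row in rows:
            xv, yv = row[x], row[y].lower()
            if xv in seen and yv not in synonyms.get(seen[xv], {seen[xv]}):
                return False
            seen.setdefault(xv, yv)
        return True

    rows = [{"code": "C01", "drug": "Advil"},
            {"code": "C01", "drug": "ibuprofen"}]
    print(ofd_holds(rows, "code", "drug"))  # True; a plain FD would flag this as dirty
    ```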

    Evaluating evaluation measures

    This paper presents a thorough examination of the validity of three evaluation measures on parser output. We assess the performance of an unlexicalised probabilistic parser trained on two German treebanks with different annotation schemes, and evaluate parsing results using the PARSEVAL metric, the Leaf-Ancestor metric and a dependency-based evaluation. We reject the claim that the TüBa-D/Z annotation scheme is more adequate than the TIGER scheme for PCFG parsing, and show that PARSEVAL should not be used to compare the performance of parsers trained on treebanks with different annotation schemes. An analysis of specific error types indicates that the dependency-based evaluation is the most appropriate to reflect parse quality.
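    For reference, PARSEVAL-style bracket scoring reduces to set comparison of labelled spans (an illustrative sketch, not the evaluation tooling used in the paper; the spans are invented):

    ```python
    # Labelled brackets as (label, start, end) spans over a sentence;
    # bracket F1 compares parser output to gold spans. Invented data.

    def bracket_f1(gold, test):
        gold, test = set(gold), set(test)
        matched = len(gold & test)
        p = matched / len(test) if test else 0.0
        r = matched / len(gold) if gold else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    gold = {("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5), ("NP", 3, 5)}
    test = {("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5), ("PP", 3, 5)}
    print(round(bracket_f1(gold, test), 2))  # 0.75
    ```

    A dependency-based evaluation instead scores head-dependent relations, which makes it less tied to any particular bracketing scheme.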