
    Exact Gap Computation for Code Coverage Metrics in ISO-C

    Test generation and test data selection are difficult tasks in model-based testing. Tests for a program can be merged into a test suite, and much research aims to quantify and improve test-suite quality. Code coverage metrics estimate the quality of a test suite: the quality is considered good if the coverage value is high or reaches 100%. Unfortunately, it may be impossible to achieve 100% code coverage, for example because of dead code. There is thus a gap between the feasible and the theoretical maximal code coverage value. Our review of the literature indicates that none of the current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly for an ISO-C compatible semantics and similar languages, and describes an efficient approximation of the gap in all other cases. Thus, a tester can decide whether more tests are possible or necessary to achieve better coverage. Comment: In Proceedings MBT 2012, arXiv:1202.582
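    The gap between theoretical and feasible coverage can be made concrete with a toy sketch; the function and goal names below are ours, not the paper's framework:

```python
# Toy illustration (our own, not the paper's framework): the coverage gap is
# the difference between the theoretical maximum (all coverage goals counted)
# and the feasible maximum (only goals some input can actually reach).

def coverage_gap(all_goals, feasible_goals):
    """Gap = theoretical maximal coverage (1.0) minus feasible coverage."""
    return 1.0 - len(feasible_goals) / len(all_goals)

# Branch goals of a function containing dead code, e.g.
#   if x > 0: ...            -> goals b1_then, b1_else (both reachable)
#   if x > 0 and x < 0: ...  -> goal b2_then is dead code; b2_else reachable
all_goals = {"b1_then", "b1_else", "b2_then", "b2_else"}
feasible_goals = {"b1_then", "b1_else", "b2_else"}

print(coverage_gap(all_goals, feasible_goals))  # 0.25: at most 75% coverage
```

    Even a "perfect" test suite tops out at 75% branch coverage here, and the gap tells the tester that the remaining 25% is unreachable rather than untested.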

    The 'active ingredients' for successful community engagement with disadvantaged expectant and new mothers: a qualitative comparative analysis

    AIMS: To explore which conditions of community engagement are implicated in effective interventions targeting disadvantaged pregnant women and new mothers. BACKGROUND: Adaptive experiences during pregnancy and the early years are key to reducing health inequalities in women and children worldwide. Public health nurses, health visitors and community midwives are well placed to address such disadvantage, often using community engagement strategies. Such interventions are complex, however, and we need to better understand which aspects of community engagement are aligned with effectiveness. DESIGN: Qualitative comparative analysis, conducted in 2013, of trials data included in a recently published systematic review. METHODS: Two reviewers agreed on relevant conditions from 24 maternity or early-years intervention studies examining four models of community engagement. Effect size estimates were converted into 'fuzzy' effectiveness categories and truth tables were constructed. Using fsQCA software, Boolean minimization identified solution sets. Random-effects multiple regression and fsQCA were conducted to rule out risk of methodological bias. RESULTS/FINDINGS: Studies focused on antenatal, immunization, breastfeeding and early professional intervention outcomes. Peer delivery (consistency 0·83; unique coverage 0·63) and mother-professional collaboration (consistency 0·833; unique coverage 0·21) were moderately aligned with effective interventions. Community-identified health need plus consultation/collaboration in intervention design and leading on delivery were weakly aligned with 'not effective' interventions (consistency 0·78; unique coverage 0·29). CONCLUSIONS: For disadvantaged new and expectant mothers, peer or collaborative delivery models could be used in interventions. A need exists to design and test community engagement interventions in other areas of maternity and early-years care and to further evaluate models of empowerment.
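    The consistency and coverage figures quoted in such fsQCA studies follow Ragin's standard fuzzy-set formulas; a minimal sketch, with made-up fuzzy membership scores rather than the study's data:

```python
# Standard fsQCA measures (Ragin's formulas); the membership scores below are
# invented for illustration, not taken from the reviewed trials.

def consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome':
    sum(min(x, y)) / sum(x)."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

def coverage(condition, outcome):
    """Coverage of the outcome by the condition:
    sum(min(x, y)) / sum(y)."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(outcome)

# Hypothetical fuzzy memberships for five studies.
peer_delivery = [1.0, 0.8, 0.6, 0.2, 0.9]   # membership in "peer delivered"
effective     = [0.9, 0.7, 0.8, 0.1, 0.8]   # membership in "effective"

print(round(consistency(peer_delivery, effective), 2))  # 0.89
print(round(coverage(peer_delivery, effective), 2))     # 0.94
```

    A consistency above roughly 0·8 is conventionally read as the condition being (near-)sufficient for the outcome, which is how the "moderately aligned" results above are interpreted.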

    Level sets estimation and Vorob'ev expectation of random compact sets

    The issue of a "mean shape" of a random set X often arises, in particular in image analysis and pattern detection. There is no canonical definition, but one possible approach is the so-called Vorob'ev expectation E_V(X), which is closely linked to quantile sets. In this paper, we propose a consistent and ready-to-use estimator of E_V(X) built from independent copies of X with spatial discretization. The control of discretization errors is handled with a mild regularity assumption on the boundary of X: a not-too-large 'box counting' dimension. Some examples are developed and an application to cosmological data is presented.
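    A plug-in estimator in this spirit (our sketch, not the paper's exact construction) estimates the coverage function p(x) = P(x in X) empirically, then thresholds it at the level whose quantile set best matches the mean measure of X:

```python
import numpy as np

# Sketch of a plug-in Vorob'ev estimator from n discretized copies of X
# (binary masks on a grid); the construction details here are ours.

def vorobev_expectation(masks):
    """masks: (n, h, w) boolean array, one discretized copy of X per row."""
    p = masks.mean(axis=0)                  # empirical coverage function p(x)
    target = masks.sum(axis=(1, 2)).mean()  # mean area E[mu(X)] in pixels
    # Pick the threshold alpha whose quantile set {p >= alpha} has area
    # closest to the mean area -- the defining property of E_V(X).
    alphas = np.unique(p)
    areas = np.array([(p >= a).sum() for a in alphas])
    alpha = alphas[np.argmin(np.abs(areas - target))]
    return p >= alpha

# Independent copies of X: discs with randomly perturbed radius.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
masks = np.stack([(xx - 32) ** 2 + (yy - 32) ** 2
                  <= (12 + rng.normal(0, 2)) ** 2 for _ in range(50)])

est = vorobev_expectation(masks)
print(est.sum())  # area of the estimate, close to the mean disc area
```

    For these concentric noisy discs the estimate is itself (approximately) a disc of average area, which matches the intuition of a "mean shape".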

    Inductive Logic Programming in Databases: from Datalog to DL+log

    In this paper we address an issue that has been brought to the attention of the database community with the advent of the Semantic Web, i.e. the issue of how ontologies (and the semantics they convey) can help solve typical database problems, through a better understanding of KR aspects related to databases. In particular, we investigate this issue from the ILP perspective by considering two database problems: (i) the definition of views and (ii) the definition of constraints, for a database whose schema is represented also by means of an ontology. Both can be reformulated as ILP problems and can benefit from the expressive and deductive power of the KR framework DL+log. We illustrate the application scenarios by means of examples. Keywords: Inductive Logic Programming, Relational Databases, Ontologies, Description Logics, Hybrid Knowledge Representation and Reasoning Systems. Note: To appear in Theory and Practice of Logic Programming (TPLP). Comment: 30 pages, 3 figures, 2 tables

    Metamodel Instance Generation: A systematic literature review

    Modelling, and thus metamodelling, has become increasingly important in Software Engineering through the use of Model Driven Engineering. In this paper we present a systematic literature review of instance generation techniques for metamodels, i.e. the process of automatically generating models from a given metamodel. We start by presenting a set of research questions that our review is intended to answer. We then identify the main topics related to metamodel instance generation techniques, and use these to initiate our literature search. This search resulted in the identification of 34 key papers in the area, each of which is reviewed and discussed here in detail. The outcome is that we are able to identify a knowledge gap in this field, and we offer suggestions for some potential directions for future research. Comment: 25 pages

    Optimal Geographic Caching In Cellular Networks

    In this work we consider the problem of optimal geographic placement of content in wireless cellular networks modelled by Poisson point processes. Specifically, for a typical user requesting some particular content whose popularity follows a given law (e.g. Zipf), we calculate the probability of finding the content cached in one of the base stations. Wireless coverage follows the usual signal-to-interference-and-noise ratio (SINR) model, or some variants of it. We formulate and solve the problem of an optimal randomized content placement policy that maximizes the user's hit probability. The result shows that it is not always optimal to follow the standard policy "cache the most popular content, everywhere". In fact, our numerical results for three different coverage scenarios show that the optimal policy significantly increases the hit probability in the high-coverage regime, i.e. when the probabilities of coverage by more than one station are high enough. Comment: 6 pages, 6 figures, conference
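    Under one common formulation of such randomized caching (our notation and numbers, not necessarily the paper's exact model): content j has popularity a_j, each base station independently caches it with probability b_j (the b_j summing to the cache size K), and p_m is the probability that the user is covered by exactly m stations.

```python
# Hit probability of a randomized placement b under coverage-number
# distribution p and popularity a; all numbers below are illustrative.

def hit_probability(a, b, p):
    # The content is missed only if none of the m covering stations cached it.
    return sum(aj * sum(pm * (1 - (1 - bj) ** m) for m, pm in enumerate(p))
               for aj, bj in zip(a, b))

# Zipf popularity (exponent 1) over 4 contents.
weights = [1 / (j + 1) for j in range(4)]
a = [w / sum(weights) for w in weights]
p = [0.1, 0.5, 0.3, 0.1]   # P(covered by exactly 0, 1, 2, 3 stations)
# Two placements with the same cache budget K = 2:
most_popular = [1.0, 1.0, 0.0, 0.0]  # "cache the most popular, everywhere"
mixed        = [1.0, 0.5, 0.3, 0.2]  # spread the placement probabilities

print(hit_probability(a, most_popular, p))
print(hit_probability(a, mixed, p))   # higher: spreading beats "most popular"
```

    With these made-up numbers the spread placement already beats caching only the two most popular contents, illustrating the paper's point that the standard policy is not always optimal when multi-station coverage is likely.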