
    Metamodel-based model conformance and multiview consistency checking

    Model-driven development, using languages such as UML and BON, often makes use of multiple diagrams (e.g., class and sequence diagrams) when modeling systems. These diagrams, presenting different views of a system of interest, may be inconsistent. A metamodel provides a unifying framework in which to ensure and check consistency, while at the same time providing the means to distinguish between valid and invalid models, that is, conformance. Two formal specifications of the metamodel for an object-oriented modeling language are presented, and it is shown how to use these specifications for model conformance and multiview consistency checking. The two specifications are compared in terms of completeness and the level of automation each provides for checking multiview consistency and model conformance. The lessons learned from applying formal techniques to the problems of metamodeling, model conformance, and multiview consistency checking are summarized.
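    The idea of checking one view against another can be illustrated with a toy sketch (the class names, operations, and check below are invented for illustration and are not the paper's formal specifications): a class view declares classes and their operations, and a sequence view records messages. A message targeting an undeclared class violates conformance; a message naming an undeclared operation is a multiview inconsistency.

```python
# Toy metamodel sketch: class view = classes -> declared operations,
# sequence view = (target class, message name) pairs.
class_view = {
    "Account": {"deposit", "withdraw"},
    "Customer": {"register"},
}

sequence_view = [
    ("Account", "deposit"),  # fine
    ("Account", "close"),    # inconsistent: 'close' not declared on Account
    ("Ledger", "post"),      # non-conformant: 'Ledger' is not a class
]

def check(class_view, sequence_view):
    """Return a list of conformance and consistency violations."""
    errors = []
    for target, msg in sequence_view:
        if target not in class_view:
            errors.append(f"non-conformant: unknown class {target!r}")
        elif msg not in class_view[target]:
            errors.append(f"inconsistent: {target}.{msg} not declared")
    return errors

for e in check(class_view, sequence_view):
    print(e)
```

A metamodel-based checker generalizes this pattern: both views are instances of one metamodel, so a single set of well-formedness rules covers both conformance and cross-view consistency.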

    Verification and Validation of Semantic Annotations

    In this paper, we propose a framework to perform verification and validation of semantically annotated data. The annotations, extracted from websites, are verified against the schema.org vocabulary and Domain Specifications to ensure the syntactic correctness and completeness of the annotations. The Domain Specifications allow checking the compliance of annotations against corresponding domain-specific constraints. The validation mechanism will detect errors and inconsistencies between the content of the analyzed schema.org annotations and the content of the web pages where the annotations were found. Comment: Accepted for the A.P. Ershov Informatics Conference 2019 (the PSI Conference Series, 12th edition) proceedings.
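    A minimal sketch of the kind of check described, assuming a simplified domain specification that lists required properties for a schema.org type (the `hotel_spec` structure and property names are assumptions for illustration, not the authors' actual Domain Specification format):

```python
# Hypothetical domain specification: expected @type plus required properties.
hotel_spec = {
    "expected_type": "Hotel",
    "required": {"@type", "name", "address"},
}

# A schema.org annotation as extracted from a web page ('address' is missing).
annotation = {"@type": "Hotel", "name": "Alpenhof"}

def validate(annotation, spec):
    """Report missing required properties and type mismatches."""
    issues = []
    if annotation.get("@type") != spec["expected_type"]:
        issues.append("wrong @type")
    for prop in spec["required"] - annotation.keys():
        issues.append(f"missing required property: {prop}")
    return issues

print(validate(annotation, hotel_spec))
# ['missing required property: address']
```

Real schema.org validation also checks property value types and nested entities; this sketch covers only presence of required properties.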

    The APM Bright Galaxy Catalogue

    The APM Bright Galaxy Catalogue lists positions, magnitudes, shapes and morphological types for 14,681 galaxies brighter than b_J magnitude 16.44 over a 4,180 square degree area of the southern sky. Galaxy and stellar images have been located from glass copy plates of the United Kingdom Schmidt Telescope (UKST) IIIaJ sky survey using the Automated Photographic Measuring (APM) facility in Cambridge, England. The majority of stellar images are rejected by the regularity of their image surface brightness profiles. Remaining images are inspected by eye on film copies of the survey material and classed as stellar, multiple stellar, galaxy, merger or noise. Galaxies are further classified as elliptical, lenticular, spiral, irregular or uncertain. The 180 survey fields are put onto a uniform photometric system by comparing the magnitudes of galaxies in the overlap regions between neighbouring plates. The magnitude zero-point, photometric uniformity and photographic saturation are checked with CCD photometry. Finally, the completeness and reliability of the catalogue are assessed using various internal tests and by comparing with several independently constructed galaxy catalogues. Comment: 52 pages, uuencoded, gzipped tar archive, includes all figures. The catalogue data is available from ftp://adc.gsfc.nasa.gov/pub/adc/archives/journal_tables/MNRAS/278/1025

    Model-based dependability analysis : state-of-the-art, challenges and future outlook

    Over the past two decades, the study of model-based dependability analysis has attracted significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques, and to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models from component failure models compositionally, using the topology of the system; the other utilizes design models - typically state automata - to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms, and provides insight into their working mechanisms, applicability, strengths and challenges, as well as recent developments within these fields. We also discuss emerging trends on integrated approaches and advanced analysis capabilities. Lastly, we outline the future outlook for model-based dependability analysis.

    Semantics of trace relations in requirements models for consistency checking and inferencing

    Requirements traceability is the ability to relate requirements back to stakeholders and forward to corresponding design artifacts, code, and test cases. Although considerable research has been devoted to relating requirements in both forward and backward directions, less attention has been paid to relating requirements with other requirements. Relations between requirements influence a number of activities during software development, such as consistency checking and change management. In most approaches and tools, there is a lack of precise definition of requirements relations, which can produce deficient results. In this paper, we aim at formal definitions of the relation types in order to enable reasoning about requirements relations. We give a requirements metamodel with commonly used relation types. The semantics of the relations is provided with a formalization in first-order logic. We use the formalization for consistency checking of relations and for inferring new relations. A tool has been built to support both reasoning activities. We illustrate our approach in an example which shows that the formal semantics of relation types enables new relations to be inferred and contradicting relations in requirements documents to be determined. The application of requirements reasoning based on formal semantics resolves many of the deficiencies observed in other approaches. Our tool supports better understanding of dependencies between requirements.
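    The flavour of this reasoning can be sketched in a few lines, assuming (purely for illustration, not as the paper's actual axioms) that a "requires" relation is transitive and that "requires" and "conflicts" between the same pair of requirements are contradictory:

```python
# Declared trace relations between requirement identifiers.
requires = {("R1", "R2"), ("R2", "R3")}
conflicts = {("R1", "R3")}

def transitive_closure(pairs):
    """Naive fixed-point computation of the transitive closure."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

inferred = transitive_closure(requires)
print(inferred - requires)     # newly inferred relations: {('R1', 'R3')}
print(inferred & conflicts)    # contradictions with declared conflicts
```

A first-order formalization expresses the same rules as axioms (e.g. requires(x,y) AND requires(y,z) -> requires(x,z)) and hands inference and contradiction detection to a reasoner instead of a hand-written fixed point.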

    Inapproximability of Combinatorial Optimization Problems

    We survey results on the hardness of approximating combinatorial optimization problems.

    Analyzing the Tagging Quality of the Spanish OpenStreetMap

    In this paper, a framework for the assessment of the quality of OpenStreetMap is presented, comprising a batch of methods to analyze the quality of entity tagging. The approach uses Taginfo as a reference base and analyzes quality measures such as completeness, compliance, consistency, granularity, richness and trust. The framework has been used to analyze the quality of OpenStreetMap in Spain, comparing its main cities. A comparison between Spain and some major European cities has also been carried out. Additionally, a Web tool has also been developed in order to facilitate the same kind of analysis in any area of the world.
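    A completeness measure in the spirit described can be sketched as the share of expected tag keys actually present on entities of a given type (the reference keys below are assumptions standing in for Taginfo-derived statistics, not the paper's actual reference base):

```python
# Hypothetical reference: keys expected on entities tagged as restaurants.
reference_keys = {"restaurant": {"name", "cuisine", "opening_hours"}}

# Simplified OSM-like entities: each is a dict of tag keys to values.
entities = [
    {"amenity": "restaurant", "name": "Casa Pepe", "cuisine": "spanish"},
    {"amenity": "restaurant", "name": "Bar Sol"},
]

def completeness(entities, expected):
    """Fraction of expected keys present, averaged over all entities."""
    present = sum(len(expected & e.keys()) for e in entities)
    return present / (len(expected) * len(entities))

print(completeness(entities, reference_keys["restaurant"]))  # 0.5
```

The other measures (compliance, consistency, granularity, richness, trust) would follow the same pattern with different per-entity scores.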

    A catalogue quality audit tool

    The current need for performance measurement and quality targets for services to users requires suitable performance indicators for libraries. This paper looks at the self-assessment audit tool for catalogue quality developed by UKOLN in collaboration with Essex libraries. For the tool, a checklist of errors was drawn up, which can then be used to assess the quality of records within a catalogue using a sample of library stock. The tool can be used to assess the quality of catalogue records for monographs and non-book materials (but not serials), for complete collections or parts of collections, and for records created at different periods. This paper describes the tool and the process of making the assessment, and reports on the results of the pilot study carried out at the University of Bath Library in 2000.
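    The audit idea of applying an error checklist to a sample of records reduces to simple per-category error rates; a hedged sketch, with checklist categories invented for illustration rather than taken from the UKOLN tool:

```python
# Hypothetical checklist categories and a small sample of audited records,
# each listing the checklist errors found in it.
checklist = ["wrong classmark", "typo in title", "missing ISBN"]

sample = [
    {"errors": []},
    {"errors": ["typo in title"]},
    {"errors": ["wrong classmark", "missing ISBN"]},
]

def error_rates(sample, checklist):
    """Share of sampled records exhibiting each checklist error."""
    n = len(sample)
    return {c: sum(c in r["errors"] for r in sample) / n for c in checklist}

rates = error_rates(sample, checklist)
print(rates["typo in title"])  # 1/3 of sampled records
```

These per-category rates, computed on a representative stock sample, are the kind of performance indicator the paper's audit produces.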