
    Local Type Checking for Linked Data Consumers

    The Web of Linked Data is the culmination of over a decade of work by the Web standards community in their effort to make data more Web-like. We provide an introduction to the Web of Linked Data from the perspective of a Web developer who would like to build an application using Linked Data. We identify a weakness in the development stack: the lack of domain-specific scripting languages for designing background processes that consume Linked Data. To address this weakness, we design a scripting language with a simple but appropriate type system. In our proposed architecture, some data is consumed from sources outside the control of the system and some data is held locally. Stronger type assumptions can be made about the local data than about external data, hence our type system mixes static and dynamic typing. Throughout, we relate our work to the W3C recommendations that drive Linked Data, so our syntax is accessible to Web developers. Comment: In Proceedings WWV 2013, arXiv:1308.026
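
    The abstract describes an architecture in which locally held data can be trusted more strongly than externally sourced data. The Python sketch below is not the paper's scripting language; it only illustrates that idea with rdflib, and the ex:age property and the sample graphs are hypothetical. Local values are used directly, while external values are checked dynamically before use.

```python
# Sketch, not the paper's language: trust local data, dynamically check external data.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")

LOCAL_TTL = """
@prefix ex: <http://example.org/> .
ex:alice ex:age "34"^^<http://www.w3.org/2001/XMLSchema#integer> .
"""

EXTERNAL_TTL = """
@prefix ex: <http://example.org/> .
ex:bob ex:age "not a number" .
"""

# Local data: assumed well typed, so values are used directly (static-style trust).
local = Graph().parse(data=LOCAL_TTL, format="turtle")
external = Graph().parse(data=EXTERNAL_TTL, format="turtle")

def age_from_local(person):
    # No runtime check: the local store is under our control.
    return next(local.objects(person, EX.age)).toPython()

def age_from_external(graph, person):
    # External data: dynamically check the datatype before use.
    for value in graph.objects(person, EX.age):
        if isinstance(value, Literal) and value.datatype == XSD.integer:
            return value.toPython()
    raise TypeError("external source did not provide a well-typed ex:age")

print(age_from_local(EX.alice))          # trusted, used as-is
try:
    age_from_external(external, EX.bob)  # rejected at runtime
except TypeError as err:
    print(err)
```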

    ‘Unfair’ Discrimination in Two-sided Peering? Evidence from LINX

    Does asymmetry between Internet providers affect the “fairness” of their interconnection contracts? While recent game-theoretic literature provides contrasting answers to this question, there is a lack of empirical research. We introduce a novel dataset on micro-interconnection policies and provide an econometric analysis of the determinants of peering decisions amongst the Internet Service Providers interconnecting at the London Internet Exchange Point (LINX). Our key result shows that two different metrics, introduced to capture asymmetry, exert opposite effects. Asymmetry in “market size” enhances the quality of the link, while asymmetry in “network centrality” induces quality degradation, hence “unfairer” interconnection conditions.
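
    As a rough illustration of the kind of econometric exercise described, the toy regression below uses synthetic data (not the LINX dataset) to regress a link-quality measure on two asymmetry metrics whose effects carry opposite signs.

```python
# Toy sketch with synthetic data: link quality regressed on two asymmetry metrics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
size_asym = rng.normal(size=n)        # asymmetry in "market size"
centrality_asym = rng.normal(size=n)  # asymmetry in "network centrality"
quality = 0.5 * size_asym - 0.3 * centrality_asym + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([size_asym, centrality_asym]))
model = sm.OLS(quality, X).fit()
print(model.params)  # the two asymmetry coefficients have opposite signs
```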

    Identifying the time profile of everyday activities in the home using smart meter data

    Activities are a descriptive term for the common ways households spend their time. Examples include cooking, doing laundry, or socialising. Smart meter data can be used to generate time profiles of activities that are meaningful to households’ own lived experience. Activities are therefore a lens through which energy feedback to households can be made salient and understandable. This paper demonstrates a multi-step methodology for inferring hourly time profiles of ten household activities using smart meter data, supplemented by individual appliance plug monitors and environmental sensors. First, household interviews, video ethnography, and technology surveys are used to identify appliances and devices in the home, and their roles in specific activities. Second, ‘ontologies’ are developed to map out the relationships between activities and technologies in the home; one or more technologies may indicate the occurrence of certain activities. Third, data are collected from smart meters, plug monitors, and environmental sensors. Smart meter data measuring aggregate electricity use are disaggregated and processed together with the plug monitor and sensor data to identify when and for how long different activities occur. Sensor data are particularly useful for activities that are not always associated with an energy-using device. Fourth, the ontologies are applied to the disaggregated data to infer hourly time profiles of ten everyday activities; these include washing, doing laundry, and watching TV (reliably inferred), as well as cleaning, socialising, and working (inferred with greater uncertainty). Fifth, activity time diaries and structured interviews are used to validate both the ontologies and the inferred activity time profiles. Two case study homes are used to illustrate the methodology, using data collected as part of a UK trial of smart home technologies. The methodology is demonstrated to produce reliable time profiles of a range of domestic activities that are meaningful to households. The methodology also emphasises the value of integrating coded interview and video ethnography data into the development of the activity inference process.
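
    A minimal sketch of the fourth step, assuming a hypothetical device-to-activity mapping and made-up disaggregated events, shows how an ontology-style lookup can turn device events into hourly activity profiles:

```python
# Hypothetical ontology and events; illustrates the mapping step only.
from collections import defaultdict

# Which monitored device indicates which activity.
ONTOLOGY = {
    "washing_machine": "doing laundry",
    "oven": "cooking",
    "tv": "watching TV",
}

# Disaggregated events: (device, hour_of_day, minutes_on) from meters and plug monitors.
events = [("oven", 18, 40), ("tv", 20, 90), ("washing_machine", 10, 55)]

profile = defaultdict(lambda: defaultdict(int))  # activity -> hour -> minutes
for device, hour, minutes in events:
    activity = ONTOLOGY.get(device)
    if activity:
        profile[activity][hour] += minutes

for activity, hours in profile.items():
    print(activity, dict(hours))
```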

    Shape Expressions Schemas

    We present Shape Expressions (ShEx), an expressive schema language for RDF designed to provide a high-level, user-friendly syntax with intuitive semantics. ShEx makes it possible to describe the vocabulary and the structure of an RDF graph, and to constrain the allowed values for the properties of a node. It includes an algebraic grouping operator, a choice operator, cardinality constraints on the number of allowed occurrences of a property, and negation. We define the semantics of the language and illustrate it with examples. We then present a validation algorithm that, given a node in an RDF graph and a constraint defined by the ShEx schema, checks whether the node satisfies that constraint. The algorithm outputs a proof that contains trivially verifiable associations of nodes and the constraints that they satisfy. This structure can be used for complex post-processing tasks, such as transforming the RDF graph into other graph or tree structures, verifying more complex constraints, or debugging (w.r.t. the schema). We also show the inherent difficulty of error identification in ShEx.
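
    The toy checker below is not the paper's validation algorithm; it only illustrates ShEx-style cardinality checking over an RDF graph with rdflib, returning a simple proof of (node, property, count) associations. The shape and data are hypothetical.

```python
# Toy cardinality checker, not ShEx validation proper.
from rdflib import Graph, Namespace
from rdflib.namespace import FOAF

DATA = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .
ex:alice foaf:name "Alice" ; foaf:mbox <mailto:alice@example.org> .
"""

graph = Graph().parse(data=DATA, format="turtle")
EX = Namespace("http://example.org/")

# "Shape": each property must occur between min and max times on the focus node.
PERSON_SHAPE = {FOAF.name: (1, 1), FOAF.mbox: (1, None)}

def satisfies(node, shape):
    proof = []  # trivially checkable (node, property, count) associations
    for prop, (lo, hi) in shape.items():
        count = len(list(graph.objects(node, prop)))
        if count < lo or (hi is not None and count > hi):
            return False, proof
        proof.append((node, prop, count))
    return True, proof

print(satisfies(EX.alice, PERSON_SHAPE))
```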

    Supervised Blockmodelling

    Collective classification models attempt to improve classification performance by taking into account the class labels of related instances. However, they tend not to learn patterns of interactions between classes and/or make the assumption that instances of the same class link to each other (the assortativity assumption). Blockmodels provide a solution to these issues, being capable of modelling assortative and disassortative interactions, and of learning the pattern of interactions in the form of a summary network. The Supervised Blockmodel provides good classification performance using link structure alone, whilst simultaneously providing an interpretable summary of network interactions to allow a better understanding of the data. This work explores three variants of supervised blockmodels of varying complexity and tests them on four structurally different real-world networks. Comment: Workshop on Collective Learning and Inference on Structured Data 201
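
    A blockmodel's summary network aggregates link counts between classes. The small sketch below (toy data, not the paper's model) builds that class-to-class interaction matrix from an adjacency matrix and node labels:

```python
# Toy data: build the class-to-class "summary network" of edge counts.
import numpy as np

adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])        # hypothetical undirected network
labels = np.array([0, 1, 0, 1])       # class label per node

k = labels.max() + 1
summary = np.zeros((k, k), dtype=int)
for i, j in zip(*np.nonzero(adj)):
    summary[labels[i], labels[j]] += 1

print(summary)  # here all edges cross classes, so the interaction is disassortative
```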

    An ontology to standardize research output of nutritional epidemiology: from paper-based standards to linked content

    Background: The use of linked data in the Semantic Web is a promising approach to add value to nutrition research. An ontology, which defines the logical relationships between well-defined taxonomic terms, enables linking and harmonizing research output. To enable the description of domain-specific output in nutritional epidemiology, we propose the Ontology for Nutritional Epidemiology (ONE), built according to authoritative guidance for the field. Methods: Firstly, a scoping review was conducted to identify existing ontology terms for reuse in ONE. Secondly, existing data standards and reporting guidelines for nutritional epidemiology were converted into an ontology; the terms used in the standards were summarized and listed separately in a taxonomic hierarchy. Thirdly, the ontologies of the nutritional epidemiologic standards, reporting guidelines, and the core concepts were gathered in ONE. Three case studies were included to illustrate potential applications: (i) annotation of existing manuscripts and data, (ii) ontology-based inference, and (iii) estimation of reporting completeness in a sample of nine manuscripts. Results: Ontologies for food and nutrition (n = 37), disease and specific population (n = 100), data description (n = 21), research description (n = 35), and supplementary (meta) data description (n = 44) were reviewed and listed. ONE consists of 339 classes: 79 new classes to describe data and 24 new classes to describe the content of manuscripts. Conclusion: ONE is a resource to automate data integration, searching, and browsing, and can be used to assess reporting completeness in nutritional epidemiology.
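
    For case study (i), annotating a manuscript amounts to adding typed links between the manuscript and ontology classes. The sketch below uses placeholder IRIs: the ONE namespace and class names here are invented for illustration and are not identifiers from the published ontology.

```python
# Placeholder IRIs only; illustrates RDF annotation of a manuscript with ontology terms.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS

ONE = Namespace("http://example.org/one#")         # placeholder namespace
paper = URIRef("https://doi.org/10.0000/example")  # hypothetical manuscript

g = Graph()
g.add((paper, RDF.type, ONE.NutritionalEpidemiologyStudy))   # placeholder class
g.add((paper, DCTERMS.subject, ONE.FoodFrequencyQuestionnaire))

print(g.serialize(format="turtle"))
```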