
    Your {JSON} is not my {JSON}: a case for more fine-grained content negotiation

    Information resources can be expressed in different representations along many dimensions such as format, language, and time. Through content negotiation, HTTP clients and servers can agree on which representation is most appropriate for a given piece of data. For instance, interactive clients typically indicate that they prefer HTML, whereas automated clients would ask for JSON or RDF. However, labels such as “JSON” and “RDF” are insufficient to negotiate between the rich variety of possibilities offered by today’s languages and data models. This position paper argues that, despite widespread misuse, content negotiation remains the way forward. However, we need to extend it with more granular options in order to serve different current and future Web clients sustainably.
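The negotiation step the abstract describes can be sketched with a simple q-value parser. This is a minimal illustration, not an API from the paper; the media types and the selection logic are assumptions for the example.

```python
def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, best first."""
    prefs = []
    for part in header.split(","):
        pieces = part.strip().split(";")
        media_type = pieces[0].strip()
        q = 1.0
        for param in pieces[1:]:
            key, _, value = param.strip().partition("=")
            if key.strip() == "q":
                q = float(value)
        prefs.append((media_type, q))
    return sorted(prefs, key=lambda p: p[1], reverse=True)

def negotiate(accept_header, available):
    """Return the best available representation, or None if no match."""
    for media_type, q in parse_accept(accept_header):
        if q <= 0:
            continue
        if media_type in available:
            return media_type
        if media_type == "*/*" and available:
            return available[0]
    return None

# An automated client preferring JSON-LD over generic JSON:
choice = negotiate(
    "application/ld+json;q=0.9, application/json;q=0.8, */*;q=0.1",
    ["text/html", "application/json", "application/ld+json"],
)
print(choice)  # application/ld+json
```

The paper's point is that these coarse media-type labels are where negotiation stops today; finer-grained dimensions (profile, language, time) would need additional negotiation parameters beyond what this sketch handles.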

    Disaster Monitoring with Wikipedia and Online Social Networking Sites: Structured Data and Linked Data Fragments to the Rescue?

    In this paper, we present the first results of our ongoing early-stage research on a real-time disaster detection and monitoring tool. Based on Wikipedia, it is language-agnostic and leverages user-generated multimedia content shared on online social networking sites to help disaster responders prioritize their efforts. We make the tool and its source code publicly available as we make progress on it. Furthermore, we strive to publish detected disasters and accompanying multimedia content following the Linked Data principles to facilitate their wide consumption, redistribution, and the evaluation of their usefulness. Comment: Accepted for publication at the AAAI Spring Symposium 2015: Structured Data for Humanitarian Technologies: Perfect fit or Overkill? #SD4HumTech1

    Public transit route planning through lightweight linked data interfaces

    While some public transit data publishers only provide a data dump – which only a few reusers can afford to integrate within their applications – others provide a use-case-limiting origin-destination route planning API. The Linked Connections framework instead introduces a hypermedia API, over which the extensible base route planning algorithm, the “Connection Scan Algorithm”, can be implemented. We compare the CPU usage and query execution time of a traditional server-side route planner with those of a Linked Connections interface by evaluating query mixes under increasing load. We found that, at the expense of higher bandwidth consumption, more queries can be answered using the same hardware with the Linked Connections server interface than with an origin-destination API, thanks to an average cache hit rate of 78%. The findings from this research show a cost-efficient way of publishing transport data that can bring federated public transit route planning within reach of anyone.
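The core of the Connection Scan Algorithm named above fits in a few lines: scan connections in departure-time order and relax earliest-arrival times. The toy connections and this basic earliest-arrival variant are illustrative only, not the evaluated implementation from the paper.

```python
# Connections are (dep_stop, arr_stop, dep_time, arr_time), pre-sorted
# by departure time, as a Linked Connections client would receive them.

def csa_earliest_arrival(connections, source, target, departure_time):
    """Basic Connection Scan: earliest arrival at `target` from `source`."""
    INF = float("inf")
    earliest = {source: departure_time}
    for dep_stop, arr_stop, dep_time, arr_time in connections:
        # A connection is usable if we can reach its departure stop in time,
        # and it only helps if it improves the arrival stop's best time.
        if earliest.get(dep_stop, INF) <= dep_time and \
                arr_time < earliest.get(arr_stop, INF):
            earliest[arr_stop] = arr_time
    return earliest.get(target, INF)

connections = [
    ("A", "B", 0, 10),
    ("A", "C", 5, 40),
    ("B", "C", 15, 25),
]
print(csa_earliest_arrival(connections, "A", "C", 0))  # 25
```

Because the scan is a single pass over time-ordered connections, a client can fetch them page by page from a lightweight hypermedia interface, which is what makes the server-side caching in the paper effective.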

    Continuous client-side query evaluation over dynamic linked data

    Existing solutions to query dynamic Linked Data sources extend the SPARQL language, and require continuous server processing for each query. Traditional SPARQL endpoints already accept highly expressive queries, so extending these endpoints for time-sensitive queries increases the server cost even further. To make continuous querying over dynamic Linked Data more affordable, we extend the low-cost Triple Pattern Fragments (TPF) interface with support for time-sensitive queries. In this paper, we introduce the TPF Query Streamer that allows clients to evaluate SPARQL queries with continuously updating results. Our experiments indicate that this extension significantly lowers the server complexity, at the expense of an increase in the execution time per query. We prove that by moving the complexity of continuously evaluating queries over dynamic Linked Data to the clients, and thus increasing bandwidth usage, the cost at the server side is significantly reduced. Our results show that this solution makes real-time querying more scalable for a large number of concurrent clients when compared to the alternatives.
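The client-side shift described above can be sketched as a polling loop that re-evaluates a query whenever the announced freshness lifetime of the previous result expires. Note that `fetch_fragment` and its `max_age` field are stand-ins invented for this illustration; they are not the actual TPF Query Streamer API.

```python
import time

def fetch_fragment(pattern):
    """Stand-in for a Triple Pattern Fragments request. A real client
    would issue an HTTP GET and read freshness from caching headers."""
    return {
        "triples": [("ex:sensor1", "ex:value", str(time.time()))],
        "max_age": 0.1,  # seconds the result may be considered fresh
    }

def stream_query(pattern, evaluations):
    """Re-evaluate `pattern` `evaluations` times, paced by max_age,
    so the continuous-evaluation work happens on the client."""
    results = []
    for _ in range(evaluations):
        fragment = fetch_fragment(pattern)
        results.append(fragment["triples"])
        time.sleep(fragment["max_age"])  # wait until data may have changed
    return results

updates = stream_query(("ex:sensor1", "ex:value", None), evaluations=2)
```

The server only ever answers cheap, cacheable fragment requests; the repeated evaluation, and the extra bandwidth it costs, sits entirely with the client, which matches the cost trade-off the abstract reports.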

    The Function Hub: an implementation-independent read/write function description repository

    Functions are essential building blocks of any (computer) information system. However, development efforts to implement these functions are fragmented: a function has multiple implementations, each within a specific development context. Manual effort is needed to handle various search interfaces and access methods to find the desired function, its metadata (if any), and associated implementations. This laborious process inhibits discovery, and thus reuse. Uniform, implementation-independent access is needed. We demo the Function Hub, available online at https://fno.io/hub: a Web application using a semantically interoperable model to map function descriptions to (multiple) implementations. The Function Hub allows editing and discovering function description metadata, and adding information about alternative implementations. This way, the Function Hub enables users to discover relevant functions independently of their implementation, and to link to the originally published implementations.
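The repository's core idea, one abstract function description resolving to several concrete implementations, can be sketched as a small registry. The identifiers and registry shape below are illustrative assumptions, not the Function Hub's actual data model.

```python
# Illustrative registry: an abstract, implementation-independent function
# description mapped to concrete implementations per development context.
registry = {
    "grel:toUpperCase": {
        "description": "Uppercase a string",
        "implementations": {
            "python": str.upper,
            # further entries could point to Java or JavaScript artifacts
        },
    },
}

def resolve(function_id, language):
    """Discover an implementation for an abstract function description."""
    entry = registry[function_id]
    return entry["implementations"][language]

fn = resolve("grel:toUpperCase", "python")
print(fn("linked data"))  # LINKED DATA
```

The point of the uniform lookup is that a user searches by what the function does (its description), not by where or how it happens to be implemented.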