
    The pragmatic proof: hypermedia API composition and execution

    Machine clients are increasingly making use of the Web to perform tasks. While Web services traditionally mimic remote procedure calling interfaces, a new generation of so-called hypermedia APIs works through hyperlinks and forms, in a way similar to how people browse the Web. This means that existing composition techniques, which determine a procedural plan upfront, are not sufficient to consume hypermedia APIs, which need to be navigated at runtime. Clients instead need a more dynamic plan that allows them to follow hyperlinks and use forms with a preset goal. Therefore, in this paper, we show how compositions of hypermedia APIs can be created by generic Semantic Web reasoners. This is achieved through the generation of a proof based on semantic descriptions of the APIs' functionality. To pragmatically verify the applicability of compositions, we introduce the notion of pre-execution and post-execution proofs. The runtime interaction between a client and a server is guided by proofs but driven by hypermedia, allowing the client to react to the application's actual state indicated by the server's response. We describe how to generate compositions from descriptions, discuss a computer-assisted process to generate descriptions, and verify reasoner performance on various composition tasks using a benchmark suite. The experimental results lead to the conclusion that proof-based consumption of hypermedia APIs is a feasible strategy at Web scale.
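    The core idea of the abstract above — choosing the next step from the server's actual response rather than from a fixed procedural plan — can be illustrated with a minimal sketch. This is not the paper's implementation; the resource names, link relations, and state labels below are hypothetical, and live HTTP traffic is replaced by an in-memory map of hypermedia documents.

    ```python
    # Illustrative sketch: a client consuming a hypermedia API by following
    # hyperlinks at runtime toward a preset goal. All resource names and
    # link relations are hypothetical.

    RESOURCES = {
        "/":           {"state": "entry", "links": {"orders": "/orders"}},
        "/orders":     {"state": "list",  "links": {"create": "/orders/new"}},
        "/orders/new": {"state": "goal",  "links": {}},
    }

    def navigate(start, goal_state, fetch):
        """Follow hyperlinks from `start` until a resource whose state is
        `goal_state` is reached. The plan is chosen step by step from each
        server response, not determined upfront."""
        url, visited = start, set()
        while True:
            doc = fetch(url)                  # the server response drives the client
            if doc["state"] == goal_state:
                return url
            visited.add(url)
            # follow any link that has not been tried yet
            next_urls = [u for u in doc["links"].values() if u not in visited]
            if not next_urls:
                raise RuntimeError("goal unreachable from " + url)
            url = next_urls[0]

    reached = navigate("/", "goal", RESOURCES.__getitem__)
    ```

    In the paper's setting, the pre-execution proof would justify that such a path exists before the first request, while the post-execution proof re-checks the plan against what the server actually returned.
    
    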

    Process-aware web programming with Jolie

    We extend the Jolie programming language to capture the native modelling of process-aware web information systems, i.e., web information systems based upon the execution of business processes. Our main contribution is to offer a unifying approach for the programming of distributed architectures on the web, which can capture web servers, stateful process execution, and the composition of services via mediation. We discuss applications of this approach through a series of examples that cover, e.g., static content serving, multiparty sessions, and the evolution of web systems. Finally, we present a performance evaluation that includes a comparison of Jolie-based web systems to other frameworks and a measurement of its scalability.
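    "Stateful process execution" here means that each session runs its own instance of a business process that advances as messages arrive. As a rough analogy only (Python, not Jolie — Jolie expresses this natively; the process steps and session-routing scheme below are invented for illustration):

    ```python
    # Rough analogy: a per-session business process modelled as a generator
    # that advances on each incoming message. The process definition and
    # correlation-by-session-id scheme are hypothetical.

    def order_process():
        """A two-step process: receive an item, then a payment."""
        item = yield "awaiting item"
        payment = yield f"awaiting payment for {item}"
        yield f"confirmed {item} ({payment})"

    sessions = {}

    def handle(session_id, message):
        """Route each message to the process instance for its session
        (correlation), creating a fresh instance on first contact."""
        if session_id not in sessions:
            proc = order_process()
            next(proc)                 # start the process at its first receive
            sessions[session_id] = proc
        return sessions[session_id].send(message)
    ```

    Interleaved sessions keep independent state: messages for session "s1" and "s2" advance separate process instances, which is the behaviour the paper's multiparty-session examples rely on.
    
    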

    Towards improving web service repositories through semantic web techniques

    The success of the Web services technology has brought topics such as software reuse and discovery once again onto the agenda of software engineers. While there are several efforts towards automating Web service discovery and composition, many developers still search for services via online Web service repositories and then combine them manually. However, our analysis of these repositories shows that, unlike traditional software libraries, they rely on little metadata to support service discovery. We believe that the major cause is the difficulty of automatically deriving metadata that would describe rapidly changing Web service collections. In this paper, we discuss the major shortcomings of state-of-the-art Web service repositories and, as a solution, we report on ongoing work and ideas on how to use techniques developed in the context of the Semantic Web (ontology learning, mapping, metadata-based presentation) to improve the current situation.

    Modelling data intensive web sites with OntoWeaver

    This paper illustrates the OntoWeaver modelling approach, which relies on a set of comprehensive site ontologies to model all aspects of data-intensive web sites and thus offers high-level support for their design and development. In particular, the OntoWeaver site ontologies comprise two components: a site view ontology and a presentation ontology. The site view ontology provides meta-models that allow for the composition of sophisticated site views, which let end users navigate and manipulate the underlying domain databases. The presentation ontology abstracts the look and feel of site views and makes it possible for the visual appearance and layout to be specified at a high level of abstraction.

    Applying digital content management to support localisation

    The retrieval and presentation of digital content such as that on the World Wide Web (WWW) is a substantial area of research. While recent years have seen huge expansion in the size of web-based archives that can be searched efficiently by commercial search engines, the presentation of potentially relevant content is still limited to ranked document lists represented by simple text snippets or image keyframe surrogates. There is expanding interest in techniques to personalise the presentation of content to improve the richness and effectiveness of the user experience. One of the most significant challenges to achieving this is the increasingly multilingual nature of this data, and the need to provide suitably localised responses to users based on this content. The Digital Content Management (DCM) track of the Centre for Next Generation Localisation (CNGL) is seeking to develop technologies to support advanced personalised access and presentation of information by combining elements from the existing research areas of Adaptive Hypermedia and Information Retrieval. The combination of these technologies is intended to produce significant improvements in the way users access information. We review key features of these technologies and introduce early ideas for how they can support localisation and localised content, before concluding with some impressions of future directions in DCM.

    Why a reform of hosting providers' safe harbour is unnecessary under EU copyright law

    In the context of its Digital Single Market Strategy (DSMS), the EU Commission is currently engaged in a discussion of whether the liability principles and rules envisaged by Directive 2000/31 (the Ecommerce Directive) should be amended. One of the principal concerns in relation to unlicensed online intermediaries (notably unlicensed hosting providers) is that these have increasingly been said to invoke the safe harbour immunities in the Ecommerce Directive while lacking the conditions for their application. This alleged abuse has led to a distortion of the online marketplace and the resulting ‘value gap’ indicated by some rightholders. This contribution discusses a recent proposal advanced in France which asks to remove the safe harbour protection pursuant to Article 14 of the Ecommerce Directive for hosting providers that give access to copyright works. After addressing some of the points raised by the French proposal, this work concludes that the Court of Justice of the European Union (CJEU) has not erred in its interpretation of the relevant provisions of the Ecommerce Directive and that – in practice – the removal of safe harbour protection for passive hosting providers that give access to copyright works would not provide any distinct advantages to rightholders. Overall, the current framework already provides an adequate degree of protection: what is required is a rigorous application by national courts of the principles enshrined in the Ecommerce Directive, as interpreted by the CJEU.

    The Court of Justice of the European Union Creates an EU Law of Liability for Facilitation of Copyright Infringement: Observations on Brein v. Filmspeler [C-527/15] (2017) and Brein v. Ziggo [C-610/15] (2017)

    After a series of decisions in which the Court of Justice of the European Union appeared to be cutting back on the application of the right of communication to the public with respect to the provision of hyperlinks, the Court’s most recent decisions in Brein v. Filmspeler (C-527/15) and Brein v. Ziggo (C-610/15) — concerning, respectively, the sale of a device pre-loaded with hyperlinks to illegal streaming sites, and The Pirate Bay BitTorrent platform — indicate instead that the Court’s prior caselaw was in fact gradually advancing toward a European harmonization of the law on derivative liability (i.e., liability in the second degree) for violation of the right of communication to the public. These two most recent decisions have now achieved that harmonization. Moreover, harmonization was necessary given both the lack of uniformity regarding secondary liability across the national laws of the member states, and the growing economic importance of furnishing the means to access infringing sources (without serving as the initial source of the infringing communication). This article will first briefly review the facts of the cases. It then will examine how the Court’s reasoning results in a European law of communication to the public that reaches actors who do not originate illicit communications, but who knowingly facilitate them (I). Next, the analysis will show that the harmonized law of derivative liability can be considered the flip side of the law of non-liability for “the storage of information provided by a recipient of the service,... for the information stored at the request of a recipient of the service” already harmonized by art. 14 of the eCommerce directive 2000/31 (II). The article concludes with a brief postscript evoking some comparisons with U.S. copyright law.