
    The Ubiquitous Interactor - Device Independent Access to Mobile Services

    The Ubiquitous Interactor (UBI) addresses the problems of design and development that arise around services that need to be accessed from many different devices. In UBI, the same service can present itself with different user interfaces on different devices. This is done by separating the interaction between users and services from the presentation. The interaction is kept the same for all devices, while different presentation information is provided for different devices. This way, tailored user interfaces for many different devices can be created without multiplying development and maintenance work. In this paper we describe the system design of UBI, the system implementation, and two services implemented for the system: a calendar service and a stockbroker service.
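
    The core idea, one device-independent interaction description rendered through device-specific presentations, can be sketched in a few lines of Python. Everything below (`InteractionAct`, the renderer functions) is a hypothetical illustration of the principle, not UBI's actual API:

    ```python
    # Illustrative sketch only: the abstract does not show UBI's actual
    # encoding of interaction acts, so names and structures are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class InteractionAct:
        """A device-independent unit of interaction, e.g. selecting one item."""
        kind: str            # e.g. "select", "input", "output"
        prompt: str
        options: list[str]

    def render_phone(act: InteractionAct) -> str:
        # Compact presentation for a small screen: a numbered text menu.
        lines = [act.prompt] + [f"{i}. {o}" for i, o in enumerate(act.options, 1)]
        return "\n".join(lines)

    def render_desktop(act: InteractionAct) -> str:
        # Richer presentation for a desktop browser: an HTML drop-down.
        opts = "".join(f"<option>{o}</option>" for o in act.options)
        return f"<label>{act.prompt} <select>{opts}</select></label>"

    # The service describes the interaction once...
    act = InteractionAct("select", "Choose a calendar view:", ["Day", "Week", "Month"])
    # ...and each device type supplies its own presentation of it.
    print(render_phone(act))
    print(render_desktop(act))
    ```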

    Integrating web services into data intensive web sites

    Designing web sites is a complex task. Ad-hoc rapid prototyping easily leads to unsatisfactory results, e.g. poor maintainability and extensibility. However, existing web design frameworks focus exclusively on data presentation: the development of specific functionalities is still achieved through low-level programming. In this paper we address this issue by describing our work on the integration of (semantic) web services into a web design framework, OntoWeaver. The resulting architecture, OntoWeaver-S, supports rapid prototyping of service-centred data-intensive web sites, which allow access to remote web services. In particular, OntoWeaver-S is integrated with a comprehensive web service platform, IRS-II, for the specification, discovery, and execution of web services. Moreover, it employs a set of comprehensive site ontologies to model and represent all aspects of service-centred data-intensive web sites, and is thus able to offer high-level support for the design and development process.
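
    A heavily hedged sketch of the shape of that idea: a declarative page model that binds an abstract service task into a data-intensive page, with service resolution delegated to a platform. The abstract does not specify OntoWeaver-S's site ontologies or the IRS-II API, so every name below is invented:

    ```python
    # Hypothetical sketch only: page model fields, the task name, and the
    # resolution step are all invented, not OntoWeaver-S's actual ontology.
    page_model = {
        "page": "stock-quotes",
        "layout": ["header", "quote-table"],
        "service_binding": {
            "task": "get-stock-quote",     # abstract task, resolved by the platform
            "inputs": {"symbol": "ACME"},
            "target": "quote-table",       # the page element that shows the result
        },
    }

    def resolve_and_invoke(binding: dict) -> dict:
        # In OntoWeaver-S this step would be delegated to IRS-II, which discovers
        # and executes a concrete web service for the abstract task; stubbed here.
        return {"symbol": binding["inputs"]["symbol"], "price": 42.0}

    def render(model: dict) -> None:
        result = resolve_and_invoke(model["service_binding"])
        print(f"[{model['page']}] {model['service_binding']['target']}: {result}")

    render(page_model)
    ```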

    The Role of XML in the Modeling Process of a Virtual Business

    The aim of this paper is to describe the XML stack of languages used in the implementation process of a web application. This application is based on a three-tier architecture named XRX. In this type of architecture there is no need for data model transformations between the tiers, as there is in a classical architecture, so applications developed according to the XRX architecture become more flexible, efficient, and simple.

    Keywords: XML, XPath, XQuery, XSLT, XForms, XRX, UBL
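
    In an XRX (XForms-REST-XQuery) stack, the same XML document flows through all three tiers, so no object-relational or similar mapping is needed. A minimal sketch of that idea, using Python's standard library in place of an XForms client; the endpoint URL and element names are hypothetical:

    ```python
    # Minimal sketch of the XRX idea: the XML instance a client submits is
    # stored and queried as-is, with no conversion to objects or rows.
    import urllib.request
    import xml.etree.ElementTree as ET

    invoice = """<invoice xmlns="urn:example:invoice">
      <customer>ACME Corp</customer>
      <total currency="EUR">199.00</total>
    </invoice>"""

    # Tier 1 -> tier 2: an XForms client would submit this instance over REST.
    req = urllib.request.Request(
        "http://localhost:8080/exist/rest/db/invoices/inv-001.xml",  # hypothetical
        data=invoice.encode("utf-8"),
        method="PUT",
        headers={"Content-Type": "application/xml"},
    )
    # urllib.request.urlopen(req)  # uncomment against a running XML database

    # Tier 3: the XML database stores and queries the same document; there is
    # no mapping layer, as this parse of the unchanged instance illustrates.
    root = ET.fromstring(invoice)
    total = root.find("{urn:example:invoice}total")
    print(total.get("currency"), total.text)
    ```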

    SFDL: MVC Applied to Workflow Design

    Process management based on workflow systems is a growing trend in collaborative environments. One of the most visible areas for improvement is user interfaces, especially since business process definition languages do not efficiently address the point of contact between workflow engines and human interactions. With that in focus, we propose applying the MVC design pattern to workflow systems. To accomplish this, we have designed a new dynamic view definition language called SFDL, oriented towards easy interoperability with the different workflow definition languages, while maintaining enough flexibility to be represented in different formats and to be adaptable to several environments. To validate our approach, we have carried out an implementation in a real banking scenario, which has provided continuous feedback and enabled us to refine the proposal. The work is fully based on widely accepted and used web standards (XML, YAML, JSON, Atom and REST). Some guidelines are given to facilitate the adoption of our solution.
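
    A plausible reading of the approach: the view layer is declared separately from the workflow engine (the model) and the controller logic that binds them. The JSON view definition below is invented for illustration in one of the formats the paper lists; it is not actual SFDL syntax:

    ```python
    # Hypothetical, simplified view definition; all field names are invented.
    import json

    sfdl_view = json.loads("""
    {
      "view": "approve-loan",
      "bind_task": "loan-approval-step",
      "widgets": [
        {"type": "text",   "label": "Applicant", "bind": "applicant.name"},
        {"type": "number", "label": "Amount",    "bind": "loan.amount"},
        {"type": "choice", "label": "Decision",  "bind": "decision",
         "options": ["approve", "reject", "escalate"]}
      ]
    }
    """)

    def render(view: dict, task_data: dict) -> None:
        # Controller role: pull values out of the workflow task data (the model)
        # and hand them to the widgets declared by the view.
        print(f"== {view['view']} ==")
        for widget in view["widgets"]:
            value = task_data.get(widget["bind"], "")
            print(f"{widget['label']}: {value}  [{widget['type']}]")

    # Model role: task data as it might arrive from the workflow engine.
    render(sfdl_view, {"applicant.name": "J. Doe", "loan.amount": 12000})
    ```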

    DBpedia's triple pattern fragments: usage patterns and insights

    Queryable Linked Data is published through several interfaces, including SPARQL endpoints and Linked Data documents. In October 2014, the DBpedia Association announced an official Triple Pattern Fragments interface to its popular DBpedia dataset. This interface aims to improve the availability of live queryable data by dividing query execution between clients and servers. In this paper, we present a usage analysis between November 2014 and July 2015. Over those nine months, the interface had an average availability of 99.99%, handling 16,776,170 requests, 43.0% of which were served from cache. These numbers provide promising evidence that low-cost Triple Pattern Fragments interfaces are a viable strategy for live applications on top of public, queryable datasets.
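
    The division of work is the key point: the server only answers requests for triples matching a single pattern, and the client combines such fragments to evaluate richer SPARQL queries itself. A sketch of one fragment request, using only the standard library; the fragment URL is the one announced for DBpedia and may since have moved:

    ```python
    # Sketch of a Triple Pattern Fragments request over plain HTTP.
    import urllib.parse
    import urllib.request

    base = "http://fragments.dbpedia.org/2015/en"
    pattern = {
        "subject": "http://dbpedia.org/resource/Ghent",
        "predicate": "http://dbpedia.org/ontology/country",
        # omitting "object" leaves it as a variable in the pattern
    }
    url = base + "?" + urllib.parse.urlencode(pattern)
    req = urllib.request.Request(url, headers={"Accept": "text/turtle"})

    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")

    # Besides the matching triples, the response carries paging metadata and
    # hypermedia controls telling the client how to request further fragments.
    print(body[:500])
    ```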

    A Brief History of Web Crawlers

    Web crawlers visit internet applications, collect data, and learn about new web pages from the pages they visit. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics and indexing applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on an application. The rapid expansion of the web and the complexity added to web applications have made crawling a very challenging process. Throughout the history of web crawling, many researchers and industrial groups have addressed the different issues and challenges that web crawlers face, and different solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenging problem, and automatically capturing the model of a modern web application and extracting data from it is another open question. What follows is a brief history of the different techniques and algorithms used from the early days of crawling up to the present day. We introduce criteria to evaluate the relative performance of web crawlers, and based on these criteria we plot the evolution of web crawlers and compare their performance.
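
    The "visit, collect, learn about new pages" loop the abstract opens with is the classic frontier-based crawl. A minimal sketch, not any particular crawler from the survey; real crawlers add politeness delays, robots.txt handling, URL normalization, and a proper HTML parser:

    ```python
    # Minimal crawl loop: take a URL from the frontier, fetch the page,
    # extract links, and enqueue the ones not yet seen.
    import re
    import urllib.request
    from collections import deque

    def crawl(seed: str, max_pages: int = 10) -> set[str]:
        frontier, seen = deque([seed]), {seed}
        while frontier and len(seen) <= max_pages:
            url = frontier.popleft()
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except OSError:
                continue  # unreachable page; skip it
            # Naive link extraction, good enough for a sketch.
            for link in re.findall(r'href="(https?://[^"]+)"', html):
                if link not in seen:
                    seen.add(link)
                    frontier.append(link)
        return seen

    print(crawl("https://example.org"))
    ```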

    SWI-Prolog and the Web

    Where Prolog is commonly seen as a component in a Web application that is either embedded or communicates using a proprietary protocol, we propose an architecture in which Prolog communicates with other components of a Web application using the standard HTTP protocol. By avoiding embedding in external Web servers, development and deployment become much easier. To support this architecture, in addition to the transfer protocol, we must also support parsing, representing and generating the key Web document types such as HTML, XML and RDF. This paper motivates the design decisions in the libraries and extensions to Prolog for handling Web documents and protocols. The design has been guided by the requirement to handle large documents efficiently. The described libraries support a wide range of Web applications, ranging from HTML and XML documents to Semantic Web RDF processing. To appear in Theory and Practice of Logic Programming (TPLP).