
    Web service composition: A survey of techniques and tools

    Web services are a consolidated reality of the modern Web with tremendous, increasing impact on everyday computing tasks. They turned the Web into the largest, most accepted, and most vibrant distributed computing platform ever. Yet, the use and integration of Web services into composite services or applications, which is a highly sensitive and conceptually non-trivial task, has yet to unleash its full potential. A consolidated analysis framework that advances the fundamental understanding of Web service composition building blocks in terms of concepts, models, languages, productivity support techniques, and tools is required. This framework is necessary to enable effective exploration, understanding, assessment, comparison, and selection of service composition models, languages, techniques, platforms, and tools. This article establishes such a framework and reviews the state of the art in service composition from an unprecedented, holistic perspective.

    Data integration support for mashups

    Mashups are a new type of interactive web application, combining content from multiple services or sources at run-time. While many such mashups are being developed, most of them support rather simple data integration tasks. We therefore propose a framework for the development of more complex dynamic data integration mashups. The framework consists of components for query generation and online matching as well as for additional data transformation. Our architecture supports interactive and sequential result refinement to improve the quality of the presented result step-by-step by executing more elaborate queries when necessary. A script-based definition of mashups facilitates the development as well as the dynamic execution of mashups. We illustrate our approach by a powerful mashup implementation combining bibliographic data to dynamically calculate citation counts for venues and authors.
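
    The framework itself is not reproduced here. As a rough illustration of the kind of integration the example mashup performs, the following Python sketch joins two stubbed bibliographic sources and aggregates citation counts per venue and per author; the data, field names, and normalization step are assumptions made purely for illustration.

```python
# Sketch: a mashup-style join of two (stubbed) bibliographic sources to
# aggregate citation counts per venue and author. The data below is
# illustrative, not taken from the paper.

from collections import defaultdict

# Source A: publication metadata (e.g. from a bibliography service)
publications = [
    {"title": "Query Optimization Revisited", "venue": "VLDB",   "author": "A. Smith"},
    {"title": "Entity Matching at Scale",     "venue": "SIGMOD", "author": "B. Jones"},
    {"title": "Adaptive Joins",               "venue": "VLDB",   "author": "A. Smith"},
]

# Source B: citation counts keyed by normalized title (e.g. from a citation index)
citations = {
    "query optimization revisited": 120,
    "entity matching at scale": 45,
    "adaptive joins": 30,
}

def normalize(title: str) -> str:
    """Cheap online-matching step: case and whitespace normalization."""
    return " ".join(title.lower().split())

# Matching + transformation: look up each publication's citation count,
# then aggregate per venue and per author.
per_venue = defaultdict(int)
per_author = defaultdict(int)
for pub in publications:
    count = citations.get(normalize(pub["title"]), 0)
    per_venue[pub["venue"]] += count
    per_author[pub["author"]] += count

print(dict(per_venue))   # {'VLDB': 150, 'SIGMOD': 45}
print(dict(per_author))  # {'A. Smith': 150, 'B. Jones': 45}
```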

    Service composition for end-users

    RESTful services are becoming a popular technology for providing and consuming cloud services. The idea of cloud computing is based on on-demand services and their agile usage. This implies that personal service compositions and workflows should also be supported. Some approaches for RESTful service compositions have been proposed. In practice, such compositions typically take the form of mashup applications, which are composed in an ad-hoc manner. In addition, such approaches and tools are mainly targeted at programmers rather than end-users. In this paper, a user-driven approach for reusable RESTful service compositions is presented. Such compositions can be executed once, or they can be configured to be executed repeatedly, for example, to get the newest updates from a service once a week.
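
    A minimal sketch of what a reusable, user-configured RESTful composition might look like, assuming a declarative step list whose outputs feed later steps. The endpoints, step names, and stubbed HTTP helper are hypothetical; this is not the tool described in the paper.

```python
# Sketch: a reusable, declaratively described RESTful composition that an
# end-user tool could store and re-run (once, or e.g. weekly).
# Endpoints are hypothetical; http_get is stubbed so the example runs offline.

composition = {
    "name": "weekly_weather_digest",
    "steps": [
        {"id": "city",    "url": "https://api.example.org/profile/home_city"},
        {"id": "weather", "url": "https://api.example.org/weather?city={city}"},
    ],
}

def http_get(url: str) -> str:
    """Stand-in for a real HTTP GET (e.g. requests.get(url).text)."""
    return f"<payload of {url}>"

def run(composition: dict) -> dict:
    """Execute steps in order, substituting earlier results into later URLs."""
    results = {}
    for step in composition["steps"]:
        url = step["url"].format(**results)   # wire step outputs into inputs
        results[step["id"]] = http_get(url)
    return results

# Run once now; a scheduler (cron, APScheduler, ...) could re-run it weekly.
print(run(composition))
```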

    Proceedings of the First International Workshop on Mashup Personal Learning Environments

    Wild, F., Kalz, M., & Palmér, M. (Eds.) (2008). Proceedings of the First International Workshop on Mashup Personal Learning Environments (MUPPLE08). September 17, 2008, Maastricht, The Netherlands: CEUR Workshop Proceedings, ISSN 1613-0073. Available at http://ceur-ws.org/Vol-388. The work on this publication has been sponsored by the TENCompetence Integrated Project (funded by the European Commission's 6th Framework Programme, priority IST/Technology Enhanced Learning, Contract 027087 [http://www.tencompetence.org]) and partly sponsored by the LTfLL project (funded by the European Commission's 7th Framework Programme, priority ICT, Contract 212578 [http://www.ltfll-project.org]).

    Supporting End-User Development through a New Composition Model: An Empirical Study

    End-user development (EUD) is much hyped, and its impact has outstripped even the most optimistic forecasts. Even so, the vision of end users programming their own solutions has not yet materialized. This will continue to be so unless we in both industry and the research community set ourselves the ambitious challenge of devising, end to end, an end-user application development model for a new generation of EUD tools. We have embarked on this venture, and this paper presents the main insights and outcomes of our research and development efforts as part of a number of successful EU research projects. Our proposal aims not only to reshape software engineering to meet the needs of EUD but also to refashion its components as solution building blocks instead of programs and software developments. This way, end users will really be empowered to build solutions based on artefacts akin to their expertise and understanding of the ideal solution.

    Query-Time Data Integration

    Today, data is collected in ever increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources precludes up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if and in what context the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative concept to traditional up-front integration. It aims at enabling users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced through fully automated retrieval and mapping methods is compensated by answering those queries with ranked lists of alternative results. Each result is then based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. Firstly, we introduce a novel method for Top-k Entity Augmentation, which is able to construct a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state of the art by producing a set of individually consistent, but mutually diverse, alternative solutions, while minimizing the number of data sources used. Secondly, based on this novel augmentation method, we introduce the DrillBeyond system, which is able to process Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database. The original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality when compared to using separate systems for the two tasks. Finally, we study the management of large-scale dataset corpora such as data lakes or Open Data platforms, which are used as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for data curation systems managing such corpora, which aims at improving the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions form the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
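
    To make the Open World idea concrete, here is a minimal Python sketch: a query references an attribute (population) that the local schema does not define, and stubbed "web tables" supply candidate values, yielding a ranked list of alternative results. The scoring, data values, and table shapes are illustrative assumptions, not DrillBeyond's actual machinery.

```python
# Sketch: the flavour of an "Open World" query and query-time augmentation.
# web_tables and the ranking are stubbed to illustrate producing a top-k
# list of alternative answers; values are illustrative only.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE countries(name TEXT, gdp REAL)")
conn.executemany("INSERT INTO countries VALUES (?, ?)",
                 [("France", 2.9), ("Japan", 4.2)])

# The analyst's query references `population`, which the schema does not define:
#   SELECT name, gdp, population FROM countries ORDER BY population DESC;

# Candidate web data sources that might provide the missing attribute,
# already scored by some relevance/consistency measure (scores invented).
web_tables = [
    {"score": 0.9, "population": {"France": 68.0, "Japan": 124.0}},
    {"score": 0.7, "population": {"France": 67.8, "Japan": 125.7}},
]

def answer_open_world_query(k: int = 2):
    """Return up to k alternative results, one per candidate source."""
    base = conn.execute("SELECT name, gdp FROM countries").fetchall()
    alternatives = []
    for source in sorted(web_tables, key=lambda t: -t["score"])[:k]:
        rows = [(name, gdp, source["population"].get(name)) for name, gdp in base]
        alternatives.append({"score": source["score"], "rows": rows})
    return alternatives

for alt in answer_open_world_query():
    print(alt)
```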

    Programming patterns and development guidelines for Semantic Sensor Grids (SemSorGrid4Env)

    The web of Linked Data holds great potential for the creation of semantic applications that can combine self-describing structured data from many sources, including sensor networks. Such applications build upon the success of an earlier generation of 'rapidly developed' applications that utilised RESTful APIs. This deliverable details experience, best practice, and design patterns for developing high-level web-based APIs in support of semantic web applications and mashups for sensor grids. Its main contributions are a proposal for combining Linked Data with RESTful application development, summarised through a set of design principles, and the application of these design principles to Semantic Sensor Grids through the development of a High-Level API for Observations. These are supported by implementations of the High-Level API for Observations in software, and by example semantic mashups that utilise the API.
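
    As a small, hedged illustration of the REST-plus-Linked-Data style the deliverable advocates, the sketch below fetches an observation resource with content negotiation for JSON-LD. The endpoint URL and document shape are assumptions; the deliverable's actual High-Level API for Observations may differ.

```python
# Sketch: dereferenceable observation resources served with content
# negotiation. The URL and JSON-LD shape are hypothetical stand-ins.

import json
import urllib.request

OBSERVATIONS = "http://sensors.example.org/observations/latest"  # hypothetical

def fetch_observations(url: str) -> dict:
    """GET the resource, asking for a machine-readable Linked Data serialization."""
    req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A client could then follow links in the JSON-LD (e.g. @id values) to
# related resources such as the sensor or the observed feature of interest.
if __name__ == "__main__":
    doc = fetch_observations(OBSERVATIONS)
    print(doc.get("@context"), doc.get("@id"))
```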

    DevOps Continuous Integration: Moving Germany’s Federal Employment Agency Test System into Embedded In-Memory Technology

    This paper describes the development of a continuous integration database test architecture for a highly important and large software application in the public sector in Germany. We apply action design research and draw from two emerging areas of research – DevOps continuous integration practices and in-memory database development – to define the problem, design, build and implement the solution, analyze challenges encountered, and make adjustments. The result is the transformation of a large test environment originally based on Oracle databases into a flexible and fast embedded in-memory architecture. The main challenges involved overcoming the differences between the SQL specifications supported by the development and production systems and optimizing the test runtime performance. The paper contributes to theory and practice by presenting one of the first studies showing a real-world implementation of a successful database test architecture that enables continuous integration, and by identifying technical design principles for database test architectures in general.
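
    The pattern at the core of such an architecture can be sketched independently of the paper's concrete stack: run integration tests against an embedded in-memory engine created fresh per test run. The sketch below uses Python's sqlite3 purely as a stand-in; as the paper stresses, reconciling dialect differences with the production SQL (Oracle in this case) is where most of the effort goes.

```python
# Sketch: the general pattern of testing against an embedded in-memory
# database instead of a heavyweight production DBMS. sqlite3 is a stand-in,
# not the technology used in the paper.

import sqlite3
import unittest

def open_test_db() -> sqlite3.Connection:
    """Fresh in-memory schema per test run: fast, isolated, CI-friendly."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE claims(id INTEGER PRIMARY KEY, amount REAL)")
    return conn

class ClaimRepositoryTest(unittest.TestCase):
    def test_insert_and_total(self):
        conn = open_test_db()
        conn.executemany("INSERT INTO claims(amount) VALUES (?)", [(10.0,), (5.5,)])
        total, = conn.execute("SELECT SUM(amount) FROM claims").fetchone()
        self.assertAlmostEqual(total, 15.5)

if __name__ == "__main__":
    unittest.main()
```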

    Context-Aware Service Creation On The Semantic Web

    With the increase in the computational power of mobile devices, their new capabilities, and the addition of new context sensors, it is possible to obtain more information from mobile users and to offer new ways and tools to facilitate the content creation process. All this information can be exploited by service creators to provide mobile services with a higher degree of personalization, which translates into better experiences. Currently on the web, many data sources containing UGC provide access to it through classical web mechanisms (built on a small set of standards), that is, custom web APIs that promote the fragmentation of the Web. To address this issue, Tim Berners-Lee proposed the Linked Data principles to provide guidelines for the use of standard web technologies, thus allowing the publication of structured data on the Web that can be accessed using standard database mechanisms. The increase of Linked Data published on the web creates opportunities for mobile services to take advantage of it as a huge source of data, information and knowledge, either user-generated or not. This dissertation proposes a framework for creating mobile services that exploit context information, the content generated by their users, and the data, information and knowledge present on the Web of Data. In addition, we present the cases of different mobile services created to take advantage of these elements and in which the proposed framework has been implemented (at least partially). Each of these services belongs to a different domain, and each of them highlights the advantages provided to its end users.
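
    One way such a framework can combine device context with the Web of Data is to parameterize a SPARQL query with a context value such as the user's current city. The sketch below does this against DBpedia's public endpoint; the query shape and the points-of-interest scenario are illustrative assumptions, not the dissertation's actual services.

```python
# Sketch: using a mobile user's context (here, a city name) to parameterize a
# SPARQL query against the Web of Data. DBpedia is used only as a well-known
# public endpoint; results depend on the data it currently holds.

import json
import urllib.parse
import urllib.request

ENDPOINT = "https://dbpedia.org/sparql"

def museums_in(city_label: str, limit: int = 5):
    """Find museums located in the given city, as reported by DBpedia."""
    query = f"""
    SELECT ?museum ?name WHERE {{
      ?museum a dbo:Museum ;
              dbo:location ?city ;
              rdfs:label ?name .
      ?city rdfs:label "{city_label}"@en .
      FILTER (lang(?name) = "en")
    }} LIMIT {limit}
    """
    url = ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"})
    with urllib.request.urlopen(url) as resp:
        results = json.load(resp)
    return [b["name"]["value"] for b in results["results"]["bindings"]]

if __name__ == "__main__":
    # The city would normally come from the device's location sensors.
    print(museums_in("Berlin"))
```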

    Liquid stream processing on the web: a JavaScript framework

    The Web is rapidly becoming a mature platform to host distributed applications. Pervasive computing applications running on the Web are now common in the era of the Web of Things, which has made it increasingly simple to integrate sensors and microcontrollers in our everyday life. Such devices are of great interest to Makers with basic Web development skills. With them, Makers are able to build small smart stream processing applications with sensors and actuators without spending a fortune and without knowing much about the technologies they use. Thanks to ongoing Web technology trends enabling real-time peer-to-peer communication between Web-enabled devices, Web browsers and server-side JavaScript runtimes, developers are able to implement pervasive Web applications using a single programming language. These can take advantage of direct and continuous communication channels going beyond what was possible in the early stages of the Web to push data in real-time. Despite these recent advances, building stream processing applications on the Web of Things remains a challenging task. On the one hand, Web-enabled devices of different nature still have to communicate with different protocols. On the other hand, dealing with a dynamic, heterogeneous, and volatile environment like the Web requires developers to face issues like disconnections, unpredictable workload fluctuations, and device overload. To help developers deal with such issues, in this dissertation we present the Web Liquid Streams (WLS) framework, a novel streaming framework for JavaScript. Developers implement streaming operators written in JavaScript and may interactively and dynamically define a streaming topology. The framework takes care of deploying the user-defined operators on the available devices and connecting them using the appropriate data channel, removing the burden of dealing with different deployment environments from the developers. Changes in the semantics of the application and in its execution environment may be applied at runtime without stopping the stream flow. Like a liquid adapts its shape to that of its container, the Web Liquid Streams framework makes streaming topologies flow across multiple heterogeneous devices, enabling dynamic operator migration without disrupting the data flow. By constantly monitoring the execution of the topology with a hierarchical controller infrastructure, WLS takes care of parallelising the operator execution across multiple devices in case of bottlenecks, and of recovering the execution of the streaming topology in case one or more devices disconnect, by restarting lost operators on other available devices.
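
    To show the shape of a streaming topology in the abstract, the following sketch chains a producer, a filter, and a consumer using plain Python generators. It only illustrates the operator-pipeline concept; WLS itself is a JavaScript framework whose operators are deployed across Web-enabled devices and can be migrated and parallelised at runtime.

```python
# Sketch: the producer -> filter -> consumer shape of a streaming topology,
# reduced to plain Python generators. Purely conceptual; not the WLS API.

import random
import time

def temperature_source(n: int):
    """Producer operator: emit n simulated sensor readings."""
    for _ in range(n):
        yield {"t": time.time(), "celsius": random.uniform(15, 35)}

def threshold_filter(stream, limit: float = 30.0):
    """Intermediate operator: keep only readings above the threshold."""
    for reading in stream:
        if reading["celsius"] > limit:
            yield reading

def alert_sink(stream):
    """Consumer operator: act on whatever survives the upstream operators."""
    for reading in stream:
        print(f"ALERT {reading['celsius']:.1f} C at {reading['t']:.0f}")

# Wiring the topology; in WLS the framework would place each operator on a
# suitable device and connect the operators with the appropriate data channel.
alert_sink(threshold_filter(temperature_source(20)))
```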