
    QoS-Aware Middleware for Web Services Composition

    The paradigmatic shift from a Web of manual interactions to a Web of programmatic interactions driven by Web services is creating unprecedented opportunities for the formation of online Business-to-Business (B2B) collaborations. In particular, the creation of value-added services by composition of existing ones is gaining significant momentum. Since many available Web services provide overlapping or identical functionality, albeit with different Quality of Service (QoS), a choice needs to be made to determine which services are to participate in a given composite service. This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction, expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service. Two selection approaches are described and compared: one based on local (task-level) selection of services and the other based on global allocation of tasks to services using integer programming.
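    A minimal sketch of the two selection strategies contrasted above, with made-up candidate services, QoS attributes, weights and budget; the "global" variant below uses brute-force enumeration as a stand-in for the paper's integer program.

```python
from itertools import product

# Hypothetical candidates: task -> list of (service, price, response_time_ms)
candidates = {
    "t1": [("s1a", 6.0, 80.0), ("s1b", 2.0, 400.0)],
    "t2": [("s2a", 5.0, 60.0), ("s2b", 1.0, 250.0)],
}
MAX_TOTAL_PRICE = 8.0          # user-defined end-to-end constraint (assumed)
W_PRICE, W_TIME = 0.3, 0.7     # utility weights over QoS attributes (assumed)

def utility(price, rtime):
    # Smaller price / response time -> higher utility; normalization kept trivial.
    return -(W_PRICE * price + W_TIME * rtime / 100.0)

def local_selection():
    # Task-level selection: best service per task, ignoring global constraints.
    return {t: max(svcs, key=lambda s: utility(s[1], s[2]))[0]
            for t, svcs in candidates.items()}

def global_selection():
    # Stand-in for the integer program: enumerate full assignments and keep
    # the one with maximum total utility that satisfies the global budget.
    best, best_u = None, float("-inf")
    for combo in product(*candidates.values()):
        if sum(s[1] for s in combo) > MAX_TOTAL_PRICE:
            continue
        u = sum(utility(s[1], s[2]) for s in combo)
        if u > best_u:
            best, best_u = combo, u
    return {t: s[0] for t, s in zip(candidates, best)} if best else None

print(local_selection())   # picks the fastest services, violating the budget
print(global_selection())  # respects the end-to-end price budget
```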

    A Framework for QoS-aware Execution of Workflows over the Cloud

    The Cloud Computing paradigm is providing system architects with a new powerful tool for building scalable applications. Clouds allow allocation of resources on a "pay-as-you-go" model, so that additional resources can be requested during peak loads and released afterwards. However, this flexibility calls for appropriate dynamic reconfiguration strategies. In this paper we describe SAVER (qoS-Aware workflows oVER the Cloud), a QoS-aware algorithm for executing workflows involving Web Services hosted in a Cloud environment. SAVER allows execution of arbitrary workflows subject to response time constraints. SAVER uses a passive monitor to identify workload fluctuations based on the observed system response time. The information collected by the monitor is used by a planner component to identify the minimum number of instances of each Web Service which should be allocated in order to satisfy the response time constraint. SAVER uses a simple Queueing Network (QN) model to identify the optimal resource allocation. Specifically, the QN model is used to identify bottlenecks and predict the system performance as Cloud resources are allocated or released. The parameters used to evaluate the model are those collected by the monitor, which means that SAVER does not require any particular knowledge of the Web Services and workflows being executed. Our approach has been validated through numerical simulations, whose results are reported in this paper.
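    A rough sketch of the planner idea (not SAVER's actual QN model): each Web Service is approximated as a queueing station with a measured service demand and a shared arrival rate, its residence time is estimated as S / (1 - utilization), and instances are added at the current bottleneck until the predicted end-to-end response time meets the constraint. All numbers are made up for illustration.

```python
def predicted_response(demands, instances, lam):
    """Approximate end-to-end response time for a workflow visiting each station once."""
    total = 0.0
    for svc, s in demands.items():
        rho = lam * s / instances[svc]        # utilization with c instances
        if rho >= 1.0:
            return float("inf")               # station is saturated
        total += s / (1.0 - rho)              # crude residence-time estimate
    return total

def plan_instances(demands, lam, r_max, max_instances=100):
    instances = {svc: 1 for svc in demands}
    while predicted_response(demands, instances, lam) > r_max:
        # Grow the bottleneck: the station with the highest utilization.
        bottleneck = max(demands, key=lambda s: lam * demands[s] / instances[s])
        instances[bottleneck] += 1
        if instances[bottleneck] > max_instances:
            raise RuntimeError("constraint not satisfiable within limits")
    return instances

# Hypothetical measured demands (seconds per request) and workload.
demands = {"ws_a": 0.20, "ws_b": 0.05, "ws_c": 0.10}
print(plan_instances(demands, lam=12.0, r_max=1.5))   # e.g. {'ws_a': 3, 'ws_b': 1, 'ws_c': 2}
```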

    Flexible runtime support of business processes under rolling planning horizons

    This work has been motivated by the needs we discovered when analyzing real-world processes from the healthcare domain, which revealed high flexibility demands and complex temporal constraints. When trying to model these processes with existing languages, we learned that none of them was able to fully address these needs. This motivated us to design TConDec-R, a declarative process modeling language enabling the specification of complex temporal constraints. Enacting business processes based on declarative process models, however, introduces high complexity due to the required optimization of objective functions, the handling of various temporal constraints, the concurrent execution of multiple process instances, the management of cross-instance constraints, and complex resource allocations. Consequently, advanced user support through optimized schedules is required when executing the instances of such models. In previous work, we suggested a method for generating an optimized enactment plan for a given set of process instances created from a TConDec-R model. However, this approach was not applicable to scenarios with uncertain demands in which the enactment of newly created process instances starts continuously over time, as in the considered healthcare scenarios. Here, the process instances to be planned within a specific timeframe cannot be considered in isolation from the ones planned for future timeframes. To support such scenarios, this article significantly extends our previous work by generating optimized enactment plans under a rolling planning horizon. We evaluate the approach by applying it to a particularly challenging healthcare process scenario, i.e., the diagnostic procedures required for treating patients with ovarian carcinoma in a women's hospital. The application of the approach to this sophisticated scenario avoids constraint violations and effectively manages shared resources, which contributes to reducing the length of patient stays in the hospital. Funding: Ministerio de Economía y Competitividad TIN2016-76956-C3-2-R; Ministerio de Ciencia e Innovación PID2019-105455 GB-C3.
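    A schematic sketch of the rolling-horizon idea only, with hypothetical names; the constraint-based optimizer described in the article is replaced here by a trivial ordering. Each iteration plans the instances visible within the current window, freezes the work that starts before the next replanning point, and defers the rest to be replanned together with new arrivals.

```python
from collections import deque

def optimize(frozen, new_work):
    # Stand-in for the paper's optimizer (TConDec-R constraints, shared
    # resources): frozen work is kept first, new work ordered by release time.
    return frozen + sorted(new_work, key=lambda i: i["release"])

def rolling_horizon(arrivals, horizon=4, step=2, end=8):
    pending = deque(sorted(arrivals, key=lambda i: i["release"]))
    frozen = []                                   # committed in earlier windows
    for t in range(0, end, step):
        window = []
        while pending and pending[0]["release"] < t + horizon:
            window.append(pending.popleft())      # visible in this window
        plan = optimize(frozen, window)
        newly_frozen = [i for i in window if i["release"] < t + step]
        deferred = [i for i in window if i["release"] >= t + step]
        frozen = frozen + newly_frozen
        # Deferred instances are replanned (with new arrivals) next window.
        for i in sorted(deferred, key=lambda x: x["release"], reverse=True):
            pending.appendleft(i)
        print(f"t={t}: plan={[i['id'] for i in plan]}")

arrivals = [{"id": "p1", "release": 0}, {"id": "p2", "release": 1},
            {"id": "p3", "release": 3}, {"id": "p4", "release": 6}]
rolling_horizon(arrivals)
```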

    The EnMAP user interface and user request scenarios

    EnMAP (Environmental Mapping and Analysis Program) is a German hyperspectral satellite mission providing high quality hyperspectral image data on a timely and frequent basis. Its main objective is to investigate a wide range of ecosystem parameters encompassing agriculture, forestry, soil and geological environments, coastal zones and inland waters. The EnMAP Ground Segment will be designed, implemented and operated by the German Aerospace Center (DLR). The Applied Remote Sensing Cluster (DFD) at DLR is responsible for the establishment of a user interface. This paper provides details on the concept, design and functionality of the EnMAP user interface and a first analysis of potential user scenarios. The user interface consists of two online portals. The EnMAP portal (www.enmap.org) provides general EnMAP mission information. It is the central entry point for all international users interested in learning about the EnMAP mission, its objectives, status, data products and processing chains. The EnMAP Data Access Portal (EDAP) is the entry point for any EnMAP data requests and comprises a set of service functions offered to every registered user. Scientific users are able to task the EnMAP HSI for Earth observations by providing tasking parameters such as area, temporal aspects and allowed tilt angle. In the second part of this paper, different user scenarios based on the previously explained tasking parameters are presented and discussed in terms of their feasibility for scientific projects. For that purpose, a prototype of the observation planning tool enabling visualization of different user request scenarios was developed. It can be shown that the number of data takes in a certain period of time increases with the latitude of the observation area. Further, the observation area can vary with the tilt angle of the satellite. Such findings can be crucial for the planning of remote sensing based projects, especially for those investigating ecosystem gradients in the time domain.
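    A purely illustrative sketch of how the tasking parameters mentioned above (area of interest, temporal window, allowed tilt angle) could be captured in a request object; the field names and values are assumptions, not EDAP's actual request schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TaskingRequest:
    # Illustrative fields only; not the real EDAP interface.
    area_of_interest: list[tuple[float, float]]   # polygon as (lat, lon) pairs
    start: date                                    # temporal window start
    end: date                                      # temporal window end
    max_tilt_deg: float                            # allowed across-track tilt
    revisit_days: int                              # desired repeat coverage

request = TaskingRequest(
    area_of_interest=[(48.1, 11.5), (48.1, 11.9), (47.9, 11.9), (47.9, 11.5)],
    start=date(2024, 5, 1),
    end=date(2024, 8, 31),
    max_tilt_deg=15.0,
    revisit_days=4,
)
print(request)
```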

    Cognitively-inspired Agent-based Service Composition for Mobile & Pervasive Computing

    Automatic service composition in mobile and pervasive computing faces many challenges due to the complex and highly dynamic nature of the environment. Common approaches treat service composition as a decision problem whose solution is usually addressed from optimization perspectives that are not feasible in practice due to the intractability of the problem, the limited computational resources of smart devices, the mobility of service hosts, and time constraints on tailoring composition plans. Thus, our main contribution is the development of a cognitively-inspired agent-based service composition model focused on bounded rationality rather than optimality, which allows the system to compensate for limited resources by selectively filtering out continuous streams of data. Our approach exhibits features such as distributedness, modularity, emergent global functionality, and robustness, which endow it with capabilities to perform decentralized service composition by orchestrating manifold service providers and conflicting goals from multiple users. The evaluation of our approach shows promising results when compared against state-of-the-art service composition models. Comment: This paper will appear at AIMS'19 (International Conference on Artificial Intelligence and Mobile Services) on June 2
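    A toy sketch of the "selectively filtering out continuous streams of data" idea under bounded rationality (not the paper's cognitive model): the agent attends only to service advertisements whose relevance to its current goal crosses a threshold, and stops once a small attention budget is exhausted, instead of optimizing over every advertised service.

```python
def relevance(advert, goal):
    # Crude overlap between advertised capabilities and goal requirements.
    return len(advert["capabilities"] & goal["requires"]) / len(goal["requires"])

def attend(stream, goal, threshold=0.5, budget=3):
    shortlist = []
    for advert in stream:                      # continuous advertisement stream
        if relevance(advert, goal) >= threshold:
            shortlist.append(advert)
        if len(shortlist) == budget:           # bounded attention budget
            break
    return shortlist

goal = {"requires": {"gps", "camera"}}
stream = [
    {"id": "svc1", "capabilities": {"gps"}},
    {"id": "svc2", "capabilities": {"camera", "gps"}},
    {"id": "svc3", "capabilities": {"audio"}},
]
print([a["id"] for a in attend(stream, goal)])   # ['svc1', 'svc2']
```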

    A Web 2.0 and OGC Standards Enabled Sensor Web Architecture for Global Earth Observing System of Systems

    This paper will describe the progress of a 3-year research award from the NASA Earth Science Technology Office (ESTO) that began October 1, 2006, in response to a NASA Announcement of Research Opportunity on the topic of sensor webs. The key goal of this research is to prototype an interoperable sensor architecture that will enable interoperability between a heterogeneous set of space-based, Unmanned Aerial System (UAS)-based and ground-based sensors. Among the key capabilities being pursued is the ability to automatically discover and task the sensors via the Internet and to automatically discover and assemble the necessary science processing algorithms into workflows in order to transform the sensor data into valuable science products. Our first set of sensor web demonstrations will prototype science products useful in managing wildfires and will use such assets as the Earth Observing 1 spacecraft, managed out of NASA/GSFC, a UAS-based instrument, managed out of Ames, and some automated ground weather stations, managed by the Forest Service. Also, we are collaborating with some of the other ESTO awardees to expand this demonstration and create synergy between our research efforts. Finally, we are making use of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) suite of standards and some Web 2.0 capabilities to leverage emerging technologies and standards. This research will demonstrate and validate a path for rapid, low-cost sensor integration which is not tied to a particular system, and which can thus absorb new assets in an easily evolvable, coordinated manner. This in turn will help to facilitate the United States' contribution to the Global Earth Observation System of Systems (GEOSS), as agreed by the U.S. and 60 other countries at the third Earth Observation Summit held in February of 2005.
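    A hypothetical illustration of the "assemble processing algorithms into workflows" idea: chain registered algorithms whose declared inputs can already be satisfied, starting from raw sensor observations. This is not the OGC SWE interface itself; the algorithm names and products are invented for the wildfire example.

```python
# Hypothetical registry of processing algorithms with declared inputs/outputs.
algorithms = [
    {"name": "radiometric_correction", "in": {"raw_scene"}, "out": "radiance"},
    {"name": "cloud_mask",             "in": {"radiance"},  "out": "clear_pixels"},
    {"name": "burn_scar_index",        "in": {"radiance", "clear_pixels"},
                                       "out": "burn_scar_map"},
]

def assemble(available, target):
    """Greedily chain algorithms until the target product can be produced."""
    workflow = []
    remaining = algorithms[:]
    while target not in available and remaining:
        progress = False
        for alg in list(remaining):
            if alg["in"] <= available:          # all inputs are satisfied
                workflow.append(alg["name"])
                available.add(alg["out"])
                remaining.remove(alg)
                progress = True
        if not progress:
            raise RuntimeError("no algorithm can extend the workflow")
    return workflow

print(assemble({"raw_scene"}, "burn_scar_map"))
# ['radiometric_correction', 'cloud_mask', 'burn_scar_index']
```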

    Adaptive Process Management in Cyber-Physical Domains

    The increasing application of process-oriented approaches in new challenging cyber-physical domains beyond business computing (e.g., personalized healthcare, emergency management, factories of the future, home automation, etc.) has led to a reconsideration of the level of flexibility and support required to manage complex processes in such domains. A cyber-physical domain is characterized by the presence of a cyber-physical system coordinating heterogeneous ICT components (PCs, smartphones, sensors, actuators) and involving real-world entities (humans, machines, agents, robots, etc.) that perform complex tasks in the “physical” real world to achieve a common goal. The physical world, however, is not entirely predictable, and processes enacted in cyber-physical domains must be robust to unexpected conditions and adaptable to unanticipated exceptions. This demands a more flexible approach to process design and enactment, recognizing that in real-world environments it is not adequate to assume that all possible recovery activities can be predefined for dealing with the exceptions that can ensue. In this chapter, we tackle the above issue and propose a general approach, a concrete framework and a process management system implementation, called SmartPM, for automatically adapting processes enacted in cyber-physical domains in case of unanticipated exceptions and exogenous events. The adaptation mechanism provided by SmartPM is based on declarative task specifications, execution monitoring for detecting failures and context changes at run-time, and automated planning techniques to self-repair the running process, without requiring any specific adaptation policy or exception handler to be predefined at design-time.
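    A schematic of the monitor/repair loop described above, not SmartPM's actual implementation: after each task the observed physical state is compared with the state expected by the process model, and on a mismatch a planner stand-in synthesizes a recovery sequence that restores the expected state. All task names and state variables are invented for illustration.

```python
def execute(task, state):
    # Apply the task's declared effects to the current state.
    state = dict(state)
    state.update(task["effects"])
    return state

def plan_recovery(observed, expected):
    # Stand-in for the automated planner: one repair task per deviating variable.
    return [{"name": f"restore_{k}", "effects": {k: v}}
            for k, v in expected.items() if observed.get(k) != v]

def run(process, state):
    for task in process:
        expected = execute(task, state)
        observed = dict(expected)
        observed.update(task.get("exogenous", {}))   # simulate a disturbance
        if observed != expected:                     # execution monitoring
            print(f"deviation after {task['name']}, repairing...")
            for repair in plan_recovery(observed, expected):
                observed = execute(repair, observed)
        state = observed
    return state

process = [
    {"name": "move_robot", "effects": {"at": "area2"},
     "exogenous": {"at": "area1"}},                  # robot got stuck
    {"name": "take_photo", "effects": {"photo": True}},
]
print(run(process, {"at": "area1", "photo": False}))
```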