92 research outputs found

    Eliciting the End-to-End Behavior of SOA Applications in Clouds

    Availability and performance are key issues in SOA cloud applications. Such applications can be represented as a graph spanning multiple cloud and on-premises environments, forming a very complex computing system that supports growing numbers and types of users, business transactions, and usage scenarios. To rapidly find, predict, and proactively prevent the root causes of issues such as performance degradations and runtime errors, we developed a monitoring solution that elicits the end-to-end behavior of these applications. We insert lightweight components into SOA frameworks and clients, thereby keeping the monitoring overhead minimal. Monitoring data collected from call chains is used to diagnose issues related to performance, errors and alerts, as well as business and IT transactions.

    Service-Oriented Data Mining


    ACHIEVING AUTONOMIC SERVICE ORIENTED ARCHITECTURE USING CASE BASED REASONING

    Service-Oriented Architecture (SOA) enables the composition of large and complex computational units out of available atomic services. However, because of its dynamic nature, implementing SOA raises challenges in service discovery, service interaction, service composition, robustness, etc. In the near future, SOA will often need to dynamically re-configure and re-organize its topologies of interaction between web services in response to unpredictable events, such as crashes or network problems, which cause service unavailability. The complexity and dynamism of current and future global network systems require a service architecture that is capable of autonomously changing its structure and functionality to meet dynamic changes in requirements and environment with little human intervention. This motivates the research described throughout this thesis. This thesis contributes and elaborates the idea of introducing autonomy and adapting case-based reasoning into SOA in order to extend the intelligence and capability of SOA. This is done by proposing the architecture of an autonomic SOA framework based on case-based reasoning and the architectural considerations of the autonomic computing paradigm, followed by developing and analyzing formal models of the proposed architecture using Petri nets. The framework is also tested and analyzed through case studies, simulation, and prototype development. The case studies show the feasibility of employing case-based reasoning and autonomic computing in the SOA domain, and the simulation results suggest that doing so would increase the intelligence, capability, usability, and robustness of SOA. It was shown that SOA can be improved to cope with dynamic environments and service unavailability by incorporating case-based reasoning and the autonomic computing paradigm to monitor and analyze events and service requests, and then to plan and execute appropriate actions using the knowledge stored in a knowledge base.
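The monitor-analyze-plan-execute cycle backed by a case base that the abstract describes can be sketched roughly as follows. This is an illustrative toy, not the thesis's actual framework; the class names, the symptom-overlap similarity measure, and the stored cases are all invented for the example.

```python
# Hypothetical sketch of an autonomic cycle that reuses past remedial actions
# retrieved from a case base by symptom similarity. All names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    symptoms: frozenset   # observed event features, e.g. {"timeout", "svc:payment"}
    action: str           # remedial action that worked before

class CaseBase:
    def __init__(self):
        self._cases = []

    def add(self, case: Case):
        self._cases.append(case)

    def retrieve(self, symptoms: set) -> Optional[Case]:
        # Nearest case by symptom overlap (a deliberately crude similarity measure).
        scored = [(len(c.symptoms & symptoms), c) for c in self._cases]
        scored.sort(key=lambda t: t[0], reverse=True)
        return scored[0][1] if scored and scored[0][0] > 0 else None

def autonomic_cycle(event: set, kb: CaseBase) -> str:
    case = kb.retrieve(event)          # analyze: find a similar past case
    if case is None:
        return "escalate-to-operator"  # no applicable knowledge: fall back
    return case.action                 # plan/execute: reuse the stored action

kb = CaseBase()
kb.add(Case(frozenset({"timeout", "svc:payment"}), "reroute-to-replica"))
kb.add(Case(frozenset({"crash"}), "restart-service"))

print(autonomic_cycle({"timeout", "svc:payment"}, kb))  # reroute-to-replica
print(autonomic_cycle({"disk-full"}, kb))               # escalate-to-operator
```

A real system would also retain new cases after execution (the "retain" phase of case-based reasoning), growing the knowledge base as events are handled.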

    Where are the passengers? A Grid-Based Gaussian Mixture Model for taxi bookings

    National Research Foundation (NRF) Singapore under International Research Centre @ Singapore Funding Initiative

    Flexible Service Provisioning for Heterogeneous Sensor Networks

    This paper presents Servilla, a highly flexible service provisioning framework for heterogeneous wireless sensor networks. Its service-oriented programming model and middleware enable developers to construct platform-independent applications over a dynamic set of devices with diverse computational resources and sensors. A salient feature of Servilla is its support for dynamic discovery and binding to local and remote services, which enables flexible and energy-efficient in-network collaboration among heterogeneous devices. Furthermore, Servilla provides a modular middleware architecture that can be easily tailored for devices with a wide range of resources, allowing even resource-limited devices to provide services and leverage resource-rich devices for in-network processing. Microbenchmarks demonstrate the efficiency of Servilla's middleware, and an application case study for structural health monitoring on a heterogeneous testbed consisting of TelosB and Imote2 nodes demonstrates the efficacy of its programming model. This paper is replaced by tech report WUCSE-2009-2
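The dynamic-binding idea can be pictured with a small sketch: a task declares a required service by name and the middleware binds it to the cheapest matching provider, local or remote. This is not Servilla's actual API; the registry contents, cost metric, and node names are invented for illustration.

```python
# Illustrative service registry for a heterogeneous sensor network.
# Each entry advertises a service, its hosting node, and an (assumed) energy cost.
SERVICES = [
    {"name": "accel-sample", "node": "telosb-3", "remote": True,  "cost": 5},
    {"name": "accel-sample", "node": "imote2-1", "remote": False, "cost": 2},
    {"name": "fft",          "node": "imote2-1", "remote": True,  "cost": 8},
]

def bind(service_name: str):
    """Dynamic binding: choose the matching provider with the lowest energy cost."""
    matches = [s for s in SERVICES if s["name"] == service_name]
    return min(matches, key=lambda s: s["cost"]) if matches else None

provider = bind("accel-sample")
print(provider["node"])  # imote2-1 (the local provider is cheaper)
```

Re-running `bind` as nodes appear and disappear is what makes the binding dynamic: the same application code keeps working as the provider set changes.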

    Services State of Play - Compliance Testing and Interoperability Checking

    The document contains an inventory of existing solutions for compliance testing and interoperability checking of services, the assumption being that the services are web services. Even if the emphasis is on geographical information and therefore on Geographical Information Systems, the document describes applicable solutions outside the Geographical Information System domain. JRC.H.6-Spatial data infrastructure

    A Comparative Analysis of Machine Learning Techniques For Foreclosure Prediction

    The recent decline in the U.S. economy was accompanied by an increase in foreclosure rates starting in 2007. Though the earliest figures for 2009-2010 indicate a significant decrease, foreclosure of homes in the U.S. is still at an alarming level (Gutierrez, 2009a). Recent research at the University of Michigan suggested that many foreclosures could have been averted had there been a predictive system that did not rely solely on credit scores and loan-to-value ratios (DeGroat, 2009). Furthermore, Grover, Smith & Todd (2008) contend that foreclosure prediction can enhance the efficiency of foreclosure mitigation by facilitating the allocation of resources to areas where predicted foreclosure rates will be high. The primary goal of this dissertation was to develop a foreclosure prediction model that builds upon established bankruptcy and credit scoring models. The study utilized and compared the predictive accuracy of three supervised machine learning (ML) techniques applied to mortgage data: (ML1) classification trees, (ML2) support vector machines (SVM), and (ML3) genetic programming. The data used for the study comprises mortgage data, demographic metrics, and certain macro-economic indicators that are available at the inception of the loan. The hypothesis of the study was based on the assumption that foreclosure rates, and associated actions, depend on critical demographic (age, gender), economic (per capita income, inflation), and regional (predatory lending, unemployment index) variables. The task of the machine learning techniques was to identify a function that well approximates the relationship between these explanatory variables and the binary outcome of interest (mortgage status three years from inception). The predictive accuracy of ML1 through ML3 was significantly better than expected given the size of the record set (1,000 records) and the number of input variables (~110). Each ML technique achieved classification accuracy better than 75%, with ML3 scoring in the upper 90s. Given such high scores, it was concluded that the hypothesis was satisfied and that ML techniques are suitable for prediction tasks in this problem domain
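The comparison protocol the abstract describes, training several classifiers on the same labelled records and ranking them by held-out accuracy, can be sketched in a few lines. The synthetic data and the two toy classifiers below are invented for illustration; the dissertation itself used classification trees, SVMs, and genetic programming on real mortgage records.

```python
# Minimal, self-contained sketch of comparing classifiers by held-out accuracy.
import random

random.seed(0)

# Synthetic "mortgage" records: (loan_to_value, unemployment_index) -> foreclosed?
def make_record():
    ltv, unemp = random.random(), random.random()
    return (ltv, unemp), int(ltv + unemp > 1.0)   # simple invented ground-truth rule

train = [make_record() for _ in range(200)]
test = [make_record() for _ in range(100)]

def stump(x):                 # technique 1: a one-feature threshold rule
    return int(x[0] > 0.5)

def nearest_neighbour(x):     # technique 2: 1-nearest-neighbour over the training set
    _, label = min(train, key=lambda r: (r[0][0] - x[0])**2 + (r[0][1] - x[1])**2)
    return label

def accuracy(clf):
    return sum(clf(x) == y for x, y in test) / len(test)

scores = {"stump": accuracy(stump), "1-NN": accuracy(nearest_neighbour)}
best = max(scores, key=scores.get)
print(scores, "best:", best)
```

A real study would add cross-validation and significance testing before concluding, as the dissertation does, that one technique outperforms the others.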

    A Model for Scientific Workflows with Parallel and Distributed Computing

    In the last decade we witnessed an immense evolution of computing infrastructures in terms of processing, storage, and communication. On one hand, developments in hardware architectures have made it possible to run multiple virtual machines on a single physical machine. On the other hand, the increase in available network bandwidth has enabled the widespread use of distributed computing infrastructures, for example those based on clusters, grids, and clouds. These factors have enabled different scientific communities to aim at developing and implementing complex scientific applications, possibly involving large amounts of data. However, due to their structural complexity, such applications require decomposition models that allow multiple tasks to run in parallel and distributed environments. The scientific workflow concept arises naturally as a way to model applications composed of multiple activities, and in the past decades many initiatives have been undertaken to model application development using the workflow paradigm, both in business and in scientific domains. Despite such intensive efforts, current scientific workflow systems and tools still have limitations, which pose difficulties for the development of emerging large-scale, distributed, and dynamic applications. This dissertation proposes the AWARD model for scientific workflows with parallel and distributed computing. AWARD is an acronym for Autonomic Workflow Activities Reconfigurable and Dynamic. The AWARD model has the following main characteristics. It is based on a decentralized execution control model in which multiple autonomic workflow activities interact by exchanging tokens through input and output ports. The activities can be executed separately in diverse computing environments, such as on a single computer or on multiple virtual machines running on distributed infrastructures such as clusters and clouds. It provides basic workflow patterns for parallel and distributed application decomposition, as well as other useful patterns supporting feedback loops and load balancing. The model can express applications based on a finite or infinite number of iterations, thus allowing long-running workflows, which are typical in scientific experimentation, to be modeled. A distinctive contribution of the AWARD model is its support for dynamic reconfiguration of long-running workflows. A dynamic reconfiguration makes it possible to modify the structure of the workflow, for example to introduce new activities or to modify the connections between activity input and output ports. The activity behavior can also be modified, for example by dynamically replacing the activity algorithm. In addition to proposing a new workflow model, this dissertation presents the implementation of a fully functional software architecture that supports the AWARD model. The implemented prototype was used to validate and refine the model across multiple workflow scenarios, with experimental results clearly demonstrating the advantages of the major characteristics and contributions of the AWARD model. The prototype was also used to develop application cases, such as a workflow supporting the implementation of the MapReduce model and a workflow supporting a text mining application developed by an external user. The extensive experimental work confirmed the adequacy of the AWARD model and its implementation for developing applications that exploit parallelism and distribution using the scientific workflow paradigm
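The execution style described above, autonomic activities connected port-to-port, each consuming a token from its input port, applying its algorithm, and emitting a token on its output port, can be rendered schematically as follows. This is a single-process toy, not AWARD itself: in AWARD the activities run independently on distributed machines, whereas here the ports are plain in-memory queues and the run is sequential.

```python
# Schematic token-passing pipeline of "activities" with input/output ports.
from collections import deque

class Activity:
    def __init__(self, name, algorithm):
        self.name = name
        self.algorithm = algorithm   # replaceable at run time (dynamic reconfiguration)
        self.inp = deque()           # input port
        self.out = None              # output port (another activity's input, or a sink)

    def step(self):
        if self.inp:
            token = self.inp.popleft()
            result = self.algorithm(token)
            if self.out is not None:
                self.out.append(result)

# A two-activity pipeline: square -> negate -> sink
square = Activity("square", lambda t: t * t)
negate = Activity("negate", lambda t: -t)
square.out = negate.inp
sink = deque()
negate.out = sink

for token in [1, 2, 3]:
    square.inp.append(token)
while square.inp or negate.inp:
    square.step()
    negate.step()

print(list(sink))  # [-1, -4, -9]

# Dynamic reconfiguration in miniature: swap an activity's algorithm at run time.
negate.algorithm = lambda t: t + 1
```

Because each activity only touches its own ports, the same wiring generalizes to activities running on separate machines with the queues replaced by network channels, which is the decentralized control the AWARD model is built on.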