
    Enabling IoT stream management in multi-cloud environment by orchestration

    Everyday life is becoming increasingly instrumented by electronic devices and all kinds of computer-based (distributed) services. As a result, organizations need to analyse enormous amounts of data in order to increase their revenues or improve their services. However, setting up a private infrastructure to execute analytics over Big Data is still expensive. Exploiting Cloud infrastructure for IoT stream management is appealing because of the cost reductions and the potential of storage, network and computing resources. The Cloud can considerably reduce the cost of analysing data from different sources, opening analytics to large data stores in a multi-cloud environment. However, creating and executing this kind of service is very complex, since different resources have to be provisioned and coordinated depending on users' needs. Orchestration is a solution to this problem, but it requires proper languages and methodologies for automatic composition and execution. In this work we propose a methodology for composing services used to analyse different IoT streams and, more generally, Big Data sources; in particular, we present an orchestration language able to describe composite services and resources in a multi-cloud environment.
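    To make the composition idea concrete, the sketch below models a composite analytics pipeline whose steps are provisioned on different clouds and executed in dependency order. The service names, cloud identifiers and the Orchestrator class are illustrative assumptions, not the orchestration language proposed in the paper.

```python
# Hypothetical sketch: composing IoT-stream analytics services across clouds.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    cloud: str                                   # cloud provider hosting this step (assumed)
    depends_on: list = field(default_factory=list)

class Orchestrator:
    def __init__(self):
        self.services = {}

    def add(self, service: Service):
        self.services[service.name] = service

    def run(self):
        """Provision and 'execute' services in dependency order across clouds."""
        done = set()
        while len(done) < len(self.services):
            for svc in self.services.values():
                if svc.name not in done and all(d in done for d in svc.depends_on):
                    print(f"provisioning '{svc.name}' on {svc.cloud}")
                    done.add(svc.name)

# Example composition: ingest and aggregate on one cloud, analyse on another.
orch = Orchestrator()
orch.add(Service("ingest-sensor-stream", cloud="cloud-A"))
orch.add(Service("aggregate-windows", cloud="cloud-A", depends_on=["ingest-sensor-stream"]))
orch.add(Service("batch-analytics", cloud="cloud-B", depends_on=["aggregate-windows"]))
orch.run()
```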

    A Cross-layer Monitoring Solution based on Quality Models

    In order to implement cross-organizational workflows and to realize collaborations between small and medium enterprises (SMEs), the use of Web service technology, Service-Oriented Architecture and Infrastructure-as-a-Service (IaaS) has become a necessity. Based on these technologies, the need to monitor the quality of (a) the acquired resources, (b) the services offered to the final users and (c) the workflow-based procedures that SMEs use to consume those services has come to the fore. To address this need, we propose four metric Quality Models that cover quality terms for the Workflow, Service and Infrastructure layers, plus an additional one expressing the equality and inter-dependency relations between them. To support these models, we have implemented a cross-layer monitoring system whose main advantages are its layer-specific metric aggregators and an event pattern discoverer for processing the monitoring log. Our evaluation focuses on the performance and accuracy of the proposed cross-layer monitoring system.
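    As a rough illustration of layer-specific aggregation, the sketch below groups raw monitoring events per layer and metric, averages them, and applies a simple inter-layer dependency rule. The metric names, threshold and weighting are assumptions for the example, not the paper's quality models.

```python
# Hypothetical sketch of layer-specific metric aggregation across layers.
from statistics import mean

raw_events = [
    {"layer": "infrastructure", "metric": "cpu_util", "value": 0.62},
    {"layer": "infrastructure", "metric": "cpu_util", "value": 0.71},
    {"layer": "service", "metric": "response_time_ms", "value": 180},
    {"layer": "service", "metric": "response_time_ms", "value": 210},
    {"layer": "workflow", "metric": "task_duration_s", "value": 4.2},
]

def aggregate(events):
    """Group raw monitoring events per (layer, metric) and average them."""
    grouped = {}
    for e in events:
        grouped.setdefault((e["layer"], e["metric"]), []).append(e["value"])
    return {key: mean(values) for key, values in grouped.items()}

per_layer = aggregate(raw_events)
print(per_layer)

# A toy inter-layer dependency rule: flag the service layer when the averaged
# infrastructure CPU utilisation exceeds an (assumed) threshold.
if per_layer[("infrastructure", "cpu_util")] > 0.65:
    print("service-layer response time likely degraded by infrastructure load")
```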

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services and achieve reasonable QoS levels. Furthermore, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically, and the distribution of services must change in response to changes in load. To address this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios.
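    A minimal sketch of the federation idea follows: a broker that places each incoming application request on whichever data center currently has the most spare capacity, so that the distribution of services shifts as local load grows. The data-center names, capacities and placement rule are assumptions for illustration, not the InterCloud architecture itself.

```python
# Illustrative broker for placing requests across a federation of data centers.
data_centers = {
    "dc-europe": {"capacity": 100, "load": 60},
    "dc-us": {"capacity": 120, "load": 40},
    "dc-asia": {"capacity": 90, "load": 70},
}

def place(demand):
    """Pick the data center with the most free capacity that fits the demand."""
    candidates = [
        (name, dc["capacity"] - dc["load"])
        for name, dc in data_centers.items()
        if dc["capacity"] - dc["load"] >= demand
    ]
    if not candidates:
        return None  # no single site can host it; a federation would scale out here
    name, _ = max(candidates, key=lambda c: c[1])
    data_centers[name]["load"] += demand
    return name

# A burst of requests is spread across the federation as local load grows.
for demand in [30, 30, 30]:
    print(demand, "->", place(demand))
```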

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only can individual applications be hosted on virtual cloud infrastructures, but also complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
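    A hedged sketch of the kind of elasticity loop such a system needs is shown below: the pool of worker VMs is scaled with the number of queued process steps. The threshold, per-VM capacity and bounds are illustrative assumptions, not taken from the surveyed systems.

```python
# Toy resource-allocation rule for elastic processes: size the VM pool to the queue.
VM_CAPACITY = 10        # process steps one VM can run concurrently (assumed)
MIN_VMS, MAX_VMS = 1, 8

def target_vms(queued_steps):
    """Return enough VMs for the queued steps, clamped to the allowed range."""
    needed = -(-queued_steps // VM_CAPACITY)   # ceiling division
    return max(MIN_VMS, min(MAX_VMS, needed))

# Simulated workload over a few monitoring intervals.
for queued in [4, 35, 90, 120, 12]:
    print(f"queued={queued:3d} -> run {target_vms(queued)} VM(s)")
```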

    An Approach for Securing Cloud-Based Wide Area Monitoring of Smart Grid Systems

    The computing power and flexibility provided by cloud technologies represent an opportunity for Smart Grid applications in general, and for Wide Area Monitoring Systems in particular. Even though the cloud model is considered efficient for Smart Grids, it imposes stringent constraints in terms of security and reliability. An attack on the integrity or confidentiality of data may have a devastating impact on the system itself and on the surrounding environment. The main security risk comes from malicious insiders, i.e., malevolent employees with privileged access to the hosting machines. In this paper, we evaluate a powerful hardening approach that could be leveraged to protect synchrophasor data processed at cloud level. In particular, we propose the use of homomorphic encryption to address risks related to malicious insiders. Our goal is to estimate the feasibility of such a security solution by verifying its compliance with the frame rate requirements typical of synchrophasor standards.
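    To show why additive homomorphic encryption fits this setting, the toy Paillier-style sketch below lets the cloud sum encrypted measurements without ever seeing the plaintext. The key sizes are deliberately tiny and the scheme and parameters actually evaluated in the paper may differ.

```python
# Minimal Paillier-style additively homomorphic encryption (toy parameters only).
import math, random

p, q = 2039, 2063                 # toy primes; real deployments need ~2048-bit keys
n = p * q
n_sq = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)              # valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

# Phasor magnitudes (scaled to integers) are encrypted before leaving the substation;
# the cloud multiplies ciphertexts, which adds the underlying plaintexts.
readings = [2301, 2310, 2295]     # e.g. 230.1 V, 231.0 V, 229.5 V
ciphertexts = [encrypt(v) for v in readings]
encrypted_sum = math.prod(ciphertexts) % n_sq
assert decrypt(encrypted_sum) == sum(readings)
print("decrypted sum:", decrypt(encrypted_sum))
```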

    An Algorithm for Network and Data-aware Placement of Multi-Tier Applications in Cloud Data Centers

    Today's Cloud applications are dominated by composite applications comprising multiple computing and data components with strong communication correlations among them. Although Cloud providers are deploying large numbers of computing and storage devices to address the ever increasing demand for computing and storage resources, network resource demands are emerging as a key performance bottleneck. This paper addresses network-aware placement of the virtual components (computing and data) of multi-tier applications in data centers and formally defines the placement as an optimization problem. The simultaneous placement of Virtual Machines and data blocks aims at reducing the network overhead of the data center network infrastructure. A greedy heuristic is proposed for on-demand application component placement that localizes network traffic in the data center interconnect. Such optimization helps reduce the communication overhead in upper-layer network switches, which eventually reduces the overall traffic volume across the data center. This, in turn, helps reduce packet transmission delay, increase network performance, and minimize the energy consumption of network components. Experimental results demonstrate the performance superiority of the proposed algorithm: it outperforms the state-of-the-art network-aware application placement algorithm across all performance metrics, reducing the average network cost by up to 67% and network usage at core switches by up to 84%, while increasing the average number of application deployments by up to 18%.
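    The sketch below illustrates a greedy heuristic in the same spirit: the most heavily communicating (VM, data block) pairs are handled first, and each component prefers a host that already holds its partner so traffic stays local. The host names, capacities and traffic volumes are assumed for the example and are not the paper's algorithm.

```python
# Illustrative greedy, network-aware co-location of VMs and data blocks.
hosts = {"rack1-host1": 4, "rack1-host2": 4, "rack2-host1": 4}   # free slots per host
placement = {}

# (component_a, component_b, expected traffic between them)
traffic = [
    ("web-vm", "app-vm", 120),
    ("app-vm", "db-block", 300),
    ("app-vm", "cache-vm", 80),
]

def place(component, prefer=None):
    """Place a component, co-locating it with its partner when possible."""
    if component in placement:
        return
    if prefer and hosts.get(prefer, 0) > 0:
        host = prefer
    else:
        host = max(hosts, key=hosts.get)          # otherwise take the emptiest host
    hosts[host] -= 1
    placement[component] = host

# Greedy pass: heaviest communication pairs are co-located first.
for a, b, _ in sorted(traffic, key=lambda t: -t[2]):
    place(a, prefer=placement.get(b))
    place(b, prefer=placement.get(a))

print(placement)
```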