
    Performance analysis of multi-institutional data sharing in the Clouds4Coordination system

    Cloud computing is used extensively in Architecture/Engineering/Construction (AEC) projects for storing data and running simulations on building models (e.g. energy efficiency, environmental impact). With the emergence of multi-Clouds it has become possible to link such systems and create a distributed cloud environment. A multi-Cloud environment enables each organisation involved in a collaborative project to maintain its own computational infrastructure and system (with the associated data), rather than having to migrate to a single cloud environment. Such an infrastructure becomes effective when multiple individuals and organisations work collaboratively, enabling each individual or organisation to select a computational infrastructure that most closely matches its requirements. We describe the “Clouds4Coordination” system and provide a use case to demonstrate how such a system can be used in practice. A performance analysis is carried out to demonstrate how effective such a multi-Cloud system can be, reporting an “aggregated time to complete” metric over a number of different scenarios.

    A threshold secure data sharing scheme for federated clouds

    Cloud computing allows users to view computing in a new way, using existing technologies to provide better IT services at low cost. To offer high QoS to customers according to a Service Level Agreement (SLA), a cloud services broker or cloud service provider uses individual cloud providers that work collaboratively to form a federation of clouds. Such federations are required by applications, such as real-time online interactive applications and weather research and forecasting, in which the data and applications are complex and distributed. In these applications secret data must be shared, so a secure data sharing mechanism is needed in federated clouds to reduce the risk of data intrusion and loss of service availability, and to ensure data integrity. In this paper we propose a zero-knowledge data sharing scheme in which a Trusted Cloud Authority (TCA) controls the federated clouds: the secret to be exchanged for computation is encrypted, and is retrieved by each individual cloud at the end. Our scheme is based on the difficulty of solving the Discrete Logarithm problem (DLOG) in a finite abelian group of large prime order, which is believed to be computationally hard. The proposed scheme thus provides data integrity in transit and data availability when one of the host providers is unavailable during the computation.
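
    The abstract does not spell out the protocol, but the combination of threshold sharing and DLOG hardness is well illustrated by Feldman-style verifiable secret sharing, where each cloud can check its share against public discrete-log commitments. The sketch below is only a toy illustration of that general technique (with deliberately tiny, insecure parameters), not the authors' scheme:

        # Feldman-style verifiable secret sharing sketch (illustrative only,
        # NOT the paper's protocol). Toy parameters: p is a safe prime,
        # q = (p - 1) / 2, and g generates the order-q subgroup of Z_p*.
        import random

        p, q, g = 2879, 1439, 2   # far too small for real use

        def share_secret(secret, t, n):
            """Split `secret` into n shares; any t of them reconstruct it."""
            coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
            shares = [(i, sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q)
                      for i in range(1, n + 1)]
            commitments = [pow(g, c, p) for c in coeffs]  # public g^{a_k}
            return shares, commitments

        def verify_share(i, s, commitments):
            """A cloud checks g^s == prod C_k^(i^k) without seeing the secret."""
            rhs = 1
            for k, C in enumerate(commitments):
                rhs = rhs * pow(C, pow(i, k, q), p) % p
            return pow(g, s, p) == rhs

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 over Z_q."""
            secret = 0
            for i, s in shares:
                num, den = 1, 1
                for j, _ in shares:
                    if j != i:
                        num = num * (-j) % q
                        den = den * (i - j) % q
                secret = (secret + s * num * pow(den, -1, q)) % q
            return secret

        shares, comms = share_secret(secret=1234, t=3, n=5)
        assert all(verify_share(i, s, comms) for i, s in shares)
        assert reconstruct(shares[:3]) == 1234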

    A computational model to support in-network data analysis in federated ecosystems

    Software-defined networks (SDNs) have proven to be an effective tool for undertaking complex data analysis and manipulation within data-intensive applications. SDN technology allows the data path to be separated from the control path, enabling in-network processing capabilities to be supported as data is migrated across the network. We propose to leverage SDN to gain control over the data transport service, dynamically establishing data routes so that we can opportunistically exploit the latent computational capabilities located along the network path. This strategy allows us to minimise waiting times at the destination data center and to cope with spikes in demand for computational capability. We validate our approach using a smart building application in a multi-cloud infrastructure. Results show how the in-transit processing strategy increases the computational capabilities of the infrastructure and influences the percentage of job completion without significantly impacting costs and overheads.

    Autonomic Management of Application Workflows on Hybrid Computing Infrastructure


    Deadline constrained video analysis via in-transit computational environments

    Combining edge processing (at the data capture site) with analysis carried out while data is en route from the capture site to a data center offers a variety of different processing models. Such in-transit nodes include network data centers that have generally been used to support content distribution (providing support for data multicast and caching), but have recently started to offer user-defined programmability through Software Defined Networking (SDN) capability, e.g. OpenFlow, and Network Function Virtualization (NFV). We demonstrate how this multi-site computational capability can be aggregated to support video analytics with Quality of Service and cost constraints (e.g. latency-bound analysis). The use of SDN technology enables separation of the data path from the control path, enabling in-network processing capabilities to be supported as data is migrated across the network. We propose to leverage SDN capability to gain control over the data transport service, dynamically establishing data routes such that we can opportunistically exploit the latent computational capabilities located along the network path. Using a number of scenarios, we demonstrate the benefits and limitations of this approach for video analysis, comparing it with the baseline scenario of undertaking all such analysis at a data center located at the core of the infrastructure.
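
    As a rough sketch of the placement decision this implies (the paper's actual algorithm is not reproduced here), the following greedy routine, under assumed node speeds and network delays, offloads as many analysis tasks as fit within the deadline at each node along the route, leaving the remainder for the core data center:

        # Greedy in-transit placement sketch (assumed capacities and delays,
        # not the paper's algorithm): offload analysis tasks to nodes along
        # the route while each node can still finish within the deadline.
        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            per_job_time: float   # seconds of processing per analysis task
            transit_delay: float  # cumulative network delay to reach the node

        def place_jobs(path, jobs, deadline):
            """Return ({node: job count}, tasks left for the core data center);
            nodes process their allocations in parallel."""
            placement, remaining = {}, jobs
            for node in path:
                budget = deadline - node.transit_delay  # time left on arrival
                if budget <= 0:
                    continue
                fit = min(remaining, int(budget // node.per_job_time))
                if fit > 0:
                    placement[node.name] = fit
                    remaining -= fit
                if remaining == 0:
                    break
            return placement, remaining

        path = [Node("edge-cam", 0.8, 0.05),
                Node("isp-pop", 0.3, 0.12),
                Node("metro-dc", 0.1, 0.30)]
        print(place_jobs(path, jobs=40, deadline=4.0))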

    Feedback-control & queueing theory-based resource management for streaming applications

    Recent advances in sensor technologies and instrumentation have led to an extraordinary growth of data sources and streaming applications. A wide variety of devices, from smart phones to dedicated sensors, can collect and stream large amounts of data at unprecedented rates, and a number of distinct streaming data models have been proposed. Typical applications include smart cities and built environments, where sensor-based infrastructures continue to increase in scale and variety. Understanding how such streaming content can be processed within some time threshold remains a non-trivial and important research topic. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content, offering Quality of Service guarantees. We propose an autonomic controller (based on feedback control and queueing theory) to elastically provision virtual machines to meet performance targets associated with a particular data stream. Evaluation is carried out using a federated cloud-based infrastructure (implemented using CometCloud), where the allocation of new resources can be based on: (i) differences between sites, i.e. the types of resources supported (e.g. GPU vs. CPU-only); (ii) the cost of execution; and (iii) failure rate and likely resilience. In particular, we demonstrate how Little's Law, a widely used result in queueing theory, can be adapted to support dynamic control in the context of such resource provisioning.
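
    For intuition, Little's Law states that the mean number of items in a system equals the arrival rate times the mean time each item spends there (L = λW). A minimal controller sketch along these lines, with illustrative function names, gains and figures rather than the paper's actual controller, sizes the VM pool from the measured arrival rate and per-tuple service time, then applies a proportional correction when observed latency drifts from the QoS target:

        # Little's Law sizing sketch (illustrative, not the paper's controller).
        import math

        def target_vms(arrival_rate, service_time, utilisation=0.7):
            """L = lambda * W gives the mean tuples in service; divide by a
            target utilisation to leave headroom for bursts."""
            in_service = arrival_rate * service_time
            return max(1, math.ceil(in_service / utilisation))

        def control_step(arrival_rate, service_time,
                         observed_wait, target_wait, gain=0.5):
            """Proportional correction on top of the Little's Law estimate."""
            base = target_vms(arrival_rate, service_time)
            error = (observed_wait - target_wait) / target_wait
            return max(1, base + math.ceil(gain * error * base))

        # e.g. 800 tuples/s, 25 ms service time, latency 50% above target:
        print(control_step(arrival_rate=800, service_time=0.025,
                           observed_wait=0.45, target_wait=0.30))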

    Fog paradigm for local energy management systems

    Cloud computing infrastructures have been extensively deployed to support energy computation within built environments. This has ranged from predicting potential energy demand for a building (or a group of buildings) and undertaking heat profile/energy distribution simulations, to understanding the impact of climate and weather on building operation. Cloud computing usage in these scenarios has benefited from resource elasticity, where the number and types of resources can change based on the complexity of the simulation being considered. While there are numerous advantages to using a cloud-based energy management system, there are also significant limitations. For instance, many such systems assume that the data has been pre-staged at a cloud platform prior to simulation, and do not take account of data transfer times from the building to the simulation platform. The need to support computation at edge resources, which can be hosted within the building itself or shared within a building complex, has become important over recent years. Additionally, network connectivity between the sensing infrastructure within a built environment and a data centre where analysis is to be carried out can be intermittent or may fail. There is therefore a need to better understand how computation/analysis can be carried out closer to the data capture site, complementing analysis undertaken at the data centre. We describe how the Fog computing paradigm can be used to support some of these requirements, extending the capability of a data centre to support energy simulation within built environments.
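
    The trade-off described above can be made concrete with a small placement rule. The figures below are assumptions for illustration, not measurements from the paper: a job goes to the cloud only when the transfer time plus cloud compute time beats running it on the slower in-building fog node, and it stays local whenever the uplink is down:

        # Edge-versus-cloud placement sketch (assumed figures, not measurements).
        def best_site(data_mb, link_mbps, link_up,
                      edge_secs_per_mb, cloud_secs_per_mb):
            edge_time = data_mb * edge_secs_per_mb
            if not link_up:                     # intermittent uplink: stay local
                return "edge", edge_time
            transfer = data_mb * 8 / link_mbps  # MB -> megabits over the link
            cloud_time = transfer + data_mb * cloud_secs_per_mb
            if cloud_time < edge_time:
                return "cloud", cloud_time
            return "edge", edge_time

        # 500 MB of sensor data, 20 Mbit/s uplink, fog node 10x slower per MB:
        print(best_site(500, 20, True,
                        edge_secs_per_mb=0.4, cloud_secs_per_mb=0.04))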

    Towards distributed architecture for collaborative cloud services in community networks

    Internet and communication technologies have lowered the costs for communities to collaborate, leading to new services like user-generated content and social computing; through collaboration, collectively built infrastructures like community networks have also emerged. Community networks form when individuals and local organisations from a geographic area team up to create and run a community-owned IP network to satisfy the community’s demand for ICT, such as facilitating Internet access and providing services of local interest. The consolidation of today’s cloud technologies now offers the possibility of collectively built community clouds, building upon user-generated content and user-provided networks towards an ecosystem of cloud services. To address the limitations and enhance the utility of community networks, we propose a collaborative distributed architecture for building a community cloud system that employs resources contributed by the members of the community network for provisioning infrastructure and software services. Such an architecture needs to be tailored to the specific social, economic and technical characteristics of community networks for community clouds to be successful and sustainable. Through real deployments of clouds in community networks and evaluation of application performance, we show that community clouds are feasible. Our results may encourage innovative, collaborative cloud-based services made possible with the resources of a community.

    QoS-aware trust establishment for cloud federation

    Cloud federation enables inter-layer resource exchanges among multiple, heterogeneous cloud service providers. This article proposes a Quality of Service (QoS) aware trust model for effective resource allocation in response to the various user requests within the Clouds4Coordination (C4C) federation system. The QoS measure comprises nine parameters grouped into three categories: (i) node profile, (ii) reliability, and (iii) competence. Numerical values for these parameters are computed every ‘t’ seconds for each cloud provider. All values measured over an interval Δt are further processed by the proposed model to evaluate the utility associated with a provider (referred to as a discipline in the presented case study). The decision about interacting with a discipline in a collaborative project is based on this utility value. The system architecture, evaluation methodology, proposed model, and experimental evaluation on a practical test bed are outlined. The proposed QoS-aware trust evaluation mechanism allows selection of the most useful providers (based on a utility value), and the approach can be used to support federation of cloud services across a number of different application domains.
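
    A minimal sketch of such a utility computation is given below; the parameter names, per-category weights and sample values are placeholders rather than the nine parameters used in the article:

        # Utility aggregation sketch for the trust model; parameter names,
        # weights and samples are placeholders, not the article's parameters.
        from statistics import mean

        CATEGORY_WEIGHTS = {"node_profile": 0.2, "reliability": 0.4,
                            "competence": 0.4}

        def utility(samples):
            """samples: {category: {parameter: [values in [0, 1] over delta-t]}}."""
            score = 0.0
            for category, params in samples.items():
                cat_score = mean(mean(vals) for vals in params.values())
                score += CATEGORY_WEIGHTS[category] * cat_score
            return score

        disciplines = {
            "discipline-A": {"node_profile": {"cpu": [0.9, 0.8], "storage": [0.7, 0.7]},
                             "reliability": {"uptime": [0.99, 0.97]},
                             "competence": {"task_success": [0.90, 0.95]}},
            "discipline-B": {"node_profile": {"cpu": [0.6, 0.5], "storage": [0.8, 0.8]},
                             "reliability": {"uptime": [0.90, 0.85]},
                             "competence": {"task_success": [0.70, 0.75]}},
        }
        # choose the most useful discipline before interacting with it
        print(max(disciplines, key=lambda d: utility(disciplines[d])))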

    Risk-based service selection in federated clouds

    The Cloud Service Provider (CSP) marketplace has continued to expand in recent years. Although a few major providers dominate (e.g. AWS, Google Cloud, Microsoft Azure), there are also a number of specialist providers offering hosting services and computing platforms. A single cloud provider can also offer a marketplace for its own offerings, e.g. the AWS Marketplace, which enables third-party libraries to be deployed as services within AWS instances. To determine whether a particular CSP should be used, clients need to carry out preliminary assessment and evaluation before provisioning services on that provider. Service selection can be realised using different decision-making criteria, enabling a more informed selection process for clients. Trust can be utilised as a mechanism to inform such selection decisions; it can have different representations and can draw on parameters derived from past interactions, thereby expressing the risk associated with a service exchange between clients and providers. We present a trust-based risk evaluation for CSP selection in federated clouds, with a particular focus on security and data privacy. We use a scenario from an Architecture, Engineering & Construction (AEC) project to demonstrate how such a selection can be made and how it benefits the development of the federated system. A methodology for the selection process is outlined, making use of metrics and certification processes from the Cloud Security Alliance. The proposed approach can also be generalised to other application domains with similar requirements.
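
    As a hedged illustration of this kind of risk-based ranking (not the paper's model), the sketch below blends trust from past interactions with the fraction of security controls a provider self-attests to, in the spirit of the Cloud Security Alliance's CAIQ questionnaire, and ranks providers by residual risk; all weights and counts are assumptions:

        # Risk-scoring sketch for CSP selection (placeholder metrics, not the
        # paper's model). `total_controls` echoes the scale of the CSA CAIQ
        # questionnaire, used here purely as an assumed denominator.
        def risk(past_success_ratio, controls_answered_yes, total_controls=295,
                 w_trust=0.6, w_controls=0.4):
            trust = (w_trust * past_success_ratio
                     + w_controls * controls_answered_yes / total_controls)
            return 1.0 - trust          # higher trust -> lower residual risk

        csps = {"csp-1": risk(0.95, 260),
                "csp-2": risk(0.80, 290),
                "csp-3": risk(0.99, 180)}
        print(min(csps, key=csps.get))  # lowest-risk provider for the AEC scenario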