
    A Conceptual Architecture for a Quantum-HPC Middleware

    Quantum computing promises benefits for science and industry by solving certain computationally complex problems faster than classical computers. Quantum computing systems have evolved from monolithic devices towards modular architectures comprising multiple quantum processing units (QPUs) coupled to classical high-performance computing (HPC) nodes. With increasing scale, middleware systems that facilitate the efficient coupling of quantum and classical computing are becoming critical. Through an in-depth analysis of quantum applications, integration patterns and systems, we identified a gap in understanding Quantum-HPC middleware systems. We present a conceptual middleware architecture to facilitate reasoning about quantum-classical integration and to serve as the basis for a future middleware system. An essential contribution of this paper lies in leveraging well-established high-performance computing abstractions for managing workloads, tasks, and resources to integrate quantum computing into HPC systems seamlessly. Comment: 12 pages, 3 figures
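
    The following is a minimal, hypothetical Python sketch of the kind of workload/task/resource abstractions the abstract refers to, in which quantum and classical tasks are expressed uniformly and dispatched to the resource type they require. It is not the paper's middleware; all class and method names are illustrative assumptions.

```python
# Hypothetical sketch of workload/task/resource abstractions for quantum-classical
# integration (not the paper's implementation). Tasks declare the resource kind
# they need; a simple FIFO scheduler matches them to free resources.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    name: str
    kind: str            # "QPU" or "CPU"
    busy: bool = False

@dataclass
class Task:
    task_id: str
    kind: str            # resource kind the task requires
    payload: str         # e.g. a circuit description or an executable name

@dataclass
class Workload:
    tasks: List[Task] = field(default_factory=list)

class MiddlewareScheduler:
    """Matches tasks to free resources of the required kind (FIFO, no priorities)."""
    def __init__(self, resources: List[Resource]):
        self.resources = resources

    def run(self, workload: Workload) -> None:
        for task in workload.tasks:
            resource = next((r for r in self.resources
                             if r.kind == task.kind and not r.busy), None)
            if resource is None:
                print(f"{task.task_id}: no free {task.kind}, would be queued")
                continue
            print(f"{task.task_id} ({task.kind}) -> {resource.name}: {task.payload}")

if __name__ == "__main__":
    scheduler = MiddlewareScheduler([Resource("qpu-0", "QPU"), Resource("node-0", "CPU")])
    scheduler.run(Workload([
        Task("t1", "QPU", "variational circuit, 8 qubits"),
        Task("t2", "CPU", "classical parameter optimisation step"),
    ]))
```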

    VM Image Repository and Distribution Models for Federated Clouds: State of the Art, Possible Directions and Open Issues

    The emerging trend of Federated Cloud models enlists virtualization as a key concept for offering a large-scale distributed Infrastructure as a Service collaborative paradigm to end users. Virtualization leverages Virtual Machines (VMs) instantiated from user-specific templates labelled as VM Images (VMIs). To this end, the rapid provisioning of VMs under varying user requests while ensuring Quality of Service (QoS) across multiple cloud providers largely depends upon the image repository architecture and distribution policies. We discuss the state of the art in VMI storage repository and distribution mechanisms for efficient VM provisioning in federated clouds. In addition, we present and compare various representative systems in this realm. Furthermore, we define a design space and identify current limitations, challenges and open trends for VMI repositories and distribution techniques within federated infrastructures.

    A Two-Stage Multi-Objective Optimization of Erasure Coding in Overlay Networks

    In recent years, overlay networks have emerged as a crucial platform for the deployment of various distributed applications. Many of these applications rely on data redundancy techniques, such as erasure coding, to achieve higher fault tolerance. However, erasure coding applied in large-scale overlay networks entails various overheads in terms of storage, latency and data rebuilding costs. These overheads are largely attributed to the selected erasure coding scheme and the placement of encoded chunks in the overlay network. This paper explores a multi-objective optimization approach for identifying appropriate erasure coding schemes and encoded chunk placements in overlay networks. The uniqueness of our approach lies in considering multiple erasure coding objectives, such as encoding rate and redundancy factor, together with overlay network performance characteristics like storage consumption, latency and system reliability. Our approach enables a variety of trade-off solutions with respect to these objectives to be identified in the form of a Pareto front. To solve this problem, we propose a novel two-stage multi-objective evolutionary algorithm, where the first stage determines the optimal set of encoding schemes, while the second stage optimizes the placement of the corresponding encoded data chunks in overlay networks of varying sizes. We study the performance of our method by generating and analyzing the Pareto-optimal sets of trade-off solutions. Experimental results demonstrate that the Pareto-optimal set produced by our multi-objective approach includes and even dominates the chunk placements delivered by a related state-of-the-art weighted-sum method.
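
    As a rough illustration of the Pareto-dominance idea underlying this multi-objective formulation (not the paper's two-stage evolutionary algorithm), the Python sketch below filters candidate (k, m) erasure coding schemes against three illustrative objectives; the objective definitions and parameter ranges are assumptions for demonstration only.

```python
# Illustrative Pareto-dominance filter over candidate (k, m) erasure coding schemes.
# Not the paper's algorithm: the objectives below are simple stand-ins chosen so
# that the trade-off between storage overhead, repair cost and fault tolerance shows up.
from itertools import product

def objectives(k: int, m: int) -> dict:
    # All objectives expressed so that smaller is better.
    return {
        "redundancy": (k + m) / k,   # storage blow-up factor, minimise
        "repair_reads": k,           # chunks read to rebuild one lost chunk, minimise
        "tolerated_losses": -m,      # negated fault tolerance, so minimising maximises m
    }

def dominates(a: dict, b: dict) -> bool:
    # a dominates b if it is no worse in every objective and strictly better in one.
    return all(a[o] <= b[o] for o in a) and any(a[o] < b[o] for o in a)

candidates = list(product(range(2, 13), range(1, 7)))    # hypothetical (k, m) pairs
scores = {c: objectives(*c) for c in candidates}
pareto = [c for c in candidates
          if not any(dominates(scores[o], scores[c]) for o in candidates if o != c)]

for k, m in sorted(pareto):
    s = scores[(k, m)]
    print(f"k={k:2d} m={m} redundancy={s['redundancy']:.2f} repair_reads={s['repair_reads']}")
```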

    Most, And Least, Compact Spanning Trees of a Graph

    We introduce the concept of Most, and Least, Compact Spanning Trees -- denoted respectively by $T^*(G)$ and $T^\#(G)$ -- of a simple, connected, undirected and unweighted graph $G(V, E, W)$. For a spanning tree $T(G) \in \mathcal{T}(G)$ to be considered $T^*(G)$, where $\mathcal{T}(G)$ represents the set of all the spanning trees of the graph $G$, it must have the least sum of inter-vertex pair shortest path distances from amongst the members of the set $\mathcal{T}(G)$. Similarly, for it to be considered $T^\#(G)$, it must have the highest sum of inter-vertex pair shortest path distances. In this work, we present an iteratively greedy rank-and-regress method that produces at least one $T^*(G)$ or $T^\#(G)$ by eliminating one extremal edge per iteration. The rank function for performing the elimination is based on the elements of the matrix of relative forest accessibilities of a graph and the related forest distance. We provide empirical evidence in support of our methodology using some standard graph families; and discuss potentials for computational efficiencies, along with relevant trade-offs, to enable the extraction of $T^*(G)$ and $T^\#(G)$ within reasonable time limits on standard platforms.
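
    The brute-force Python sketch below (using the networkx package) illustrates only the defining criterion from the abstract, namely picking spanning trees that minimise or maximise the sum of pairwise shortest-path distances (the tree's Wiener index). It is not the paper's rank-and-regress method and is feasible only for very small graphs.

```python
# Brute-force illustration of the T*(G) / T#(G) criterion on a tiny example graph.
# Requires the networkx package; exponential enumeration, small graphs only.
from itertools import combinations
import networkx as nx

def spanning_trees(G: nx.Graph):
    # Enumerate spanning trees by testing every (n-1)-edge subset for connectivity.
    n = G.number_of_nodes()
    for edges in combinations(G.edges(), n - 1):
        T = nx.Graph(list(edges))
        if T.number_of_nodes() == n and nx.is_connected(T):
            yield T

G = nx.petersen_graph()                        # small example graph (10 nodes, 15 edges)
trees = list(spanning_trees(G))
costs = [nx.wiener_index(T) for T in trees]    # sum of pairwise shortest-path distances

t_star = trees[costs.index(min(costs))]        # a most compact spanning tree, T*(G)
t_hash = trees[costs.index(max(costs))]        # a least compact spanning tree, T#(G)
print("spanning trees found:", len(trees))
print("min / max pairwise-distance sums:", min(costs), max(costs))
```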

    Towards an Environment for Efficient and Transparent Virtual Machine Operations: The ENTICE Approach

    Cloud computing is based on Virtual Machines (VMs) or containers, which provide their own software execution environment that can be deployed by facilitating technologies on top of various physical hardware. The use of VMs or containers represents an efficient way to automate the overall software engineering and operation life-cycle. Some of the benefits include elasticity and high scalability, which increase utilization efficiency and decrease operational costs. VMs or containers as software artifacts are created using provider-specific templates and are stored in proprietary or public repositories for further use. However, technology-specific choices may reduce their portability and lead to vendor lock-in, particularly when applications need to run in federated Clouds. In this paper we present the current state of development of ENTICE, a novel VM repository and operational environment for federated Clouds. The ENTICE environment has been designed to receive unmodified and functionally complete VM images from its users, and to transparently tailor and optimise them for specific Cloud infrastructures with respect to their size, configuration, and geographical distribution, so that they are loaded, delivered, and executed faster and with improved QoS compared to their current behaviour. Furthermore, a specific use case scenario for the ENTICE environment is provided and the underlying novel technologies are presented.

    Survey on Security Issues in Cloud Computing Environment

    ABSTRACT: Cloud computing has introduced the concept of provisioning on-demand resources to services over the Internet. The cloud provides an attractive model that allows service providers to save costs. It frees users from resource management tasks, i.e., ensuring the most efficient use of resources. It also allows the centralization of information and resources so that users can access them from anywhere over the Internet. Generally, the resources used to provide services belong to a third party; because users do not have to invest capital in such resources, costs decrease. Even though the cloud model is attractive, users have been hesitant to adopt it, the major reason being security concerns regarding their private data. In this paper we discuss various security concerns in the cloud environment.

    Blockchain-based prosumer incentivization for peak mitigation through temporal aggregation and contextual clustering

    Peak mitigation is of interest to power companies because peak periods may require the operator to over-provision supply in order to meet the peak demand. Flattening the usage curve can result in cost savings, both for the power companies and the end users. Integration of renewable energy into the energy infrastructure presents an opportunity to use excess renewable generation to supplement supply and alleviate peaks. In addition, demand-side management can shift usage from peak to off-peak times and reduce the magnitude of peaks. In this work, we present a data-driven approach for incentive-based peak mitigation. Understanding user energy profiles is an essential step in this process. We begin by analysing a popular energy research dataset published by the Ausgrid corporation. Extracting aggregated user energy behaviour in temporal contexts, together with semantic linking and contextual clustering, gives us insight into consumption and rooftop solar generation patterns. We implement and performance-test a blockchain-based prosumer incentivization system. The smart contract logic is based on our analysis of the Ausgrid dataset. Our implementation is capable of supporting 792,540 customers with a reasonably low infrastructure footprint.
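
    The Python sketch below gives a rough, hypothetical illustration of the "temporal aggregation and contextual clustering" step, using synthetic half-hourly readings in place of the Ausgrid data (requires numpy and scikit-learn). It is not the paper's analysis pipeline or smart-contract code; the behaviour patterns and cluster count are assumptions for demonstration.

```python
# Hypothetical illustration: aggregate per-customer usage into hour-of-day profiles,
# then group customers with similar daily shapes using k-means. Synthetic data only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_customers, days = 50, 30
hours = np.arange(24)

# Synthetic demand: an evening-peak pattern and a midday-solar-offset pattern.
evening_peak = 1.0 + 0.8 * np.exp(-((hours - 19) ** 2) / 8)
midday_dip = 1.0 - 0.6 * np.exp(-((hours - 13) ** 2) / 10)
behaviour = np.where(rng.random(n_customers) < 0.5, 0, 1)
readings = np.array([
    (evening_peak if b == 0 else midday_dip) + rng.normal(0, 0.1, (days, 24))
    for b in behaviour
])                                      # shape: (customers, days, hours)

# Temporal aggregation: average each customer's usage per hour of day.
profiles = readings.mean(axis=1)        # shape: (customers, hours)

# Contextual clustering: group customers with similar daily consumption shapes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
for c in range(2):
    members = profiles[labels == c]
    print(f"cluster {c}: {len(members)} customers, "
          f"peak hour {members.mean(axis=0).argmax()}")
```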