
    SMART ARRAY

    Devices, systems, and methods for generating arrays are disclosed herein. In one aspect, a method for generating an array includes calculating a true metric for a peer set. The peer set includes peer values and weight percentages corresponding to each of the peer values. The method further includes determining that the peer set does not comply with at least one privacy policy rule. The method further includes generating the array based on determining that the peer set does not comply with the at least one privacy policy rule. The array represents the true metric and is generated by (i) calculating an upper bound based on the true metric, a weighted standard deviation of the peer set, and a first random number and (ii) calculating a lower bound based on the true metric, the weighted standard deviation of the peer set, and a second random number.
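    The abstract does not give the exact bound formulas, so the following is a minimal Python sketch of one plausible reading (all function names, and the choice of the weighted mean as the true metric, are assumptions rather than the patented method): when the peer set fails the privacy check, the true metric is replaced by a [lower, upper] array whose endpoints offset the metric by the weighted standard deviation scaled by independent random numbers.

```python
import math
import random

def weighted_mean(values, weights):
    """Weighted average of the peer values; weights are the weight percentages."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

def weighted_std(values, weights):
    """Weighted standard deviation of the peer set."""
    mean = weighted_mean(values, weights)
    total = sum(weights)
    variance = sum(w * (v - mean) ** 2 for v, w in zip(values, weights)) / total
    return math.sqrt(variance)

def generate_array(values, weights, complies_with_privacy_policy):
    """Return the true metric, or a masking [lower, upper] array if the peer set
    fails the privacy policy check (hypothetical reading of the abstract)."""
    true_metric = weighted_mean(values, weights)
    if complies_with_privacy_policy:
        return [true_metric]
    sigma = weighted_std(values, weights)
    upper = true_metric + random.random() * sigma   # uses the first random number
    lower = true_metric - random.random() * sigma   # uses the second random number
    return [lower, upper]

# Example: a peer set of three values with 20/30/50 weight percentages.
print(generate_array([10.0, 12.0, 15.0], [20, 30, 50], complies_with_privacy_policy=False))
```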

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, the Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in terms of response time and cost savings under dynamic workload scenarios. Comment: 20 pages, 4 figures, 3 tables, conference paper.
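    The brokering behaviour described above is given only at an architectural level, so the following is a simplified, hypothetical Python sketch (not CloudSim code and not the authors' implementation) of the kind of placement decision an InterCloud broker might make: estimate the response time each data center can offer under its current load and pick the cheapest one that still meets the QoS target.

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    utilization: float      # fraction of capacity in use, 0.0 - 1.0
    latency_ms: float       # network latency from the user's region
    cost_per_hour: float    # price of one VM instance

def expected_response_time(dc: DataCenter, base_service_ms: float = 100.0) -> float:
    """Toy estimate: service time grows sharply as utilization approaches 1."""
    return dc.latency_ms + base_service_ms / max(1e-6, 1.0 - dc.utilization)

def place_service(data_centers, max_response_ms):
    """Pick the cheapest data center that still meets the QoS target."""
    feasible = [dc for dc in data_centers
                if expected_response_time(dc) <= max_response_ms]
    if not feasible:
        # No single site meets the target: fall back to the fastest one.
        return min(data_centers, key=expected_response_time)
    return min(feasible, key=lambda dc: dc.cost_per_hour)

sites = [
    DataCenter("us-east", utilization=0.85, latency_ms=40, cost_per_hour=0.12),
    DataCenter("eu-west", utilization=0.40, latency_ms=90, cost_per_hour=0.10),
    DataCenter("ap-south", utilization=0.20, latency_ms=160, cost_per_hour=0.08),
]
print(place_service(sites, max_response_ms=400).name)
```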

    A comparison of integration architectures

    This paper presents GenSIF, a Generic Systems Integration Framework. GenSIF features a pre-planned development process on a domain-wide basis and facilitates system integration and project coordination for very large, complex and distributed systems. Domain analysis, integration architecture design and infrastructure design are identified as the three main components of GenSIF. In the next step we map Bellcore's OSCA interoperability architecture, ANSA, IBM's SAA and Bull's DCM into GenSIF. Using the GenSIF concepts we compare each of these architectures. GenSIF serves as a general framework to evaluate and position specific architectures. The OSCA architecture is used to discuss the impact of vendor architectures on application development. All opinions expressed in this paper, especially with regard to the OSCA architecture, are the opinions of the author and do not necessarily reflect the point of view of any of the mentioned companies.

    Research Cloud Data Communities

    Big Data, big science, the data deluge: these are topics we are hearing about more and more in our research pursuits. Then, through media hype, comes cloud computing, the saviour that is going to resolve our Big Data issues. However, it is difficult to pinpoint exactly what researchers can actually do with data and with clouds, how they get to solve their Big Data problems, and how they get help in using these relatively new tools and infrastructure. Since the beginning of 2012, the NeCTAR Research Cloud has been running at the University of Melbourne, attracting over 1,650 users from around the country. This has not only provided an unprecedented opportunity for researchers to employ clouds in their research, but it has also given us an opportunity to clearly understand how researchers can more easily solve their Big Data problems. The cloud is now used daily, from running web servers and blog sites, through to hosting virtual laboratories that can automatically create hundreds of servers depending on research demand. Of course, it has also helped us understand that infrastructure isn’t everything. There are many other skillsets needed to help researchers from the multitude of disciplines use the cloud effectively. How can we solve Big Data problems on cloud infrastructure? One of the key aspects is communities based on research platforms: research is built on collaboration, connection and community, and researchers employ platforms daily, whether bio-imaging platforms, computational platforms or cloud platforms (like Dropbox). There are some important features which enabled this to work. Firstly, the barriers to collaboration are eased, allowing communities to access infrastructure that can be instantly built to be completely open, through to completely closed, all managed securely through (nationally) standardised interfaces. Secondly, it is free and easy to build servers and infrastructure, but it is also cheap to fail, allowing for experimentation not only at a code level, but at a server or infrastructure level as well. Thirdly, this (virtual) infrastructure can be shared with collaborators, moving the practice of collaboration from sharing papers and code to sharing servers, pre-configured and ready to go. And finally, the underlying infrastructure is built with Big Data in mind, co-located with major data storage infrastructure and high-performance computers, and interconnected with high-speed networks nationally to research instruments. The research cloud is fundamentally new in that it easily allows communities of researchers, often connected by common geography (research precincts), discipline or long-term established collaborations, to build open, collaborative platforms. These open, sharable and repeatable platforms encourage coordinated use and development, evolving towards common community-oriented methods for Big Data access and data manipulation. In this paper we discuss in detail the critical ingredients in successfully establishing these communities, as well as some outcomes that have resulted from these communities and their collaboration-enabling platforms. We consider astronomy as an exemplar of a research field that has already looked to the cloud as a solution to the ensuing data tsunami.

    Design and Implementation of the L-Bone and Logistical Tools

    The purpose of this paper is to outline the design criteria and implementation of the Logistical Backbone (L-Bone) and the Logistical Tools. These tools, along with IBP and the exNode Library, allow storage to be used as a network resource. These are components of the Network Storage Stack, a design by the Logistical Computing and Internetworking Lab at the University of Tennessee. Having storage as a network resource enables users to do many things that are either difficult or not possible today, such as moving and sharing very large files across administrative domains, improving performance through caching and improving fault tolerance through replication and striping. Next, this paper reviews the L-Bone, a directory service for Internet Backplane Protocol (IBP) storage servers (depots), which stores information about the depots and allows clients to query the service for depots matching specific requirements. The L-Bone has three major components: a client API, a stateless RPC server and a database backend. Because the L-Bone is intended to be a service available to anyone on the wide-area network, response time is critical. The current implementation provides a reliable and fast service; average response times from remote clients are less than half a second. Lastly, this paper examines the Logistical Tools. The Logistical Tools are a set of command-line tools wrapped around a C API. They provide a higher level of functionality built on top of the exNode Library as well as the L-Bone library, IBP library and the Network Weather Service (NWS) library. This set of tools allows a user to upload a file into an exNode, download the data from that exNode, add replicas to or remove replicas from the exNode, check the status of the exNode and modify the expiration times of the IBP allocations. To highlight the capabilities of these tools and the overall benefits of using exNodes, I perform tests that examine the performance improvements from local replication (caching) as well as the higher levels of fault tolerance achieved through replication. These tests show that using replication for caching can improve access times by a factor of 2 to 16 and that using simple replication can provide nearly 100% availability.
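    The Logistical Tools themselves are command-line wrappers around a C API; the sketch below is a conceptual Python illustration (hypothetical class and function names, not the real IBP or exNode API) of the workflow the paper describes: upload data as replicated allocations on several depots, record the replicas in an exNode-like structure, and fall back to a surviving replica on download when a depot fails.

```python
import random

class DepotError(Exception):
    """Raised when a (simulated) IBP depot cannot serve a request."""

class Depot:
    """Stand-in for an IBP storage depot; stores named allocations in memory."""
    def __init__(self, host):
        self.host = host
        self.blocks = {}
        self.online = True

    def store(self, key, data):
        if not self.online:
            raise DepotError(f"{self.host} unreachable")
        self.blocks[key] = data

    def load(self, key):
        if not self.online or key not in self.blocks:
            raise DepotError(f"{self.host} cannot serve {key}")
        return self.blocks[key]

def upload(data, depots, key, replicas=2):
    """Write `replicas` copies and return an exNode-like record of their locations."""
    chosen = random.sample(depots, replicas)
    for depot in chosen:
        depot.store(key, data)
    return {"key": key, "replicas": chosen}

def download(exnode):
    """Try each replica in turn; replication provides the fault tolerance."""
    for depot in exnode["replicas"]:
        try:
            return depot.load(exnode["key"])
        except DepotError:
            continue
    raise DepotError("all replicas failed")

depots = [Depot("depot-a"), Depot("depot-b"), Depot("depot-c")]
exnode = upload(b"large dataset", depots, key="dataset-1", replicas=2)
exnode["replicas"][0].online = False            # simulate a failed depot
print(download(exnode))                         # still served from the surviving replica
```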

    Grid service-based e-learning systems architectures

    E-Learning has been a topic of increasing interest in recent years, due mainly to the fact that content and tool support can now be offered at a widely affordable level. As a result, many e-learning platforms and systems have been developed. Client-server, peer-to-peer and, recently, Web services architectures often form the basis. Major drawbacks of these architectures are their limitations in terms of scalability and the availability and distribution of resources. This chapter investigates grid architectures in the context of e-learning as a proposed answer to this problem. The principles and technologies of grid architectures are discussed and illustrated using learning technology scenarios and systems.

    Cooperative high-performance storage in the accelerated strategic computing initiative

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

    A study of System Interface Sets (SIS) for the host, target and integration environments of the Space Station Program (SSP)

    System interface sets (SIS) for large, complex, non-stop, distributed systems are examined. The SIS of the Space Station Program (SSP) was selected as the focus of this study because an appropriate virtual interface specification of the SIS is believed to have the most potential to free the project from four life cycle tyrannies which are rooted in a dependence on either a proprietary or a particular instance of: operating systems, data management systems, communications systems, and instruction set architectures. The static perspective of the common Ada programming support environment interface set (CAIS) and the portable common execution environment (PCEE) activities is discussed. Also, the dynamic perspective of the PCEE is addressed.

    Big data and hydroinformatics


    The Potential for Machine Learning Analysis over Encrypted Data in Cloud-based Clinical Decision Support - Background and Review

    This paper appeared at the 8th Australasian Workshop on Health Informatics and Knowledge Management (HIKM 2015), Sydney, Australia, January 2015. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 164, Anthony Maeder and Jim Warren, Eds. Reproduction for academic, not-for-profit purposes is permitted provided this text is included. In an effort to reduce the risk of sensitive data exposure in untrusted networks such as the public cloud, increasing attention has recently been given to encryption schemes that allow specific computations to occur on encrypted data, without the need for decryption. This relies on the fact that some encryption algorithms display the property of homomorphism, which allows them to manipulate data in a meaningful way while still in encrypted form. Such a framework would find particular relevance in Clinical Decision Support (CDS) applications deployed in the public cloud. CDS applications have an important computational and analytical role over confidential healthcare information, with the aim of supporting decision-making in clinical practice. This review paper examines the history and current status of homomorphic encryption and its potential for preserving the privacy of patient data underpinning cloud-based CDS applications.
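    To make the homomorphism property concrete, the following is a toy, self-contained Paillier-style example in Python (tiny hardcoded primes, for illustration only; it is not secure and is not the scheme used by any particular CDS system): two values are encrypted, the untrusted party multiplies the ciphertexts, and decryption of the product yields the sum of the plaintexts without either value ever being exposed.

```python
from math import gcd

# Toy Paillier parameters -- tiny primes for illustration only; never use in practice.
p, q = 1789, 1931
n = p * q
n_sq = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                             # valid because g = n + 1

def encrypt(m, r):
    """c = g^m * r^n mod n^2 ; r must be coprime with n."""
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n_sq) - 1) // n) * mu % n

# Two sensitive values (e.g. lab results) encrypted on the client side.
c1 = encrypt(120, r=17)
c2 = encrypt(35, r=23)

# The untrusted server adds them by multiplying ciphertexts -- no decryption needed.
c_sum = (c1 * c2) % n_sq

print(decrypt(c_sum))    # 155 == 120 + 35
```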