    Integrating BOINC with Microsoft Excel: A case study

    The convergence of conventional Grid computing with public-resource computing (PRC) offers potential benefits in the enterprise setting. For this work we took the popular PRC toolkit BOINC and used it to execute a previously monolithic Microsoft Excel financial model across several commodity computers. Our experience indicates that near-linear speedup may be realised for certain scenarios, and that this approach offers a viable route to leveraging idle desktop PCs in the enterprise.
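    The near-linear speedup is plausible because such a model decomposes into independent work units. A minimal sketch of that pattern, using Python's multiprocessing pool in place of BOINC work units, with an invented `price_scenario` payoff standing in for the Excel calculation:

    ```python
    # Illustrative sketch only: multiprocessing stands in for BOINC work units,
    # and `price_scenario` is an invented stand-in for one independent slice
    # of the spreadsheet model.
    from multiprocessing import Pool
    import random

    def price_scenario(seed: int) -> float:
        """Evaluate one independent scenario (one work unit in the BOINC case)."""
        rng = random.Random(seed)
        # Toy Monte Carlo option payoff; the real work is the Excel calculation.
        return sum(max(rng.gauss(100, 15) - 100, 0) for _ in range(10_000)) / 10_000

    if __name__ == "__main__":
        with Pool() as pool:  # one worker per idle core, or per idle PC under BOINC
            results = pool.map(price_scenario, range(64))
        print(f"mean payoff over {len(results)} scenarios: {sum(results) / len(results):.3f}")
    ```

    Because the scenarios share no state, adding workers scales throughput almost linearly until coordination overhead dominates, which matches the speedup the authors report.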

    High-Performance Cloud Computing: A View of Scientific Applications

    Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed with high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model for utilizing the computing infrastructure. Compute resources, storage resources, and applications can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis, and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers the desired QoS to users. Its flexible, service-based infrastructure supports multiple programming paradigms that allow Aneka to address a variety of scenarios, from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow. Comment: 13 pages, 9 figures, conference paper.
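    The elastic, pay-per-use pattern the abstract describes can be sketched as a simple rebalancing loop. This is an illustration only, not Aneka's actual API (Aneka is .NET-based); `CloudProvider` and its methods are hypothetical:

    ```python
    # Hypothetical sketch: grow the VM pool while the task backlog exceeds
    # capacity, and release VMs when they are no longer needed (pay per use).
    from collections import deque

    class CloudProvider:
        """Hypothetical stand-in for a provider SDK."""
        def __init__(self):
            self.vms = []
        def provision_vm(self) -> str:
            vm = f"vm-{len(self.vms)}"
            self.vms.append(vm)
            return vm
        def release_vm(self, vm: str) -> None:
            self.vms.remove(vm)

    def rebalance(provider: CloudProvider, backlog: deque, tasks_per_vm: int = 10):
        while len(backlog) > len(provider.vms) * tasks_per_vm:        # scale out
            print("provisioned", provider.provision_vm())
        while provider.vms and len(backlog) <= (len(provider.vms) - 1) * tasks_per_vm:
            provider.release_vm(provider.vms[-1])                     # scale in
            print("released a VM")

    provider, backlog = CloudProvider(), deque(range(25))
    rebalance(provider, backlog)   # provisions three VMs for 25 queued tasks
    backlog.clear()
    rebalance(provider, backlog)   # releases all of them once the queue drains
    ```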

    A Taxonomy of Workflow Management Systems for Grid Computing

    With the advent of Grid and application technologies, scientists and engineers are building ever more complex applications to manage and process large data sets and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies areas that need further research. Comment: 29 pages, 15 figures.
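    The common substrate of the systems such a taxonomy covers is the workflow expressed as a directed acyclic graph of tasks executed in dependency order. A minimal sketch, with invented task names, using Python's standard graphlib:

    ```python
    # Minimal sketch (not from the paper): a workflow as a DAG of tasks,
    # executed in dependency order; a real engine would dispatch each task
    # to a Grid resource instead of printing.
    from graphlib import TopologicalSorter

    workflow = {            # task -> set of tasks it depends on
        "stage_data": set(),
        "preprocess": {"stage_data"},
        "simulate":   {"preprocess"},
        "analyse":    {"simulate"},
        "archive":    {"simulate"},
    }

    for task in TopologicalSorter(workflow).static_order():
        print("running", task)
    ```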

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems so as to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping give new practitioners an easy way into this complex area of research. Comment: 46 pages, 16 figures, Technical Report.
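    One taxonomy dimension, replica selection within data replication, can be made concrete: given several replicas of a file, a scheduler picks the one minimising estimated transfer time. The catalogue contents and cost model below are invented for illustration:

    ```python
    # Illustrative sketch of replica selection; sites, figures, and the
    # latency-plus-transfer cost model are invented for the example.
    replica_catalogue = {
        "dataset-42": [
            {"site": "site-a", "latency_ms": 30,  "bandwidth_mbps": 800},
            {"site": "site-b", "latency_ms": 120, "bandwidth_mbps": 400},
            {"site": "site-c", "latency_ms": 5,   "bandwidth_mbps": 100},
        ]
    }

    def best_replica(lfn: str, size_mb: float) -> dict:
        """Choose the replica with the lowest estimated transfer time (seconds)."""
        def cost(r):
            return r["latency_ms"] / 1000 + size_mb * 8 / r["bandwidth_mbps"]
        return min(replica_catalogue[lfn], key=cost)

    print(best_replica("dataset-42", size_mb=2000))  # a large file favours bandwidth
    ```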

    Overcoming integration challenges in organisations with operational technology

    Competitive advantage is traditionally an outcome of leveraging people, processes and technologies. Today organisations have several technologies holding disparate information, and information integration may help them remain competitive. Organisations whose technology manages or controls assets face particular integration challenges compared with organisations made up only of corporate business areas, because technology managing infrastructure assets is not viewed in the same way as technology supporting functions such as finance, retail and human resources. The paper defines a current, asset-management-based taxonomy for organisations integrating Operational and Information Technology. It identifies a number of challenges, such as the commitment to information integration, organisation-wide governance and architectural approaches, and the alignment of operational open standards with existing information technology standards. Furthermore, it highlights opportunities for further research in the area.

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services; hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios. Comment: 20 pages, 4 figures, 3 tables, conference paper.
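    The load-coordination idea can be sketched as a federation broker that routes each request to the data center minimising a load/price trade-off. The centers, weights, and figures below are invented (the paper's actual evaluation uses the CloudSim toolkit):

    ```python
    # Hedged sketch of federated load coordination: route each request to the
    # data center with the lowest weighted cost; all values are illustrative.
    datacenters = {
        "us-east":  {"load": 0.80, "price": 0.10},
        "eu-west":  {"load": 0.40, "price": 0.12},
        "ap-south": {"load": 0.20, "price": 0.09},
    }

    def route(w_load: float = 0.7, w_price: float = 0.3) -> str:
        """Pick the center minimising a load/price trade-off (lower is better)."""
        return min(datacenters, key=lambda dc: w_load * datacenters[dc]["load"]
                                             + w_price * datacenters[dc]["price"])

    dc = route()
    datacenters[dc]["load"] += 0.05   # each placement shifts the load it balances
    print("routed to", dc)
    ```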

    Simulation of complex environments:the Fuzzy Cognitive Agent

    The world is becoming increasingly competitive through the action of liberalised national and global markets. In parallel, these markets have become increasingly complex, making it difficult for participants to optimise their trading actions. In response, many different computer simulation techniques have been investigated, either to develop a deeper understanding of these evolving markets or to create effective system-support tools. In this paper we report our efforts to develop a novel simulation platform using fuzzy cognitive agents (FCAs). Our approach encapsulates fuzzy cognitive maps (FCMs) generated on the Matlab Simulink platform within commercially available agent software. We first present our implementation of Matlab Simulink FCMs and then show how such FCMs can be integrated within a conceptual FCA architecture. Finally, we report on our efforts to realise an FCA by integrating a Matlab Simulink-based FCM with the Jack Intelligent Agent Toolkit.
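    For concreteness, a fuzzy cognitive map iterates concept activations as a(t+1) = f(W · a(t)), where W holds the causal edge weights and f is a squashing function. The sketch below uses one common sigmoid update rule rather than the paper's Simulink implementation; the concepts and weights are invented:

    ```python
    # Minimal FCM iteration (standard sigmoid update, not the paper's model):
    # a_{t+1} = sigmoid(W . a_t), with invented market concepts and weights.
    import math

    concepts = ["demand", "price", "supply"]
    W = [  # W[i][j]: causal influence of concept j on concept i
        [0.0, -0.6,  0.0],   # demand falls as price rises
        [0.7,  0.0, -0.5],   # price rises with demand, falls with supply
        [0.0,  0.8,  0.0],   # supply rises with price
    ]
    a = [0.9, 0.5, 0.3]      # initial activations in [0, 1]

    sigmoid = lambda x: 1 / (1 + math.exp(-x))
    for _ in range(20):      # iterate until the activations settle
        a = [sigmoid(sum(W[i][j] * a[j] for j in range(3))) for i in range(3)]

    print({c: round(v, 3) for c, v in zip(concepts, a)})
    ```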