
    Enforcing QoS in scientific workflow systems enacted over Cloud infrastructures

    The ability to support Quality of Service (QoS) constraints is an important requirement in some scientific applications. With the increasing use of Cloud computing infrastructures, where access to resources is shared, dynamic, and provisioned on demand, identifying how QoS constraints can be supported becomes an important challenge. However, access to dedicated resources is often not possible in existing Cloud deployments, and many commercial providers offer only limited QoS guarantees (often restricted to error rate and availability rather than metrics such as latency or access time). We propose a workflow system architecture that enforces QoS for the simultaneous execution of multiple scientific workflows over a shared infrastructure (such as a Cloud environment). Our approach involves multiple pipeline workflow instances, each with its own QoS requirements. These workflows are composed of a number of stages, with each stage mapped to one or more physical resources; a stage combines data access, computation, and data transfer capability. A token bucket-based data throttling framework is embedded into the workflow system architecture: each workflow instance stage regulates the amount of data it injects into the shared resources, allowing bursts of data while at the same time isolating workflow streams. We demonstrate our approach using the Montage workflow and develop a Reference net model of the workflow.
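
    As a rough sketch of the token-bucket regulation described above (names, rates, and capacities are illustrative, not taken from the paper), each workflow stage could gate its data injection as follows:

```python
import time

class TokenBucket:
    """Minimal token bucket: tokens accrue at `rate` per second up to
    `capacity`; a stage may inject n bytes only if n tokens are available.
    Bursts up to `capacity` pass through, while sustained throughput is
    capped at `rate`, isolating one workflow stream from another."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # refill rate (bytes per second)
        self.capacity = capacity      # maximum burst size (bytes)
        self.tokens = capacity
        self.last = time.monotonic()

    def try_consume(self, n: float) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if n <= self.tokens:
            self.tokens -= n
            return True
        return False                  # caller waits and retries before injecting

# One bucket per workflow-instance stage, e.g. 10 MB/s sustained, 2 MB bursts:
bucket = TokenBucket(rate=10e6, capacity=2e6)
if bucket.try_consume(512_000):
    pass  # inject the 512 kB chunk into the shared resource
```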

    Design and evaluation of elastic media resource allocation algorithms using CloudSim extensions

    With the maturity of Cloud computing comes research into converting a range of traditionally best-effort programs into cloud-enabled services. One such service, currently under investigation in the Elastic Media Distribution (EMD) project, is enabling high-quality, reliable, and scalable real-time media collaboration using proven Cloud technology. While existing best-effort solutions provide plenty of features, they do not provide the quality guarantees and reliability required for critical services in globally distributed corporations. On the other hand, some costly dedicated solutions do offer these low-delay, reliable collaboration services, but without the scalability benefits that clouds can bring. In this paper we describe results attained in the EMD project on novel resource provisioning algorithms for a mixture of end-to-end audio/video streams and file-based transfers, allowing for configurable trade-offs between service response time and cost. We extended the CloudSim simulator with models that let us simulate collaborative interactive sessions (more specifically, educational real-time collaboration) and evaluated the performance of our proposed provisioning heuristics. The results show that the proposed dynamic algorithm allows for an automated cost-performance trade-off, reducing average total Virtual Machine (VM) cost by up to 58% compared to more naive approaches while keeping the average time for clients to join a meeting in line.
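
    As a hedged, much-simplified sketch of a dynamic cost/response-time trade-off in this spirit (the capacity model, function names, and stub below are invented for illustration and are not the paper's algorithm): lease a new VM only when the current pool is saturated, so cost stays low at the price of occasional boot-time waits.

```python
import itertools

_vm_ids = itertools.count()

def lease_vm() -> str:
    """Stub for a cloud API call that boots a new VM (slow, costs money)."""
    return f"vm-{next(_vm_ids)}"

def assign_session(load: dict[str, int], capacity: int) -> str:
    """Place a new collaboration session on the least-loaded VM that still
    has spare capacity; lease a fresh VM only when the pool is saturated.
    How much headroom to keep is the cost/join-time knob a dynamic
    provisioning algorithm tunes."""
    candidates = [vm for vm, n in load.items() if n < capacity]
    if candidates:
        vm = min(candidates, key=load.__getitem__)
    else:
        vm = lease_vm()               # the client's join time absorbs the boot delay
        load[vm] = 0
    load[vm] += 1
    return vm

load: dict[str, int] = {}
print(assign_session(load, capacity=4))   # leases and returns "vm-0"
```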

    Collaborative Business Process Execution on Blockchain: A System

    Nowadays, organizations are pressed to collaborate in order to take advantage of their complementary capabilities and to provide best-of-breed products and services to their customers. To do so, organizations need to manage business processes that span beyond their organizational boundaries. Such processes are called collaborative business processes. One of the main roadblocks to implementing collaborative business processes is the lack of trust between the participants. Blockchain provides a decentralized ledger that cannot be tampered with and that supports the execution of programs called smart contracts. These features make it possible to execute collaborative processes between untrusted parties without relying on a central authority. However, implementing collaborative business processes directly on a blockchain is cumbersome, error-prone, and requires specialized skills. In contrast, established Business Process Management Systems (BPMSs) provide convenient abstractions for the rapid development of process-oriented applications. This thesis addresses the problem of automating the execution of collaborative business processes on top of blockchain technology in a way that takes advantage of the trust-enhancing capabilities of this technology while offering the development convenience of traditional BPMSs. The thesis also addresses the question of how to support scenarios in which new parties may be onboarded at runtime, and in which parties need the flexibility to change the default routing logic of the business process. We explore architectural approaches and modelling concepts, formulating design principles and requirements that are implemented in a novel blockchain-based BPMS named CATERPILLAR. The CATERPILLAR system supports two methods to implement, execute, and monitor blockchain-based processes: compiled and interpreted. It also supports two mechanisms for controlled flexibility, whereby participants can collectively decide to update the process during its execution and to grant and revoke parties' access.
    https://www.ester.ee/record=b536494
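
    CATERPILLAR's compiled approach turns a process model into smart contract code; as a toy analogue only (the process, roles, and addresses below are invented, and real contracts are not written in Python), the resulting logic has the shape of a guarded state machine in which only the participant bound to a task may complete it:

```python
class ProcessContract:
    """Toy analogue of a compiled process contract: the set of enabled
    tasks is explicit state, and completing a task is guarded by both
    control flow and the caller's role binding. On a blockchain this
    state lives in a smart contract no single party can tamper with."""

    # task -> (authorized role, tasks enabled once it completes)
    FLOW = {
        "submit_order":  ("buyer",    ["confirm_order"]),
        "confirm_order": ("supplier", ["ship_goods"]),
        "ship_goods":    ("supplier", []),
    }

    def __init__(self, bindings: dict[str, str]):
        self.bindings = bindings      # role -> participant address
        self.enabled = {"submit_order"}

    def complete(self, task: str, caller: str) -> None:
        role, successors = self.FLOW[task]
        if task not in self.enabled:
            raise ValueError(f"{task} is not enabled")
        if self.bindings[role] != caller:
            raise PermissionError(f"{caller} is not bound to role {role}")
        self.enabled.remove(task)
        self.enabled.update(successors)

p = ProcessContract({"buyer": "0xAAA", "supplier": "0xBBB"})
p.complete("submit_order", "0xAAA")   # ok: buyer starts the process
p.complete("confirm_order", "0xBBB")  # ok: supplier confirms
```

    Onboarding a new party at runtime then amounts to updating the role bindings, which is the kind of change the controlled-flexibility mechanisms put to a collective decision.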

    Accelerating Malware Detection via a Graphics Processing Unit

    Real-time malware analysis requires scanning large amounts of stored data for suspicious files. This is a time-consuming process that requires a large amount of processing power and often affects other applications running on a personal computer. This research investigates the viability of using Graphics Processing Units (GPUs), present in many personal computers, to take over part of the workload normally processed by the standard Central Processing Unit (CPU). Three experiments are conducted using an industry-standard GPU, the NVIDIA GeForce 9500 GT card. The goal of the first experiment is to find the optimal number of threads per block for calculating an MD5 file hash. The goal of the second experiment is to find the optimal number of threads per block for searching an MD5 hash database for matches. In the third experiment, the size of the executable, the executable type (benign or malicious), and the processing hardware are varied in a full factorial experimental design. The experiment records whether the file is benign or malicious and measures the time required to identify the executable. This information is used to compare the performance of GPU hardware against CPU hardware. Experimental results show that a GPU can calculate an MD5 signature hash and scan a database of malicious signatures 82% faster than a CPU for files between 0 and 96 kB; if the file size is increased to 97-192 kB, the GPU is 85% faster than the CPU. This demonstrates that the GPU can provide a substantial performance increase over a CPU. These results could help achieve faster anti-malware products, faster network intrusion detection system response times, and faster firewall applications.
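
    As a plain-CPU reference for the pipeline the experiments accelerate (the file and signature set below are illustrative), the per-file work is a hash followed by a database lookup; the GPU variant distributes exactly these two steps across thread blocks, with threads-per-block as the tuning parameter of the first two experiments:

```python
import hashlib
from pathlib import Path

def md5_of(path: Path, chunk: int = 1 << 16) -> str:
    """Stream a file through MD5 on the CPU; this digest computation is
    what the study offloads to the GPU."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def is_malicious(path: Path, signatures: set[str]) -> bool:
    """Classify a file by exact-match lookup of its MD5 digest against a
    database of known-malicious signatures."""
    return md5_of(path) in signatures

signatures = {"44d88612fea8a8f36de82e1278abb02f"}  # well-known EICAR test-file MD5
demo = Path("sample.bin")
demo.write_bytes(b"not actually malware")
print(is_malicious(demo, signatures))              # False
```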

    Adding value to Linked Open Data using a multidimensional model approach based on the RDF Data Cube vocabulary

    Most organisations using Open Data currently focus on data processing and analysis. However, although Open Data may be available online, these data are generally of poor quality, discouraging others from contributing to and reusing them. This paper describes an approach to publishing statistical data from public repositories using Semantic Web standards published by the W3C, such as RDF and SPARQL, in order to facilitate the analysis of multidimensional models. We have defined a framework covering the entire lifecycle of data publication, including a novel Linked Open Data assessment step and the use of external repositories as a knowledge base for data enrichment. As a result, users can interact with data generated according to the RDF Data Cube vocabulary, which allows general users to avoid the complexity of SPARQL when analysing data. A use case applied to the Barcelona Open Data platform revealed the benefits of our approach, such as support for the decision-making process. This work was supported in part by the Spanish Ministry of Science, Innovation and Universities through the Project ECLIPSE-UA under grant RTI2018-094283-B-C32.
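
    A minimal sketch of what publishing a single statistical observation with the RDF Data Cube vocabulary can look like using the rdflib library (the example.org namespace, property names, and figures are invented, not the paper's actual Barcelona dataset):

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

QB = Namespace("http://purl.org/linked-data/cube#")   # W3C RDF Data Cube
EX = Namespace("http://example.org/bcn/")             # hypothetical base URI

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

# One observation: a population figure for one reference period.
obs = EX["obs/2019-population"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["dataset/population"]))
g.add((obs, EX.refPeriod, Literal("2019", datatype=XSD.gYear)))
g.add((obs, EX.population, Literal(1620000, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```

    Once data are in this shape, generic cube browsers can slice and aggregate by dimension, which is how general users avoid writing SPARQL by hand.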

    A Framework for Developing Context-Aware Systems

    In ubiquitous computing, the constraints of the environment are often regarded as static, while software applications are expected to function in a mobile ecospace. In context-aware systems, however, the environmental attributes of software applications change dynamically. This dynamism of contexts must be accounted for in order for services to have their intended effect on the application. Consequently, context-aware software applications should perceive their context continuously and seamlessly adapt to it. This thesis investigates the process of constructing context-aware applications and identifies the main challenges in this domain. The two principal requirements are (1) formally defining what context is and expressing the enclosed semantics, and (2) formally defining dynamic compositions of adaptations and triggering their responses to changes in the environmental context. The thesis proposes a component-based architecture for a Context-aware Framework that can be used to bring awareness capabilities into applications. Two languages are formally designed: one to formally express situations, leading to a context reasoner, and another to formally express workflows, leading to the timely triggering of reactions and the enforcement of policies. With these formalisms and a component design that can itself be formalized, the thesis provides a formal approach to constructing context-aware applications. A proof-of-concept case study is implemented to examine the expressiveness of the framework design and to test its implementation.
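
    A hedged sketch of the reasoning loop such a framework implies (the thesis defines proper formal languages; the names and lambdas here are merely illustrative): situations are predicates over context attributes, re-evaluated on every context change, and an adaptation fires when a situation starts to hold:

```python
from typing import Callable

Context = dict[str, object]
Situation = Callable[[Context], bool]

# A "situation" is a named predicate over context attributes.
in_meeting: Situation = lambda ctx: (
    ctx.get("location") == "meeting_room" and ctx.get("calendar_busy") is True
)

class ContextReasoner:
    """Re-evaluates registered situations whenever the sensed context
    changes and fires the bound adaptation exactly on a false->true edge,
    so reactions are triggered once per situation entry."""

    def __init__(self):
        self.rules: list[tuple[Situation, Callable[[], None]]] = []
        self.state: list[bool] = []

    def register(self, situation: Situation, adaptation: Callable[[], None]):
        self.rules.append((situation, adaptation))
        self.state.append(False)

    def update(self, ctx: Context):
        for i, (situation, adaptation) in enumerate(self.rules):
            holds = situation(ctx)
            if holds and not self.state[i]:
                adaptation()          # react on entering the situation
            self.state[i] = holds

r = ContextReasoner()
r.register(in_meeting, lambda: print("muting notifications"))
r.update({"location": "meeting_room", "calendar_busy": True})
```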

    Open Source Software Governance: A Case Study Evaluation of Supply Chain Management Best Practices

    Corporate open source governance aims to manage the increasing use of free/libre and open source software (FLOSS) in companies. To avoid the risks of ungoverned use, companies need to establish processes addressing license compliance, component approval, and supply chain management (SCM). We proposed a set of industry-inspired best practices for supply chain management, organized into a handbook. To evaluate the handbook, we ran a one-year case study at a large enterprise software company, where we performed semi-structured interviews, workshops, and direct observations. We assessed the initial situation of open source governance, the implementation of the proposed SCM best practices, and the resulting impact. We report the results of this study by presenting and discussing the artifacts created while the case study company implemented the SCM-focused governance process. The evaluation case study enabled the real-life application and improvement of the proposed best practices.