13 research outputs found

    Acta Cybernetica: Volume 23, Number 2.

    Interoperabilnost uslužnog računarstva pomoću aplikacijskih programskih sučelja (Interoperability of cloud computing through application programming interfaces)

    The cloud computing paradigm is being adopted by an increasing number of organizations due to significant financial savings. On the other hand, some issues hinder cloud adoption; one of the most important is vendor lock-in and the lack of interoperability that results from it. The ability to move data and applications from one cloud offering to another, and to use the resources of multiple clouds, is very important to cloud consumers. The focus of this dissertation is the interoperability of commercial platform-as-a-service providers. This cloud model was chosen because of the many incompatibilities among vendors and the lack of existing solutions. The main aim of the dissertation is to identify and address interoperability issues of platform as a service; automated data migration between different platform-as-a-service providers is a further objective of this study. The dissertation makes the following main contributions. First, a detailed ontology of the resources and remote API operations of platform-as-a-service providers was developed. This ontology is used to semantically annotate web services that invoke the providers' remote APIs, and it also defines mappings between PaaS providers. A tool was developed that uses the defined semantic web services and AI planning techniques to detect and attempt to resolve interoperability problems. An architecture for the automated migration of data between platform-as-a-service providers is presented. Finally, a methodology for detecting platform interoperability problems was proposed and evaluated in use cases.
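    The combination of ontology-annotated API operations and AI planning described in this abstract could, in very rough outline, look like the sketch below. Every provider, operation, concept, and state name here is a hypothetical illustration, not a real PaaS API or the dissertation's actual ontology; the planner is a plain breadth-first forward search over operation preconditions and effects.

```python
# Illustrative sketch only: PaaS API operations annotated with an ontology
# concept plus the state facts they require ("pre") and produce ("post").
# A simple forward-search planner finds an operation sequence that bridges
# two providers. All names are made up for the example.
from collections import deque

OPERATIONS = {
    "heroku.export_db":     {"concept": "ExportData", "pre": {"app_on_heroku"}, "post": {"data_dump"}},
    "heroku.get_config":    {"concept": "ReadConfig", "pre": {"app_on_heroku"}, "post": {"config"}},
    "openshift.create_app": {"concept": "CreateApp",  "pre": set(),             "post": {"app_on_openshift"}},
    "openshift.import_db":  {"concept": "ImportData",
                             "pre": {"data_dump", "app_on_openshift"},
                             "post": {"data_migrated"}},
}

def plan(initial, goal):
    """Breadth-first forward search: shortest operation sequence whose
    accumulated effects reach a state containing every goal fact."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, op in OPERATIONS.items():
            if op["pre"] <= state:
                nxt = frozenset(state | op["post"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no operation sequence bridges the providers

print(plan({"app_on_heroku"}, {"data_migrated"}))
# -> ['heroku.export_db', 'openshift.create_app', 'openshift.import_db']
```

    The returned sequence is one plausible migration plan; an interoperability problem would surface in this framing as the planner returning None, i.e. no sequence of annotated operations connects the two providers.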

    LIPIcs

    Fault-tolerant distributed algorithms play an important role in many critical, high-availability applications. These algorithms are notoriously difficult to implement correctly, due to asynchronous communication and the occurrence of faults such as the network dropping messages or computers crashing. Nonetheless, there is surprisingly little language and verification support for building distributed systems based on fault-tolerant algorithms. In this paper, we present some of the challenges a designer has to overcome to implement a fault-tolerant distributed system. We then review different models that have been proposed for reasoning about distributed algorithms, and sketch how such a model can form the basis of a domain-specific programming language. Adopting a high-level programming model can simplify the programmer's life and make the code amenable to automated verification, while still compiling to efficiently executable code. We conclude by summarizing the current status of an ongoing language design and implementation project based on this idea.
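    The kind of high-level model the paper alludes to can be illustrated with a toy round-based simulation in the spirit of communication-closed rounds: each process broadcasts its value, a lossy network may drop messages, and every process updates deterministically from what it heard. This is an assumption-laden sketch, not the paper's model or language.

```python
# Toy round-based execution model (illustrative, not the paper's design):
# per round, each process sends its value to all; a message survives with
# probability 1 - drop_prob; a process always "hears" its own value; the
# deterministic update rule adopts the smallest value heard this round.
import random

def run_round_model(values, rounds, drop_prob, seed=0):
    rng = random.Random(seed)  # seeded for reproducible simulations
    n = len(values)
    for _ in range(rounds):
        mailbox = [[] for _ in range(n)]  # mailbox[q] = values q heard
        for p in range(n):
            for q in range(n):
                if p == q or rng.random() > drop_prob:
                    mailbox[q].append(values[p])
        values = [min(mailbox[q]) for q in range(n)]
    return values

# With a lossless network, one round already spreads the global minimum:
print(run_round_model([5, 3, 9, 1], rounds=1, drop_prob=0.0))  # -> [1, 1, 1, 1]
# With drops, convergence becomes probabilistic:
print(run_round_model([5, 3, 9, 1], rounds=4, drop_prob=0.3))
```

    Structuring an algorithm this way makes the state space round-by-round and deterministic given the set of delivered messages, which is exactly the property that makes such models amenable to automated verification.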

    Clouder: a flexible large scale decentralized object store

    MAP-i Doctoral Programme in Informatics (Programa Doutoral em Informática MAP-i). Large-scale data stores were initially introduced to support a few concrete extreme-scale applications, such as social networks. Their scalability and availability requirements often outweigh the sacrifice of richer data and processing models, and even of elementary data consistency. In strong contrast with traditional relational databases (RDBMS), large-scale data stores present very simple data models and APIs, lacking most of the established relational data-management operations, and they relax consistency guarantees, providing only eventual consistency. With a number of alternatives now available and mature, there is an increasing willingness to use them in a wider and more diverse spectrum of applications, skewing the current trade-off towards the needs of common business users and easing the migration from current RDBMS. This is particularly so when they are used in the context of a cloud solution such as a Platform as a Service (PaaS). This thesis aims at reducing the gap between traditional RDBMS and large-scale data stores by seeking mechanisms that provide additional consistency guarantees and higher-level data-processing primitives in large-scale data stores. The devised mechanisms should not hinder the scalability and dependability of large-scale data stores. Regarding higher-level data-processing primitives, the thesis explores two complementary approaches: extending data stores with additional operations, such as general multi-item operations; and coupling data stores with other existing processing facilities without hindering scalability. We address these challenges with a new architecture for large-scale data stores, efficient multi-item access for large-scale data stores, and SQL processing atop large-scale data stores. The novel architecture allows the right trade-offs to be found among flexible usage, efficiency, and fault tolerance. To support multi-item access efficiently, we extend first-generation large-scale data stores' data models with tags and a multi-tuple data-placement strategy that allows large sets of related data to be stored and retrieved at once. For efficient SQL support atop scalable data stores, we devise design modifications to existing relational SQL query engines that allow them to be distributed. We demonstrate our approaches with running prototypes and an extensive experimental evaluation using suitable workloads based on real applications.
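    The tag-based placement idea described in the abstract, i.e. co-locating related items so a whole set can be fetched in one access, could be sketched roughly as follows. The node count, hashing by tag rather than by key, and all data names are assumptions made for illustration, not the thesis's actual design.

```python
# Illustrative sketch: a key-value store where each item carries a tag, and
# placement hashes the TAG (not the key), so every item sharing a tag lands
# on the same node. Reading a related set then needs one node access instead
# of a cluster-wide query.
import hashlib

NUM_NODES = 4
nodes = [dict() for _ in range(NUM_NODES)]  # one dict per storage node

def node_for(tag):
    h = int(hashlib.sha1(tag.encode()).hexdigest(), 16)
    return h % NUM_NODES

def put(key, value, tag):
    nodes[node_for(tag)][key] = (tag, value)

def get_by_tag(tag):
    # single-node scan: all items with this tag live on one node by design
    return {k: v for k, (t, v) in nodes[node_for(tag)].items() if t == tag}

put("order:1", {"total": 10}, tag="customer:42")
put("order:2", {"total": 25}, tag="customer:42")
put("order:3", {"total": 7},  tag="customer:99")
print(get_by_tag("customer:42"))
# -> {'order:1': {'total': 10}, 'order:2': {'total': 25}}
```

    The trade-off is visible even in the sketch: multi-item reads within a tag become cheap and atomic at a single node, at the cost of skewed placement when one tag groups a very large set.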

    Using Virtualisation to Protect Against Zero-Day Attacks

    Bal, H.E. [Promotor]; Bos, H.J. [Copromotor]

    Measuring the Semantic Integrity of a Process Self

    The focus of the thesis is the definition of a framework that protects a process from attacks against the process self, i.e. attacks that alter the expected behavior of the process, by integrating static analysis and run-time monitoring. The static analysis of the program returns a description of the process self consisting of a context-free grammar, which defines the legal system-call traces, and a set of invariants on process variables that must hold when a system call is issued. Run-time monitoring assures the semantic integrity of the process by checking that its behavior is coherent with the process self returned by the static analysis. The proposed framework can also cover kernel integrity, to protect the process from kernel-level attacks. The implementation of the run-time monitoring is based on introspection, a technique that analyzes the state of a computer to rebuild and check the consistency of kernel- or user-level data structures. The ability to observe the run-time values of variables reduces the complexity of the static analysis and increases the amount of information that can be extracted about the run-time behavior of the process. To make the controls transparent to the process while avoiding the introduction of special-purpose hardware units that access the memory, the run-time monitoring architecture adopts virtualization technology and introduces two virtual machines: the monitored virtual machine and the introspection virtual machine. This approach increases overall robustness because a distinct virtual machine, the introspection virtual machine, applies introspection in a transparent way, both to verify kernel integrity and to retrieve the status of the process in order to check the process self. After presenting the framework and its implementation, the thesis discusses some of its applications to increase the security of a computer network.
    The first application of the proposed framework is the remote attestation of the semantic integrity of a process. The thesis then describes a set of extensions to the framework that protect a process from physical attacks by running an obfuscated version of the process code. Finally, the thesis generalizes the framework to support the efficient sharing of an information infrastructure among users and applications with distinct security and reliability requirements by introducing highly parallel overlays.
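    The core monitoring idea, a grammar of legal system-call traces combined with invariants checked at each call, can be illustrated with a deliberately tiny example. The grammar, the invariant, and all call names below are hypothetical, chosen only to make the two-part check concrete; they are not the thesis's actual grammar or implementation.

```python
# Hypothetical grammar of legal traces:  S -> open B close ; B -> (read|write) B | empty
# The monitor accepts a trace only if (1) the grammar derives it and
# (2) every invariant on the trace holds.

def accepts(trace):
    """Check the trace against the toy grammar by direct structure matching:
    it must start with 'open', end with 'close', and contain only reads and
    writes in between."""
    if len(trace) < 2 or trace[0] != "open" or trace[-1] != "close":
        return False
    return all(call in ("read", "write") for call in trace[1:-1])

def monitor(trace, invariants):
    """Semantic-integrity check: trace derivable by the grammar AND all
    invariants satisfied."""
    return accepts(trace) and all(inv(trace) for inv in invariants)

# Example invariant (hypothetical): the process may issue at most two writes.
max_two_writes = lambda t: t.count("write") <= 2

print(monitor(["open", "read", "write", "close"], [max_two_writes]))  # True: legal trace
print(monitor(["read", "open", "close"], [max_two_writes]))           # False: violates grammar
```

    A trace rejected by either check would correspond, in the thesis's framework, to behavior incoherent with the process self, i.e. a suspected attack.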