81 research outputs found

    Availability Analysis of Redundant and Replicated Cloud Services with Bayesian Networks

    Full text link
    Due to the growing complexity of modern data centers, failures are no longer uncommon. Therefore, fault tolerance mechanisms play a vital role in fulfilling availability requirements. Multiple availability models have been proposed to assess compute systems, among which Bayesian network models have gained popularity in industry and research due to their powerful modeling formalism. In particular, this work focuses on assessing the availability of redundant and replicated cloud computing services with Bayesian networks. So far, research on availability has focused on modeling either infrastructure or communication failures in Bayesian networks, but has not considered both simultaneously. This work addresses practical modeling challenges of assessing the availability of large-scale redundant and replicated services with Bayesian networks, including cascading and common-cause failures from the surrounding infrastructure and communication network. To ease the modeling task, this paper introduces a high-level modeling formalism to build such a Bayesian network automatically. Performance evaluations demonstrate the feasibility of the presented Bayesian network approach to assess the availability of large-scale redundant and replicated services. This model is not only applicable in the domain of cloud computing; it can also be applied to general cases of local and geo-distributed systems. Comment: 16 pages, 12 figures, journal
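
    The core calculation such a Bayesian network automates can be illustrated with a minimal sketch: a service replicated n times whose replicas all depend on a shared infrastructure node, so a common-cause failure takes down every replica at once. The probabilities and names below are illustrative assumptions, not values or code from the paper.

    ```python
    # Minimal sketch: availability of an n-way replicated service whose
    # replicas share a common-cause infrastructure dependency. Conditioning
    # on the shared parent is the computation a Bayesian network performs.
    # All probabilities are illustrative assumptions, not paper figures.

    A_INFRA = 0.999     # P(infrastructure up) -- the shared parent node
    A_REPLICA = 0.99    # P(replica up | infrastructure up), per replica

    def replicated_availability(n: int) -> float:
        """P(at least one of n replicas is up)."""
        # If the infrastructure fails, all replicas fail (cascading failure),
        # so only the branch where the infrastructure is up contributes.
        p_all_replicas_down = (1.0 - A_REPLICA) ** n
        return A_INFRA * (1.0 - p_all_replicas_down)

    if __name__ == "__main__":
        for n in (1, 2, 3):
            print(f"{n} replica(s): {replicated_availability(n):.6f}")
    ```

    Note how availability is capped by the common-cause term: no amount of replication pushes it above A_INFRA, which is why modeling infrastructure and communication failures jointly matters.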

    Themelio: a new blockchain paradigm

    Get PDF
    Public blockchains hold great promise in building protocols that uphold security properties like transparency and consistency based on internal, incentivized cryptoeconomic mechanisms rather than preexisting trust in participants. Yet user-facing blockchain applications beyond "internal" immediate derivatives of blockchain incentive models, like cryptocurrency and decentralized finance, have not achieved widespread development or adoption. We propose that this is not primarily due to "engineering" problems in aspects such as scaling, but due to an overall lack of transferable endogenous trust—the twofold ability to uphold strong, internally-generated security guarantees and to translate them into application-level security. Yet we argue that blockchains, due to their foundation on game-theoretic incentive models rather than trusted authorities, are uniquely suited for building transferable endogenous trust, despite their current deficiencies. We then engage in a survey of existing public blockchains and the difficulties and crises that they have faced, noting that in almost every case, problems such as governance disputes and ecosystem inflexibility stem from a lack of transferable endogenous trust. Next, we introduce Themelio, a decentralized, public blockchain designed to support a new blockchain paradigm focused on transferable endogenous trust. Here, the blockchain is used as a low-level, stable, and simple root of trust, capable of sharing this trust with applications through scalable light clients. This contrasts with current blockchains, which are either applications or application execution platforms. We present evidence that this new paradigm is crucial to achieving flexible deployment of blockchain-based trust. We then describe the Themelio blockchain in detail, focusing on three areas key to its overall theme of transferable, strong endogenous trust: a traditional yet enhanced UTXO model with features that allow powerful programmability and light-client composability; a novel proof-of-stake system with unique cryptoeconomic guarantees against collusion; and Themelio's unique cryptocurrency "mel", which achieves stablecoin-like low volatility without sacrificing decentralization and security. Finally, we explore the wide variety of novel, partly off-chain applications enabled by Themelio's decoupled blockchain paradigm. This includes Astrape, a privacy-protecting off-chain micropayment network; Bitforest, a blockchain-based PKI that combines blockchain-backed security guarantees with the performance and administration benefits of traditional systems; as well as sketches of further applications.
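
    To make the light-client mechanism the abstract leans on concrete: an application that trusts only a small root hash can verify that a given piece of state is committed under it using a Merkle inclusion proof. The sketch below is a generic illustration of that primitive, assuming SHA-256 and a simple binary tree; Themelio's actual tree and proof formats are specified in the paper, not here.

    ```python
    # Generic sketch of a light-client primitive: verifying that a value is
    # committed under a small trusted root via a Merkle inclusion proof.
    # Illustrative only; not Themelio's actual data formats.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_inclusion(leaf: bytes, proof, root: bytes) -> bool:
        """proof: list of (sibling_hash, side) pairs from leaf to root,
        where side says whether the sibling sits on the 'left' or 'right'."""
        node = h(leaf)
        for sibling, side in proof:
            node = h(sibling + node) if side == "left" else h(node + sibling)
        return node == root
    ```

    The proof is logarithmic in the tree size, which is what lets a light client check individual facts about the chain without replicating it.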

    Intelligent Techniques for Managing Big Data Consistency in the Cloud

    Get PDF
    This thesis addresses the problem of Big Data consistency in the cloud. Our research examines different adaptive consistency approaches for the cloud and proposes a new approach for the edge computing environment. Consistency management has major consequences for distributed storage systems. Strong consistency models require synchronization after each update, which considerably affects system performance and availability. Conversely, weak consistency models offer better performance and higher data availability, but may tolerate too many temporary inconsistencies under certain conditions. Consequently, an adaptive consistency strategy is needed to adjust the consistency level at runtime according to the criticality of the requests or the data. This thesis makes two contributions. In the first contribution, a comparative analysis of existing adaptive consistency approaches is carried out against a defined set of comparison criteria. This kind of synthesis provides the user/researcher with a comparative analysis of the performance of existing approaches, and clarifies the suitability of these approaches for candidate cloud systems. In the second contribution, we propose MinidoteACE, a new adaptive consistency system that is an improved version of Minidote, a causal consistency system for edge applications. Unlike Minidote, which provides only causal consistency, our model also allows applications to execute queries with stronger consistency guarantees. Experimental evaluations show that throughput decreases by only 3.5% to 10% when a causal operation is replaced by a strong one. However, update latency increases significantly for strong operations, up to three times for a workload where the rate of update operations is 25%.
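
    The adaptive idea can be sketched in a few lines: each operation declares its criticality, and the client routes it either to the cheap causal path or to the expensive strongly consistent path. The class and method names below are hypothetical illustrations, not MinidoteACE's actual API.

    ```python
    # Hypothetical sketch of per-operation adaptive consistency. The store
    # object is assumed to expose sync_replicas() and apply(op).
    from enum import Enum

    class Level(Enum):
        CAUSAL = "causal"   # asynchronous replication, low latency
        STRONG = "strong"   # synchronize replicas first, higher update latency

    class AdaptiveClient:
        """Picks a consistency level per operation based on criticality."""

        def __init__(self, store):
            self.store = store

        def execute(self, op, critical: bool = False):
            level = Level.STRONG if critical else Level.CAUSAL
            if level is Level.STRONG:
                # Pay the synchronization cost up front, which is where the
                # higher update latency reported above comes from.
                self.store.sync_replicas()
            return self.store.apply(op)
    ```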

    Facets of Information Governance system at the South Africa Council for Social Service Professions

    Get PDF
    In many organisations, information governance (IG) is implemented in fragmented silos and does not add value. After realising this, the South African Council for Social Service Professions (SACSSP) embarked on a digital transformation process to modernise the organisation by implementing an information governance system. The SACSSP was experiencing challenges due to the lack of a cogent information technology (IT) system design and the disparate inherited registration, finance and external verification systems, which are not compatible with new system innovations that ensure effective and efficient operations. This study utilised Control Objectives for Information and Related Technologies (COBIT) to develop an information governance system at the SACSSP, with a view to entrenching a culture of good corporate governance. This critical emancipatory study used qualitative data collected through interviews, focus groups, observation, and system and document analysis in response to the research questions. The study was a participatory action research project that involved collaboration between the researcher and study participants in defining and solving the problem through a needs assessment exercise. To address bias, the research findings were reviewed by peers to identify things that might have been missed or gaps that were not addressed. All three phases of participatory action research were followed, namely the ‘look phase’: getting to know stakeholders so that the problem is defined on their terms and the problem definition is reflective of the community context; the ‘think phase’: interpretation and analysis of what was learned in the ‘look phase’; and the ‘act phase’: planning, implementing and evaluating based on information collected and interpreted in the first two phases. Data was analysed thematically with the use of ATLAS.ti 9 and presented in text, figures, pictures and diagrams. The key findings report on the processes the SACSSP took in identifying and implementing the IG system, covering records management, information technology, content management, data governance, information security, data privacy, risk management, litigation readiness, regulatory compliance, long-term digital preservation and even business intelligence. The results of the analysis suggest that an integrated online system implementation, including its system architecture, can be used to address issues associated with information integrity in the present and near term, with a proper IG policy and information and communication technology (ICT) infrastructure in place. It does not, however, guarantee the reliability of information in the first place, and would have several limitations as a long-term solution for maintaining digital records. The study established that there were no underlying technologies in place for the implementation of innovative technologies such as artificial intelligence (AI). The organisation's core services for social service professionals deal with registration-related matters such as registration requirements, foreign applications, registration fees, restorations and banking details. Nevertheless, the SACSSP was on the right track towards the digital transformation of the organisation. The study suggests a framework for information governance to assist professional organisations and board members in adopting a tailored governance system designed according to their needs.
It can be concluded that a successful IG system can be attained through the adoption of principles and related accountabilities with a clear strategic direction that is supported by organisational business units. The study recommends that organisations emphasise a holistic approach to IG in order to empower the board to coordinate and integrate decision-making across the organisation.

    Advances in Information Security and Privacy

    Get PDF
    With the recent pandemic emergency, many people are spending their days working remotely and have increased their use of digital resources for both work and entertainment. As a result, the amount of digital information handled online has increased dramatically, and we can observe a significant rise in the number of attacks, breaches, and hacks. This Special Issue aims to establish the state of the art in protecting information by mitigating information risks. This objective is reached by presenting both surveys on specific topics and original approaches and solutions to specific problems. In total, 16 papers have been published in this Special Issue.

    Coordination Protocols for Verifiable Consistency in Distributed Storage Systems

    Get PDF
    Achieving consistency in a highly available distributed storage system has been formally proven to be an impossible task when the system faces network partitions and faulty processes. The complexity is exacerbated when the system allows concurrent processes to send transactions to all the other servers and coordinate the consistent commitment of different transactions. In the event of a partition, each server may allow clients to request updates involving the current state of the data, which makes achieving replicated consistency challenging. To solve the inconsistency problems, several consensus protocols are used, but these have strict requirements in order to make progress and are not guaranteed to ever converge to a single value. Additionally, the coordination required to achieve consistency after a partition is extremely high, as each node must compare transaction times and conflicting data with all other servers in the system. To address inconsistency in distributed systems, this thesis proposes a new coordination protocol that relies on four ideas so that clients can verify the consistency of data: (1) a universal timestamp signatory to certify the global order of events; (2) a relative consistency indicator to determine relative consistency during partitions; (3) an operation-based, recency-weighted conflict resolution algorithm to simplify coordination for achieving global consistency; and (4) a rejection-oriented distributed transaction commit protocol to eliminate the guarantees required by atomic commit protocols and verify local consistency. This thesis evaluates and analyses various issues related to coordination and concurrency under network partitions. The experimental results demonstrate that the proposed methods provide verifiable consistency to the servers of distributed storage systems.
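
    Idea (3) lends itself to a small sketch: given conflicting updates stamped with certified global timestamps from idea (1), pick a winner by weighting each candidate's priority by how recently it was issued. The exponential half-life weighting below is an illustrative choice, not the thesis's exact algorithm.

    ```python
    # Hypothetical recency-weighted conflict resolution sketch.
    import math

    def resolve(conflicts, now, half_life=10.0):
        """Pick a winner among conflicting updates.

        conflicts: iterable of (certified_timestamp, op_priority, value)
        triples; timestamps come from a universal timestamp signatory.
        """
        def score(candidate):
            ts, priority, _value = candidate
            age = max(now - ts, 0.0)
            # Exponential decay: a candidate loses half its weight
            # every half_life time units.
            return priority * math.exp(-age * math.log(2) / half_life)

        return max(conflicts, key=score)[2]

    # An older high-priority update can still beat a fresh low-priority one.
    print(resolve([(100.0, 3.0, "A"), (105.0, 1.0, "B")], now=106.0))  # "A"
    ```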

    Data Storage and Dissemination in Pervasive Edge Computing Environments

    Get PDF
    Nowadays, smart mobile devices generate huge amounts of data in all sorts of gatherings. Much of that data has localized and ephemeral interest, but can be of great use if shared among co-located devices. However, mobile devices often experience poor connectivity, leading to availability issues if application storage and logic are fully delegated to a remote cloud infrastructure. In turn, the edge computing paradigm pushes computations and storage beyond the data center, closer to end-user devices where data is generated and consumed, enabling the execution of certain components of edge-enabled systems directly and cooperatively on edge devices. This thesis focuses on the design and evaluation of resilient and efficient data storage and dissemination solutions for pervasive edge computing environments, operating with or without access to the network infrastructure. In line with this dichotomy, our goal can be divided into two specific scenarios. The first is related to the absence of network infrastructure and the provision of a transient data storage and dissemination system for networks of co-located mobile devices. The second relates to the existence of network infrastructure access and the corresponding edge computing capabilities. First, the thesis presents time-aware reactive storage (TARS), a reactive data storage and dissemination model with intrinsic time awareness, which exploits synergies between the storage substrate and the publish/subscribe paradigm and allows queries within a specific time scope. Next, it describes in more detail: i) Thyme, a data storage and dissemination system for wireless edge environments, implementing TARS; ii) Parsley, a flexible and resilient group-based distributed hash table with preemptive peer relocation and a dynamic data sharding mechanism; and iii) Thyme GardenBed, a framework for data storage and dissemination across multi-region edge networks that makes use of both device-to-device and edge interactions. The developed solutions present low overheads while providing adequate response times for interactive usage and low energy consumption, proving to be practical in a variety of situations. They also display good load balancing and fault tolerance properties.
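
    The Parsley component can be illustrated with a minimal sketch of a group-based DHT: keys hash onto groups of devices rather than individual nodes, so any member of the responsible group can serve a shard, and peers can move between groups without remapping keys. The class below is a hypothetical simplification, not Parsley's actual design.

    ```python
    # Hypothetical group-based DHT sketch: keys map to groups, not nodes.
    import hashlib

    class GroupDHT:
        def __init__(self, num_groups: int):
            self.groups = [[] for _ in range(num_groups)]

        def join(self, node_id: str) -> None:
            # Simple balancing stand-in for preemptive peer relocation:
            # place the new device in the least-populated group.
            min(self.groups, key=len).append(node_id)

        def group_for(self, key: str) -> list:
            digest = hashlib.sha256(key.encode()).digest()
            index = int.from_bytes(digest[:8], "big") % len(self.groups)
            return self.groups[index]

    dht = GroupDHT(num_groups=4)
    for node in ("n1", "n2", "n3", "n4", "n5"):
        dht.join(node)
    print(dht.group_for("sensor/42"))  # any group member can serve the key
    ```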

    Scaling Distributed Ledgers and Privacy-Preserving Applications

    Get PDF
    This thesis proposes techniques aiming to make blockchain technologies and smart contract platforms practical by improving their scalability, latency, and privacy. The thesis starts by presenting the design and implementation of Chainspace, a distributed ledger that supports user-defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is publicly verifiable. Chainspace is scalable by sharding state, and it relies on Byzantine fault tolerance (BFT) to remain secure against subsets of nodes trying to compromise its integrity or availability properties. This thesis also introduces a family of replay attacks against sharded distributed ledgers targeting cross-shard consensus protocols; they allow an attacker, with network access only, to double-spend resources with minimal effort. We then build Byzcuit, a new cross-shard consensus protocol that is immune to those attacks and that is tailored to run at the heart of Chainspace. Next, we propose FastPay, a high-integrity settlement system for pre-funded payments that can be used as a financial side-infrastructure for Chainspace to support low-latency retail payments. This settlement system is based on Byzantine Consistent Broadcast as its core primitive, foregoing the expenses of full atomic commit channels (consensus). The resulting system has extremely low latency for both confirmation and payment finality. Finally, this thesis proposes Coconut, a selective disclosure credential scheme supporting distributed threshold issuance, public and private attributes, re-randomization, and multiple unlinkable selective attribute revelations. It ensures authenticity and availability even when a subset of credential-issuing authorities are malicious or offline, and it natively integrates with Chainspace to enable a number of scalable privacy-preserving applications.
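
    The settlement primitive is worth making concrete: with n = 3f + 1 authorities, of which at most f are Byzantine, a transfer order becomes final once 2f + 1 distinct authorities have signed it, with no inter-authority consensus round. The sketch below checks only the counting rule; signature verification and the authority names are stubbed assumptions.

    ```python
    # Quorum rule behind Byzantine-consistent-broadcast-style settlement.

    def quorum_threshold(n: int) -> int:
        """Smallest certificate size that must include an honest majority."""
        f = (n - 1) // 3      # Byzantine authorities tolerated, n = 3f + 1
        return 2 * f + 1

    def is_final(signers: set, n: int) -> bool:
        """A transfer order is final once 2f + 1 distinct authorities signed.

        Signature validity is assumed to have been checked elsewhere.
        """
        return len(signers) >= quorum_threshold(n)

    assert quorum_threshold(4) == 3                    # n = 4 -> f = 1
    assert is_final({"auth1", "auth2", "auth3"}, n=4)
    assert not is_final({"auth1", "auth2"}, n=4)
    ```

    Because any two such quorums intersect in at least one honest authority, no pre-funded account can produce two conflicting final orders, which is what replaces consensus here.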

    It's about THYME: On the design and implementation of a time-aware reactive storage system for pervasive edge computing environments

    Get PDF
    This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT-MCTES) through project DeDuCe (PTDC/CCI-COM/32166/2017), NOVA LINCS UIDB/04516/2020, and grant SFRH/BD/99486/2014, and by the European Union through project LightKone (grant agreement n. 732505). Nowadays, smart mobile devices generate huge amounts of data in all sorts of gatherings. Much of that data has localized and ephemeral interest, but can be of great use if shared among co-located devices. However, mobile devices often experience poor connectivity, leading to availability issues if application storage and logic are fully delegated to a remote cloud infrastructure. In turn, the edge computing paradigm pushes computations and storage beyond the data center, closer to end-user devices where data is generated and consumed, enabling the execution of certain components of edge-enabled systems directly and cooperatively on edge devices. In this article, we address the challenge of supporting reliable and efficient data storage and dissemination among co-located wireless mobile devices without resorting to centralized services or network infrastructures. We propose THYME, a novel time-aware reactive data storage system for pervasive edge computing environments that exploits synergies between the storage substrate and the publish/subscribe paradigm. We present the design of THYME and elaborate a three-fold evaluation, through an analytical study and both simulation and real-world experimentation, characterizing the scenarios best suited for its use. The evaluation shows that THYME allows the notification and retrieval of relevant data with low overhead and latency, as well as low energy consumption, proving to be a practical solution in a variety of situations.
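
    The time-aware reactive model can be made concrete with a small sketch: published items carry a timestamp, and a subscription only matches items whose timestamps fall inside its time scope, which is what lets late subscribers still retrieve relevant past data. The API below is a hypothetical single-node reduction, not THYME's actual interface.

    ```python
    # Hypothetical reduction of a time-aware reactive store.
    import time

    class TimeAwareStore:
        def __init__(self):
            self.items = []   # (timestamp, tag, payload) triples

        def publish(self, tag: str, payload) -> None:
            self.items.append((time.time(), tag, payload))

        def query(self, tag: str, start: float, end: float):
            """Return items matching `tag` published within [start, end].

            A real system would also push future matching publications
            reactively until `end` passes.
            """
            return [(ts, p) for ts, t, p in self.items
                    if t == tag and start <= ts <= end]

    store = TimeAwareStore()
    store.publish("photos/stadium", b"...")
    recent = store.query("photos/stadium",
                         start=time.time() - 60, end=time.time())
    ```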

    Scaling Private Collaborated Consortium Blockchains Using State Machine Replication Over Random Graphs

    Get PDF
    Blockchain technology has redefined the way the software industry's core mechanisms operate. With recent generations of improvements in blockchain, the industry is surging ahead towards replacing existing computing paradigms with consortium blockchain-enabled solutions. To this end, much research aims to bring blockchain technology's performance on par with existing systems. Most of this research involves optimizing the consensus algorithms that govern the system. One major aspect of upcoming iterations of blockchain technology is making individual consortium blockchains collaborate with other consortium blockchains to validate operations on a common set of data shared among the systems. The traditional approach requires all the organizations to run the consensus and validate the change. This approach is computationally expensive and reduces the modularity of the system. Moreover, the optimized consensus algorithms have their own specific requirements and assumptions which, if extended to all the organizations, lead to a cluttered system with high magnitudes of dependencies. This thesis proposes an architecture that leverages state machine replication extended to all the nodes of different organizations, with seamless updates over a random graph network, without involving all the nodes in the consensus. This also enables organizations to run their respective consensus algorithms depending on their requirements. The approach guarantees the finality of consistent data updates with reduced computation while offering high degrees of scalability and flexibility.
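
    The dissemination layer proposed here can be sketched briefly: once an organization's own consensus validates an update, it is flooded over a random graph so that every node's replica converges without all organizations joining the consensus round. Graph construction and fan-out below are illustrative assumptions.

    ```python
    # Illustrative gossip of validated updates over a random graph.
    import random

    def random_graph(nodes, degree):
        """Each node links to `degree` random peers (degree < len(nodes))."""
        return {n: random.sample([m for m in nodes if m != n], degree)
                for n in nodes}

    def gossip(update, origin, graph):
        """Flood an already-validated update from `origin`; return reached nodes."""
        delivered, frontier = {origin}, [origin]
        while frontier:
            node = frontier.pop()
            for peer in graph[node]:
                if peer not in delivered:
                    delivered.add(peer)   # peer applies update to its replica
                    frontier.append(peer)
        return delivered

    nodes = [f"org{i}/node{j}" for i in range(3) for j in range(4)]
    graph = random_graph(nodes, degree=3)
    print(len(gossip({"key": "v2"}, nodes[0], graph)), "nodes reached")
    ```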