
    SoK: Diving into DAG-based Blockchain Systems

    Blockchain plays an important role in cryptocurrency markets and technology services. However, the high latency and low scalability of classic designs hinder their adoption and application. Reconstructed blockchain systems have been proposed to avoid the competition among transactions caused by linearly sequenced blocks. These systems instead structure transactions/blocks in the form of a Directed Acyclic Graph (DAG) and consequently rebuild upper-layer components including consensus, incentives, etc. The promise of DAG-based blockchain systems is to enable fast confirmation (complete transactions within milliseconds) and high scalability (attach transactions in parallel) without significantly compromising security. However, this field still lacks systematic work that summarises the DAG technique. To bridge the gap, this Systematization of Knowledge (SoK) provides a comprehensive analysis of DAG-based blockchain systems. Through deconstructing open-sourced systems and reviewing academic research, we summarise the main components and featured properties of these systems, and present the approach to establishing a DAG. With this in hand, we analyze the security and performance of several leading systems, followed by discussions and comparisons with concurrent (scaling blockchain) techniques. We further identify open challenges to highlight the potential of DAG-based solutions and indicate promising directions for future research. Comment: Full version.
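    The structural idea can be sketched in a few lines. Below is a minimal, illustrative DAG ledger (the Tx and DagLedger names are invented here, not taken from any surveyed system): each new transaction references several current tips as parents, which is what allows transactions to attach in parallel instead of competing for a single chain position. Production systems replace the uniform tip selection shown here with weighted schemes that carry the security argument.

    import hashlib
    import random

    class Tx:
        def __init__(self, payload, parents):
            self.payload = payload
            self.parents = parents              # ids of referenced transactions
            h = hashlib.sha256(payload.encode())
            for p in parents:
                h.update(p.encode())
            self.id = h.hexdigest()

    class DagLedger:
        def __init__(self):
            genesis = Tx("genesis", [])
            self.txs = {genesis.id: genesis}
            self.tips = {genesis.id}            # transactions nobody references yet

        def attach(self, payload, k=2):
            # Uniform tip selection for brevity; real systems weight this choice.
            parents = random.sample(sorted(self.tips), min(k, len(self.tips)))
            tx = Tx(payload, parents)
            self.txs[tx.id] = tx
            self.tips -= set(parents)
            self.tips.add(tx.id)
            return tx

    ledger = DagLedger()
    for i in range(5):
        ledger.attach(f"tx-{i}")
    print(len(ledger.txs), "transactions,", len(ledger.tips), "tip(s)")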

    Packet Forwarding Algorithms in a Line Network

    We initiate a competitive analysis of packet forwarding policies for maximum and average flow in a line network. We show that the policies Earliest Arrival and Furthest-To-Go are scalable, but not constant competitive, for maximum flow. We show that there is no constant competitive algorithm for average flow.
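    The two policies are simple enough to state as packet-selection rules. The sketch below is a hypothetical rendering of them at a single buffer (the Packet fields are illustrative, not the paper's formal model): Earliest Arrival forwards the packet that has been in the network longest, while Furthest-To-Go forwards the packet with the most remaining hops.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        arrival: int    # time the packet entered the network
        dest: int       # index of its destination node on the line

    def earliest_arrival(buffer):
        # Forward the packet that has spent the longest time in the network.
        return min(buffer, key=lambda p: p.arrival)

    def furthest_to_go(buffer, node):
        # Forward the packet with the most hops left on the line.
        return max(buffer, key=lambda p: p.dest - node)

    buf = [Packet(arrival=3, dest=9), Packet(arrival=1, dest=5)]
    print(earliest_arrival(buf))        # Packet(arrival=1, dest=5)
    print(furthest_to_go(buf, node=2))  # Packet(arrival=3, dest=9)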

    Scheduling in Networks with Limited Buffers

    In networks with limited buffer capacity, packet loss can occur at a link even when the average packet arrival rate is low compared to the link's speed. To offer strong loss-rate guarantees, ISPs may need to adopt stringent routing constraints to limit the load at the network links and the routing path length. However, to simultaneously maximize revenue, ISPs should be interested in scheduling algorithms that lead to the least stringent routing constraints. This work attempts to address these needs as follows: first, by proposing an algorithm that performs well (in terms of routing constraints) on networks of output queued (OQ) routers (that is, ideal routers), and second, by bounding the extra switch fabric speed and buffer capacity required for the emulation of these algorithms in combined input-output queued (CIOQ) routers.

    The first part of the thesis studies the problem of minimizing the maximum session loss rate in networks of OQ routers. It introduces the Rolling Priority algorithm, a local online scheduling algorithm that offers superior loss guarantees compared to FCFS/Drop Tail and FCFS/Random Drop. Rolling Priority has the following properties: (1) it does not favor any session over others at any link, (2) it ensures that a proportion of packets from each session are subject to a negligibly small loss probability at every link along the session's path, and (3) it maximizes the proportion of packets subject to negligible loss probability.

    The second part of the thesis studies the emulation of OQ routers using CIOQ. The OQ routers are equipped with a buffer of capacity B packets at every output. For the family of work-conserving scheduling algorithms, we find that whereas every greedy CIOQ policy is valid for the emulation of every OQ algorithm at speedup B, no CIOQ policy is valid at speedup less than the cube root of B-2 when preemption is allowed. We also find that CCF, a well-studied CIOQ policy, is not valid at any speedup less than B. We then introduce a CIOQ policy, CEH, that is valid at speedup greater than the square root of 2(B-1).
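    For context, the two FCFS baselines that Rolling Priority is compared against differ only in what happens on overflow at a buffer of capacity B. The sketch below shows one common variant of each (Rolling Priority itself, which rotates protection across sessions, is not reproduced here):

    import random
    from collections import deque

    B = 8  # buffer capacity in packets

    def drop_tail(queue, packet):
        # FCFS/Drop Tail: an arriving packet is lost if the buffer is full.
        if len(queue) < B:
            queue.append(packet)

    def random_drop(queue, packet):
        # FCFS/Random Drop (one variant): on overflow, a uniformly random
        # queued packet is evicted and the arriving packet is admitted.
        if len(queue) < B:
            queue.append(packet)
        else:
            queue[random.randrange(len(queue))] = packet

    q = deque()
    for t in range(20):
        drop_tail(q, f"pkt-{t}")
    print(list(q))  # only the first B packets survive under Drop Tail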

    On distributed ledger technology for the internet of things: design and applications

    Distributed ledger technology (DLT) can be used to store information in such a way that no individual or organisation can compromise its veracity, contrary to a traditional centralised ledger. This nascent technology has received a great deal of attention from both researchers and practitioners in recent years, owing to the vast array of open questions related to its design and the assortment of novel applications it unlocks. In this thesis, we are especially interested in the design of DLTs suitable for application in the domain of the internet of things (IoT), where factors such as efficiency, performance and scalability are of paramount importance. This work confronts the challenges of designing IoT-oriented distributed ledgers through analysis of ledger properties, development of design tools and the design of a number of core protocol components. We begin by introducing a class of DLTs whose data structures consist of directed acyclic graphs (DAGs) and which possess properties that make them particularly well suited to IoT applications. With a focus on the DAG structure, we then present analysis through mathematical modelling and simulations which provides new insights into the properties of this class of ledgers and allows us to propose novel security enhancements. Next, we shift our focus away from the DAG structure itself to another open problem for DAG-based distributed ledgers, that of access control. Specifically, we present a networking approach which removes the need for an expensive and inefficient mechanism known as Proof of Work, solving an open problem for IoT-oriented distributed ledgers. We then draw upon our analysis of the DAG structure to integrate and test our new access control with other core components of the DLT. Finally, we present a mechanism for orchestrating the interaction between users of a DLT and its operators, seeking to improve the usability of DLTs for IoT applications. In the appendix, we present two projects also carried out during this PhD which showcase applications of this technology in the IoT domain.
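    The thesis's replacement for Proof of Work is a networking-based access control scheme whose details are not reproduced here. As a generic stand-in for the underlying idea, throttling transaction issuance without computational puzzles, the sketch below uses a token bucket whose refill rate scales with a node's assigned weight (all names and parameters are hypothetical):

    import time

    class RateLimiter:
        def __init__(self, weight, rate=10.0, burst=5.0):
            self.rate = weight * rate       # tokens per second for this node
            self.burst = burst              # maximum stored allowance
            self.allowance = burst
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.allowance = min(self.burst,
                                 self.allowance + (now - self.last) * self.rate)
            self.last = now
            if self.allowance >= 1.0:
                self.allowance -= 1.0       # spend one token per transaction
                return True
            return False

    node = RateLimiter(weight=0.2)
    print([node.allow() for _ in range(10)])  # the burst passes, then throttled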

    From online social network analysis to a user-centric private sharing system

    Online social networks (OSNs) have become a massive repository of data constructed from individuals' inputs: posts, photos, feedback, locations, etc. By analyzing such data, meaningful knowledge is generated that can affect individuals' beliefs, desires, happiness and choices: a circulation of data that starts with individuals and ends with individuals. The OSN owners, as the sole authority with full control over the stored data, make the data available for research, advertisement and other purposes. However, the individuals are left out of this circle even though they generate the data and shape the OSN's structure. In this thesis, we started by introducing approximation algorithms for finding the most influential individuals in a social graph and modeling the spread of information. To do so, we considered the communities of individuals that form in a social graph. The social graph is extracted from data stored and controlled centrally, which can cause privacy breaches and lead to individuals' concerns. Therefore, we introduced UPSS: the user-centric private sharing system, which treats individuals as the real data owners and provides secure and private data sharing on untrusted servers. UPSS's public API allows application developers to implement applications as diverse as OSNs, document redaction systems with integrity properties, censorship-resistant systems, health care auditing systems, distributed version control systems with flexible access controls and a filesystem in userspace. Accessing users' data is possible only with explicit user consent. We implemented the two latter cases to show the applicability of UPSS. Supporting different storage models in UPSS enables us to have local, remote and global filesystems in userspace with one unique core filesystem implementation, mounted with different block stores. By designing and implementing UPSS, we show that security and privacy can be addressed at the same time in systems that need selective, secure and collaborative information sharing without requiring complete trust.
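    UPSS's actual API is not reproduced here, but the "one core filesystem, many block stores" idea rests on content addressing: blocks are named by the hash of their (encrypted) content, so any backend that maps a hash to bytes can serve as a local, remote or global store. A minimal illustrative sketch, with invented names:

    import hashlib

    class MemoryBlockStore:
        """In-memory stand-in; a remote or global store needs only put/get."""

        def __init__(self):
            self._blocks = {}

        def put(self, data: bytes) -> str:
            name = hashlib.sha256(data).hexdigest()  # content address
            self._blocks[name] = data
            return name

        def get(self, name: str) -> bytes:
            return self._blocks[name]

    store = MemoryBlockStore()
    ref = store.put(b"block contents, encrypted client-side in a real system")
    assert store.get(ref) == b"block contents, encrypted client-side in a real system"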

    Resource Allocation in Networked and Distributed Environments

    A central challenge in networked and distributed systems is resource management: how can we partition the available resources in the system across competing users, such that individual users are satisfied and certain system-wide objectives of interest are optimized? In this thesis, we deal with many such fundamental and practical resource allocation problems that arise in networked and distributed environments. We invoke two sophisticated paradigms -- linear programming and probabilistic methods -- and develop provably-good approximation algorithms for a diverse collection of applications. Our main contributions are as follows.

    Assignment problems: An assignment problem involves a collection of objects and locations, and a load value associated with each object-location pair. Our goal is to assign the objects to locations while minimizing various cost functions of the assignment. This setting models many applications in manufacturing, parallel processing, distributed storage, and wireless networks. We present a single algorithm for assignment which generalizes many classical assignment schemes known in the literature. Our scheme is derived through a fusion of linear algebra and randomization. In conjunction with other ideas, it leads to novel guarantees for multi-criteria parallel scheduling, broadcast scheduling, and social network modeling.

    Precedence constrained scheduling: We consider two precedence constrained scheduling problems, namely sweep scheduling and tree scheduling, which are inspired by emerging applications in high performance computing. Through a careful use of randomization, we devise the first approximation algorithms for these problems with near-optimal performance guarantees.

    Wireless communication: Wireless networks are prone to interference. This prohibits proximate network nodes from transmitting simultaneously, and introduces fundamental challenges in the design of wireless communication protocols. We develop fresh geometric insights for characterizing wireless interference. We combine our geometric analysis with linear programming and randomization to derive near-optimal algorithms for latency minimization and throughput capacity estimation in wireless networks.

    In summary, the innovative use of linear programming and probabilistic techniques for resource allocation, and the novel ways of connecting them with application-specific ideas, is the pivotal theme and the focal point of this thesis.
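    A concrete taste of the LP-plus-randomization paradigm, as applied to assignment: solve an LP relaxation to get fractional assignments x[i][j] with sum_j x[i][j] = 1, then assign each object i to location j with probability x[i][j], so expected loads match the fractional optimum. The fractional solution below is hard-coded for illustration rather than produced by a solver:

    import random

    # Fractional LP solution: x[i][j] = fraction of object i placed at location j.
    x = [
        [0.5, 0.5, 0.0],    # object 0: split between locations 0 and 1
        [0.0, 0.25, 0.75],  # object 1: mostly location 2
        [1.0, 0.0, 0.0],    # object 2: already integral
    ]

    def round_assignment(x):
        # Independently sample one location per object; the marginals match
        # the LP solution, so expected loads equal the fractional loads.
        return [random.choices(range(len(row)), weights=row)[0] for row in x]

    print(round_assignment(x))  # e.g. [1, 2, 0]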

    LIPIcs, Volume 274, ESA 2023, Complete Volume


    Optimizing Resource Management in Cloud Analytics Services

    The fundamental challenge in the cloud today is how to build and optimize machine learning and data analytics services. Machine learning and data analytics platforms are changing computing infrastructure from expensive private data centers to easily accessible online services. These services pack user requests as jobs and run them on thousands of machines in parallel in geo-distributed clusters. The scale and the complexity of emerging jobs lead to increasing challenges for the clusters at all levels, from power infrastructure to system architecture and corresponding software framework design.

    These challenges come in many forms. Today's clusters are built on commodity hardware, and hardware failures are unavoidable. Resource competition, network congestion, and mixed generations of hardware make the hardware environment complex and hard to model and predict. Such heterogeneity becomes a crucial roadblock for efficient parallelization at both the task level and the job level. Another challenge comes from the increasing complexity of the applications. For example, machine learning services run jobs made up of multiple tasks with complex dependency structures. This complexity leads to difficulties in framework design. The scale, especially when services span geo-distributed clusters, leads to another important hurdle for cluster design. Challenges also come from the power infrastructure. Power infrastructure is very expensive and accounts for more than 20% of the total cost of building a cluster. Power-sharing optimization to maximize facility utilization and smooth peak-hour usage is another roadblock for cluster design. In this thesis, we focus on solutions for these challenges at the task level and the job level, with respect to geo-distributed data cloud design, and for power management in colocation data centers.

    At the task level, a crucial hurdle to achieving predictable performance is stragglers, i.e., tasks that take significantly longer than expected to run. Speculative execution has been widely adopted to mitigate the impact of stragglers in simple workloads. We apply straggler mitigation to approximation jobs for the first time. We present GRASS, which carefully uses speculation to mitigate the impact of stragglers in approximation jobs. GRASS's design is based on the analysis of a model we develop to capture the optimal speculation levels for approximation jobs. Evaluations with production workloads from Facebook and Microsoft Bing in an EC2 cluster of 200 nodes show that GRASS increases the accuracy of deadline-bound jobs by 47% and speeds up error-bound jobs by 38%.

    Moving from the task level to the job level, task-level speculation mechanisms are designed and operated independently of job scheduling when, in fact, scheduling a speculative copy of a task has a direct impact on the resources available for other jobs. Thus, we present Hopper, a job-level speculation-aware scheduler that integrates the tradeoffs associated with speculation into job scheduling decisions, based on a model generalized from the task-level speculation model. We implement both centralized and decentralized prototypes of the Hopper scheduler and show that 50% (66%) improvements over state-of-the-art centralized (decentralized) schedulers and speculation strategies can be achieved through the coordination of scheduling and speculation.

    As computing resources move from local clusters to geo-distributed cloud services, we expect the same transformation for data storage. We study two crucial pieces of a geo-distributed data cloud system: data acquisition and data placement. Starting by developing the optimal algorithm for the case of a data cloud made up of a single data center, we propose a near-optimal, polynomial-time algorithm for a geo-distributed data cloud in general. We show, via a case study, that the resulting design, Datum, is near-optimal (within 1.6%) in practical settings.

    Efficient power management is a fundamental challenge for data centers when providing reliable services. Power oversubscription in data centers is very common and may occasionally trigger an emergency when the aggregate power demand exceeds the capacity. We study power capping solutions for handling such emergencies in a colocation data center, where the operator supplies power to multiple tenants. We propose a novel market mechanism based on supply function bidding, called COOP, to financially incentivize and coordinate tenants' power reduction, minimizing total performance loss while satisfying multiple power capping constraints. We demonstrate that COOP is "win-win", increasing the operator's profit (through oversubscription) and reducing tenants' costs (through financial compensation for their power reduction during emergencies).
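    The common mechanism beneath GRASS and Hopper is task-level speculation: relaunch a copy of a task that looks like a straggler. The sketch below is a generic threshold rule, not the actual GRASS or Hopper policy (both of which are model-driven); the names and the 1.5x threshold are illustrative:

    import statistics

    def speculation_candidates(running, completed, threshold=1.5):
        """running: {task_id: elapsed seconds}; completed: [durations]."""
        if not completed:
            return []
        median = statistics.median(completed)
        # Flag tasks that have already run well past the typical duration.
        return [t for t, elapsed in running.items()
                if elapsed > threshold * median]

    running = {"map-07": 95.0, "map-12": 30.0, "map-19": 41.0}
    completed = [28.0, 31.0, 35.0, 29.0]
    print(speculation_candidates(running, completed))  # ['map-07']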

    Design of new algorithms for gene network reconstruction applied to in silico modeling of biomedical data

    Doctoral Programme in Biotechnology, Engineering and Chemical Technology. Research line: Engineering, Data Science and Bioinformatics. Programme code: DBI. Line code: 111.

    The root causes of disease are still poorly understood. The success of current therapies is limited because persistent diseases are frequently treated based on their symptoms rather than the underlying cause of the disease. Therefore, biomedical research is experiencing a technology-driven shift to data-driven, holistic approaches to better characterize the molecular mechanisms causing disease. Using omics data as an input, emerging disciplines like network biology attempt to model the relationships between biomolecules. To this effect, gene co-expression networks arise as a promising tool for deciphering the relationships between genes in large transcriptomic datasets. However, because of their low specificity and high false positive rate, they demonstrate a limited capacity to retrieve the disrupted mechanisms that lead to disease onset, progression, and maintenance. Within the context of statistical modeling, we dove deeper into the reconstruction of gene co-expression networks with the specific goal of discovering disease-specific features directly from expression data. Using ensemble techniques, which combine the results of various metrics, we were able to capture biologically significant relationships between genes more precisely. With the help of prior biological knowledge and the development of new network inference techniques, we were able to find de novo potential disease-specific features. Through our different approaches, we analyzed large gene sets across multiple samples and used gene expression as a surrogate marker for the inherent biological processes, reconstructing robust gene co-expression networks that are simple to explore. By mining disease-specific gene co-expression networks, we arrive at a useful framework for identifying new omics-phenotype associations from conditional expression datasets. In this sense, understanding diseases from the perspective of biological network perturbations will improve personalized medicine, impacting rational biomarker discovery, patient stratification and drug design, and ultimately leading to more targeted therapies.

    Universidad Pablo de Olavide de Sevilla. Departamento de Deporte e Informática.
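    The ensemble idea is straightforward to illustrate: score each gene pair under several correlation metrics and combine the scores, keeping only the strongest pairs as network edges. The sketch below (toy data, invented names, and only two metrics rather than a full ensemble) shows the shape of such a pipeline:

    import numpy as np

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(5, 40))  # 5 genes x 40 samples (toy data)

    def pearson(a, b):
        return float(np.corrcoef(a, b)[0, 1])

    def spearman(a, b):
        # Spearman correlation is Pearson correlation of the ranks.
        ranks = lambda v: np.argsort(np.argsort(v))
        return pearson(ranks(a), ranks(b))

    def ensemble_score(a, b):
        # Average of two metrics; a real ensemble combines many more.
        return 0.5 * (abs(pearson(a, b)) + abs(spearman(a, b)))

    edges = [(i, j, ensemble_score(expr[i], expr[j]))
             for i in range(5) for j in range(i + 1, 5)]
    edges.sort(key=lambda e: -e[2])
    print(edges[:3])  # strongest candidate co-expression edges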

    Algorithms to Explore the Structure and Evolution of Biological Networks

    High-throughput experimental protocols have revealed thousands of relationships amongst genes and proteins under various conditions. These putative associations are being aggressively mined to decipher the structural and functional architecture of the cell. One useful tool for exploring this data has been computational network analysis. In this thesis, we propose a collection of novel algorithms to explore the structure and evolution of large, noisy, and sparsely annotated biological networks. We first introduce two information-theoretic algorithms to extract interesting patterns and modules embedded in large graphs. The first, graph summarization, uses the minimum description length principle to find compressible parts of the graph. The second, VI-Cut, uses the variation of information to non-parametrically find groups of topologically cohesive and similarly annotated nodes in the network. We show that both algorithms find structure in biological data that is consistent with known biological processes, protein complexes, genetic diseases, and operational taxonomic units. We also propose several algorithms to systematically generate an ensemble of near-optimal network clusterings and show how these multiple views can be used together to identify clustering dynamics that any single solution approach would miss. To facilitate the study of ancient networks, we introduce a framework called "network archaeology" for reconstructing the node-by-node and edge-by-edge arrival history of a network. Starting with a present-day network, we apply a probabilistic growth model backwards in time to find high-likelihood previous states of the graph. This allows us to explore how interactions and modules may have evolved over time. In experiments with real-world social and biological networks, we find that our algorithms can recover significant features of ancestral networks that have long since disappeared. Our work is motivated by the need to understand large and complex biological systems that are being revealed to us by imperfect data. As data continues to pour in, we believe that computational network analysis will continue to be an essential tool towards this end.
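    Of the algorithms above, network archaeology is the easiest to convey in miniature: assume a growth model and walk backwards in time, repeatedly deleting the node judged most likely to have arrived last. The sketch below uses preferential attachment and a crude minimum-degree heuristic in place of a proper likelihood computation:

    import networkx as nx

    G = nx.barabasi_albert_graph(30, 2, seed=1)  # forward growth model
    H = G.copy()
    inferred = []
    while H.number_of_nodes() > 2:
        # Under preferential attachment, late arrivals tend to have low
        # degree, so a minimum-degree node is a plausible "last arrival".
        last = min(H.nodes, key=H.degree)
        inferred.append(last)
        H.remove_node(last)
    print("inferred arrival order (newest first):", inferred[:10])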