
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    AI: Limits and Prospects of Artificial Intelligence

    The emergence of artificial intelligence has triggered enthusiasm and promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attendant economic hopes and fears, utopias and dystopias that are associated with the current and future development of artificial intelligence.

    Efficient Model Checking: The Power of Randomness


    Robust and Listening-Efficient Contention Resolution

    This paper shows how to achieve contention resolution on a shared communication channel using only a small number of channel accesses -- both for listening and sending -- and the resulting algorithm is resistant to adversarial noise. The shared channel operates over a sequence of synchronized time slots, and in any slot agents may attempt to broadcast a packet. An agent's broadcast succeeds if no other agent broadcasts during that slot. If two or more agents broadcast in the same slot, then the broadcasts collide and all of them fail. An agent listening on the channel during a slot receives ternary feedback, learning whether that slot had silence, a successful broadcast, or a collision. Agents are (adversarially) injected into the system over time. The goal is to coordinate the agents so that each is able to successfully broadcast its packet. A contention-resolution protocol is measured both in terms of its throughput and the number of slots during which an agent broadcasts or listens. Most prior work assumes that listening is free and only tries to minimize the number of broadcasts. This paper answers two foundational questions. First, is constant throughput achievable when using polylogarithmic channel accesses per agent, both for listening and broadcasting? Second, is constant throughput still achievable when an adversary jams some slots by broadcasting noise in them? Specifically, for N packets arriving over time and J jammed slots, we give an algorithm that, with high probability in N+J, guarantees Θ(1) throughput and achieves on average O(polylog(N+J)) channel accesses against an adaptive adversary. We also have per-agent high-probability guarantees on the number of channel accesses -- either O(polylog(N+J)) or O((J+1) polylog(N)), depending on how quickly the adversary can react to what is being broadcast.
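    To make the channel model above concrete, here is a minimal Python sketch of the ternary-feedback shared channel paired with a naive slotted-ALOHA-style strategy; it is purely illustrative and is not the paper's algorithm, and the parameter broadcast_prob and the accounting of channel accesses are assumptions of this sketch.

```python
import random

SILENCE, SUCCESS, COLLISION = "silence", "success", "collision"

def run_slot(agents, broadcast_prob):
    """One synchronized slot: each pending agent broadcasts with probability broadcast_prob."""
    senders = [a for a in agents if random.random() < broadcast_prob]
    if not senders:
        return SILENCE, None
    if len(senders) == 1:
        return SUCCESS, senders[0]   # exactly one sender: its packet gets through
    return COLLISION, None           # two or more senders: all broadcasts fail

def simulate(num_agents=64, broadcast_prob=1 / 64, max_slots=100_000):
    """Naive strategy: every pending agent listens in every slot."""
    pending = list(range(num_agents))
    accesses = 0
    for slot in range(max_slots):
        accesses += len(pending)                 # each pending agent pays one access to listen
        feedback, winner = run_slot(pending, broadcast_prob)
        if feedback == SUCCESS:
            pending.remove(winner)
        if not pending:
            return slot + 1, accesses
    return max_slots, accesses

slots, accesses = simulate()
print(f"all packets delivered after {slots} slots and {accesses} channel accesses")
```

    In this naive strategy every agent pays a channel access in every slot just to listen; the point of the paper is that constant throughput is achievable with only polylogarithmic accesses per agent, even when some slots are jammed.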

    RackBlox: A Software-Defined Rack-Scale Storage System with Network-Storage Co-Design

    Software-defined networking (SDN) and software-defined flash (SDF) have been serving as the backbone of modern data centers. They are managed separately to handle I/O requests. At first glance, this is a reasonable design that follows rack-scale hierarchical design principles. However, it suffers from suboptimal end-to-end performance due to the lack of coordination between SDN and SDF. In this paper, we co-design the SDN and SDF stack by redefining the functions of their control plane and data plane, and splitting them up within a new architecture named RackBlox. RackBlox decouples the storage management functions of flash-based solid-state drives (SSDs) and allows the SDN to track and manage the states of SSDs in a rack. Therefore, we can enable state sharing between SDN and SDF and facilitate global storage resource management. RackBlox has three major components: (1) coordinated I/O scheduling, in which it dynamically adjusts the I/O scheduling in the storage stack with the measured and predicted network latency, such that it can coordinate the effort of I/O scheduling across the network and storage stack to achieve predictable end-to-end performance; (2) coordinated garbage collection (GC), in which it coordinates the GC activities across the SSDs in a rack to minimize their impact on incoming I/O requests; (3) rack-scale wear leveling, in which it enables global wear leveling among SSDs in a rack by periodically swapping data, achieving improved device lifetime for the entire rack. We implement RackBlox using programmable SSDs and a programmable switch. Our experiments demonstrate that RackBlox can reduce the tail latency of I/O requests by up to 5.8x over state-of-the-art rack-scale storage systems. Comment: 14 pages. Published in the ACM SIGOPS 29th Symposium on Operating Systems Principles (SOSP'23).
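    As a rough illustration of the rack-scale wear-leveling component described above, the sketch below has a rack controller compare per-SSD wear and move a hot data segment off the most-worn device; the names (Ssd, rebalance, WEAR_GAP) and the swap policy are assumptions of this sketch, not RackBlox's actual interfaces or algorithm.

```python
from dataclasses import dataclass, field

WEAR_GAP = 0.10  # trigger a swap when wear differs by more than 10%

@dataclass
class Ssd:
    name: str
    erase_cycles: int            # total erases performed so far
    rated_cycles: int = 3000     # endurance rating
    hot_segments: list = field(default_factory=list)
    cold_segments: list = field(default_factory=list)

    @property
    def wear(self) -> float:
        return self.erase_cycles / self.rated_cycles

def rebalance(rack: list[Ssd]) -> None:
    """One wear-leveling round: move hot data off the most-worn SSD in the rack."""
    most = max(rack, key=lambda s: s.wear)
    least = min(rack, key=lambda s: s.wear)
    if most.wear - least.wear <= WEAR_GAP or not most.hot_segments:
        return
    # Swap a hot segment from the worn SSD with a cold one from the fresher SSD,
    # so future rewrites of the hot data land on the less-worn device.
    least.hot_segments.append(most.hot_segments.pop())
    if least.cold_segments:
        most.cold_segments.append(least.cold_segments.pop())

rack = [Ssd("ssd0", 2400, hot_segments=["h1", "h2"]),
        Ssd("ssd1", 900, cold_segments=["c1"])]
rebalance(rack)
print([(s.name, s.hot_segments, s.cold_segments) for s in rack])
```

    A real deployment would also have to coordinate the data movement with GC and in-flight I/O, which is exactly the coordination across SDN and SDF that the paper argues for.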

    Providing Private and Fast Data Access for Cloud Systems

    Cloud storage and computing systems have become the backbone of many applications such as streaming (Netflix, YouTube), storage (Dropbox, Google Drive), and computing (Amazon Elastic Computing, Microsoft Azure). To address the ever-growing demand for storage and computing requirements of these applications, cloud services are typically implemented over a large-scale distributed data storage system. Cloud systems are expected to provide the following two pivotal services for the users: 1) private content access and 2) fast content access. The goal of this thesis is to understand and address some of the challenges that need to be overcome to provide these two services.

    The first part of this thesis focuses on private data access in distributed systems. In particular, we contribute to the areas of Private Information Retrieval (PIR) and Private Computation (PC). In the PIR problem, there is a user who wishes to privately retrieve a subset of files belonging to a database stored on a single or multiple remote server(s). In the PC problem, the user wants to privately compute functions of a subset of files in the database. The PIR and PC problems seek the most efficient solutions with the minimum download cost that enable the user to retrieve or compute what it wants privately. We establish fundamental bounds on the minimum download cost required for guaranteeing the privacy requirement in some practical and realistic settings of the PIR and PC problems, and develop novel and efficient privacy-preserving algorithms for these settings. In particular, we study the single-server and multi-server settings of PIR in which the user initially has a random linear combination of a subset of files in the database as side information, referred to as PIR with coded side information. We also study the multi-server setting of PC in which the user wants to privately compute multiple linear combinations of a subset of files in the database, referred to as Private Linear Transformation.

    The second part of this thesis focuses on fast content access in distributed systems. In particular, we study the use of erasure coding to handle data access requests in distributed storage and computing systems. The service rate region is an important performance metric for coded distributed systems, which expresses the set of all data access request rates that can be simultaneously served by the system. In this context, two classes of problems arise: 1) characterizing the service rate region of a given storage scheme and finding the optimal request allocation, and 2) designing the underlying erasure code to handle a given desired service rate region. As contributions along the first class of problems, we characterize the service rate region of systems with some common coding schemes such as Simplex codes and Reed-Muller codes by introducing two novel techniques: 1) fractional matching and vertex cover on the graph representation of codes, and 2) geometric representations of codes. Moreover, along the second class of code design problems, we establish lower bounds on the minimum storage required to handle a desired service rate region for a coded distributed system, and in some regimes we design efficient storage schemes that provide the desired service rate region while minimizing the storage requirements.
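    To make the PIR problem above concrete, here is the classic two-server XOR scheme (in the spirit of Chor et al.) as a small Python example; it assumes two non-colluding servers that hold the same bit database, and it is only meant to illustrate the problem statement, not the coded-side-information or Private Linear Transformation schemes developed in the thesis.

```python
import secrets

def query(n: int, i: int):
    """User: build the two queries for retrieving bit i of an n-bit database."""
    s1 = {j for j in range(n) if secrets.randbits(1)}   # uniformly random subset
    s2 = s1 ^ {i}                                       # same subset with index i toggled
    return s1, s2

def answer(db: list[int], subset: set[int]) -> int:
    """Server: XOR of the requested positions; the subset alone reveals nothing about i."""
    out = 0
    for j in subset:
        out ^= db[j]
    return out

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
s1, s2 = query(len(db), i)
recovered = answer(db, s1) ^ answer(db, s2)
assert recovered == db[i]
print("privately retrieved bit", i, "=", recovered)
```

    Each server sees only a uniformly random subset of indices, so neither learns which bit the user wanted, yet XORing the two one-bit answers recovers it; the download cost of such schemes is exactly what the bounds in the thesis aim to minimize.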

    Next-Generation Industrial Control System (ICS) Security: Towards ICS Honeypots for Defence-in-Depth Security

    The advent of Industry 4.0 and smart manufacturing has led to an increased convergence of traditional manufacturing and production technologies with IP communications. Legacy Industrial Control System (ICS) devices are now exposed to a wide range of previously unconsidered threats, which must be addressed to ensure the safe operation of industrial processes, especially as cyberspace is becoming a popular domain for nation-state operations, including those against critical infrastructure. Honeypots are a well-known concept within traditional IT security, and they can enable a more proactive approach to security than traditional defensive systems; however, more work needs to be done to understand their usefulness within OT and critical infrastructure. This thesis advances beyond current honeypot implementations and furthers the state of the art by delivering novel ways of deploying ICS honeypots and concrete answers to key research questions within the area. It does so by addressing the question raised above from a multitude of perspectives. We discuss relevant legislation, such as the UK Cyber Assessment Framework, the US NIST Framework for Improving Critical Infrastructure Cybersecurity, and associated industry-based standards and guidelines supporting operator compliance. Standards and guidance are used to frame a discussion on our survey of existing ICS honeypot implementations in the literature and their role in supporting regulatory objectives. However, these deployments are not always correctly configured and might differ from a real ICS. Based on these insights, we propose a novel framework for the classification and implementation of ICS honeypots. This is underpinned by a study into the passive identification of ICS honeypots using Internet scanner data to identify honeypot characteristics. We also present how honeypots can be leveraged to identify when bespoke ICS vulnerabilities are exploited within the organisational network, further strengthening the case for honeypot usage within critical infrastructure environments. Additionally, we demonstrate a fundamentally different approach to the deployment of honeypots: deploying them as a deterrent to reduce the likelihood that an adversary interacts with a real system. This is important as skilled attackers are now adept at fingerprinting and avoiding honeypots. The results presented in this thesis demonstrate that honeypots can provide several benefits to the cyber security of, and regulatory alignment within, the critical infrastructure environment.
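    For readers unfamiliar with honeypots, the following is a minimal low-interaction sketch in Python: a listener on a Modbus/TCP-style port that logs every probe without responding. It only illustrates the basic concept and is not the classification framework or deterrent deployment proposed in the thesis; the port number and log format are assumptions (the real Modbus port 502 typically requires elevated privileges to bind).

```python
import datetime
import socket

PORT = 5020
LOG_FILE = "honeypot.log"

def log(event: str) -> None:
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(LOG_FILE, "a") as fh:
        fh.write(f"{stamp} {event}\n")

def serve() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen()
        log(f"honeypot listening on port {PORT}")
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.settimeout(2.0)
                try:
                    first_bytes = conn.recv(256)   # capture the probe, send nothing back
                except socket.timeout:
                    first_bytes = b""
                log(f"connection from {addr[0]}:{addr[1]} sent {first_bytes!r}")

if __name__ == "__main__":
    serve()
```

    A production ICS honeypot would additionally emulate protocol behaviour convincingly enough to resist the fingerprinting discussed above, which is precisely what makes realistic deployment hard.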

    SoK: Collusion-resistant Multi-party Private Set Intersections in the Semi-honest Model

    Private set intersection protocols allow two parties with private sets of data to compute the intersection between them without leaking other information about their sets. These protocols have been studied for almost 20 years and have been significantly improved over time, reducing both their computation and communication costs. However, when more than two parties want to compute a private set intersection, these protocols are no longer applicable. While extensions to the multi-party case exist, they are significantly less efficient than the two-party protocols. It remains an open question to design collusion-resistant multi-party private set intersection (MPSI) protocols that come close to the efficiency of two-party protocols. This work is made more difficult by the immense variety in the proposed schemes and the lack of systematization. Moreover, each new work only considers a small subset of previously proposed protocols, leaving out important developments from older works. Finally, MPSI protocols rely on many possible constructions and building blocks that have not been summarized. This work aims to point protocol designers to gaps in research and promising directions, highlighting common security flaws and sketching a frame of reference. To this end, we focus on the semi-honest model. We conclude that current MPSI protocols are not a one-size-fits-all solution; instead, there exist many protocols that each prevail in their own application setting.
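    As background for the two-party case discussed above, the sketch below shows a toy semi-honest PSI based on commutative (Diffie-Hellman-style) blinding of hashed items; the parameters are deliberately tiny and insecure, and this is an illustrative textbook-style construction, not one of the MPSI protocols surveyed in this SoK.

```python
import hashlib
import secrets

P = 1019                 # toy safe prime, P = 2*Q + 1 (far too small for real use)
Q = (P - 1) // 2

def hash_to_group(item: str) -> int:
    """Hash an item into the subgroup of quadratic residues mod P."""
    h = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return pow(h % P, 2, P) or 1

def blind(values, key):
    """Raise each group element to a secret exponent (commutative blinding)."""
    return [pow(v, key, P) for v in values]

alice_items = ["alice@example.com", "bob@example.com", "carol@example.com"]
bob_items = ["bob@example.com", "dave@example.com"]

a_key = secrets.randbelow(Q - 1) + 1     # Alice's secret exponent
b_key = secrets.randbelow(Q - 1) + 1     # Bob's secret exponent

# Round 1: each party blinds its own hashed items and sends them over.
a_blinded = blind([hash_to_group(x) for x in alice_items], a_key)   # H(x)^a
b_blinded = blind([hash_to_group(y) for y in bob_items], b_key)     # H(y)^b

# Round 2: each party blinds the other's values with its own key; because
# exponentiation commutes, items in the intersection produce matching values.
b_double = blind(a_blinded, b_key)       # H(x)^(a*b), order matches alice_items
a_double = set(blind(b_blinded, a_key))  # H(y)^(b*a)

intersection = {x for x, v in zip(alice_items, b_double) if v in a_double}
print(intersection)                      # {'bob@example.com'}
```

    Even this two-party construction hints at why the multi-party, collusion-resistant setting is harder: every additional party adds rounds of blinded exchanges, and colluding parties can pool their views of the protocol.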

    Regulating competition in the digital network industry: A proposal for progressive ecosystem regulation

    The digital sector is a cornerstone of the modern economy, and regulating digital enterprises can be considered the new frontier for regulators and competition authorities. To capture and address the competitive dynamics of digital markets we need to rethink our (competition) laws and regulatory strategies. The thesis develops new approaches to regulating digital markets by viewing them as part of a network industry. By combining insights from our experiences with existing regulation in telecommunications with insights from economics literature and management theory, the thesis concludes by proposing a new regulatory framework called ‘progressive ecosystem regulation’. The thesis is divided into three parts and has three key findings or contributions.

    The first part explains why digital platforms such as Google’s search engine, Meta’s social media platforms and Amazon’s Marketplace are prone to monopolization. Here, the thesis develops a theory of ‘digital natural monopoly’, which explains why competition in digital platform markets is likely to lead to concentration by its very nature.

    The second part of the thesis puts forward that competition in digital markets persists, even if there is monopoly in a market. Here, the thesis develops a conceptual framework for competition between digital ecosystems, which consist of groups of actors and products. Digital enterprises compete to carve out a part of the digital network industry where they can exert control, and their strong position in a platform market can be used offensively or defensively to steer competition between ecosystems. The thesis then sets out four phases of ecosystem competition, which help to explain when competition in the digital network industry is healthy and when it is likely to become problematic.

    The third and final part of the thesis brings these findings together and draws lessons from our experiences of regulating the network industry for telecommunications. Based on the insights developed in the thesis, it puts forward a proposal for ‘progressive ecosystem regulation’. The purpose of this regulation is to protect entrants from large digital ecosystems and empower them so that they can develop new products and innovate disruptively. This regulatory framework would create three regulatory pools: a heavily regulated pool, a lightly regulated pool and an entrant pool. The layered regulatory framework allows regulators to adjust relatively quickly who receives protection under the regulation and who faces the burdens, so that the regulatory framework reflects the fast pace of innovation and the changing nature of digital markets. With this proposal, the thesis challenges and enriches our existing notions of regulation and specifically how we should regulate digital markets.

    Durability and Availability of Erasure-Coded Storage Systems with Concurrent Maintenance

    The initial version of this document was written back in 2014 for the sole purpose of providing the fundamentals of reliability theory as well as identifying the theoretical machinery for the prediction of durability/availability of erasure-coded storage systems. Since the definition of a "system" is too broad, we specifically focus on warm/cold storage systems where the data is stored in a distributed fashion across different storage units, with or without continuous operation. The contents of this document are dedicated to a review of fundamentals, a few major improved stochastic models, and several contributions of my work relevant to the field. One of the contributions of this document is the introduction of the most general form of Markov models for the estimation of mean time to failure. This work was later partially published in IEEE Transactions on Reliability. Very good approximations for the closed-form solutions of this general model are also investigated. Various storage configurations under different policies are compared using such advanced models. In a subsequent chapter, we also consider multi-dimensional Markov models to address detached drive-medium combinations such as those found in optical disk and tape storage systems. It is not hard to anticipate that such a system structure would most likely be part of future DNA storage libraries. This work is partially published in Elsevier Reliability and System Safety. Topics that include simulation modeling for more accurate estimations are covered towards the end of the document, noting the deficiencies of the simplified canonical as well as more complex Markov models, due mainly to the stationary and static nature of Markovianity. Throughout the document, we focus on concurrently maintained systems, although the discussions change only slightly for systems repaired one device at a time. Comment: 58 pages, 20 figures, 9 tables. arXiv admin note: substantial text overlap with arXiv:1911.0032
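    As a pointer to the kind of Markov-model computation the document reviews, the sketch below estimates the mean time to failure of a single group of n devices tolerating f concurrent failures, with per-device failure rate lam and concurrent repairs at per-device rate mu, by solving for the mean time to absorption of the resulting birth-death chain. This is the simplified canonical model rather than the general model introduced in the document, and all parameter values below are made up.

```python
import numpy as np

def mttf(n: int, f: int, lam: float, mu: float) -> float:
    """Mean time to data loss: expected time until f+1 devices are failed at once."""
    states = f + 1                        # transient states: 0..f failed devices
    Q = np.zeros((states, states))        # generator restricted to the transient states
    for i in range(states):
        fail = (n - i) * lam              # rate of the next device failure
        repair = i * mu                   # concurrent repair: any of the i failed devices finishes
        if i + 1 < states:
            Q[i, i + 1] = fail            # from state f, a further failure is absorbing (data loss)
        if i > 0:
            Q[i, i - 1] = repair
        Q[i, i] = -(fail + repair)
    t = np.linalg.solve(-Q, np.ones(states))   # mean time to absorption from each state
    return t[0]

# Example: 10 devices with an (8+2) code (tolerates 2 failures), one failure
# per device-year, repairs completing in about a day (rate 365 per year).
print(f"MTTF is roughly {mttf(n=10, f=2, lam=1.0, mu=365.0):.3g} years")
```

    The same linear-algebra recipe, solving (-Q) t = 1 over the transient states, extends to the richer multi-dimensional chains discussed above, at the cost of a larger state space.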