
    CHECKPOINTING AND RECOVERY IN DISTRIBUTED AND DATABASE SYSTEMS

    A transaction-consistent global checkpoint of a database records a state of the database that reflects the effects of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to be part of a transaction-consistent global checkpoint of the database. This result is useful for constructing transaction-consistent global checkpoints incrementally from the checkpoints of the individual data items of a database: by applying the condition, we can start from any useful checkpoint of any data item and incrementally add checkpoints of other data items until we obtain a transaction-consistent global checkpoint of the database. The result can also help in designing non-intrusive checkpointing protocols for database systems. Based on the intuition gained from the development of the necessary and sufficient conditions, we also developed a non-intrusive, low-overhead checkpointing protocol for distributed database systems. Checkpointing and rollback recovery are also established techniques for achieving fault tolerance in distributed systems. Communication-induced checkpointing algorithms allow processes involved in a distributed computation to take checkpoints independently while forcing them to take additional checkpoints so that each checkpoint is part of a consistent global checkpoint. This thesis develops a low-overhead communication-induced checkpointing protocol and presents a performance evaluation of the protocol.
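    The requirement above can be made concrete with a minimal sketch (hypothetical names and data layout, not the thesis's protocol): a candidate global checkpoint built from per-data-item checkpoints is transaction-consistent only if every transaction is reflected atomically, i.e., either all of its writes or none of them appear across the data-item checkpoints.

```python
# Minimal sketch: verify that a set of per-data-item checkpoints reflects each
# transaction atomically (all of its writes or none). Hypothetical illustration,
# not the checkpointing protocol developed in the thesis.

def is_transaction_consistent(checkpoints, transactions):
    """checkpoints: dict mapping data item -> set of transaction ids whose
    writes are reflected in that item's checkpoint.
    transactions: dict mapping transaction id -> set of data items it wrote."""
    for txn, written_items in transactions.items():
        reflected = {item for item in written_items if txn in checkpoints.get(item, set())}
        # A partially reflected transaction rules out transaction consistency.
        if reflected and reflected != written_items:
            return False
    return True

# T1 wrote x and y, but only x's checkpoint includes T1's effect -> inconsistent.
print(is_transaction_consistent({"x": {"T1"}, "y": set()}, {"T1": {"x", "y"}}))  # False
```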

    Virtual time synchronization in distributed database systems

    Distributed systems synchronized by Virtual Time have been a topic of recent interest. Virtual Time follows an optimistic philosophy, relying on rollback for synchronization instead of abortion or blocking. Although many applications have been suggested as candidates for Virtual Time, few have been simulated or implemented. This research reports on the first implementation and results of a Distributed Database Management System synchronized by virtual time. We argue that virtual time is a viable alternative concurrency control method for distributed database systems if its memory overhead can be absorbed.
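    As a rough illustration of this optimistic, rollback-based synchronization (a minimal sketch with hypothetical names, not the implementation reported here): each process executes events speculatively at increasing virtual times, saves its state after each event, and rolls back to an earlier saved state when a message arrives with a timestamp in its virtual past.

```python
# Minimal sketch of Virtual Time style optimistic synchronization with rollback.
# Hypothetical structure for illustration; not the DDBMS implementation above.

class VirtualTimeProcess:
    def __init__(self):
        self.lvt = 0                    # local virtual time
        self.state = {}                 # current state
        self.saved = [(0, {})]          # checkpoints: (virtual time, state copy)

    def execute(self, timestamp, updates):
        """Optimistically apply an event carrying the given virtual timestamp."""
        if timestamp < self.lvt:        # straggler in the virtual past: roll back
            self.rollback(timestamp)
        self.lvt = timestamp
        self.state.update(updates)
        self.saved.append((self.lvt, dict(self.state)))

    def rollback(self, timestamp):
        """Restore the latest saved state not later than the straggler's time."""
        while self.saved[-1][0] > timestamp:
            self.saved.pop()
        self.lvt, self.state = self.saved[-1][0], dict(self.saved[-1][1])

p = VirtualTimeProcess()
p.execute(10, {"x": 1})
p.execute(20, {"y": 2})
p.execute(15, {"x": 3})  # late arrival in virtual time -> rollback past t=20
print(p.lvt, p.state)    # 15 {'x': 3}
```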

    Watermarking Vision-Language Pre-trained Models for Multi-modal Embedding as a Service

    Recent advances in vision-language pre-trained models (VLPs) have significantly improved visual understanding and cross-modal analysis capabilities. Companies have emerged to provide multi-modal Embedding as a Service (EaaS) based on VLPs (e.g., CLIP-based VLPs), which requires a large amount of training data and resources to deliver a high-performance service. However, existing studies indicate that EaaS is vulnerable to model extraction attacks that can cause great losses for the owners of VLPs. Protecting the intellectual property and commercial ownership of VLPs is increasingly crucial yet challenging. A major watermarking solution for EaaS implants a backdoor in the model by inserting verifiable trigger embeddings into texts, but it is only applicable to large language models and is unrealistic due to data and model privacy. In this paper, we propose a safe and robust backdoor-based embedding watermarking method for VLPs called VLPMarker. VLPMarker utilizes an orthogonal transformation of embeddings to effectively inject triggers into the VLPs without interfering with the model parameters, which achieves high-quality copyright verification and minimal impact on model performance. To enhance watermark robustness, we further propose a collaborative copyright verification strategy based on both the backdoor trigger and the embedding distribution, enhancing resilience against various attacks. We increase the watermark's practicality via an out-of-distribution trigger selection approach, which removes the need for access to the model training data and thus makes the method applicable to many real-world scenarios. Our extensive experiments on various datasets indicate that the proposed watermarking approach is effective and safe for verifying the copyright of VLPs for multi-modal EaaS, and robust against model extraction attacks. Our code is available at https://github.com/Pter61/vlpmarker.
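    To make the core idea concrete, here is a minimal sketch of embedding watermarking via an orthogonal transformation (hypothetical dimensions and verification rule, not the VLPMarker code linked above): an orthogonal matrix preserves distances and inner products between embeddings, so downstream utility is largely retained, while the secret transformation can be checked against trigger inputs during verification.

```python
# Minimal sketch: watermark served embeddings with a secret orthogonal matrix.
# Hypothetical dimensions and verification rule; not the VLPMarker implementation.
import numpy as np

rng = np.random.default_rng(0)
dim = 512

# Secret orthogonal matrix from the QR decomposition of a random Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

def serve_embedding(raw_embedding):
    """EaaS endpoint returns the orthogonally transformed embedding.
    Distances and inner products are preserved, so utility is retained."""
    return raw_embedding @ Q

def verify(raw_trigger, served_embedding, threshold=0.99):
    """Owner checks whether a served embedding matches the secretly
    transformed trigger embedding (cosine similarity close to 1)."""
    expected = raw_trigger @ Q
    cos = expected @ served_embedding / (
        np.linalg.norm(expected) * np.linalg.norm(served_embedding))
    return cos > threshold

trigger = rng.normal(size=dim)
print(verify(trigger, serve_embedding(trigger)))  # True
```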

    PECCit: An Omniscient Debugger for Web Development

    Debugging can be an extremely expensive and time-consuming task for a software developer. To find a bug, the developer typically needs to navigate backwards through infected states and symptoms of the bug to find the initial defect. Modern debugging tools are not designed for navigating back in time and typically require the user to jump through hoops by setting breakpoints, re-executing, and guessing where errors occur. Omniscient debuggers offer back-in-time debugging capabilities to make this task easier. These debuggers trace the program, allowing the user to navigate forwards and backwards through the execution, examine variable histories, and visualize program data and control flow. Presented in this thesis is PECCit, an omniscient debugger designed for backend web development. PECCit traces web frameworks remotely and provides a browser-based IDE to navigate through the trace. The user can even watch a preview of the web page as it's being built line by line using a novel feature called capturing. For evaluation, PECCit was used to debug real-world problems provided by users of two Content Management Systems: WordPress and Drupal. In these case studies, PECCit's features and debugging capabilities are demonstrated and contrasted with standard debugging techniques.
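    A toy illustration of the omniscient-tracing idea (a minimal Python sketch with hypothetical names; PECCit itself traces backend web frameworks and exposes the trace in a browser-based IDE): record a snapshot of the local variables at every executed line so that execution can later be stepped backwards as well as forwards.

```python
# Minimal sketch of omniscient tracing: record every executed line together with
# its local variables so the history can be replayed backwards. Hypothetical
# illustration; not PECCit's tracer.
import sys

history = []  # list of (line number, snapshot of local variables)

def tracer(frame, event, arg):
    if event == "line":
        history.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def buggy(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

sys.settrace(tracer)
buggy(3)
sys.settrace(None)

# "Back-in-time" navigation: inspect earlier states, most recent first.
for lineno, snapshot in reversed(history):
    print(lineno, snapshot)
```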

    Distributed Differential Privacy and Applications

    Recent growth in the size and scope of databases has resulted in more research into making productive use of this data. Unfortunately, a significant stumbling block that remains is protecting the privacy of the individuals who populate these datasets. As people spend more time connected to the Internet and conduct more of their daily lives online, privacy becomes a more important consideration, just as the data becomes more useful for researchers, companies, and individuals. As a result, plenty of important information remains locked down and unavailable to honest researchers today, due to fears that data leakages will harm individuals. Recent research in differential privacy opens a promising pathway to guarantee individual privacy while simultaneously making use of the data to answer useful queries. Differential privacy is a theory that provides provable information-theoretic guarantees on what any answer may reveal about any single individual in the database. This approach has resulted in a flurry of recent research presenting novel algorithms that can perform a rich class of computations in this setting. In this dissertation, we focus on some real-world challenges that arise when trying to provide differential privacy guarantees in practice. We design and build runtimes that achieve the mathematical differential privacy guarantee in the face of three real-world challenges: securing the runtimes against adversaries, enabling readers to verify that the answers are accurate, and dealing with data distributed across multiple domains.
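    As background for the guarantee described above, here is a minimal sketch of the standard Laplace mechanism for a counting query (a textbook illustration, not one of the runtimes built in the dissertation): adding Laplace noise with scale equal to the query's sensitivity divided by epsilon yields epsilon-differential privacy, so the released answer reveals provably little about any single individual.

```python
# Minimal sketch of the Laplace mechanism for a count query.
# Textbook differential privacy example; not the dissertation's runtimes.
import numpy as np

rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon):
    """Differentially private count of records satisfying predicate.
    A count query has sensitivity 1: adding or removing one individual
    changes the true count by at most 1."""
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 62, 55]
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```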

    Policy interventions for safer, healthier people and communities

    This report examines the effects of transportation policies on public health in three key areas: environment and environmental public health, community design and active transportation, and motor vehicle-related injuries and fatalities. It was authored by a team headed by David R. Ragland, PhD, MPH, Director of the Safe Transportation Research and Education Center (SafeTREC) at UC Berkeley, and Phyllis Orrick, BA, SafeTREC Communications Director. The publication was made possible by cooperative agreement 3U58HM000216-05W1 between the Centers for Disease Control and Prevention and Partnership for Prevention, and through contracts with Booz Allen Hamilton and SafeTREC. Contents: 1. Policies that improve the environment and environmental public health; 2. Policies that enhance community design and promote active transportation; 3. Policies that reduce motor vehicle-related injuries and fatalities. Published 2011.

    Design Principles of Mobile Information Systems in the Digital Transformation of the Workplace - Utilization of Smartwatch-based Information Systems in the Corporate Context

    Over the last decades, smartwatches have emerged as an innovative and promising technology and hit the consumer market, driven by the availability of affordable devices and broad acceptance owing to their considerable similarity to common wristwatches. With the unique characteristics of permanent availability, unobtrusiveness, and hands-free operation, they can provide additional value in the corporate context. Thus, this thesis analyzes use cases for smartwatches in companies, elaborates on the design of smartwatch-based information systems, and covers the usability of smartwatch applications during the development of smartwatch-based information systems. It is composed of three research complexes. The first research complex focuses on the digital assistance of (mobile) employees who have to perform manual work and have so far been excluded from the benefits of digitalization because they cannot operate hand-held devices. The objective is to design smartwatch-based information systems to support workflows in the corporate context, facilitate the daily work of numerous employees, and make processes more efficient for companies. Following a design science research approach, smartwatch-based software artifacts are designed and evaluated in use cases from production, support, security services, and logistics, and a nascent design theory is proposed to complement theory in mobile information systems research. The evaluation shows that, on the one hand, smartwatches have enormous potential to assist employees with a fast and ubiquitous exchange of information, instant notifications, collaboration, and workflow guidance while being operable incidentally during manual work. On the other hand, the design of smartwatch-based information systems is a crucial factor for successful long-term deployment in companies, and in particular limitations arising from the small form factor, general conditions, employee acceptance, and legal regulations have to be addressed appropriately. The second research complex addresses smartwatch-based information systems at the office workplace. This broadens and complements the view on the utilization of smartwatches in the corporate context in addition to the mobile context described in the first research complex. Although smartwatches are designed for mobile use, their utilization in low-mobility or stationary scenarios also has benefits because they are wearable computers worn directly on the employee’s body. Various sensors can capture employee-, environment-, and thus context-related information, and the device can demand the employee’s attention with proactive notifications accompanied by a vibration. Thus, a smartwatch-based and gamified information system for health promotion at the office workplace is designed and evaluated. The third research complex takes a closer look at the usability of applications running on smartwatches, since usability is a crucial factor during the development cycle. As a supporting element for the studies within the first and second research complexes, a framework for the usability analysis of smartwatch applications is developed. For research, this thesis contributes a systematization of the state of the art of smartwatch utilization in the corporate context, enabling and inhibiting factors that influence smartwatch adoption in companies, and design principles as well as a nascent design theory for smartwatch-based information systems that support mobile employees performing manual work. For practice, it contributes possible use cases for smartwatches in companies, decision-making support for introducing smartwatch-based information systems in the corporate context with the Smartwatch Applicability Framework, situated implementations of a smartwatch-based information system for typical use cases, design recommendations for smartwatch-based information systems, an implementation of a smartwatch-based information system supporting mobile employees performing manual work, and a usability framework for smartwatches that automatically assesses the usability of existing applications and provides suggestions for improvement.

    Architectural Support for Hypervisor-Level Intrusion Tolerance in MPSoCs

    Increasingly, more aspects of our lives rely on the correctness and safety of computing systems, namely in the embedded and cyber-physical (CPS) domains, which directly affect the physical world. While systems have been pushed to their limits of functionality and efficiency, security threats and generic hardware quality have challenged their safety. Leveraging the enormous modular power, diversity, and flexibility of these systems, often deployed as multi-processor systems-on-chip (MPSoCs), requires careful orchestration of complex and heterogeneous resources, a task left to low-level software, e.g., hypervisors. In current architectures, this software forms a single point of failure (SPoF) and a worthwhile target for attacks: once compromised, adversaries can gain access to all information and full control over the platform and the environment it controls, for instance by means of privilege escalation and resource allocation. Current solutions to protect low-level software often rely on a simpler, underlying trusted layer, which is often a SPoF itself and/or exhibits degraded performance. Architectural hybridization allows for the introduction of trusted-trustworthy components which, combined with fault and intrusion tolerance (FIT) techniques leveraging replication, are capable of safely handling critical operations, thus eliminating SPoFs. Performing quorum-based consensus on all critical operations, in particular privilege management, ensures that no compromised low-level software can single-handedly manipulate privilege escalation or resource allocation to negatively affect other system resources by propagating faults or further extending an adversary’s control. However, the performance impact of traditional Byzantine fault-tolerant state-machine replication (BFT-SMR) protocols is prohibitive in the context of MPSoCs due to the high costs of cryptographic operations and the quantity of messages exchanged. Furthermore, fault isolation, one of the key prerequisites in FIT, presents a complicated challenge, given that the whole system resides within one chip in such platforms. There is so far no solution that completely and efficiently addresses the SPoF issue in critical low-level management software. Our aim, then, is to devise such a solution that, additionally, reaps the benefits of the tightly coupled nature of such manycore systems. In this thesis we present two architectures, iBFT and Midir, which use trusted-trustworthy mechanisms and consensus protocols and are capable of protecting all software layers, specifically at the low level, by performing critical operations only when a majority of correct replicas agree to their execution. Moreover, we discuss ways in which these can be used at the application level, using the example of replicated applications sharing critical data structures. It then becomes possible to confine software-level faults and some hardware faults to the individual tiles of an MPSoC, converting tiles into fault containment domains and thus enabling fault isolation and, consequently, paving the way for high-performance FIT at the lowest level.
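    The quorum requirement on privileged operations can be illustrated with a minimal sketch (hypothetical names and a simple majority rule, not the iBFT or Midir protocols themselves): a privilege change is applied only when a strict majority of hypervisor replicas votes for it, so a single compromised replica cannot escalate privileges on its own.

```python
# Minimal sketch: quorum-based approval of a privileged operation among replicas.
# Toy majority rule for illustration; not the iBFT/Midir protocols described above.

def approve_privilege_change(votes, num_replicas):
    """Apply a privilege change only if a strict majority of replicas voted for
    it, so no single compromised replica can authorize it alone."""
    approvals = sum(1 for vote in votes.values() if vote)
    return 2 * approvals > num_replicas

votes = {"tile0": True, "tile1": True, "tile2": False, "tile3": True}
if approve_privilege_change(votes, num_replicas=4):
    print("privilege change committed by quorum")
else:
    print("privilege change rejected")
```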

    Scaling In-Memory databases on multicores

    Current computer systems have evolved from featuring only a single processing unit and limited RAM, on the order of kilobytes or a few megabytes, to including several multicore processors, offering on the order of several tens of concurrent execution contexts, and main memory on the order of several tens to hundreds of gigabytes. This makes it possible to keep all the data of many applications in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions to overcome the scalability issues of IMDBs on multicores while enforcing strong isolation semantics. First, we present a solution that requires no modification to either database systems or applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master, while read-only transactions execute on slaves. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, while update-intensive workloads suffer from performance loss when compared to the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows the removal of such mechanisms from the storage sub-component. This modification offers performance improvements under all workloads when compared to the standalone engine, while scalability is limited to read-only workloads. Next, we address the scalability limitations for update-intensive workloads and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads. Scalability is limited to intensive-read and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems by studying how the order of operations inside transactions influences database performance. We then propose a Read before Write (RbW) interaction pattern, under which transactions perform all read operations before executing write operations. The RbW pattern allowed TPC-C to achieve scalable performance on our modified engine for all workloads. Additionally, the RbW pattern allowed our modified engine to achieve scalable performance on multicores, almost up to the total number of cores, while enforcing strong isolation.
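    As an illustration of the Read before Write (RbW) interaction pattern mentioned above (a hypothetical sketch of the pattern itself, with an invented toy API, not the modified engine or the TPC-C code): the transaction is structured so that all of its read operations complete before the first write is issued.

```python
# Minimal sketch of the Read before Write (RbW) transaction pattern: perform
# every read first, then every write. Hypothetical toy API for illustration;
# not the modified engine evaluated in the dissertation.

class ToyDB:
    def __init__(self, data):
        self.data = dict(data)
    def read(self, key):
        return self.data[key]
    def write(self, key, value):
        self.data[key] = value

def transfer_rbw(db, src, dst, amount):
    # Read phase: gather everything the transaction needs to read.
    src_balance = db.read(src)
    dst_balance = db.read(dst)
    if src_balance < amount:
        return False
    # Write phase: writes are issued only after all reads have completed.
    db.write(src, src_balance - amount)
    db.write(dst, dst_balance + amount)
    return True

db = ToyDB({"alice": 100, "bob": 20})
transfer_rbw(db, "alice", "bob", 30)
print(db.data)  # {'alice': 70, 'bob': 50}
```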