90 research outputs found

    Shai: Enforcing Data-Specific Policies with Near-Zero Runtime Overhead

    Data retrieval systems such as online search engines and online social networks must comply with the privacy policies of personal and selectively shared data items, regulatory policies regarding data retention and censorship, and the provider's own policies regarding data use. Enforcing these policies is difficult and error-prone. Systematic techniques to enforce policies are either limited to type-based policies that apply uniformly to all data of the same type, or incur significant runtime overhead. This paper presents Shai, the first system that systematically enforces data-specific policies with near-zero overhead in the common case. Shai's key idea is to push as many policy checks as possible to an offline, ahead-of-time analysis phase, often relying on predicted values of runtime parameters such as the state of access control lists or connected users' attributes. Runtime interception is used sparingly, only to verify these predictions and to perform any remaining policy checks. Our prototype implementation relies on efficient, modern OS primitives for sandboxing and isolation. We present the design of Shai and quantify its overheads on an experimental data indexing and search pipeline based on the popular search engine Apache Lucene.
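    The split between offline analysis and sparing runtime interception can be sketched as follows. This is an illustrative toy, not Shai's actual API: the policy representation, ACL snapshot, and function names are our assumptions.

```python
# Toy sketch of Shai's two-phase idea: an offline pass precomputes policy
# decisions from *predicted* runtime parameters (here, an ACL snapshot), and
# the runtime only verifies that the prediction still holds. If it does, the
# check is a constant-time capability lookup; otherwise the full policy runs.

def offline_analysis(items, predicted_acl):
    """Precompute per-item read capabilities from a predicted ACL state."""
    capabilities = {}
    for item, policy in items.items():
        capabilities[item] = {u for u in predicted_acl if policy(u, predicted_acl)}
    return capabilities

def runtime_check(item, user, capabilities, predicted_acl, actual_acl, policy):
    """Fast path: prediction valid -> capability lookup (near-zero overhead).
    Slow path: prediction stale -> re-run the full policy check."""
    if actual_acl == predicted_acl:          # cheap verification of prediction
        return user in capabilities[item]
    return policy(user, actual_acl)          # residual runtime policy check

# Example: a policy that grants access only to users currently on the ACL.
acl = {"alice", "bob"}
policy = lambda user, acl_state: user in acl_state
caps = offline_analysis({"doc1": policy}, acl)
```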

    Comprehensive and Practical Policy Compliance in Data Retrieval Systems

    Data retrieval systems such as online search engines and online social networks process many data items coming from different sources, each subject to its own data use policy. Ensuring compliance with these policies in a large and fast-evolving system presents a significant technical challenge since bugs, misconfigurations, or operator errors can cause (accidental) policy violations. To prevent such violations, researchers and practitioners develop policy compliance systems. Existing policy compliance systems, however, are either not comprehensive or not practical. To be comprehensive, a compliance system must be able to enforce users' policies regarding their personal privacy preferences, the service provider's own policies regarding data use such as auditing and personalization, and regulatory policies such as data retention and censorship. To be practical, a compliance system needs to meet stringent requirements: (1) runtime overhead must be low; (2) existing applications must run with few modifications; and (3) bugs, misconfigurations, or actions by unprivileged operators must not cause policy violations. In this thesis, we present the design and implementation of two comprehensive and practical compliance systems: Thoth and Shai. Thoth relies on pure runtime monitoring: it tracks data flows by intercepting processes' I/O, and then it checks the associated policies to allow only policy-compliant flows at runtime. Shai, on the other hand, combines offline analysis and lightweight runtime monitoring: it pushes as many policy checks as possible to an offline (flow) analysis by predicting the policies that data-handling processes will be subject to at runtime, and then it compiles those policies into a set of fine-grained I/O capabilities that can be enforced directly by the underlying operating system.
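    Thoth's pure runtime monitoring can be pictured as a gate on every I/O. The sketch below is a minimal illustration under our own assumptions (label-keyed policy table, exception on denial); the real interception mechanism and policy language are far richer.

```python
# Minimal sketch of Thoth-style runtime flow checking: each intercepted I/O
# names a source data label and a destination label; the flow is allowed only
# if the source's policy permits that destination.

class PolicyViolation(Exception):
    """Raised when an intercepted I/O would violate the data's policy."""

def intercepted_write(src_label, dst_label, policies):
    """Allow a data flow src -> dst only if the source's policy permits dst."""
    allowed = policies.get(src_label, set())
    if dst_label not in allowed:
        raise PolicyViolation(f"flow {src_label} -> {dst_label} denied")
    return True

# Example policy table: user photos may flow into the index and search tiers,
# but nowhere else.
policies = {"user_photos": {"index", "search"}}
```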

    Measuring and Managing Answer Quality for Online Data-Intensive Services

    Online data-intensive services parallelize query execution across distributed software components. Interactive response time is a priority, so online query executions return answers without waiting for slow running components to finish. However, data from these slow components could lead to better answers. We propose Ubora, an approach to measure the effect of slow running components on the quality of answers. Ubora randomly samples online queries and executes them twice. The first execution elides data from slow components and provides fast online answers; the second execution waits for all components to complete. Ubora uses memoization to speed up mature executions by replaying network messages exchanged between components. Our systems-level implementation works for a wide range of platforms, including Hadoop/Yarn, Apache Lucene, the EasyRec Recommendation Engine, and the OpenEphyra question answering system. Ubora computes answer quality much faster than competing approaches that do not use memoization. With Ubora, we show that answer quality can and should be used to guide online admission control. Our adaptive controller processed 37% more queries than a competing controller guided by the rate of timeouts.
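    The sample-twice-and-replay idea can be sketched in a few lines. Everything here is illustrative (the component model, the deadline-based eliding, and the toy quality metric are our assumptions, not Ubora's implementation), but it shows how memoized results from the fast execution are replayed so the mature execution only runs the components that were elided.

```python
import random

# Sketch of Ubora-style double execution: sampled queries run twice -- a fast
# execution that elides components slower than the deadline, and a "mature"
# execution that replays recorded results and waits for every component.

def execute(query, components, deadline_ms, recorded=None):
    answers, log = [], {}
    for name, (latency_ms, fn) in components.items():
        if recorded is not None and name in recorded:
            answers.append(recorded[name])           # replay memoized message
            continue
        if deadline_ms is not None and latency_ms > deadline_ms:
            continue                                 # elide the slow component
        result = fn(query)
        log[name] = result
        answers.append(result)
    return answers, log

def measure_quality(query, components, deadline_ms, sample_rate=0.1):
    fast_answers, log = execute(query, components, deadline_ms)
    if random.random() >= sample_rate:
        return fast_answers, None                    # query was not sampled
    # Mature execution: no deadline; reuse messages already exchanged.
    full_answers, _ = execute(query, components, None, recorded=log)
    quality = len(fast_answers) / max(len(full_answers), 1)  # toy metric
    return fast_answers, quality

# Example: one fast and one slow component; a 100 ms deadline elides the slow one.
components = {"rank": (10, lambda q: q + "-ranked"),
              "rerank": (500, lambda q: q + "-reranked")}
```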

    A New Application of Demineralised Bone as a Tendon Graft

    Tendon injuries present a challenging situation for orthopaedic surgeons. In severe injuries, a tendon transfer or a tendon graft is usually used. The aim is to find a biocompatible substance with mechanical and structural properties that replicate those of normal tendon. Because of its structural and mechanical properties, we propose that Demineralised Cortical Bone (DCB) can be used in the repair of tendon and ligament, as well as for the regeneration of the enthesis. I hypothesise that DCB grafted in a tendon environment will result in remodelling of the DCB into tendon and produce a fibrocartilaginous enthesis. DCB was prepared according to a modified Urist technique, and the effect of gamma irradiation and/or freeze-drying on the tensile strength of the DCB was examined. In the second part of the study, four models of repair of a patellar tendon defect were examined for their strength to failure in order to identify a suitable technique for an in vivo animal model. In the final part of the study, a preclinical animal study was performed using DCB as a tendon graft to treat a defect in the sheep patellar tendon. Animals were allowed to mobilise immediately post-operatively and were sacrificed after 12 weeks. Force plate analyses, X-ray radiography, pQCT scans and histological analyses were performed. My results show that DCB remodelled into a ligament-like structure with evidence of neo-enthesis. There was no evidence of ossification; instead, the retrieved DCB was cellularised and vascularised, with evidence of crimp and integration into the patellar tendon. My results show that DCB can be used as a biological tendon graft; this new application of demineralised bone has the potential to solve one of the most challenging injuries. Combined with the correct surgical techniques, early mobilisation can be achieved, which results in the remodelling of the DCB into a normal tendon structure.

    Acute compartment syndrome of the forearm as a rare complication of toxic epidermal necrolysis: a case report

    Introduction: Toxic epidermal necrolysis lies within the spectrum of severe cutaneous adverse reactions induced by drugs, affecting skin and mucous membranes. Toxic epidermal necrolysis is considered a medical emergency, as it is potentially fatal and carries a high mortality rate. To the best of our knowledge, the association of toxic epidermal necrolysis and compartment syndrome has rarely been mentioned in the literature. In this case we treated the compartment syndrome promptly despite the poor general condition and skin status of our patient. Despite the poor skin condition, wound healing was uneventful with no complications. Case presentation: A 62-year-old Caucasian man with a generalized macular-vesicular rash involving 90% of his body surface area and mucous membranes, as well as impaired renal and hepatic functions following ingestion of allopurinol for treatment of gout, was admitted to our hospital. Skin biopsies were taken and he was started on a steroid infusion. Within hours of admission, he developed acute compartment syndrome of the dominant forearm and hand. Conclusions: Despite its rare incidence, toxic epidermal necrolysis is a condition with a high incidence of complications and mortality. Patients with severe conditions affecting a large proportion of the skin surface area should be treated as promptly and effectively as patients with burns, with close monitoring and the anticipation that rare musculoskeletal complications might arise. The association of compartment syndrome and toxic epidermal necrolysis might lead to rapid deterioration, fatal systemic involvement and multiple organ failure.

    Analytically-Driven Resource Management for Cloud-Native Microservices

    Resource management for cloud-native microservices has attracted a lot of recent attention. Previous work has shown that machine learning (ML)-driven approaches outperform traditional techniques, such as autoscaling, in terms of both SLA maintenance and resource efficiency. However, ML-driven approaches also face challenges including lengthy data collection processes and limited scalability. We present Ursa, a lightweight resource management system for cloud-native microservices that addresses these challenges. Ursa uses an analytical model that decomposes the end-to-end SLA into per-service SLAs, and maps each per-service SLA to individual resource allocations per microservice tier. To speed up the exploration process and avoid prolonged SLA violations, Ursa explores each microservice individually, and swiftly stops exploration if latency exceeds its SLA. We evaluate Ursa on a set of representative and end-to-end microservice topologies, including a social network, media service and video processing pipeline, each consisting of multiple classes and priorities of requests with different SLAs, and compare it against two representative ML-driven systems, Sinan and Firm. Compared to these ML-driven approaches, Ursa provides significant advantages: it shortens the data collection process by more than 128x, and its control plane is 43x faster. At the same time, Ursa does not sacrifice resource efficiency or SLAs. During online deployment, Ursa reduces the SLA violation rate by 9.0% to 49.9%, and reduces CPU allocation by up to 86.2% compared to ML-driven approaches.
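    The kind of decomposition the abstract describes can be illustrated with a toy model. The proportional budgeting rule and the latency-inversely-proportional-to-CPUs assumption below are ours, not Ursa's actual analytical model; the point is only to show how an end-to-end SLA becomes per-tier budgets and then per-tier allocations.

```python
import math

# Toy SLA decomposition: split an end-to-end latency SLA across a chain of
# service tiers proportionally to each tier's measured base latency, then map
# each per-service budget to a CPU allocation assuming latency ~ base / cpus.

def decompose_sla(end_to_end_sla_ms, base_latency_ms):
    """Give each tier a latency budget proportional to its base latency."""
    total = sum(base_latency_ms.values())
    return {svc: end_to_end_sla_ms * lat / total
            for svc, lat in base_latency_ms.items()}

def allocate_cpus(per_service_sla_ms, base_latency_ms):
    """Invert the assumed latency model to get a per-tier CPU count."""
    return {svc: max(1, math.ceil(base_latency_ms[svc] / per_service_sla_ms[svc]))
            for svc in per_service_sla_ms}

# Example: a 50 ms end-to-end SLA over a three-tier chain.
base = {"frontend": 20, "backend": 60, "db": 20}
budgets = decompose_sla(50, base)
```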

    Improving the scalability of cloud-based resilient database servers

    Many now rely on public cloud infrastructure-as-a-service for database servers, mainly by pushing the limits of existing pooling and replication software to operate large shared-nothing virtual server clusters. Yet, it is unclear whether this is still the best architectural choice, namely when cloud infrastructure provides seamless virtual shared storage and bills clients on actual disk usage. This paper addresses this challenge with Resilient Asynchronous Commit (RAsC), an improvement to a well-known shared-nothing design based on the assumption that a much larger number of servers is required for scale than for resilience. We then compare this proposal to other database server architectures using an analytical model focused on peak throughput and conclude that it provides the best performance/cost trade-off while at the same time addressing a wide range of fault scenarios.
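    A peak-throughput/cost comparison of this kind can be reduced to a small formula. The model below is purely illustrative and is not the paper's analytical model: the linear scaling, the per-commit coordination penalty, and the usage-billed disk term are our assumptions.

```python
# Toy performance/cost model: hourly infrastructure cost divided by peak
# throughput. Synchronous commit pays a coordination penalty on every server;
# an asynchronous-commit design (RAsC-style) avoids that penalty, lowering
# the cost per transaction at the same scale.

def cost_per_peak_tps(n_servers, server_hour_cost, billed_disk_gb, gb_hour_cost,
                      per_server_tps, commit_penalty=0.0):
    peak_tps = n_servers * per_server_tps * (1.0 - commit_penalty)
    hourly_cost = n_servers * server_hour_cost + billed_disk_gb * gb_hour_cost
    return hourly_cost / peak_tps

# Example: 8 servers, usage-billed shared storage, a 30% synchronous-commit
# penalty vs. none for asynchronous commit.
sync_cost = cost_per_peak_tps(8, 1.0, 100, 0.01, 1000, commit_penalty=0.3)
async_cost = cost_per_peak_tps(8, 1.0, 100, 0.01, 1000, commit_penalty=0.0)
```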

    Database Replication Using Generalized Snapshot Isolation

    Generalized snapshot isolation extends snapshot isolation as used in Oracle and other databases in a manner suitable for replicated databases. While (conventional) snapshot isolation requires that transactions observe the “latest” snapshot of the database, generalized snapshot isolation allows the use of “older” snapshots, facilitating a replicated implementation. We show that many of the desirable properties of snapshot isolation remain. In particular, read-only transactions never block or abort, and they do not cause update transactions to block or abort. Moreover, under certain assumptions on the transaction workload the execution is serializable. An implementation of generalized snapshot isolation can choose which past snapshot it uses. An interesting choice for a replicated database is prefix-consistent snapshot isolation, in which the snapshot contains at least all the writes of locally committed transactions. We present two implementations of prefix-consistent snapshot isolation. We conclude with an analytical performance model of one implementation, demonstrating the benefits, in particular reduced latency for read-only transactions, and showing that the potential downsides, in particular a change in the abort rate of update transactions, are limited.
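    The prefix-consistency constraint on snapshot choice can be sketched directly. The version-numbered commit log and the selection rule below are our illustrative simplification, not the paper's implementation: a replica may serve any past snapshot, as long as it reflects every transaction committed locally.

```python
# Sketch of prefix-consistent snapshot selection: pick the oldest available
# snapshot whose version is at least the newest locally committed transaction,
# so the snapshot contains all writes of locally committed transactions.

def latest_local_commit(commit_log, replica):
    """Highest snapshot version among transactions committed at this replica."""
    return max((ver for ver, site in commit_log if site == replica), default=0)

def choose_snapshot(available_versions, commit_log, replica):
    """Oldest available snapshot that is prefix-consistent for this replica."""
    floor = latest_local_commit(commit_log, replica)
    candidates = [v for v in sorted(available_versions) if v >= floor]
    return candidates[0] if candidates else None

# Example: replica "A" committed transactions at versions 3 and 5, so snapshot
# 4 is too old for it, while replica "B" (last local commit at 4) may use it.
log = [(3, "A"), (4, "B"), (5, "A")]
```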