
    A New Application of Demineralised Bone as a Tendon Graft

    Tendon injuries present a challenging situation for orthopaedic surgeons. In severe injuries, a tendon transfer or a tendon graft is usually used. The aim is to find a biocompatible substance with mechanical and structural properties that replicate those of normal tendon. Because of its structural and mechanical properties, we propose that demineralised cortical bone (DCB) can be used in the repair of tendon and ligament, as well as for the regeneration of the enthesis. I hypothesise that DCB grafted in a tendon environment will remodel into tendon and produce a fibrocartilaginous enthesis. DCB was prepared according to a modified Urist technique, and the effect of gamma irradiation and/or freeze-drying on the tensile strength of the DCB was examined. In the second part of the study, four models of repair of a patellar tendon defect were examined for their strength to failure in order to identify a suitable technique for an in vivo animal model. In the final part of the study, a preclinical animal study was performed using DCB as a tendon graft to treat a defect in the sheep patellar tendon. Animals were allowed to mobilise immediately post-operatively and were sacrificed after 12 weeks. Force plate analysis, radiography, pQCT scans and histological analyses were performed. My results show that DCB remodelled into a ligament-like structure with evidence of a neo-enthesis. There was no evidence of ossification; instead, the retrieved DCB was cellularised and vascularised, with evidence of crimp and of integration into the patellar tendon. These results demonstrate that DCB can be used as a biological tendon graft; this new application of demineralised bone has the potential to address one of the most challenging injuries in orthopaedic surgery. Combined with the correct surgical technique, early mobilisation can be achieved, resulting in the remodelling of the DCB into a normal tendon structure.

    Improving the scalability of cloud-based resilient database servers

    Many now rely on public cloud infrastructure-as-a-service for database servers, mainly by pushing the limits of existing pooling and replication software to operate large shared-nothing virtual server clusters. Yet it is unclear whether this is still the best architectural choice, particularly when the cloud infrastructure provides seamless virtual shared storage and bills clients on actual disk usage. This paper addresses the challenge with Resilient Asynchronous Commit (RAsC), an improvement to a well-known shared-nothing design based on the assumption that a much larger number of servers is required for scale than for resilience. We then compare this proposal to other database server architectures using an analytical model focused on peak throughput and conclude that it provides the best performance/cost trade-off while at the same time addressing a wide range of fault scenarios.
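    As a rough illustration of the kind of peak-throughput analytical model the abstract refers to, the Python sketch below compares throughput per dollar for a shared-nothing cluster and a RAsC-style design over shared storage. All server counts, rates, overheads, and prices are hypothetical placeholders, not figures from the paper.

    ```python
    # Illustrative peak-throughput/cost comparison for two database server
    # architectures. Every parameter is a hypothetical placeholder.

    def peak_throughput(servers: int, per_server_tps: float, repl_overhead: float) -> float:
        """Aggregate peak transactions/s, discounted by replication overhead."""
        return servers * per_server_tps * (1.0 - repl_overhead)

    def monthly_cost(servers: int, server_cost: float, storage_cost: float) -> float:
        """Total monthly cost: per-server rental plus billed storage."""
        return servers * server_cost + storage_cost

    architectures = {
        # shared-nothing: every server pays for its own full disk replica
        "shared-nothing":      dict(servers=16, per_server_tps=900, repl_overhead=0.25,
                                    server_cost=200, storage_cost=16 * 50),
        # RAsC-style: fewer synchronous replicas, shared storage billed by usage
        "rasc-shared-storage": dict(servers=16, per_server_tps=900, repl_overhead=0.10,
                                    server_cost=200, storage_cost=3 * 50),
    }

    for name, p in architectures.items():
        tps = peak_throughput(p["servers"], p["per_server_tps"], p["repl_overhead"])
        cost = monthly_cost(p["servers"], p["server_cost"], p["storage_cost"])
        print(f"{name:>22}: {tps:8.0f} tps, ${cost:6.0f}/mo, {tps / cost:.1f} tps/$")
    ```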

    Scalable transactions in the cloud: partitioning revisited

    Lecture Notes in Computer Science, vol. 6427. Cloud computing is becoming one of the most widely used paradigms for deploying highly available and scalable systems. These systems usually demand the management of huge amounts of data, which cannot be handled by traditional or replicated database systems as we know them. Recent solutions store data in special key-value structures, in an approach that commonly lacks the consistency provided by transactional guarantees, which is traded for high scalability and availability. In order to ensure consistent access to the information, the use of transactions is required. However, it is well known that traditional replication protocols do not scale well in a cloud environment. Here we examine current proposals for deploying transactional systems in the cloud and propose a new system that aims to be a step forward towards this goal. We then focus on data partitioning and describe the key role it plays in achieving high scalability. This work has been partially supported by the Spanish Government under grant TIN2009-14460-C03-02, by the Spanish MEC under grant BES-2007-17362, and by project ReD Resilient Database Clusters (PDTC/EIA-EIA/109044/2008).
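    As a hedged illustration of why partitioning matters for scalability, the sketch below routes keys to partitions by hashing, so that single-partition transactions can commit without any cross-node coordination. The partition count and routing scheme are assumptions for illustration, not the design proposed in the paper.

    ```python
    # Minimal hash-partitioning sketch: route each key (and the transactions
    # that touch it) to one of N partitions, so independent partitions can
    # proceed in parallel without distributed commit.
    import hashlib

    NUM_PARTITIONS = 8  # assumed value for illustration

    def partition_of(key: str) -> int:
        """Stable hash routing: the same key always maps to the same partition."""
        digest = hashlib.sha1(key.encode()).digest()
        return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

    def is_single_partition(tx_keys: list[str]) -> bool:
        """Single-partition transactions avoid cross-node coordination entirely."""
        return len({partition_of(k) for k in tx_keys}) == 1

    print(partition_of("customer:42"))                       # some partition id
    print(is_single_partition(["customer:42", "order:17"]))  # True or False
    ```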

    Specification and Implementation of Dynamic Web Site Benchmarks

    The absence of benchmarks for Web sites with dynamic content has been a major impediment to research in this area. We describe three benchmarks for evaluating the performance of Web sites with dynamic content. The benchmarks model three common types of dynamic-content Web sites with widely varying application characteristics: an online bookstore, an auction site, and a bulletin board. For the online bookstore, we use the TPC-W specification. For the auction site and the bulletin board, we provide our own specifications, modeled after ebay.com and slashdot.org, respectively. For each benchmark we describe the design of the database and the interactions provided by the Web server. We have implemented these three benchmarks with a variety of methods for building dynamic-content applications, including PHP, Java servlets and EJB (Enterprise Java Beans). In all cases, we rely on widely used open-source software. We also provide a client emulator that allows a dynamic-content Web server to be driven with various workloads. Our implementations are freely available from our Web site for other researchers to use. These benchmarks can be used for research in dynamic Web and application server design. In this paper, we provide one example of such use, namely discovering the bottlenecks for applications in a particular server configuration. Other possible uses include studies of clustering and caching for dynamic content, comparison of different application implementation methods, and studies of the effect of different workload characteristics on server performance. With these benchmarks we hope to provide a common reference point for studies in these areas.
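    The sketch below shows one common shape for such a client emulator: closed-loop clients that alternate requests with randomized think times, in the style of TPC-W emulated browsers. The endpoints, interaction mix, and timing parameters are assumed for illustration and are not the benchmarks' actual specifications.

    ```python
    # Closed-loop client emulator sketch: each emulated client repeatedly
    # picks an interaction, issues the request, then "thinks" before the
    # next one. Endpoints and parameters are illustrative assumptions.
    import random, threading, time, urllib.request

    BASE_URL = "http://localhost:8080"                    # assumed server address
    INTERACTIONS = ["/home", "/search", "/item", "/buy"]  # assumed endpoints
    WEIGHTS = [0.4, 0.3, 0.2, 0.1]                        # assumed interaction mix
    MEAN_THINK_S = 7.0                                    # TPC-W-style think time

    def emulated_client(n_requests: int) -> None:
        for _ in range(n_requests):
            path = random.choices(INTERACTIONS, WEIGHTS)[0]
            t0 = time.monotonic()
            with urllib.request.urlopen(BASE_URL + path) as resp:
                resp.read()
            print(f"{path}: {time.monotonic() - t0:.3f}s")
            # Exponentially distributed think time with the given mean.
            time.sleep(random.expovariate(1.0 / MEAN_THINK_S))

    # Drive the server with 10 concurrent emulated clients.
    threads = [threading.Thread(target=emulated_client, args=(100,)) for _ in range(10)]
    for t in threads: t.start()
    for t in threads: t.join()
    ```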

    Prediction and predictability for search query acceleration

    A commercial web search engine shards its index among many servers, so the response time of a search query is dominated by the slowest server that processes it. Prior approaches target responsiveness by reducing the tail latency, or high-percentile response time, of an individual search server: they predict query execution time and, if a query is predicted to be long-running, run it in parallel; otherwise, it runs sequentially. These approaches are, however, not accurate enough to reduce a high tail latency when responses are aggregated from many servers, because that requires each server to reduce a substantially higher tail latency (e.g., the 99.99th percentile), which we call extreme tail latency. To address the tighter requirements of extreme tail latency, we propose a new design space for the problem that subsumes existing work and opens a new solution space. Existing work makes a prediction using features available at indexing time and focuses on optimizing prediction features for accelerating tail queries. In contrast, we identify "when to predict?" as another key optimization question. This opens up a new solution: delaying the prediction by a short duration, which allows many short-running queries to complete without parallelization and, at the same time, lets the predictor collect a set of dynamic features from runtime information. This question expands the solution space in two meaningful ways. First, we see a significant reduction of tail latency by leveraging "dynamic" features collected at runtime, which estimate query execution time with higher accuracy. Second, we can ask whether to override the prediction when its "predictability" is low; we show that considering predictability accelerates queries by achieving a higher recall. With this prediction, we accelerate the queries that are predicted to be long-running. In our preliminary work, we focused on parallelization as the acceleration scenario; we extend this to heterogeneous multicore hardware, which combines processor cores with different microarchitectures, such as energy-efficient little cores and high-performance big cores, and for which accelerating web search has remained an open problem. We evaluate the proposed prediction framework in two scenarios: (1) query parallelization on a multicore processor and (2) query scheduling on a heterogeneous processor. Our extensive evaluation shows that, for both scenarios, the proposed framework is effective in reducing the extreme tail latency compared to a state-of-the-art predictor because of its higher recall, and it improves server throughput by more than 70% because of its improved precision.
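    A minimal sketch of the "delay, then predict" idea described above: run each query sequentially for a short budget so that most short queries finish on their own, then predict from dynamic runtime features and accelerate only queries predicted to be long-running, overriding the prediction when its confidence is low. The delay budget, confidence floor, feature hook, and predictor interface are all assumptions for illustration, not the paper's actual framework.

    ```python
    # Sketch of delayed prediction for query acceleration. The thresholds,
    # the runtime-feature hook, and the predictor are illustrative assumptions.
    import time

    DELAY_BUDGET_S = 0.005    # assumed: run sequentially for 5 ms before predicting
    CONFIDENCE_FLOOR = 0.6    # assumed: below this, treat predictability as low

    def run_query(query, predictor, run_step, accelerate):
        """run_step(query) executes one sequential slice, returning True when done;
        accelerate(query) finishes the query in parallel or on big cores."""
        start = time.monotonic()
        # Phase 1: sequential execution within the delay budget. Most
        # short-running queries complete here and never pay for prediction
        # or parallelization.
        while time.monotonic() - start < DELAY_BUDGET_S:
            if run_step(query):
                return "finished within delay budget"
        # Phase 2: predict from dynamic features observed so far at runtime
        # (hypothetical hook, e.g. work completed, postings scanned).
        is_long, confidence = predictor(query.runtime_features())
        # Accelerate if predicted long-running, or override the prediction
        # when predictability is low, trading extra work for higher recall.
        if is_long or confidence < CONFIDENCE_FLOOR:
            accelerate(query)
            return "accelerated"
        while not run_step(query):
            pass
        return "finished sequentially"
    ```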

    A Proxy-Based Self-tuned Overload Control for Multi-tiered Server Systems
