Easy Freshness with Pequod Cache Joins
Pequod is a distributed application-level key-value cache that supports declaratively defined, incrementally maintained, dynamic, partially-materialized views. These views, which we call cache joins, can simplify application development by shifting the burden of view maintenance onto the cache. Cache joins define relationships among key ranges; using cache joins, Pequod calculates views on demand, incrementally updates them as required, and in many cases improves performance by reducing client communication. To build Pequod, we had to design a view abstraction for volatile, relationless key-value caches and make it work across servers in a distributed system. Pequod performs as well as other in-memory key-value caches and, like those caches, outperforms databases with view support.
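The abstract describes cache joins as declared relationships between key ranges, with updates propagated into the derived range as base keys change. A minimal sketch of that idea, in hypothetical code (the class, key layout, and `add_join` API are invented for illustration and are not Pequod's actual interface):

```python
# Hypothetical sketch of the cache-join idea (not Pequod's real syntax):
# a derived key range is declared over a base key range, and writes to the
# base range are eagerly propagated into the derived range.

class MiniCache:
    def __init__(self):
        self.store = {}       # flat key-value store
        self.joins = []       # (dst_prefix, src_prefix, transform)

    def add_join(self, dst_prefix, src_prefix, transform):
        """Declare that keys under dst_prefix are derived from src_prefix."""
        self.joins.append((dst_prefix, src_prefix, transform))

    def put(self, key, value):
        self.store[key] = value
        # Eagerly propagate the update into any derived range.
        for dst, src, fn in self.joins:
            if key.startswith(src):
                suffix = key[len(src):]
                self.store[dst + suffix] = fn(value)

    def scan(self, prefix):
        return {k: v for k, v in self.store.items() if k.startswith(prefix)}

cache = MiniCache()
# Derived range "feed|alice|" mirrors posts under "post|bob|".
cache.add_join("feed|alice|", "post|bob|", lambda v: v.upper())
cache.put("post|bob|1", "hello")
cache.put("post|bob|2", "sea story")
print(cache.scan("feed|alice|"))
# {'feed|alice|1': 'HELLO', 'feed|alice|2': 'SEA STORY'}
```

The client only ever reads the derived range; the maintenance work happens inside the cache on write, which is the burden-shifting the abstract describes.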
Une approche flexible et décentralisée du traitement de requêtes dans les systèmes géo-distribués (A flexible and decentralised approach to query processing in geo-distributed systems)
This thesis studies the design of query processing systems across a diversity of geo-distributed settings. Optimising performance metrics such as response time, freshness, or operational cost involves design decisions, such as what derived state (e.g., indexes, materialised views, or caches) to maintain, and how to distribute and where to place the corresponding computation and state. These metrics are often in tension, and the trade-offs depend on the specific application and/or environment. This requires the ability to adapt the query engine's topology and architecture, and the placement of its components. This thesis makes the following contributions:
- A flexible architecture for geo-distributed query engines, based on components connected in a bidirectional acyclic graph.
- A common microservice abstraction and API for these components, the Query Processing Unit (QPU). A QPU encapsulates some primitive query processing task. Multiple QPU types exist, which can be instantiated and composed into complex graphs.
- A model for constructing modular query engine architectures as a distributed topology of QPUs, enabling flexible design and trade-offs between performance metrics.
- Proteus, a QPU-based framework for constructing and deploying query engines.
- Representative deployments of Proteus and an experimental evaluation thereof.
Materialisierte Views in verteilten Key-Value Stores (Materialized views in distributed key-value stores)
Distributed key-value stores have become the solution of choice for warehousing large volumes of data. However, their architecture is not suitable for real-time analytics. To achieve the required velocity, materialized views can be used to provide summarized data for fast access. The main challenge, then, is the incremental, consistent maintenance of views at large scale. Thus, we introduce our View Maintenance System (VMS) to maintain SQL queries in a data-intensive real-time scenario.
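The core of incremental view maintenance is retracting a row's old contribution and applying its new one, rather than recomputing the view. A minimal sketch of that pattern for a grouped SUM view (illustrative only; the class and method names are invented, and VMS itself targets distributed stores rather than an in-process dictionary):

```python
# Minimal sketch of incremental maintenance of an aggregate view over a
# key-value table: the view tracks SELECT group, SUM(value) GROUP BY group,
# updated in place as base rows are inserted or overwritten.
from collections import defaultdict

class SumView:
    def __init__(self):
        self.base = {}                 # row_key -> (group, value)
        self.view = defaultdict(int)   # group   -> running sum

    def put(self, row_key, group, value):
        # Retract the old contribution, if any, then apply the new one.
        if row_key in self.base:
            old_group, old_value = self.base[row_key]
            self.view[old_group] -= old_value
        self.base[row_key] = (group, value)
        self.view[group] += value

v = SumView()
v.put("r1", "books", 10)
v.put("r2", "books", 5)
v.put("r1", "books", 3)    # overwrite: the view drops 10 and adds 3
print(dict(v.view))        # {'books': 8}
```

Scaling this retract-then-apply step consistently across a distributed store, where base updates arrive concurrently on many servers, is the challenge the abstract identifies.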
Easy Freshness with Pequod Cache Joins
This thesis presents the design of Pequod, a distributed, application-level Web cache. Web developers store data in application-level caches to avoid expensive operations on persistent storage. While useful for reducing the latency of data access, an application-level cache adds complexity to the application. The developer is responsible for keeping the cached data consistent with persistent storage. This consistency task can be difficult and costly, especially when the cached data represent the derived output of a computation.
Pequod improves on the state-of-the-art by introducing an abstraction, the cache join, that caches derived data without requiring extensive consistency-related application maintenance. Cache joins provide a mechanism for filtering, joining, and aggregating cached data. Pequod assumes the responsibility for maintaining cache freshness by automatically applying updates to derived data as inputs change over time.
This thesis describes how cache joins are defined using a declarative syntax to overlay a relational data model on a key-value store, how cache data are generated on demand and kept fresh with a combination of eager and lazy incremental updates, how Pequod uses the memory and computational resources of multiple machines to grow the cache, and how the correctness of derived data is maintained in the face of eviction.
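The combination of eager and lazy updates mentioned above can be illustrated with a small sketch (hypothetical code, not Pequod's implementation): a cheap delta is applied to the materialized result eagerly, while an invalidated view is only rebuilt on the next read.

```python
# Hedged sketch of mixing eager and lazy view maintenance: small updates
# patch the cached result in place; invalidation defers the rebuild until
# the view is actually read.

class LazyView:
    def __init__(self, compute):
        self.compute = compute   # rebuilds the view from base data
        self.cached = None
        self.stale = True

    def invalidate(self):
        # Lazy path: defer all work until the next read.
        self.stale = True

    def apply_delta(self, delta):
        # Eager path: patch the materialized result in place.
        if not self.stale:
            self.cached = self.cached + [delta]

    def read(self, base):
        if self.stale:
            self.cached = self.compute(base)
            self.stale = False
        return self.cached

base = [1, 2, 3]
view = LazyView(lambda b: [x * 2 for x in b])
print(view.read(base))      # [2, 4, 6], rebuilt lazily on first read
base.append(4)
view.apply_delta(8)         # eager incremental patch
print(view.read(base))      # [2, 4, 6, 8], no full rebuild needed
```

The trade-off is the one the thesis studies: eager updates keep reads fast but pay on every write, while lazy rebuilds batch work at the cost of read-time latency.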
We show through experimentation that cache joins can be used to improve the performance of Web applications that cache derived data. We find that moving computation and maintenance tasks into the cache, where they can often be performed more efficiently, accounts for the majority of the improvement.
Maritime expressions: a corpus-based exploration of maritime metaphors
This study uses a purpose-built corpus to explore the linguistic legacy of Britain's maritime history, found in the form of hundreds of specialised "Maritime Expressions" (MEs), such as TAKEN ABACK, ANCHOR and ALOOF, that permeate modern English. Selecting just those expressions commencing with "A", it analyses 61 MEs in detail and describes the processes by which these technical expressions, from a highly specialised occupational discourse community, have made their way into modern English. The Maritime Text Corpus (MTC) comprises 8.8 million words, encompassing a range of text types and registers, selected to provide a cross-section of "maritime" writing. It is analysed using WordSmith analytical software (Scott, 2010), with the 100 million-word British National Corpus (BNC) as a reference corpus. Using the MTC, a list of keywords of specific salience within the maritime discourse has been compiled and, using frequency data, concordances and collocations, these MEs are described in detail and their use and form in the MTC and the BNC are compared. The study examines the transformation from ME to figurative use in the general discourse, in terms of form and metaphoricity. MEs are classified according to their metaphorical strength and their transference from maritime usage into new registers and domains such as business, politics, sports and reportage. A revised model of metaphoricity is developed and a new category of figurative expression, the "resonator", is proposed. Additionally, developing the work of Lakoff and Johnson, Kövecses and others on Conceptual Metaphor Theory (CMT), a number of Maritime Conceptual Metaphors are identified and their cultural significance is discussed.
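The keyword-compilation step described above, comparing a word's frequency in the study corpus against a reference corpus, is commonly computed with the log-likelihood keyness statistic. A sketch of that calculation (the word and its counts are invented for illustration; only the corpus sizes, 8.8M and 100M words, come from the abstract):

```python
# Sketch of keyness scoring: Dunning's log-likelihood compares a word's
# observed frequencies in two corpora against the frequencies expected
# if the word were equally distributed across both.
import math

def log_likelihood(freq_study, size_study, freq_ref, size_ref):
    """Log-likelihood keyness for one word across two corpora."""
    total = size_study + size_ref
    expected_study = size_study * (freq_study + freq_ref) / total
    expected_ref = size_ref * (freq_study + freq_ref) / total
    ll = 0.0
    for observed, expected in ((freq_study, expected_study),
                               (freq_ref, expected_ref)):
        if observed > 0:
            ll += observed * math.log(observed / expected)
    return 2 * ll

# Invented example: a word occurring 90 times in an 8.8M-word corpus
# versus 120 times in a 100M-word reference corpus.
score = log_likelihood(90, 8_800_000, 120, 100_000_000)
print(round(score, 1))   # well above the p < 0.01 threshold of 6.63
```

A score above the conventional critical value (6.63 at p < 0.01) flags the word as a keyword of the study corpus, which is how a salience list like the one described here is typically filtered.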
A Holmes and Doyle Bibliography, Volume 9: All Formats - Combined Alphabetical Listing
This bibliography is a work in progress. It attempts to update Ronald B. De Waal's comprehensive bibliography, The Universal Sherlock Holmes, but does not claim to be exhaustive in content. New works are continually discovered and added to this bibliography. Readers and researchers are invited to suggest additional content. This volume contains all listings in all formats, arranged alphabetically by author or main entry. In other words, it combines the listings from Volume 1 (Monograph and Serial Titles), Volume 3 (Periodical Articles), and Volume 7 (Audio/Visual Materials) into a comprehensive bibliography. (There may be additional materials included in this list, e.g. duplicate items and items not yet fully edited.) As in the other volumes, coverage of this material begins around 1994, the final year covered by De Waal's bibliography, but may not yet be totally up-to-date (given the ongoing nature of this bibliography). It is hoped that other titles will be added at a later date. At present, this bibliography includes 12,594 items.