42 research outputs found

    Cure: Strong semantics meets high availability and low latency

    Get PDF
    Developers of cloud-scale applications face a difficult choice of which kind of storage to use, a trade-off summarised by the CAP theorem. Currently the choice is between classical CP databases, which provide strong guarantees but are slow, expensive, and unavailable under partition; and NoSQL-style AP databases, which are fast and available, but too hard to program against. We present an alternative: Cure provides the highest level of guarantees that remains compatible with availability. These guarantees include: causal consistency (no ordering anomalies), atomicity (consistent multi-key updates), and support for high-level data types (a developer-friendly API) with safe resolution of concurrent updates (guaranteeing convergence). These guarantees minimise the anomalies caused by parallelism and distribution, thus facilitating the development of applications. This paper presents the protocols for highly available transactions, and an experimental evaluation showing that Cure achieves scalability similar to eventually consistent NoSQL databases, while providing stronger guarantees.
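
    As a concrete illustration of the convergent, high-level data types the abstract refers to, the sketch below shows a state-based grow-only counter, a classic CRDT; the class and method names are illustrative and are not Cure's actual API.

        # Minimal sketch of a state-based G-Counter CRDT, the style of
        # high-level data type with safe concurrent-update resolution
        # described above. Names are illustrative, not Cure's API.
        class GCounter:
            def __init__(self, replica_id):
                self.replica_id = replica_id
                self.counts = {}  # replica id -> increments seen at that replica

            def increment(self, n=1):
                self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

            def value(self):
                return sum(self.counts.values())

            def merge(self, other):
                # Pairwise max is commutative, associative and idempotent,
                # so replicas converge regardless of message ordering.
                for rid, c in other.counts.items():
                    self.counts[rid] = max(self.counts.get(rid, 0), c)

        # Two replicas update concurrently, then exchange state.
        a, b = GCounter("a"), GCounter("b")
        a.increment(2); b.increment(3)
        a.merge(b); b.merge(a)
        assert a.value() == b.value() == 5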

    Light-Reinforced Key Intermediate for Anticoking To Boost Highly Durable Methane Dry Reforming over Single Atom Ni Active Sites on CeO2.

    Get PDF
    Dry reforming of methane (DRM) has been investigated for more than a century; the paramount stumbling block to its industrial application is the inevitable sintering of catalysts and excessive carbon deposition at high temperatures. The low-temperature DRM process, however, still suffers from poor reactivity and severe catalyst deactivation from coking. Herein, we propose a concept by which highly durable DRM can be achieved at low temperatures by combining active-site engineering with light irradiation. Active sites with Ni-O coordination (NiSA/CeO2) and Ni-Ni coordination (NiNP/CeO2) on CeO2 were constructed to obtain two targeted reaction paths that produce the key intermediate (CH3O*) for anticoking during DRM. In particular, operando diffuse reflectance infrared Fourier transform spectroscopy coupled with steady-state isotopic transient kinetic analysis (operando DRIFTS-SSITKA) was used to track the anticoking paths during the DRM process. The path from CH3* to CH3O* over NiSA/CeO2 was found to be the key path for anticoking, and this targeted path was reinforced by light irradiation during the DRM process. Hence, the NiSA/CeO2 catalyst exhibits excellent stability with negligible carbon deposition for 230 h under thermo-photo catalytic DRM at a low temperature of 472 °C, whereas NiNP/CeO2 shows apparent coke deposition after 0.5 h in solely thermally driven DRM. These findings provide critical insights into simultaneously achieving a low-temperature and anticoking DRM process by distinguishing and directionally regulating the key intermediate species.
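
    For context, the overall dry-reforming reaction and the main coke-forming side reactions at issue above are the standard ones (in LaTeX; the enthalpy is the commonly cited figure, quoted approximately):

        \mathrm{CH_4 + CO_2 \rightleftharpoons 2\,CO + 2\,H_2},
        \qquad \Delta H^{\circ}_{298} \approx +247\ \mathrm{kJ\,mol^{-1}}

        \mathrm{CH_4 \rightarrow C + 2\,H_2} \quad \text{(methane cracking)}
        \qquad
        \mathrm{2\,CO \rightleftharpoons C + CO_2} \quad \text{(Boudouard reaction)}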

    Speculation in partially-replicated transactional data stores

    No full text
    The last few decades have witnessed the unprecedented growth of large-scale online services. Distributed data storage systems, which are the fundamental building blocks of large-scale online services, face a number of challenging, and often antagonistic, requirements. On the one hand, many distributed data storage systems have shifted away from weak consistency and embraced strong, transactional semantics in order to tame the ever-growing complexity of modern applications. On the other hand, the need to store sheer amounts of data and serve geo-dispersed clients with low latency has driven modern data storage systems to adopt partial replication techniques, often applied to geo-distributed infrastructures. Unfortunately, when employed in geo-distributed and/or partially replicated settings, state-of-the-art approaches to enforcing transactional consistency suffer from severe bottlenecks that strongly hinder their efficiency. This dissertation investigates the use of speculative techniques to enhance the performance of partially replicated transactional data stores, with a focus on geo-distributed platforms. With the term speculation, in this dissertation, we refer to the possibility of exposing the updates produced by uncommitted transactions to other transactions and/or to external clients in order to enhance performance. We apply speculative techniques to two fundamental approaches for building replicated transactional data stores, namely Deferred Update Replication (DUR) and State Machine Replication (SMR).

    In DUR-based systems, transactions are first executed on a node and then propagated to other nodes for a global verification phase, during which pre-commit locks have to be held on the data items updated by transactions. The global verification phase can throttle system throughput, especially under high conflict. We tackle this problem by introducing Speculative Transaction Replication (STR), a DUR protocol that exploits speculative reads to enhance the performance of geo-distributed, partially replicated transactional data stores. The use of speculative reads greatly reduces the 'effective duration' of pre-commit locks, thus removing one of the key bottlenecks of DUR-based protocols. However, the indiscriminate use of speculative reads can expose applications to concurrency anomalies that compromise their correctness in subtle ways. We tackle this issue by introducing Speculative Snapshot Isolation (SPSI), an extension of Snapshot Isolation (SI) that specifies the atomicity and isolation guarantees that must hold when using speculative processing techniques. In a nutshell, SPSI guarantees that applications designed to operate under SI can safely execute atop STR, sheltering programmers from complex concurrency anomalies and from source-code modification. Our experimental study shows that STR, thanks to the use of speculative reads, yields up to 11× throughput improvements over state-of-the-art approaches that do not adopt speculative techniques.

    In SMR-based systems, transactions first undergo an ordering phase; replicas must then guarantee that the result of transaction execution is equivalent to a serial execution that follows the order produced by the ordering phase. To ensure this, existing approaches use a single thread to execute or serialize transactions, which severely limits throughput, especially given the current architectural trend towards massively parallel multi-core processors. This limitation is tackled through the introduction of SPARKLE, an innovative deterministic concurrency control designed for Partially-Replicated State Machines (PRSMs). SPARKLE unlocks the potential parallelism of modern multi-core systems through the use of speculative techniques and by avoiding inherently non-scalable designs that rely on a single thread to either execute or schedule transactions. The key contribution of SPARKLE is a set of techniques that greatly minimize the frequency of misspeculations and the cost of correcting them. Our evaluation shows that SPARKLE achieves up to one order of magnitude throughput gains when compared to state-of-the-art systems. (FSA - Sciences de l'ingénieur) -- UCL, 202
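
    A minimal sketch of the speculative-read idea described above, in illustrative Python (none of these names come from the dissertation): a reader takes the value written by a transaction that is still in its global verification phase and records a commit dependency, instead of blocking on the pre-commit lock.

        class Version:
            def __init__(self, value, writer_tx, committed):
                self.value = value
                self.writer_tx = writer_tx
                self.committed = committed

        class SpeculativeStore:
            def __init__(self):
                self.versions = {}  # key -> latest Version

            def precommit_write(self, tx, key, value):
                # The version is visible immediately but flagged uncommitted
                # while the writer's global verification phase is in flight.
                self.versions[key] = Version(value, tx, committed=False)

            def commit(self, tx):
                for v in self.versions.values():
                    if v.writer_tx == tx:
                        v.committed = True

            def speculative_read(self, key, reader_deps):
                v = self.versions[key]
                if not v.committed:
                    # Read without waiting, but remember the dependency:
                    # the reader may only commit after v.writer_tx commits,
                    # and must abort if the writer aborts.
                    reader_deps.add(v.writer_tx)
                return v.value

        store = SpeculativeStore()
        store.precommit_write("T1", "x", 42)
        deps = set()
        assert store.speculative_read("x", deps) == 42 and deps == {"T1"}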

    A Domain Specific Search Engine With Explicit Document Relations

    No full text
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive evolution of the World Wide Web, aiming to convert the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. Similar problems occur at Ericsson, where massive numbers of documents with well-defined structures are being created. Although these documents contain domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information, and annotate the documents with formal markup languages. We propose this project to develop a domain-specific search engine that processes different documents and builds explicit relations between them. The project has three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations between documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
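
    A hedged sketch of the pipeline the thesis outlines, with entirely made-up document names and metadata fields: structured metadata is stored as explicit relations alongside a plain text index, so search results can be expanded along those relations rather than relying on full text alone.

        from collections import defaultdict

        text_index = defaultdict(set)   # term -> document ids
        relations = []                  # (subject doc, predicate, object doc) triples

        def ingest(doc_id, text, metadata):
            for term in text.lower().split():
                text_index[term].add(doc_id)
            # Structured metadata becomes an explicit, queryable relation
            # instead of being flattened into plain text.
            for target in metadata.get("references", []):
                relations.append((doc_id, "references", target))

        def search(term):
            hits = text_index.get(term.lower(), set())
            # Expand results with explicitly related documents.
            related = {obj for subj, pred, obj in relations if subj in hits}
            return hits, related

        ingest("spec-1", "Radio interface specification", {"references": ["spec-0"]})
        print(search("radio"))  # ({'spec-1'}, {'spec-0'})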

    Speculative Transaction Processing in Geo-Replicated Data Stores

    No full text
    This work presents STR, a geo-distributed, partially replicated transactional data store that leverages novel speculative techniques to mask the inter-replica synchronization latency. The theoretical foundation on which we built STR is a novel consistency criterion, which we call SPeculative Snapshot Isolation (SPSI). SPSI extends the well-known Snapshot Isolation semantics in an intuitive, yet rigorous, way by specifying desirable atomicity and isolation guarantees that shelter applications from the subtle anomalies that can arise when adopting speculative transaction processing techniques. We assess STR's performance on up to nine geo-distributed Amazon EC2 data centers, using both synthetic benchmarks and complex benchmarks (TPC-C and RUBiS). Our experimental study highlights that STR achieves throughput gains of up to 6× and latency reductions of up to 100× in workloads characterized by low inter-data-center contention. Furthermore, thanks to self-tuning techniques that automatically adjust the aggressiveness of STR's speculation degree, STR offers robust performance even when faced with unfavourable workloads that suffer from high misspeculation rates.
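
    The abstract does not spell out the self-tuning algorithm; the illustrative controller below only shows the general shape of such a mechanism, backing off the speculation degree when the observed misspeculation rate exceeds a target and probing upward otherwise. All thresholds and names are invented, not STR's.

        class SpeculationTuner:
            def __init__(self, target_abort_rate=0.05):
                self.degree = 1.0          # 0 = no speculation, 1 = fully aggressive
                self.target = target_abort_rate

            def update(self, aborts, commits):
                total = aborts + commits
                if total == 0:
                    return self.degree
                abort_rate = aborts / total
                if abort_rate > self.target:
                    self.degree = max(0.0, self.degree * 0.5)   # back off quickly
                else:
                    self.degree = min(1.0, self.degree + 0.05)  # probe gently
                return self.degree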

    Positioning reduction in the real-time phase of Chang'E-2 satellite

    No full text
    The precision of VLBI tracking delays and the positioning reduction results during the real-time tracking phase of the Chang'E-2 satellite are statistically analyzed. The application of positioning reduction to the real-time monitoring of pivotal arcs of the Chang'E-2 satellite is discussed. The technical specifications of the tests of tracking and control systems in X-band are estimated and evaluated via the positioning reduction method. Useful methodology and software have been prepared, and practical experience in engineering and technology has been accumulated for China's follow-up lunar and deep space explorations.
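
    As a rough illustration of what a positioning reduction involves (a generic textbook formulation, not the authors' software): a near-field VLBI delay is the difference of spacecraft-to-station ranges divided by c, and an a priori position can be refined by linearized least squares. Station coordinates, observations, and the noise model are all placeholders here.

        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def predicted_delay(x, s1, s2):
            # Near-field VLBI delay: difference of station-to-spacecraft ranges.
            return (np.linalg.norm(x - s2) - np.linalg.norm(x - s1)) / C

        def refine_position(x0, baselines, observed_delays, iters=5):
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                H, r = [], []
                for (s1, s2), tau in zip(baselines, observed_delays):
                    r.append(tau - predicted_delay(x, s1, s2))
                    # Jacobian row: difference of unit vectors, scaled by 1/c.
                    u1 = (x - s1) / np.linalg.norm(x - s1)
                    u2 = (x - s2) / np.linalg.norm(x - s2)
                    H.append((u2 - u1) / C)
                dx, *_ = np.linalg.lstsq(np.array(H), np.array(r), rcond=None)
                x += dx
            return x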

    Improving Performance in Person Reidentification Using Adaptive Multiple Loss Baseline

    No full text
    Currently, deep learning is the mainstream method for person reidentification. With the rapid development of neural networks in recent years, a number of network frameworks have emerged for this task, so it is becoming more important to explore a simple and efficient baseline algorithm. In fact, the performance of the same module varies greatly at different positions in a network architecture. After exploring how modules can play their maximum role in the network, and after studying and summarizing existing algorithms, we designed an adaptive multiple loss (AML) baseline with a simple structure but powerful functions. In this network we use an adaptive mining sample (AMS) loss alongside other modules, which together mine more information from the input samples. Based on triplet loss, the AMS loss optimizes the distances between an input sample and its positive and negative samples while protecting structural information within the sample. In our experiments, we conducted several groups of tests and confirmed the high performance of the AML baseline. The AML baseline performs outstandingly on three commonly used datasets; its two evaluation metrics on CUHK-03 are 25.7% and 26.8% higher than those of BagTricks.
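
    Since the AMS loss is described only as building on triplet loss, the sketch below shows the standard batch-hard triplet loss it extends, in plain NumPy; the AMS-specific intra-sample structure term is not reproduced, and all names are illustrative.

        import numpy as np

        def batch_hard_triplet_loss(features, labels, margin=0.3):
            # Pairwise Euclidean distances between all embeddings in the batch.
            diff = features[:, None, :] - features[None, :, :]
            dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
            same = labels[:, None] == labels[None, :]
            losses = []
            for i in range(len(features)):
                pos = dist[i][same[i] & (np.arange(len(labels)) != i)]
                neg = dist[i][~same[i]]
                if len(pos) == 0 or len(neg) == 0:
                    continue
                # Hardest positive (farthest) and hardest negative (closest).
                losses.append(max(0.0, pos.max() - neg.min() + margin))
            return float(np.mean(losses)) if losses else 0.0

        feats = np.random.randn(8, 128)
        ids = np.array([0, 0, 1, 1, 2, 2, 3, 3])
        print(batch_hard_triplet_loss(feats, ids))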

    Sparkle: speculative deterministic concurrency control for partially replicated transactional stores

    No full text
    Modern transactional platforms strive to jointly ensure ACID consistency and high scalability. In order to pursue these antagonistic goals, several recent systems have revisited the classical State Machine Replication (SMR) approach in order to support sharding of application state across multiple data partitions and partial replication. By promoting and exploiting locality principles, these systems, which we call Partially Replicated State Machines (PRSMs), can achieve scalability levels unparalleled by classic SMR. Yet, existing PRSM systems suffer from two major limitations: 1) they rely on a single thread to execute or serialize transactions within a partition, thus failing to fully tap the parallelism of multi-core architectures, and/or 2) they rely on the ability to accurately predict the data items to be accessed by transactions, which is non-trivial for complex applications. This paper proposes Sparkle, an innovative deterministic concurrency control that enhances the throughput of state-of-the-art PRSM systems by more than one order of magnitude under high contention, through the joint use of speculative transaction processing and scheduling techniques. On the one hand, speculation allows Sparkle to take full advantage of modern multi-core processors while avoiding any assumption of a priori knowledge of transactions' access patterns, which increases its generality and widens the scope of its scalability. On the other hand, transaction scheduling techniques aim to maximize the efficiency of speculative processing.
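
    A self-contained toy sketch of the idea (not Sparkle's actual algorithm or code): transactions execute speculatively against a snapshot, commits are applied strictly in the predetermined order, and any transaction whose speculative reads turn out stale, i.e. a misspeculation, is simply re-executed.

        store = {"x": 0, "y": 0}

        def execute(tx_fn):
            # Run a transaction against a snapshot, recording what it read.
            snapshot = dict(store)
            read_set = {}
            def read(k):
                read_set[k] = snapshot[k]
                return snapshot[k]
            write_set = tx_fn(read)
            return read_set, write_set

        def commit_in_order(tx_fns):
            # Phase 1 (parallel in a real system): speculative execution.
            spec = [execute(f) for f in tx_fns]
            # Phase 2: commit in the deterministic order, validating reads.
            for f, (reads, writes) in zip(tx_fns, spec):
                if any(store[k] != v for k, v in reads.items()):
                    reads, writes = execute(f)  # misspeculation: re-execute
                store.update(writes)

        # Both transactions increment x; the second one's speculative read
        # is stale after the first commits, so it re-executes.
        inc_x = lambda read: {"x": read("x") + 1}
        commit_in_order([inc_x, inc_x])
        assert store["x"] == 2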
